Id,PostTypeId,AcceptedAnswerId,ParentId,CreationDate,DeletionDate,Score,ViewCount,Body,OwnerUserId,OwnerDisplayName,LastEditorUserId,LastEditorDisplayName,LastEditDate,LastActivityDate,Title,Tags,AnswerCount,CommentCount,FavoriteCount,ClosedDate,CommunityOwnedDate,ContentLicense 17318,2,,17317,1/1/2020 0:38,,0,,"

There is normally more to generalisation than just increasing training data. It helps to make the task noisy, through various means. One common and popular method is to use dropout, which encourages the network to utilise every node, and avoids dependencies on small clusters of nodes.

So how does making the task noisier help with generalisation? Well, it's easy to explain with a conceptual example, but I don't think that's what you're looking for; you want a more mathematical approach.

The best way to think of it is with a simple example of a polynomial data set in only 2 dimensions. If you consider this, and the way the network slowly approaches the optimum through backpropagation, the idea that the classification boundary gradually approaches the optimum, which in all cases is an over-fit function, isn't too far-fetched.

Considering this, it suggests that, in order to properly train a network, we need some way of determining when the network isn't extremely close to the optimum (as it will have over-fit by then), but also isn't so far from it that it's worse than random guessing. This is where methods to improve generalisation come in: we want to hit that sweet spot.

If the learning rate is too high, we will overshoot that sweet spot and miss it entirely (the range can sometimes be very small); if it's too low, we may get stuck in tiny local minima and never escape, or it could take ages to reach it.

Using the example of dropout from before: it increases noise and makes training harder for the network. Because training is more difficult, the network approaches the optimal function at a slower rate, which makes it easier to stop training once the network has generalised well.

Conveniently, this extends to n-dimensional problems as well. Now, as far as I know, there is no robust mathematical proof of why this works for n-dimensional problems. The reason for this also explains why neural networks exist at all: we don't know how to solve these classification problems mathematically ourselves to the degree a NN does. Because of this, there will always be gaps in our knowledge. We will never be able to say quantitatively what a neural network is doing without making neural networks obsolete. So, until we can figure it out for ourselves, we'll have to test against unseen data to see whether the network has indeed generalised.

",26726,,26726,,1/1/2020 2:12,1/1/2020 2:12,,,,7,,,,CC BY-SA 4.0 17323,1,,,1/1/2020 7:55,,1,36,"

I come from a background of scorecard development using logistic regression. Steps involved there are:

  1. binning of continuous variables into intervals (eg age can be binned into 10-15 years, 15-20 years, etc)

  2. weight of evidence transformation

  3. coarse classing of bins to ensure event rate has a monotonic relationship with the variable

Variable selection is made from the coarse classed transformed variables.

I was wondering if I should follow the same steps for ANN models. That is, should I bin the continuous variables and apply some transformation before performing variable selection and modeling?

",32439,,2444,,12/21/2021 15:45,12/21/2021 15:45,"When using neural networks, should I bin the continuous variables and apply some transformation before performing variable selection and modeling?",,0,1,,,,CC BY-SA 4.0 17324,1,,,1/1/2020 21:01,,2,134,"

Let's assume an extreme case in which the kernel of the convolution layer takes only values 0 or 1. To capture all possible patterns in an input with $C$ channels, we would need $2^{C \cdot K_H \cdot K_W}$ filters, where $(K_H, K_W)$ is the shape of a kernel. So, to process a standard RGB image with 3 input channels with a 3x3 kernel, we would need our layer to output $2^{27}$ channels. Do I correctly conclude that, according to this, the standard layers of 64 to 1024 filters are only able to catch a small part of the (perhaps) useful patterns?

",31988,,22659,,6/1/2020 12:22,6/1/2020 12:22,What is the reasoning behind the number of filters in the convolution layer?,,2,2,,,,CC BY-SA 4.0 17325,1,,,1/1/2020 21:31,,5,611,"

Just wondering why a softmax is typically used in practice on outputs of most neural nets rather than just summing the activations and dividing each activation by the sum. I know it's roughly the same thing but what is the mathematical reasoning behind a softmax over just a normal summation? Is it better in some way?

",30885,,,,,1/2/2020 10:57,Why is a softmax used rather than dividing each activation by the sum?,,1,2,,,,CC BY-SA 4.0 17326,1,17336,,1/2/2020 3:11,,3,494,"

An LSTM model can be trained to generate text sequences by feeding it the first word. After feeding the first word, the model will generate a sequence of words (a sentence): feed the first word to get the second word, feed the first word + the second word to get the third word, and so on.

However, for the next sentence, what should its first word be? The goal is to generate a paragraph of multiple sentences.

",2844,,,,,1/3/2020 11:06,How to use LSTM to generate a paragraph,,2,3,,,,CC BY-SA 4.0 17328,2,,17325,1/2/2020 10:57,,2,,"

There are probably multiple different explanations and reasonings, but I can offer you one. If your output vector contains negative values, to get something that's related to probabilities (all components positive, summing to $1$) you cannot do what you suggested, because you could get a negative probability, which doesn't make sense. A good property of the exponential function used in softmax, in this case, is that it cannot give negative values, so regardless of your output vector, you will never get a negative probability.

You could suggest adding some positive offset vector $\mathbf d$ to your output vector to get rid of negative values, but there are a couple of problems. First, you cannot know in advance the range of negative output values, so you cannot know what offset vector to use to cover all possible cases. Second, with such a strategy, in some cases you could get unrealistic results. For example, let's assume the output vector is $[-0.1, 0.2, 0.3]^T$ and the offset vector is $[0.1, 0.1, 0.1]^T$. If you add those 2, you get $[0, 0.3, 0.4]^T$. The probability of the first class would be $0$ since the numerator is $0$. That is a very overconfident result, and we probably wouldn't want to get a result of $0$ for this class. The result would also change depending on the offset vector. Let's say that the offset vector now is $[0.3, 0.3, 0.3]^T$. Adding the offset vector and the output vector gives $[0.2, 0.5, 0.6]^T$. In this case, the probability of the first class is now $0.2/(0.2 + 0.5 + 0.6) = 0.15$. We see that changing the offset vector changes the values of the probabilities, and as the components of the offset vector $\rightarrow \infty$, the probabilities for all classes $\rightarrow 0.33$. We would likely want to get the same result regardless of the scale of the values of the vector; we only care about its relative relationships. Another good property of softmax is that it is shift invariant \begin{align} p_i &= \frac{e^{x_i + d}}{\sum_{j=1}^n e^{x_j + d}}\\ &= \frac{e^d \cdot e^{x_i} }{ e^d \cdot \sum_{j=1}^n e^{x_j} }\\ &= \frac{e^{x_i}}{\sum_{j=1}^n e^{x_j}} \end{align} so we see that the probability of the $i$-th component is independent of the offset. Apparently, softmax doesn't care about the absolute scale of the values; it captures the relative relationships of the components.
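
Here is a quick numerical sketch of the point above (same arbitrary example vectors as in the text, not taken from any library), comparing naive normalisation with softmax:

import numpy as np

def naive_norm(x):
    return x / x.sum()

def softmax(x):
    e = np.exp(x - x.max())   # subtracting the max for numerical stability
    return e / e.sum()

x = np.array([-0.1, 0.2, 0.3])
print(naive_norm(x + 0.1), naive_norm(x + 0.3))   # results change with the offset
print(softmax(x), softmax(x + 0.3))               # identical: softmax is shift invariant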

",20339,,,,,1/2/2020 10:57,,,,0,,,,CC BY-SA 4.0 17331,1,,,1/2/2020 13:27,,2,46,"

I’m doing research on natural language processing (NLP). I’d like to put together my own model. However, I'm running into a concept I am not familiar with, namely, distractors. A google search does not reveal much.

I've been reading this article specifically: https://medium.com/huggingface/how-to-build-a-state-of-the-art-conversational-ai-with-transfer-learning-2d818ac26313

In the section under ""Multi-Tasks Losses"" it reads:

Next-sentence prediction: we pass the hidden-state of the last token (the end-of-sequence token) through a linear layer to get a score and apply a cross-entropy loss to classify correctly a gold answer among distractors.

I understand how transformers and cross-entropy work, however I'm not sure what a distractor or, for that matter, a ""gold answer"" is.

In this context, what does the author mean by distractor?

",20271,,20271,,1/2/2020 16:44,1/2/2020 16:44,What role do distractors play in natural language processing?,,0,0,,,,CC BY-SA 4.0 17332,1,,,1/2/2020 15:09,,2,98,"

The reason I am asking this question is that I am about to start a PhD in NLP. So I am wondering if there will be as many job opportunities in research in industry, as opposed to academia, in the future (~5 to 10 years), or whether it will mostly be a matter of using a library off the shelf. I have done some research and it seems NLP is AI-complete, which means it's probably a problem that will be ""solved"" only when AGI is solved, but still I would appreciate any input.

",32462,,32462,,1/3/2020 21:14,1/3/2020 21:14,Is NLP likely to be sufficiently solved in the next few years?,,0,3,,,,CC BY-SA 4.0 17333,1,,,1/2/2020 16:08,,2,463,"

Given an audio track, I'm trying to find a way to recognize the language of the audio, only within a small set (e.g. English vs Spanish). Is there a simple solution to detect the language in speech?

",9053,,9053,,1/4/2020 17:07,1/4/2020 17:07,How to use AI for language recognition?,,1,2,,,,CC BY-SA 4.0 17334,1,17542,,1/2/2020 17:44,,1,122,"

The description of feature selection based on a random forest uses trees without pruning. Do I need to use tree pruning? The thing is, if I don't prune the trees, the forest will overfit.

Below in the picture is the importance of features based on 500 trees without pruning.

With a depth of 3.

I always use the last four features 27, 28, 29, 30, and I try to add features from 0 to 26 to them by looping through the possible combinations. Empirically, I assume that features 0 and 26 are significant, but they are not visible in either picture, even though the quality of the classification improved when 0 and 26 were added.

",31808,,,,,1/16/2020 17:41,Interpretation of feature selection based on the model,,1,0,,,,CC BY-SA 4.0 17335,2,,17333,1/2/2020 21:02,,1,,"

Google has APIs you can use: https://cloud.google.com/translate/. Their Speech-to-Text API can convert the audio to text, and the Translation API's language detection feature should let you detect the language of the resulting text. They have client libraries for the most popular programming languages.

",32467,,,,,1/2/2020 21:02,,,,4,,,,CC BY-SA 4.0 17336,2,,17326,1/2/2020 21:07,,3,,"

Take the sentence that was generated by your LSTM and feed it back into the LSTM as input. Then the LSTM will generate the next sentence. So the LSTM is using its previous output as its input; that's what makes it recursive. The initial word is just your base case. Also, you should consider using GPT-2 by OpenAI to do this. It's pretty impressive. https://openai.com/blog/better-language-models/
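
A rough sketch of that loop, assuming you already have some function (here hypothetically called generate_sentence) that wraps your trained LSTM and returns the next sentence given a prompt:

def generate_paragraph(first_word, num_sentences, generate_sentence):
    text = first_word                        # base case: the initial word
    sentences = []
    for _ in range(num_sentences):
        sentence = generate_sentence(text)   # LSTM produces the next sentence
        sentences.append(sentence)
        text = sentence                      # feed the output back in as the next input
    return ' '.join(sentences)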

",32467,,,,,1/2/2020 21:07,,,,0,,,,CC BY-SA 4.0 17337,1,,,1/2/2020 21:12,,3,64,"

I've been thinking about what ""mathematical model"" can be used to model every possible thing (including itself).

Examples: a simple neural network models a function but doesn't model an algorithm. A list of instructions models an algorithm but doesn't model relations between elements...

You might be thinking ""maybe there is nothing that can model everything"" but in reality ""language"" does model everything including itself. The issue is that it's not an organized model and it's not clear how to create it from scratch (e.g. if you will send it to aliens that don't have any common knowledge to start with).

So what is some possible formalization of a mathematical model that models every possible thought that can be communicated?

Edit 1:

The structure formalization I'm looking for has to have a few necessary properties:

  1. Hierarchical: the representation of ideas should rely on other ideas. (E.g. a programming function is a set of programming functions; the concept ""bottle of water"" is the sum of the two concepts ""water"" and ""bottle""...)
  2. Uniqueness of elements: when an idea uses another idea in its definition, it must refer to one specific idea, not recreate it each time. For example, when you think of the digit ""9"" and the digit ""8"", you notice that both have a small circle at the top; you don't recreate a new concept ""circle"" every time, instead you use one fixed concept ""circle"" for everything. By contrast, a neural network might recreate the same branch for different inputs. So two representations of concepts must be different iff they have a difference.
",31723,,31723,,1/3/2020 22:29,1/3/2020 22:29,What can model everything?,,0,11,,,,CC BY-SA 4.0 17339,2,,17324,1/2/2020 21:27,,0,,"

Let $n=C*K_w*K_h$. Then you should only need $n$ filters, not $2^n$, to keep all the information. If you just used the rows of the identity matrix as your filters, then your convolution would just be making an exact copy, so it definitely wouldn't be throwing away information. On the other hand, there will be a max pooling operation. To simplify the question, let's suppose we have 3 channels and a 1 by 1 kernel. And then let's suppose it is just one convolution followed by global max pooling. Also, let's use your assumption that it's all binary. If you have $m$ filters, then the final output will be $m$-dimensional no matter how many input points you have. So clearly information is being thrown away there. But that's not such a bad thing. Throwing away irrelevant information gets us closer to the features we need for the problem at hand. The parts that get thrown away by max pooling correspond to features not being found in a particular part of the image.
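
A small numpy sketch of the identity-filter point (a 1x1 convolution whose filters are the rows of the identity matrix just copies the input channels); the array sizes are arbitrary:

import numpy as np

x = np.random.rand(4, 4, 3)               # H x W x C input
filters = np.eye(3)                       # C filters, each of shape 1 x 1 x C
y = np.einsum('hwc,fc->hwf', x, filters)  # 1x1 convolution with those filters
print(np.allclose(x, y))                  # True: nothing was thrown away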

",32467,,,,,1/2/2020 21:27,,,,4,,,,CC BY-SA 4.0 17340,2,,17317,1/2/2020 21:42,,0,,"

A neural network is composed of continuous functions. Neural networks are regularized by adding an l2 penalty on the weights to the loss function. This means the neural network will try to make the weights as small as possible. The weights are also initialized with a N(0, 1) distribution, so the initial weights will also tend to be small. All of this means that neural networks will compute a continuous function that is as smooth as possible while still fitting the data. By smooth I mean that similar inputs will tend to have similar outputs when run through the neural network. More formally, $||x-y||$ small implies $||f(x)-f(y)||$ small, where $f$ represents the output from the neural network. This means that if a neural network sees a novel input $x$ that is close to an input $y$ from the training data, then $f(x)$ will tend to be close to $f(y)$. So the end result is that the neural network will classify $x$ based on what the labels of the nearby training examples were. So the neural network is actually a little like k-nearest neighbors in that way.

Another way for neural networks to generalize is by using invariance. For example, convolutional neural networks are approximately translation invariant. So this means that if it sees an image where the object in question has been translated, then it will still recognize the object.

But it's not giving us the exact function we want. The loss function is a combination of classification accuracy and making the weights small, so that you can fit the data with a function that is as smooth as possible. This tends to generalize well for the reasons I said before, but it's just an approximation. You can solve the problem more exactly, using minimal assumptions, with a Gaussian process, but Gaussian processes are too slow to handle large amounts of data.

",32467,,32467,,1/4/2020 21:42,1/4/2020 21:42,,,,0,,,,CC BY-SA 4.0 17341,2,,11542,1/2/2020 21:45,,1,,"

If the AI is static (heuristic and fixed), it will always pursue the stated goal. However, such a system would be ""brittle"", and either break or produce bad output if confronted with input not previously defined, or outside its model.

If the AI evolves via learning, even where the goal is specific, its interpretation of that goal might change, and produce unexpected results. (The ""I, Robot"" scenario.)

If the AI is emergent, by which I mean it evolves in way that cannot be predicted, it might evolve new goals.

To answer the question directly:

Hypothetically, if there was an AGI or artificial superintelligence, or ultraintelligent machine tasked with protecting humans, and that AI perceived humans to be destroying themselves, that AI would, if able, take control of human society. (I don't see this as contradicting its goal.)

However, it must be stated that, in a condition of imperfect & incomplete information, where the problem is intractable, the AI is just guessing like we humans do, even if it makes better guesses, as in the case of narrowly intelligent AIs like AlphaGo.

",1671,,,,,1/2/2020 21:45,,,,9,,,,CC BY-SA 4.0 17342,2,,17313,1/2/2020 21:59,,0,,"

I think if you got the dataset, then a standard 1D convolutional neural network would work to some extent. It's not that there is some property of nearby sounds that it would pick up on; it would just memorize all the sounds that tend to come from your desk. I think the coding part would be pretty standard stuff, but collecting the data will be hard. You have to get a really big labeled dataset of sounds coming from your desk and sounds coming from beyond a 3-foot radius. This dataset has to be realistic and representative of the real world. Getting that dataset would be pretty tricky, but it is doable if you put multiple microphones in your house in order to triangulate the exact positions of all sounds. It would be like GPS, but using sound waves instead of radio waves.

",32467,,,,,1/2/2020 21:59,,,,0,,,,CC BY-SA 4.0 17343,2,,17306,1/2/2020 22:06,,1,,"

GPT2 predicts the next word that people will say. https://openai.com/blog/better-language-models/ Facebook predicts what will make you keep using their site. Youtube predicts what videos you will click on.

",32467,,,,,1/2/2020 22:06,,,,0,,,,CC BY-SA 4.0 17344,1,17367,,1/3/2020 1:14,,4,103,"

I've been using several resources to implement my own artificial neural network package in C++.

Among some of the resources I've been using are

https://www.anotsorandomwalk.com/backpropagation-example-with-numbers-step-by-step/

https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/

https://cs.stanford.edu/people/karpathy/convnetjs/intro.html,

as well as several others.

My code manages to replicate the results in the first two resources exactly. However, these are fairly simple networks in terms of depth. Hence the following (detailed) question:

For my implementation, I've been working with the MNIST Database of handwritten digits (http://yann.lecun.com/exdb/mnist/).

Using the ANN package I wrote, I have created a simple ANN with 784 input neurons, one hidden layer with 16 neurons, and an output layer with ten neurons. I have implemented ReLU on the hidden layer and the output layer, as well as a softmax on the output layer to get probabilities. The weights and biases are each individually initialized to random values in the range [-1,1].

So the network is 784x16x10.

My backpropagation incorporates weight gradient and bias gradient logic.

With this configuration, I repeatedly get about a 90% hit rate with a total average cost of ~0.07 on the MNIST training set comprising 60,000 digits, and a slightly higher hit rate of ~92.5% on the test set comprising 10,000 digits.

For my first implementation of an ANN, I am pretty happy with that. However, my next thought was:

""If I add another hidden layer, I should get even better results...?"".

So I created another artificial neural network with the same configuration, except for the addition of another hidden layer of 16 neurons, which I also run through a ReLU. So this network is 784x16x16x10.

On this ANN, I get significantly worse results. The hit rate on the training set repeatedly comes out at ~45% with a total average error of ~0.35, and on the test set I also only get about 45%.

This leads me to either one or both of the following conclusions:

A) My implementation of the ANN in C++ is somehow faulty. If so, my bet would be it is somewhere in the backpropagation, as I am not 100% certain my weight gradient and bias gradient calculation is correct for any layers before the last hidden layer.

B) This is an expected effect. Something about adding another layer makes the ANN not suitable for this (digit classification) kind of problem.

Of course, A, B, or A and B could be true.

Could someone with more experience than me give me some input, especially on whether B) is true or not?

If B) is not true, then I know I have to look at my code again.

",32471,,,,,1/4/2020 11:48,Is it expected that adding an additional hidden layer to my 3-layer ANN reduces accuracy significantly?,,1,4,,,,CC BY-SA 4.0 17345,2,,17317,1/3/2020 3:06,,0,,"

A fairly recent paper posits an answer to this:
Reconciling modern machine learning practice and the bias-variance trade-off. Mikhail Belkin, Daniel Hsu, Siyuan Ma, Soumik Mandal
https://arxiv.org/abs/1812.11118
https://www.pnas.org/content/116/32/15849

I'm probably not qualified to summarize, but it sounds like their conjectured mechanism is: by having far more parameters than are needed even to perfectly interpolate the training data, the space of possible resulting functions expands to include ""simpler"" functions (simpler here obviously not meaning fewer parameters, but instead something like ""less wiggly"") that generalize better even while perfectly interpolating the training set.

That seems completely orthogonal to the more traditional ML approach of reducing capacity via dropout, regularization, etc.

",21542,,,,,1/3/2020 3:06,,,,0,,,,CC BY-SA 4.0 17346,1,,,1/3/2020 8:28,,2,21,"

So I am trying to use a majority vote classifier combining different models and I was wondering if it is acceptable to use different training sets for the individual models (including different features) if these sets all come from one larger dataset?

Thanks

",32477,,,,,1/3/2020 8:28,Is it acceptable to use various training sets for the individual models when using a majority vote classifier?,,0,0,,,,CC BY-SA 4.0 17347,1,,,1/3/2020 10:44,,2,32,"

I'm trying to determine the frequency of a signal with a NN. I'm using the Adaline model for my project, and I'm taking a few samples at each 0.1-volt step, for both a true signal and a noisy one.

First question: am I wrong?

Second question: my network works fine as long as the frequency of my test sample is equal to the frequency of my training sample. Otherwise, my network doesn't work and gives me the wrong answer. What do I need to do for this model? To solve this problem, I must use nonlinear steps, like logarithmic steps. But how do I use logarithmic steps in MATLAB?

Edit: I understand that my problem is not overfitting! I found that my sampling steps are linear while my signal is nonlinear, so this is wrong. To solve this problem, I must use nonlinear steps, like logarithmic steps. But how do I use logarithmic steps in MATLAB?

",32481,,32481,,1/4/2020 14:15,1/4/2020 14:15,Determine Frequency from Noisy Signal With Neural Networks (With Adeline Model),,0,1,,,,CC BY-SA 4.0 17348,2,,17326,1/3/2020 10:55,,2,,"

As you know, an LSTM language model takes in the past words and tries to predict the next one, and continues in a loop. A sentence is divided into tokens, and depending on the method, the tokens are divided differently. Some models are character-based models, which simply use each character as input and output. In that case, you can treat punctuation as one character and just run the model as normal. For word-based models, which are commonly used in many systems, we treat punctuation as its own token, commonly called an end-of-sentence token. There is also a specific token for the end of the output. This lets the system know when to finish and stop predicting.

Also, just so you know, language models trying to generate original text feed the output back in as the input of the next step, but the output they choose is not necessarily the single most likely word. They set a threshold and choose based on that. This introduces diversity to the language model, so that even though the starting word is the same, the sentence/paragraph will be different and not the same one again and again.

For some state-of-the-art models, you can try GPT-2, as mentioned by @jdleoj23. This is a model that uses attention and transformers and works on byte-level (sub-word) tokens rather than whole words. The advantage of not relying on a fixed word dictionary is that even inputs that have spelling errors can be fed into the model, and new words not in the dictionary can be handled.

However, if you want to learn more about how language models work, and are not just striving for the best performance, you should try implementing a simple one yourself. You can try following this article, which uses Keras to make a language model: https://machinelearningmastery.com/develop-word-based-neural-language-models-python-keras/

The advantage of making a simple one is that you can actually understand the encoding process, the tokenization process, the model underneath and other details, instead of relying on other people's code. The article uses the Keras Tokenizer, but you could try writing your own using regex and simple string processing.
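
For illustration, a minimal hand-rolled tokenizer/encoder of the kind suggested here could look like this (the regex and the example text are just placeholders, not from the article):

import re

def tokenize(text):
    # lowercase, keep words and sentence-ending punctuation as separate tokens
    return re.findall(r'[a-z]+|[.!?]', text.lower())

def build_vocab(tokens):
    # map each distinct token to an integer id (0 reserved for padding/unknown)
    return {tok: i + 1 for i, tok in enumerate(sorted(set(tokens)))}

tokens = tokenize('The cat sat. The cat slept.')
vocab = build_vocab(tokens)
encoded = [vocab[t] for t in tokens]   # integer sequence to feed into the model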

Hope my help is useful for you.

",23713,,23713,,1/3/2020 11:06,1/3/2020 11:06,,,,0,,,,CC BY-SA 4.0 17349,1,,,1/3/2020 15:27,,3,338,"

I am working on a project that requires time-series prediction (regression), and I use an LSTM network with a first 1D conv layer in Keras/TF-GPU, as follows:

# Imports assumed (Keras 2.x); x_train and features_used are defined earlier in my script
from keras.models import Sequential
from keras.layers import Conv1D, Dense, Dropout, CuDNNLSTM

model = Sequential()
model.add(Conv1D(filters=60, activation='relu', input_shape=(x_train.shape[1], len(features_used)), kernel_size=5, padding='causal', strides=1))
model.add(CuDNNLSTM(units=128, return_sequences=True))
model.add(CuDNNLSTM(units=128))
model.add(Dense(units=1))

As a result, my model is clearly overfitting:

So I decided to add dropout layers, first I added layers with 0.1, 0.3 and finally 0.5 rate:

model = Sequential()
model.add(Dropout(0.5))
model.add(Conv1D(filters=60, activation='relu', input_shape=(x_train.shape[1], len(features_used)), kernel_size=5, padding='causal', strides=1))
model.add(Dropout(0.5))
model.add(CuDNNLSTM(units=128, return_sequences=True))
model.add(Dropout(0.5))
model.add(CuDNNLSTM(units=128))
model.add(Dense(units=1))

However, I think that it has no effect on the network's learning process, even though 0.5 is quite a large dropout rate:

Is it possible that dropout has little/no effect on the training process of an LSTM, or am I doing something wrong here?

[EDIT] Adding plots of my TS, general and zoomed in view.

I also want to add that the time of training increases just a bit (i.e. from 1540 to 1620 seconds) when I add the dropout layers.

",22659,,22659,,1/7/2020 14:00,3/2/2020 16:54,Can dropout layers not influence LSTM training?,,1,0,,,,CC BY-SA 4.0 17350,1,,,1/3/2020 15:43,,2,57,"

What are some common approaches to estimate the transition or observation probabilities, when the probabilities are not exactly known?

When building a POMDP model, the state model needs additional information in the form of transition and observation probabilities. Often these probabilities are not known, and a uniform distribution cannot simply be assumed either. How can we proceed?

",27777,,2444,,1/3/2020 20:09,1/3/2020 20:09,What are some approaches to estimate the transition and observation probabilities in POMDP?,,0,1,,,,CC BY-SA 4.0 17351,1,,,1/3/2020 16:08,,1,493,"

Is there some established object detection algorithm that is able to detect the four corners of an arbitrary quadrilateral (x0,y0,x1,y1,x2,y2,x3,y3), as opposed to the more typical axis-aligned rectangle (x,y,w,h)?

",21583,,,,,9/25/2021 10:03,"Object Detection Algorithm that detects four corners of arbitrary quadrilateral, not just perpendicular rectangular",,1,0,,,,CC BY-SA 4.0 17352,1,,,1/3/2020 16:51,,4,93,"

I've written a program to analyse a given piece of text from a website and make conclusary classifications as to its validity. The code basically vectorizes the description (taken from the HTML of a given webpage in real time) and takes in a few inputs from that as features to make its decisions. There are some more features like the domain of the website and some keywords I've explicitly counted.

The highest accuracy I've been able to achieve is with a RandomForestClassifier (>90%). I'm not sure what I can do to make this accuracy better, except incorporating a more sophisticated model. I tried using an MLP, but for no set of hyperparameters does it seem to exceed the previous accuracy. I have around 2000 data points available for training.

Is there any classifier that works best for such projects? Does anyone have any suggestions as to how I can bring about improvements? (If anything needs to be elaborated, I'll do so.)

Any suggestions on how I can improve on this project in general? Should I include the text on a webpage as well? How should I do so? I tried going through a few sites, but the text doesn't seem to be contained in any specific element, whereas the description is easy to obtain from the HTML. Any help?

What else can I take as features? If anyone could suggest any creative ideas, I'd really appreciate it.

",32490,,32490,,1/4/2020 15:52,1/4/2020 15:52,Is there any classifier that works best in general for NLP based projects?,,2,0,,,,CC BY-SA 4.0 17353,2,,17349,1/3/2020 17:34,,2,,"

A couple of points:

  1. Have you first scaled your data, e.g. using MinMaxScaler? This could be one reason why your loss readings remain high.

  2. Additionally, consider that while Dropout can be useful for reducing overfitting, it is not necessarily a panacea.

Let's take an example of using LSTM to forecast fluctuations in weekly hotel cancellations.

Model without Dropout

# Generate LSTM network
# (imports assumed; 'previous', X_train and Y_train are defined earlier in the script)
import tensorflow as tf
from tensorflow.keras.layers import LSTM, Dense, Dropout

model = tf.keras.Sequential()
model.add(LSTM(4, input_shape=(1, previous)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
history=model.fit(X_train, Y_train, validation_split=0.2, epochs=20, batch_size=1, verbose=2)

Over 20 epochs, the model achieves a validation loss of 0.0267 without Dropout.

Model with Dropout

# Generate LSTM network
model = tf.keras.Sequential()
model.add(LSTM(4, input_shape=(1, previous)))
model.add(Dropout(0.5))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
history=model.fit(X_train, Y_train, validation_split=0.2, epochs=20, batch_size=1, verbose=2)

However, validation loss is slightly higher with Dropout at 0.0428.

  3. Make sure you have specified the loss function correctly. If you are forecasting a time series, then you are most likely working with interval data. Therefore, mean_squared_error is an appropriate loss function as one is trying to estimate the deviation between the predicted and actual values.

As a counterexample, binary_crossentropy would not be suitable, as this is not a classification task. However, misspecifying the loss function is a common error. Therefore, you want to make sure you are using the appropriate loss function and then work from there.

",22692,,22692,,3/2/2020 16:54,3/2/2020 16:54,,,,4,,,,CC BY-SA 4.0 17354,2,,17310,1/3/2020 18:47,,2,,"

The whole idea behind those distributed optimization methods is that the data should stay local to every node/worker. Thus, if you only send the loss value to the central node, this node can't compute the gradients of this loss, and thus can't do any training. However, if you don't want to send gradients, a family of distributed optimization algorithms called consensus-based optimization can be used: each node only sends its local model weights to neighbouring nodes, and those nodes use their local gradients and the models from their neighbours to update their local models.

",32493,,2444,,1/3/2020 21:17,1/3/2020 21:17,,,,0,,,,CC BY-SA 4.0 17355,2,,17256,1/3/2020 18:55,,3,,"

In reality, any continuous function on a compact set can be approximated by a neural network having one hidden layer with a finite number of neurons (this is the Universal Approximation Theorem). Thus, you only need one hidden layer to approximate multiplication on a compact set; note that you need to apply a non-linear activation on the hidden layer to do this.

",32493,,21583,,1/4/2020 20:15,1/4/2020 20:15,,,,2,,,,CC BY-SA 4.0 17356,2,,17304,1/3/2020 19:08,,5,,"

$\ell_{2,1}$ is a matrix norm, as stated in this paper. For a certain matrix $A \in \mathbb{R}^{r\times c}$, we have $$\|A\|_{2,1} = \sum_{i=1}^r \sqrt{\sum_{j=1}^c A_{ij}^2}$$ You first apply the $\ell_2$ norm to each row (summing over the columns) to obtain a vector with $r$ dimensions. Then, you apply the $\ell_1$ norm to that vector to obtain a real number. You can generalize this notation to every norm $\ell_{p,q}$.
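
A small numpy sketch of this definition (the example matrix is arbitrary):

import numpy as np

A = np.array([[3.0, 4.0],
              [1.0, 0.0]])
l21 = np.sum(np.sqrt(np.sum(A**2, axis=1)))   # l2 norm of each row, then l1 over rows
print(l21)                                    # 5.0 + 1.0 = 6.0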

",32485,,53322,,3/10/2022 17:57,3/10/2022 17:57,,,,0,,,,CC BY-SA 4.0 17357,1,17375,,1/3/2020 20:52,,2,228,"

Will CNN, LSTM, GRU and transformer be better classified as Computational Intelligence (CI) tools or Artificial General Intelligence (AGI) tools? The term CI arose back when techniques like neural networks, GAs and PSO were considered to be doing magical stuff. These days, CI tools do not appear very magical. Researchers want their programs to exhibit AGI. Do the current state-of-the-art deep learning models fall into the AGI category?

",12850,,2444,,1/3/2020 20:55,1/4/2020 23:41,"Are CNN, LSTM, GRU and transformer AGI or computational intelligence tools?",,1,0,,,,CC BY-SA 4.0 17358,2,,17352,1/3/2020 21:09,,1,,"

The accuracy depends on various factors; it might not always be the algorithm. For example, cleaner data with a poor algorithm might still give better results, and vice versa.

What are the preprocessing techniques you are using? This preprocessing techniques article is a good starting point for HTML data. And by vectorising I assume you mean word2vec; use a pre-trained word2vec model, like Google's word2vec model, which is trained on a lot of data (about 100 billion words).

LSTMs perform well whenever the intent of the sentence is important. Check out this. To most algorithms, for example Naive Bayes, ""Ram hit Vijay"" and ""Vijay hit Ram"" might mean the same thing.

",32408,,32408,,1/3/2020 21:24,1/3/2020 21:24,,,,0,,,,CC BY-SA 4.0 17360,1,,,1/3/2020 22:50,,3,891,"

I am currently using a loss averaged over the last 100 iterations, but this leads to artifacts like the loss going down even when the current iteration has an average loss, because the loss 100 iterations ago was a large outlier.

I thought about using different interval lengths, but I wonder if an average over the last few iterations really is the right way to plot the loss.

Are there common alternatives? Maybe using decaying weights in the average? What are the best practices for visualizing the loss?

",25798,,2444,,12/29/2021 23:03,12/29/2021 23:03,What is the best way to smoothen out a loss curve plot?,,1,3,,,,CC BY-SA 4.0 17361,2,,17352,1/4/2020 1:36,,1,,"

First of all, there are multiple factors in how well models will work: the amount of data, the source of the data, the hyperparameters, the model type, the training time, etc. All of these will affect the accuracy. However, no classifier works best in general. It all depends on these factors, and no single model can satisfy all cases, at least for now.

To improve the accuracy, we first need to make those factors as close to ideal as possible, so that the classification will have a higher accuracy.

First of all, how much data do you have? If you are using HTML webpages, you probably need at least 10000 data samples. If you have at least that amount of data, overfitting should be less of a concern. You also need to clean the data. One way to do it is to tokenize it. Tokenization of text data basically means splitting the text into words and making a dictionary out of them. Then each word is encoded to a specific number, where every occurrence of the same word gets the same encoding. You are using the raw HTML as input, which has a lot of unnecessary information, tags and other stuff; you can try removing those, or completely remove all HTML tags if they are not required. The key to cleaning the data is to extract the pieces of information that are important and necessary for the model to work.

Then, you should explore the model. For an NLP (Natural Language Processing) task, your best bet is to choose an RNN (Recurrent Neural Network). This type of network has memory cells that help with text data, as text often has long-distance links within a paragraph; for example, one sentence may use a ""she"" that refers to a person mentioned two sentences before, and if you just feed every single word encoding into an MLP, the network would not have this memory to learn long-term connections in the text. An RNN is also time dependent, meaning it processes each token one by one in the direction of time. This makes the text more intuitive to the network, as text is designed to be read forward, not all at once.

Your current method is to first vectorize the HTML code, then feed it into a random forest classifier. A random forest classifier works great, but it cannot scale when there is more data. The accuracy of a random forest classifier will stay mostly the same as the data increases, while in deep neural networks the accuracy will increase with the amount of data. However, a deep neural network will require a large amount of data to start with. If your amount of data is not too large (< 10000), your current method should be your choice. However, if you plan to add more data, or if more data becomes available, you should try a deep-learning-based method.

For a deep-learning-based method, ULMFiT is a great model to try. It uses an LSTM (Long Short-Term Memory) network (which is a type of RNN) with language-model pretraining and many different techniques to increase the accuracy. You can try it with the fast.ai implementation. https://nlp.fast.ai/

If you wish to try a method that you can practically implement yourself, you could try to use a plain LSTM with one-hot encoding as input, as sketched below. However, don't use word2vec for preprocessing, as your input data is HTML code. The word2vec model is for normal English text, not HTML tags and the like. Moreover, a custom encoding will work better, as the encoding can be trained along with the model.
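
A minimal Keras sketch of that idea (the layer sizes, vocab_size and max_len below are arbitrary placeholders, not tuned values):

from keras.models import Sequential
from keras.layers import LSTM, Dense

vocab_size = 5000      # number of distinct tokens after your custom encoding
max_len = 300          # tokens per page, padded/truncated to a fixed length

model = Sequential()
model.add(LSTM(128, input_shape=(max_len, vocab_size)))  # input: one-hot encoded token sequence
model.add(Dense(1, activation='sigmoid'))                # binary valid/invalid output
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])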

Hope I can help you

",23713,,,,,1/4/2020 1:36,,,,8,,,,CC BY-SA 4.0 17363,2,,17351,1/4/2020 7:58,,1,,"

You can use OpenCV's cv2.minAreaRect() to fit oriented/rotated rectangular bounding boxes; the OpenCV-Python tutorials show an example result.
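
A minimal sketch of that route, assuming you already have a binary mask of the object (the variable mask is an assumption here; OpenCV 4.x findContours signature):

import cv2
import numpy as np

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)
rect = cv2.minAreaRect(largest)             # ((cx, cy), (w, h), angle)
corners = cv2.boxPoints(rect).astype(int)   # the four corner points of the rotated box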

Alternatively, you could train a supervised object detection model to output 8 co-ordinate values (x0,y0,x1,y1,x2,y2,x3,y3) of the quadrilateral by training with a labeled oriented-bounding-box dataset. You could also create the bounding box labels yourself for the same by using tools such as VGG Annotator Tool among others.

",30644,,,,,1/4/2020 7:58,,,,0,,,,CC BY-SA 4.0 17364,1,,,1/4/2020 9:01,,2,59,"

Imitation learning uses experiences of an (expert) agent to train another agent, in my understanding. If I want to use an on-policy algorithm, for example, Proximal Policy Optimization, because of it's on-policy nature we cannot use the experiences generated by another policy directly. Importance Sampling can be used to overcome this limitation, however, it is known to be highly unstable. How can imitation learning be used for such on-policy algorithms avoiding the stability issues?

",29879,,2444,,11/5/2020 22:33,11/5/2020 22:33,Can we use imitation learning for on-policy algorithms?,,0,0,,,,CC BY-SA 4.0 17365,1,,,1/4/2020 9:21,,2,56,"

I have a hard time formulating this question(I'm not knowledgeable enough I think), so I'll give an example first and then the question:

You have a table of data, let's say the occupancy of a building during the course of the day; each row has columns like ""people_inside_currently"", ""apartment_id"", ""hour_of_day"", ""month"", ""year"", ""name_of_day""(monday-sunday), ""amount_of_kids"", ""average_income"" etc.

You might preprocess two columns into a column ""percent_occupied_during_whole_day"" or something like that, and you want to group the data points in accordance with this as the main focus.

What I'm wondering is: why use machine learning (particularly unsupervised clustering) for this? Why not just put it into an SQL database table (for example), calculate the two columns into that new one, sort in descending order, and then split it into ""top 25%, next 25%, next 25%, last 25%"" and output this as ""categories of data""? This is simpler, isn't it? I don't see the value of, for instance, running a Principal Component Analysis on it, reducing columns to some ""unifying columns"" which you don't know what to call anymore, and looking at the output of that, when you can get so much clearer results by simply sorting and dividing the rows like this. I don't see the use of unsupervised clustering. I've googled a bunch of terms, but only found tutorials, definitions and applications (which seemed unnecessarily complex for such simple work), but no explanation of this.

",20747,,,,,1/4/2020 9:21,Why machine learning instead of simple sorting and grouping?,,0,2,,,,CC BY-SA 4.0 17366,1,,,1/4/2020 9:38,,3,46,"

I have a dataset in which class A has 99.8%, class B 0.1% and class C 0.1%. If I train my model on this dataset, it predicts always class A. If I do oversampling, it predicts the classes evenly. I want my model to predict class A around 98% of the time, class B 1% and class C 1%. How can I do that?

",32499,,,,,1/4/2020 9:38,Rarely predict minority class imbalanced datasets,,0,6,,,,CC BY-SA 4.0 17367,2,,17344,1/4/2020 11:48,,2,,"

You probably got the backpropagation wrong. I have done a test of the accuracy when adding an extra layer, and the accuracy went up from 94% to 96% for me. See this for details:

https://colab.research.google.com/drive/17kAJ2KJ36grG9sz-KW10fZCQW9i2Tf2c

To run the notebook, click Open in playground and run the code. There is a commented line which adds 1 extra layer. The syntax should be easy to understand even though it is in Python.

For backpropagation, you can have a look at this Python implementation of multilayer perceptron backpropagation.

https://github.com/enggen/Deep-Learning-Coursera/blob/master/Neural%20Networks%20and%20Deep%20Learning/Building%20your%20Deep%20Neural%20Network%20-%20Step%20by%20Step.ipynb

A network will not usually lose almost half of its accuracy in a normal scenario when you add an extra layer, though it is possible for the accuracy to decrease when you add an extra layer due to overfitting. Even if that happens, the performance drop won't be that dramatic.

Hope I can help you.

",23713,,,,,1/4/2020 11:48,,,,3,,,,CC BY-SA 4.0 17369,1,17781,,1/4/2020 13:19,,4,296,"
  • I have items called 'Resources' from 1 to 7.
  • I have to use them in different actions identified from 1 to 10.
  • I can do a maximum of 4 actions each time. This is called 'Operation'.
  • The use of a resource has a cost of 1 per each 'Operation' even if it is used 4 times.
  • The following table indicates the resources needed to do the related actions:
|        |            Resources             |
|--------|----------------------------------|
| Action |  1 |  2 |  3 |  4 |  5 |  6 |  7 |
|--------|----------------------------------|
|     1  |  1 |  0 |  1 |  1 |  0 |  0 |  0 |
|     2  |  1 |  1 |  0 |  0 |  1 |  0 |  0 |
|     3  |  1 |  0 |  1 |  0 |  0 |  1 |  0 |
|     4  |  0 |  1 |  0 |  0 |  0 |  0 |  0 |
|     5  |  1 |  0 |  1 |  1 |  0 |  1 |  0 |
|     6  |  1 |  1 |  1 |  0 |  0 |  0 |  0 |
|     7  |  0 |  1 |  0 |  0 |  0 |  0 |  0 |
|     8  |  1 |  0 |  1 |  0 |  1 |  0 |  0 |
|     9  |  0 |  1 |  0 |  1 |  0 |  0 |  0 |
|    10  |  1 |  1 |  1 |  0 |  0 |  0 |  1 |

The objective is to group all the 'Actions' in 'Operations' that minimize the total cost. For example, a group composed by actions {3, 7, 9} needs the resources {1, 2, 3, 4, 6} and therefore has a cost of 5, but a group composed by actions {4, 7, 9} needs the resources {2, 4} and therefore has a cost of 2.
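
For concreteness, a small sketch of how the cost of a group could be computed (only the actions from the example are listed; the full mapping comes from the table above):

needs = {3: {1, 3, 6}, 4: {2}, 7: {2}, 9: {2, 4}}   # action -> set of required resources

def operation_cost(actions):
    used = set()
    for a in actions:
        used |= needs[a]          # a resource is paid for once per operation
    return len(used)

print(operation_cost({3, 7, 9}))  # 5
print(operation_cost({4, 7, 9}))  # 2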

All the actions need to be completed as economically as possible.

Which algorithm can solve this problem?

",6207,,6207,,1/4/2020 20:41,1/31/2020 17:50,Which algorithm to use to solve this optimization problem?,,2,5,,,,CC BY-SA 4.0 17370,1,,,1/4/2020 13:40,,3,135,"

I am working on a speaker identification problem using a GMM (Gaussian Mixture Model). I only have to identify whether one particular user is present in the given audio. So, for the second class, should I use noise or silent audio, just as in image classification we create a non-object class alongside the object class?

I have tried using a silent class, but then the model always says the user is present (which is not the case).

Is there any other model that can give better accuracy, under the constraints that only 30 seconds of audio of the particular user is available and the given test audio may be much longer?

",15368,,,,,1/10/2020 7:14,Speaker Identification / Recognition for less size audio files,,0,2,,,,CC BY-SA 4.0 17371,1,17411,,1/4/2020 15:50,,3,3016,"

How can I calculate the mean speed in FPS for an object detection model like YOLOv3 or YOLOv3-Tiny? Different object detection models are often presented on charts like this. I am using the DarkNet framework in my project, and I want to create similar charts for my own models based on YOLOv3. Is there some easy way to get the mean FPS speed for my model with the ""test video""?

",30992,,23713,,1/6/2020 2:48,1/8/2020 10:30,Calculation of FPS on object detection task,,2,0,,,,CC BY-SA 4.0 17373,1,,,1/4/2020 18:24,,2,42,"

How can a system recognize if two strings have the same or similar meaning?

For example, consider the following two strings

  1. Wikipedia provides good information.

  2. Wikipedia is a good source of information.

What methods are available to do this?

",32506,,2444,,1/4/2020 21:16,1/4/2020 21:16,How can a system recognize if two strings have the same or similar meaning?,,1,0,,,,CC BY-SA 4.0 17374,2,,17373,1/4/2020 18:51,,1,,"

Getting the intent of a sentence is not an easy task. To get you started on what to do, have a look at word vectors. You can also download pre-trained word2vec models. They help in measuring the similarity of words and reasoning with words. To get the intent of a sentence, you can use an LSTM.
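
For example, word similarity is usually measured with cosine similarity between the vectors; a minimal sketch (the 3-dimensional vectors below are made up, real word2vec vectors have on the order of 300 dimensions):

import numpy as np

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

v_good = np.array([0.2, 0.8, 0.1])    # hypothetical word vectors
v_great = np.array([0.25, 0.75, 0.05])
print(cosine_similarity(v_good, v_great))   # close to 1 for similar words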

Fun fact: most NLP algorithms strip away punctuation, which is sufficient for most cases, but to give a counterexample:

The defendant, who looked apologetic, was found guilty.
The defendant who looked apologetic was found guilty.

They mean different things, and it is difficult to catch the intent even with the best algorithms.

PS: For those wondering about the difference: in the second sentence, it seems like there were two defendants, and it was the one who looked apologetic who was found guilty, while the other walked away free.

",32408,,,,,1/4/2020 18:51,,,,0,,,,CC BY-SA 4.0 17375,2,,17357,1/4/2020 23:41,,0,,"

CNNs, LSTMs, GRUs and transformers are or use artificial neural networks. The expression computational intelligence (CI) is often used interchangeably with artificial intelligence (AI). CI can also refer to a subfield or superfield of AI where biology is often an inspiration. See What is Computational Intelligence and what could it become? by Włodzisław Duch.

RNNs are Turing complete and CNNs have been shown to be universal function approximators (they can approximate any continuous function to an arbitrary accuracy given a sufficiently deep architecture), but that doesn't mean we will be able to create AGI with them, unless you believe that AGI is just a bunch of algorithms, but, IMHO, that alone doesn't produce AGI. See also the computational theory of mind.

To conclude, CNNs, LSTMs, GRUs and transformers are deep learning tools (so they could also be considered CI tools, given some definitions of CI), which might be useful for the development of AGI.

",2444,,,,,1/4/2020 23:41,,,,2,,,,CC BY-SA 4.0 17376,1,,,1/5/2020 0:42,,1,42,"

I have two convex, smooth loss functions to minimise. During training (of a very simple model) using batch SGD (with a tuned optimal learning rate for each loss function), I observe that the (log) loss curve of loss 2 converges much faster and is much smoother than that of loss 1, as shown in the figure.

What more can I say about the properties of the two loss functions, for example in terms of smoothness, convexity, etc.?

",22474,,22474,,1/5/2020 0:57,1/5/2020 0:57,Deduce properties of the loss functions from the training loss curves,,0,3,,,,CC BY-SA 4.0 17377,1,,,1/5/2020 9:01,,3,25,"

I am proposing a modified version of the Sequence-to-Sequence model with dual decoders. The problem that I am trying to solve is neural machine translation into two languages at once. This is a simplified illustration of the model.

                            /--> Decoder 1 -> Language Output 1
Language Input -> Encoder -|
                            \--> Decoder 2 -> Language Output 2

What I understand about backpropagation is that we are adjusting the weights of the network to enhance the signal of the targeted output. However, it is not clear to me how to backpropagate in this network, because I am not able to find similar implementations online yet. I am thinking of doing the backpropagation twice after each training batch, like this:

$$ Decoder\ 1 \rightarrow Encoder $$ $$ Decoder\ 2 \rightarrow Encoder $$

But I am not sure whether the effect of back propagation from Decoder 2 will affect the accuracy of prediction by Decoder 1. Is this true?

In addition, is this structure feasible? If so, how do I properly back propagate in the network?

",32511,,,,,1/5/2020 9:01,How to back propagate for implementation of Sequence-to-Sequence with Multi Decoders,,0,0,,,,CC BY-SA 4.0 17378,1,17380,,1/5/2020 13:34,,1,123,"

I am trying to predict pseudo-random numbers from the past numbers with a multilayer perceptron. The error while training is very low. However, as soon as I test it with a test set, the model overfits and returns very bad results. The correlation coefficient and error metrics are both poor.

What would be some of the ways to solve this issue?

For example, if I train it with 5000 rows of data and test it with 1000, I get:

Correlation coefficient                  0.0742
Mean absolute error                      0.742 
Root mean squared error                  0.9407
Relative absolute error                146.2462 %
Root relative squared error            160.1116 %
Total Number of Instances             1000     

As mentioned, I can train it with as many training samples as I want and the model still overfits. If anyone is interested, I can provide/generate some data and post it online.

",31766,,2444,,1/6/2020 14:08,1/6/2020 19:26,Why does my model overfit on pseudo-random numbers training data?,,1,9,,,,CC BY-SA 4.0 17379,1,17391,,1/5/2020 21:10,,1,96,"

I'm evaluating the accuracy in detecting objects for my image data set using three deep learning algorithms. I have selected a sample of 30 images. To measure the accuracy, I manually count the number of objects in each image and then calculate recall and precision values for three algorithms. Following is a sample:

Finally, to select the best model for my data set, can I calculate the mean recall and mean accuracy? For example:

",32343,,,,,1/7/2020 2:45,Can we calculate mean recall and precision,,1,5,,,,CC BY-SA 4.0 17380,2,,17378,1/6/2020 2:36,,1,,"

Simply put, predicting pseudo-random numbers from a decent generator is just not feasible for now. Pseudo-random numbers generated nowadays have high enough ""randomness"" that they cannot be predicted. Pseudo-random numbers are the basis of modern cryptography, which is widely used on the world wide web and beyond. It may become possible in the future through faster computers and stronger AI, but for now it is not. If you train a model to fit pseudo-random numbers, the model will just overfit, creating the scenario shown in the question: the training loss will be very low while the test loss will be extremely high. The model just ""remembers"" the training data instead of generalising to all pseudo-random numbers, hence the high test loss.

Also, as a side note, loss is not represented by %, instead it is just a raw numeric value.

See this stack exchange answer for details.

",23713,,32408,,1/6/2020 19:26,1/6/2020 19:26,,,,1,,,,CC BY-SA 4.0 17381,2,,17371,1/6/2020 2:46,,1,,"

You can use the dataset's test set as the ""frames"" of the video. Run the images through your model and calculate the number of images processed per second; that is the same as frames per second. However, you should set the batch size to 1, as in the real-world scenario. You should also display each image with the corresponding boxes after inference and remove the accuracy calculation, to imitate the real-world situation.
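
A minimal sketch of such a measurement, assuming some function (here hypothetically called detect) wraps a single-image inference call of your model:

import time

def mean_fps(images, detect):
    start = time.perf_counter()
    for img in images:          # batch size 1, as in a real video stream
        detect(img)
    elapsed = time.perf_counter() - start
    return len(images) / elapsed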

",23713,,,,,1/6/2020 2:46,,,,0,,,,CC BY-SA 4.0 17382,1,,,1/6/2020 7:54,,3,65,"

This post refers to Fig. 1 of a paper by Microsoft on their Deep Convolutional Inverse Graphics Network:

https://www.microsoft.com/en-us/research/wp-content/uploads/2016/11/kwkt_nips2015.pdf

Having read the paper, I understand in general terms how the network functions. However, one detail has been bothering me: how does the network decoder (or ""Renderer"") generate small scale features in the correct location as defined by the graphics code? For example, when training the dataset on faces, one might train a single parameter in the graphics code to control the (x,y) location of a small freckle. Since this feature is small, it will be ""rendered"" by the last convolutional layer where the associated kernels are small. What I don't understand is how the information of the location of the freckle (in the graphics code) propagates through to the last layer, when there are many larger-scale unpooling + convolutional layers in-between.

Thanks for the help!

",32505,,23713,,1/6/2020 8:41,1/6/2020 8:41,How are small scale features represented in an Inverse Graphics Network (autoencoder)?,,1,0,,,,CC BY-SA 4.0 17383,2,,17360,1/6/2020 8:00,,2,,"

You can use the Exponential Moving Average method. This method is used in TensorBoard as a way to smoothen a loss curve plot. However, there is a small problem doing it the naive way: $S_t$ is initialized with the starting value, which makes the start of the curve inaccurate (in the plot, the green curve is the ideal curve, but the purple curve is the one actually produced; it is not correct at the start). To solve this, a correction factor is added in, which introduces a WeightedCount term that decreases over time to 0.
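
A minimal sketch of this smoothing (the standard exponential moving average with bias correction; beta is the smoothing weight, e.g. 0.9):

def ema_smooth(losses, beta=0.9):
    smoothed, s = [], 0.0
    for t, y in enumerate(losses, start=1):
        s = beta * s + (1 - beta) * y
        smoothed.append(s / (1 - beta ** t))  # bias correction fixes the early steps
    return smoothed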

Exponential Moving Average is also used in other areas of deep learning, most notably in some optimization algorithms. It is used in Adam, RMSProp and other similar optimizers to smooth out the gradients, making the path to minimal loss more direct and straightforward.

",23713,,,,,1/6/2020 8:00,,,,0,,,,CC BY-SA 4.0 17384,2,,17382,1/6/2020 8:37,,2,,"

Simply said, there is no specific ""meaning"" to the features generated. They are simply features that are fitted through math and calculus, and nobody knows exactly what they represent. However, we can run PCA (Principal Component Analysis) to see which features are the most ""important"", i.e. which features affect the output image the most. Then you can try adjusting a value manually to see and guess what it does, but you will never know exactly what it does, as it is an arbitrary feature, not one specifically set by the network. One value may mean multiple things, or things we humans don't understand. See this amazing video about this for details:

https://youtu.be/4VAkrUNLKSo

This video explains what PCA does and also shows an example of the features generated by the network.

As for small-scale features, they may simply be ignored, as they don't contribute much to the loss or accuracy, or they may be represented by a big dot or something else until the last few layers. With just 80 features, one cannot fully represent a face in such detail, and at the resolution networks like these are trained on, small features like these probably won't show up in the image.

",23713,,,,,1/6/2020 8:37,,,,2,,,,CC BY-SA 4.0 17385,1,17389,,1/6/2020 9:34,,3,386,"

I am trying to understand the Bellman equation for updating the Q-table values. The concept of initially updating the value is clear to me. What is unclear is the subsequent updates to the value. Is the value replaced in each episode? It doesn't seem like this would learn from the past. Maybe average the value from the previous episode with the existing value?

Not specifically from the book. I'm using the equation

$$V(s) = \max_a(R(s, a) + \gamma V(s')),$$

where $\gamma$ is the learning rate. $0.9$ will encourage exploration, and $0.99$ will encourage exploitation. I'm working with a simple $3 \times 4$ matrix from YouTube

",32525,,2444,,1/6/2020 14:03,1/6/2020 16:05,Is the Q value updated at every episode?,,1,0,,,,CC BY-SA 4.0 17386,1,,,1/6/2020 10:40,,3,115,"

Consider the following game on a MNIST dataset:

  1. There are 60000 images.
  2. You can pick any 1000 images and train your Neural Network without access to the rest of images.
  3. Your final result is prediction accuracy on all dataset.

How can I formalize this process in terms of information theory? I know that information theory works with distributions, but maybe you can provide some hints on how to think in terms of datasets instead of distributions.

  1. What is the information content of the whole dataset? My first idea was that each image is iid from a uniform distribution, so the information content of one image is $-\log_2(1/60000)$. But common sense and empirical results (training a neural network) show that there are similar images and very different images holding a lot more information. For example, if you train a NN only on good-looking images of 1, you will get bad results on unusual 1s.
  2. How can I formalize the intuition that the right strategy is to choose 1000 images that are as different as possible? I was thinking of adding image by image, each time taking the image with the highest entropy relative to the images I already have. How should the distance function be defined?
  3. How can I show that the whole dataset contains N bits of information, the training dataset contains M bits of information, and that there is a way to choose K < 60000 images that hold >99.9% of the information?
",32526,,,,,1/6/2020 15:11,How to formalize learning in terms of information theory?,,1,0,,,,CC BY-SA 4.0 17388,2,,17386,1/6/2020 15:03,,2,,"

In short: It is easy to quantify information, but it is not easy to quantify its usefulness

I'm not sure how exactly you are looking to formalise your experiment, but it might be helpful to consider these points:

  1. There is no such thing as an absolute measure of information. The amount of information contained in some dataset is dependent on the underlying assumptions that are made when interpreting it, and therefore, the quantity of information conveyed is also dependent on the encoder/decoder (for example, a neural network). See the Wikipedia article on Kolmogorov Complexity.

  2. Entropy is a useful measure of information content when you assume each sample is iid, but this would be a very bad assumption to make for natural images, since they are highly structured. For example, imagine an image with 50% black pixels and 50% white pixels that can be arranged in any configuration - no matter how you arrange them, whether it looks like random noise, a text paragraph, or a chequer board, the entropy value will be identical for each, even though our intuition tells us otherwise (see the attached image and the sketch after this list). The discrepancy between our intuition and the entropy value arises because our intuition does not interpret the image through the ""lens"" of iid pixels, but rather, hierarchical receptive fields in the visual cortex (somewhat analogous to convolutional neural networks).

  3. Calculating the entropy of pixel values in one image is somewhat useful, but calculating the ""entropy"" of a set of images would not be useful, because each image as a whole is treated as if it were a unique arbitrary symbol. I assume this is what you meant by ""the information size of all datasets""

  4. KL-divergence is a distance function that is often used to compare two distributions. Intuitively, it represents the redundant bits generated by a non-ideal compression program that assumes an incorrect data distribution. However, KL-divergence between two natural images will not give you a particularly meaningful result.
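Here is a small sketch (assuming iid pixel values) that makes point 2 concrete: a random binary image and a chequer board get essentially the same entropy, even though they look completely different.

import numpy as np

def pixel_entropy(img):
    # Shannon entropy of the pixel-value histogram, in bits per pixel
    _, counts = np.unique(img, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
noise = rng.integers(0, 2, size=(32, 32))        # roughly half black, half white
checker = np.indices((32, 32)).sum(axis=0) % 2   # chequer board pattern
print(pixel_entropy(noise), pixel_entropy(checker))  # both close to 1 bit per pixel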

If I am not mistaken, you want to find some information metric that will enable you to pick the smallest number of the most optimal images for training and get a good test score with the network. Is that correct? It is an interesting idea. However, in order to define such a metric, we might have to know in advance what features of an image are the most significant for classification, which in some ways defeats the point of using machine learning in the first place, where non-obvious and ""hidden"" features are exploited.

",32505,,32505,,1/6/2020 15:11,1/6/2020 15:11,,,,4,,,,CC BY-SA 4.0 17389,2,,17385,1/6/2020 16:05,,3,,"

I think you are a bit confused about what is the update function and the target.

The equation you have there, and what is done in the video is the estimation of the true value of a certain state. In Temporal-Difference algorithms this is called the TD-Target.

The reason for your confusion might be that in the video he starts from the end state and goes backwards using that formula to get the final value of each state. But that is not how you update the values, that is where you want to get to at the end of iterating through the states.

The update formula may have several forms depending on the algorithm. For TD(0), which is a simple 1-step look ahead off-policy where what is being evaluated is the state (as in your case), the update function is:

$$ V(s) = (1 - \alpha) * V(s) + \alpha * (R(s,a) + \gamma V(s')), $$ where $\alpha$ is the learning rate. What $\alpha$ does is balance how much of your current estimate you want to change. You keep $1 - \alpha$ of the original value and add $\alpha$ times the TD-target, which uses the reward for the current state plus the discounted estimate of the value of the next state. Typical values for $\alpha$ are 0.1 to 0.3, for example.
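As a minimal sketch, one such TD(0) update could look like this in Python (V is assumed to be a table mapping states to value estimates):

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    # One TD(0) update of the state-value table V after observing (s, r, s_next)
    td_target = r + gamma * V[s_next]
    V[s] = (1 - alpha) * V[s] + alpha * td_target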

The estimate will slowly converge to the real value of the state, which is given by your equation: $$ V(s) = \max_a(R(s, a) + \gamma V(s')). $$

Also, $\gamma$ is actually the discount associated with future states, as is said in the video you referenced. It basically says how much importance you give to the rewards of future states. If $\gamma = 0$, then you only care about the reward in your current state to evaluate it (this is not what is used). On the other extreme, if $\gamma = 1$ you will give as much value to a reward received in a state 5 steps ahead as you will to the current state. If you use some intermediate value you will give some importance to future rewards, but not as much as to the present one. The decay on a reward received in a state $n$ steps in the future is given by $\gamma^n$.

Another thing that I would correct is that the exploration - exploitation balance is not in any way related to $\gamma$. It is normally handled by some policy, for example $\epsilon$-greedy. This one, for example, says that a certain percentage of the actions you take are random, which in turn makes you explore less-valued states.

",24054,,,,,1/6/2020 16:05,,,,0,,,,CC BY-SA 4.0 17390,1,,,1/6/2020 16:15,,1,108,"

I am using the shapenet dataset. From this dataset, I have 3d models in .obj format. I rendered the images of these 3d models using pyrender library which gives me an image like this :

Now I am using raycasting to voxelize this image. The voxel model I get is something like below :

I am not able to understand why I am getting the white or light brown colored artifacts in the boundary of the object.

The only reason I could come up with is that the pixels at the boundary of the object contain a mix of two colors, so when I traverse the image as a numpy array, I get an average of these two colors, which produces these artifacts. But I am not sure if this is the correct reason.

If anyone has any idea about what could be the reason, please let me know

",32534,,,,,1/6/2020 16:15,Rendering images and voxelizing the images,,0,6,,,,CC BY-SA 4.0 17391,2,,17379,1/6/2020 16:26,,3,,"

For the precision metric for example you have:

$$ Precision = \frac{TP}{TP+FP}, $$ with TP = True Positive and FP = False Positive.

Imagine you have the following values:
Image 1: $TP = 2, FP = 3$
Image 2: $TP = 1, FP = 4$
Image 3: $TP = 3, FP = 0$

The precision scores as you calculated will be:
Image 1: $2/5$
Image 2: $1/5$
Image 3: $1$
Your average will be: $0.533$

On the other hand if you sum them all up and then calculate the precision value you get:

$P = \frac{6}{6+7} = 0.462$

This proves that averaging the precision scores is not the same as calculating the total precision in one go.

Since what you want is to know how precise your algorithm is, independently of the precision for each image, you should sum all the TP and FP over all images and only then calculate the precision for each model. This way you will not have a biased average: averaging the per-image scores would give the same weight to an image with a large number of objects as to another image with far fewer objects.
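To make the two aggregation schemes concrete (often called macro- and micro-averaging), here is the same calculation in code, using the numbers from the example:

tp = [2, 1, 3]  # per-image true positives
fp = [3, 4, 0]  # per-image false positives

macro = sum(t / (t + f) for t, f in zip(tp, fp)) / len(tp)  # 0.533...
micro = sum(tp) / (sum(tp) + sum(fp))                       # 6 / 13 = 0.4615...
print(macro, micro)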

",24054,,24054,,1/7/2020 2:45,1/7/2020 2:45,,,,2,,,,CC BY-SA 4.0 17393,1,17397,,1/6/2020 18:56,,3,107,"

I'm trying to understand distributional RL, based on this article. In one of the equations, there is a symbol $\operatorname{sup dist}$.

\begin{align} \operatorname{sup dist}_{s, a} (R(s, a) + \gamma Z(s', a^*), Z(s, a)) \\ s' \sim p(\cdot \mid s, a) \end{align}

What does $\operatorname{sup dist}$ mean?

",32540,,2444,,1/7/2020 11:45,1/7/2020 11:45,What does the notation sup dist mean in distributional RL?,,1,0,,,,CC BY-SA 4.0 17394,1,,,1/6/2020 19:03,,3,124,"

So we think a computer is dumb because it can only follow instructions. Therefore, I am trying to create an AI that can give instructions.

The idea is this: Create a geometric scene (A) then make a change in scene such as turning a square red or moving a circle right one unit. This becomes the new scene B. Then the computer compares the scenes A and B and it's goal is to give the shortest possible instruction that will change scene A to scene B. Examples might be:

""Turn the green square red"".

or

""Move the yellow square down"".

Or when we get more advanced we might have:

""Move the green square below the leftmost purple square down.""

Equally, this task could be seen as finding a description of the change. e.g. ""The green square has turned red"".

The way it would work is that there'd be a simplified English parser, and the computer would generate a number of phrases and check whether these achieved the desired result.

I would probably give it some prior knowledge of things like colours, shape-names, and so on. Or it could learn these by example.

Eventually, I would hope it could generate more complicated loop-type expressions such as ""Move the square left until it reaches the purple circle."" These would essentially be little algorithms the AI has generated in words.

I've got some ideas how to do this. But do you know any similar projects that I could look at? If you were implementing this, how would you go about it?

[In other words we have an English parser that is interpreted to change a scene A into a scene B. But we want the AI to learn, given scenes A and B, how to generate instructions.]

",4199,,,,,1/6/2020 19:03,Creating an AI than can learn to give instructions,,0,0,,,,CC BY-SA 4.0 17397,2,,17393,1/6/2020 20:25,,3,,"

It doesn't seem that it is a ""proper"" symbol.

I guess that $\sup$ simply refers to the supremum, that is, you want to select actions that maximize the quantity that comes to the right of $\sup$, while $\text{dist}$ is simply a proxy for any possible distance between distributions. For example, you can replace $\text{dist}$ with the Kullback-Leibler divergence or with the mutual information.

",30983,,,,,1/6/2020 20:25,,,,0,,,,CC BY-SA 4.0 17398,1,17422,,1/6/2020 21:08,,2,101,"

I am quite new in the AI field. I am trying to create a neural network, in a language (Dart) where I couldn't find examples or premade libraries or tutorials. I've tried looking online for a strictly ""vanilla"" python implementation (without third-party libraries), but I couldn't find any.

I've found a single layer implementation, but it's done only with matrices and it's quite cryptic for a beginner.

I've understood the idea behind feed-forwarding: a neuron calculates the weighted sum of its inputs, adds a bias and applies an activation function to it.

But I couldn't find a neuron-level explanation of the math behind backpropagation. (By neuron-level I mean the math down to the single neuron, as a sequence of operations, instead of multiple neurons treated as matrices.)

What is the math behind it? Are there any resources to learn it that are suitable as a beginner?

",32545,,2444,,1/7/2020 11:42,1/9/2020 1:01,What is the neuron-level math behind backpropagation for a neural network?,,1,5,,,,CC BY-SA 4.0 17399,1,17400,,1/7/2020 7:33,,3,47,"

Good day everyone.

I am curious if it is possible for an AI to plot a time-series graph based on a single input. Using free fall impact as an example.

Assuming we drop a ball from height 100m and record the force it receives relative to time. We will get a graph that looks something like below.

Now we drop the ball from a height of 120m, record the forces, and we get another graph in addition to our original.

What I am wondering is: If we have a large set of data on 60m to 140m (20m interval) height drops, would we be able to generate a regression model that plots the responses when given an arbitrary drop height? (i.e plot force response when dropped from 105m)

Thank you all very much for your time and attention.

",32551,,,,,1/7/2020 9:10,"Given enough graphical data, could you train an AI to plot a polynomial graph based on the input conditions?",,1,0,,,,CC BY-SA 4.0 17400,2,,17399,1/7/2020 9:10,,2,,"

Yes this is possible, using any machine learning approach that supports regression. You have two main approaches:

  • Input $h$ the height of the drop, multiple outputs, one per time offset that you want to plot. Each individual output calculates the predicted force at a specific offset time.

  • Inputs $h$ the height of the drop and $t$ a time offset, one output. The single output calculates the predicted force due to given height and at given time.

The main thing to bear in mind is that statistical learning techniques typically do not generate physics-like models. Test inputs close to training examples should generate reasonable graphs that interpolate between those from training data. Test inputs far away from the training examples (e.g. you train on data of drops from 60m to 140m, but use an input of 10m or 200m) will likely generate wildly incorrect outputs. The main exception to this is if your ML model includes some good guesses at the underlying physics model, in which case it is possible that a regression algorithm will tune the parameters of that model plus filter out terms that should not be part of the model, resulting in a system that extrapolates much better. That is very unlikely to happen by chance; it requires up-front design.
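As a minimal sketch of the second approach listed above (all numbers are placeholders, not real measurements, and the model choice is just an example):

import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training data: each row is (drop_height_m, time_s), target is the measured force
X_train = np.array([[60.0, 0.1], [60.0, 0.2], [140.0, 0.1], [140.0, 0.2]])
y_train = np.array([0.0, 0.0, 0.0, 12.5])

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000)
model.fit(X_train, y_train)

# Predict the force curve for an unseen 105 m drop
times = np.linspace(0.0, 5.0, 200)
X_query = np.column_stack([np.full_like(times, 105.0), times])
predicted_forces = model.predict(X_query)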

",1847,,,,,1/7/2020 9:10,,,,1,,,,CC BY-SA 4.0 17401,1,,,1/7/2020 11:12,,2,84,"

The light field of a certain scene is the set of all light rays that travel through the volume of that scene at a specific point in time. A light field camera, for example, captures and stores a subset of the light field of a scene.

I've got an unstructured subsampling of such a scene (a few billion rays, each having a direction and light intensity information).

What I wish to do now is to create an approximation of the original scene that created this light field, with the approximation consisting of 3 arbitrarily positioned (alpha-)textured 2D planes (in 3D space), where each point on the surface radiates light uniformly in all directions based on the pixel color at that position.

So, I guess, this is like finding regions in the volume where similarly 'colored' rays intersect, such that the planes maximize the number of intersections they can cover.

So, the available data is the few billions of rays, the desired output is the parameters(position, normal and size) of the three planes plus one RGBA texture for each.

I'm asking here about experiences and opinions: Is this problem rather well-suited for a machine learning approach or rather not?

Edit:

A classical algorithm I could think of to solve this would be to voxelize the volume and use pathtracing to add a color sample for each ray to all cells along its way, then give each cell some value based on how similar all its contained samples are and then search for planar surfaces that intersect as many high rated cells as possible.

But maybe machine learning is better suited for such a problem?

",32111,,2444,,12/12/2021 12:57,12/12/2021 12:57,Is this a problem well suited for machine learning?,,0,3,,,,CC BY-SA 4.0 17402,2,,17369,1/7/2020 15:10,,0,,"

A possible approach would be to start by grouping the actions into all possible groups. For 30 actions grouped into at most 6 per operation this would mean: $$ C^{30}_6 + C^{30}_5 + C^{30}_4 + C^{30}_3 + C^{30}_2 + C^{30}_1 = 768211 $$ possible operations.
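This count is easy to verify with a few lines of Python:

from math import comb

total = sum(comb(30, k) for k in range(1, 7))
print(total)  # 768211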

Then, I would define the constraints to simplify the problem, as in a Constraint Satisfaction Problem (CSP). The main constraint that you have is that one action can only be represented in one operation. So, if you choose 1 of the 768k operations to start with, the second choice will be restricted by this condition.

Then I would do some sort of planning and sum the costs until the end, doing a depth-first search. Whenever I find a total cost better than the previous best, I would update the best value. (Remember that the search ends when, across all the operations used, every action has been covered exactly once.)

While doing the search the costs are added up on the go. You can precompute a vector with the same size as the number of operations where each value corresponds to the total cost of the operation, and then sum it as you are going down the path.

You should also add some heuristic to prevent checking the same operations in different orders. For example, doing operation [2,3,4] followed by [7,10], where each number represents an action, is the same as doing [7,10] followed by [2,3,4].

",24054,,,,,1/7/2020 15:10,,,,0,,,,CC BY-SA 4.0 17403,1,,,1/7/2020 15:32,,4,123,"

For example, I have a paragraph that I want to classify in a binary manner. But because the inputs have to have a fixed length, I need to ensure that every paragraph is represented by a uniform quantity.

One thing I've done is taken every word in the paragraph, vectorized it using GloVe word2vec, and then summed up all of the vectors to create a "paragraph" vector, which I've then fed in as an input for my model. In doing so, have I destroyed any meaning the words might have possessed?

Considering these two sentences would have the same vector:

My dog bit Dave

Dave bit my dog

How do I get around this? Am I approaching this wrong?

What other way can I train my model? If I take every word and feed that into my model, how do I know how many words I should take? How do I input these words? In the form of a 2D array, where each word vector is a column?

I want to be able to train a model that can classify text accurately. Surprisingly, I'm getting a high accuracy (>90%) for a relatively simple model like RandomForestClassifier just by using this summing-up method. Any insights?

",32490,,2444,,10/9/2021 14:06,10/9/2021 14:06,Does summing up word vectors destroy their meaning?,,2,0,,,,CC BY-SA 4.0 17404,2,,17403,1/7/2020 15:48,,2,,"

Summing up a sequence of word vectors is sometimes used in practice. However, the operation of addition is non-reversible, meaning that once you sum up a few numbers, you cannot get the original numbers back. Still, summing up a sequence of word vectors may work, depending on your task. You should also normalize the values, or just use the average value.

For details: https://towardsdatascience.com/document-embedding-techniques-fed3e7a6a25d#ecd3

To feed data with different lengths, you can also try padding and trimming. Set a constant L for the paragraph length and trim/pad every list of word vectors to this length. Padding adds zero vectors to the beginning of the list, and trimming cuts off the first part of a text until its length equals L. Even with LSTM networks, padding and trimming are still used: even though you can feed a text of any length to an LSTM, you still have to process the word vectors in batches, which requires them to have the same length.

Example code in Python for padding/trimming a list of word vectors:

def pad_trim(list_vec, L):
    # list_vec: [vec1, vec2, vec3, ...], where each vecN is assumed to have size 200
    # keep the last L vectors if the list is too long, otherwise left-pad with zero vectors
    if len(list_vec) > L:
        return list_vec[-L:]
    return [[0] * 200] * (L - len(list_vec)) + list_vec

However, at inference time you can ignore the maximum length if you used an RNN-based method, although, since the network has not been trained on lengths greater than L, it may perform better or worse.

Generally speaking, you should go for concatenating if possible, so you can keep all information in the sentence. However both may work just fine depending on your task.

For RNN-based and CNN-based model examples, you should check this out: https://medium.com/jatana/report-on-text-classification-using-cnn-rnn-han-f0e887214d5f

",23713,,23713,,1/7/2020 23:33,1/7/2020 23:33,,,,1,,,,CC BY-SA 4.0 17405,2,,17217,1/7/2020 16:23,,0,,"

I ran a lot of randomly created networks to solve this problem, but none of the structures were able to reliably ""solve"" it.

Of course, some of them were able to solve it once, some of them even twice, but there was only one which solved it 3 times:

  • LearningRate: 0.510141694690167
  • Momentum: 0.962972165068133
  • Layer/Neuron-Count: 2 (14, 9)
  • SigmundAlphaValue: 2
",32255,,,,,1/7/2020 16:23,,,,0,,,,CC BY-SA 4.0 17406,1,,,1/7/2020 20:27,,2,276,"

I'm working on a genetic algorithm with a constraint on the sum of the alleles, e.g. if we use regular binary coding and a chromosome is 6 bits long, I'd like to constrain it so that the sum of the bits has to be 3 or less (011100 is valid but 011110 is not). Moreover, the fitness function is such that invalid chromosomes cannot be evaluated.

Any ideas on how this problem could be approached?

I've started looking into the direction of messy GAs (since those can be over-specified) but I'm not sure if there's anything there.

",32573,,2444,,1/7/2020 21:57,1/8/2020 9:13,How can I develop a genetic algorithm with a constraint on the sum of alleles?,,1,0,,,,CC BY-SA 4.0 17407,1,17408,,1/7/2020 22:47,,2,230,"

I followed the videos/slides of the Berkeley RL course, but now I am a bit confused when implementing it. Please see the picture below.

In particular, what does $i$ represent in the REINFORCE algorithm? If $\tau^i$ is the trajectory for the whole episode $i$, then why don't we average across the episodes $\frac{1}{N}$, which approximates the gradient of the objective function? Instead, it is a sum over the $i$. So, do we update the gradients per episode or have batches of episodes to update it? When I compare the algorithm to Sutton's book as shown below, I see that there we update the gradients per episode.

But wouldn't it then contradict the derivation on the Levine's slide that the gradient of the objective function $J$ is the expectation (therefore sampling) of the gradients of the logs?

Secondly, why do we have a cumulative sum of the returns over $T$ in Sutton's version, but do not do it in Levine's (instead, all returns are summed together)?

",2254,,2444,,1/8/2020 20:54,1/8/2020 20:54,What is the difference between Sutton's and Levine's REINFORCE algorithm?,,1,0,,,,CC BY-SA 4.0 17408,2,,17407,1/8/2020 2:47,,2,,"

About the first question, you are right. The $i$ denotes a sample trajectory corresponding to a whole episode. However, Sutton's version is exactly the same one as Levine's if you choose $N=1$.

About the second question, the Policy Gradient theorem only tells you what is the gradient up to a constant, so basically any constant is irrelevant. Now, even if you do know the constant, you are going to multiply the gradients by an arbitrary learning rate $\alpha$. So, you can think that the factor $\frac{1}{N}$ is actually already considered ""inside"" $\alpha$.

",30983,,30983,,1/8/2020 3:02,1/8/2020 3:02,,,,3,,,,CC BY-SA 4.0 17409,2,,17406,1/8/2020 9:13,,1,,"

There are multiple ways to handle 'illegal' individuals, each one with pros and cons:

  • Abortive methods: The individuals that violate constraints are eliminated as soon as discovered (i.e. after crossover or mutation) and new individuals are generated in order to keep the population stable. This usually implies a slower creation of new generations, as some individuals are discarded.

  • Contraceptive methods: The crossover and mutation are written in such a way to make it impossible for a newly generated individual to violate any constraints. This way to act is usually more efficient, as no individuals are discarded, but it might not be possible.

  • Penalization function: The fitness function gives a huge penalization to the individuals that violate constraints (usually proportionally to the constraints it violates). In this way they usually do not get to reproduce and their genes are eventually lost. You'd go this way if illegal individuals are very rare and there is no possibility for them to take over the full population (causing the algorithm to fail).

By experience, try to act on the crossover algorithm first (contraceptive way), it is the best way to exclude constraints violations. If this is not possible, pick one of the other two methods, depending on how often the illegal individuals are generated (not so often -> penalization, very often -> abortive) and on how easy it is to penalize individuals that violate constraints (easy -> penalization, not easy -> abortive).


A contraceptive way to handle your example is writing the crossover as following:

bool[] crossover(bool[] p1, bool[] p2)
{
    // classical binary crossover: copy p2 and overwrite a random section with genes from p1
    bool[] child = p2.copy();
    child[rndSection] = p1[rndSection];

    // contraceptive part: flip random 'true' bits until the constraint (at most 3) holds
    while (CountTrue(child) > 3) child[RandomIndexOfTrue(child)] = false;

    return child;
}

But this is a fairly complex problem and each scenario has its own specifications.

",15530,,,,,1/8/2020 9:13,,,,0,,,,CC BY-SA 4.0 17410,1,,,1/8/2020 10:15,,1,73,"

I'm trying to understand Multiple Imputation with Chained Equation (MICE) imputation process (a statistical method for imputing missing data). I have read some articles and I have understood how the imputation happens, but I didn't get the "pooling" step.

After analyzing the resulting datasets with Rubin's rules, how to pool these datasets? How to get only one dataset?

In the end, do I combine all these datasets? If yes, how? Or do I compare every dataset's estimators with Rubin's estimators and choose one dataset?

",32560,,2444,,3/3/2021 9:51,11/23/2022 15:03,How exactly does MICE imputation combine multiple datasets into one?,,1,1,,,,CC BY-SA 4.0 17411,2,,17371,1/8/2020 10:30,,0,,"

@Clement Hui

Thanks for your answer. I asked AlexeyAB from Darknet the same question and he has now added a flag to Darknet for this type of model speed measurement: https://github.com/AlexeyAB/darknet/issues/4627

I added a -benchmark flag for the detector demo (2652263), so now you can use the command

./darknet detector demo obj.data yolo.cfg yolo.weights test.mp4 -benchmark

But for very fast models the bottleneck will be in the Video Capturing from file/camera, or in Video Showing (you can disable showing by using the -dont_show flag).

I think that this is the best solution; you only need the newest version of Darknet (from AlexeyAB).

",30992,,-1,,6/17/2020 9:57,1/8/2020 10:30,,,,0,,,,CC BY-SA 4.0 17417,2,,9022,1/8/2020 16:32,,0,,"

I am watching the same course too, and I think that in the example graph the cost function is not a sum of squared errors (MSE), but could be a cubic one, i.e. a sum of cubed errors, and thus the cost function could be negative. As there is a variety of cost functions, the MSE ones are not suited to every problem, and other formulations could work better.

",32594,,23713,,1/9/2020 8:13,1/9/2020 8:13,,,,0,,,,CC BY-SA 4.0 17418,1,,,1/8/2020 17:02,,2,38,"

I'm working on a project (court-related). At a certain point, I have to extract the reason of the legal compensation. For instance, let's take these sentences (from a court report)

Order mister X to pay EUR 5000 for compensation for unpaid wages

and

To cover damages, mister X must pay EUR 4000 to mister Y

I want to make an algorithm that is able from this sentence to extract the motive of legal compensation. For the first sentence

Order mister X to pay EUR 5000 for compensation for unpaid wages

the algorithm's output must be ""compensation for unpaid wages"" or ""compensation unpaid wages "".

For the second sentence, the algorithm's output must be ""cover damages"". The output can be a string or a list of strings, it doesn't matter.

As I'm not an NLP expert (but I have already worked on a project on sentiment analysis, so I know some stuff about NLP), and there are so many articles, I don't know where to start.

I'm working on French texts, but I can get away with working on English texts.

",32597,,2444,,1/8/2020 20:59,1/8/2020 20:59,How can I extract the reason of the legal compensation from a court report?,,0,1,,,,CC BY-SA 4.0 17421,1,,,1/8/2020 22:01,,5,436,"

As an experiment, I have tried using an autoencoder to encode height data from the Alps; however, the decoded image is very pixellated after training for several hours, as shown in the image below. This repeating pattern is larger than the final kernel size, so I would think it would be possible to remove these repeating patterns from the image to some extent.

The image is (1, 512, 512) and is sampled down to (16, 32, 32). This is done with pytorch. Here is the relevant sample of the code in which the exact layers are shown.

        self.encoder = nn.Sequential(
                # Input is (N, 1, 512, 512)
                nn.Conv2d(1, 16, 3, padding=1), # Shape (N, 16, 512, 512)
                nn.Tanh(),
                nn.MaxPool2d(2, stride=2), # Shape (N, 16, 256, 256)
                nn.Conv2d(16, 32, 3, padding=1), # Shape (N, 32, 256, 256)
                nn.Tanh(),
                nn.MaxPool2d(2, stride=2), # Shape (N, 32, 128, 128)
                nn.Conv2d(32, 32, 3, padding=1), # Shape (N, 32, 128, 128)
                nn.Tanh(),
                nn.MaxPool2d(2, stride=2), # Shape (N, 32, 64, 64)
                nn.Conv2d(32, 16, 3, padding=1), # Shape (N, 16, 64, 64)
                nn.Tanh(),
                nn.MaxPool2d(2, stride=2) # Shape (N, 16, 32, 32)
            )
        self.decoder = nn.Sequential(
                # Transpose convolution operator
                nn.ConvTranspose2d(16, 32, 4, stride=2, padding=1), # Shape (N, 32, 64, 64)
                nn.Tanh(),
                nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1), # Shape (N, 32, 128, 128)
                nn.Tanh(),
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), # Shape (N, 16, 256, 256)
                nn.Tanh(),
                nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), # Shape (N, 1, 512, 512)  
                nn.ReLU()
            )

Relevant image: left side original, right side result from autoencoder

So could these pixellated effects in the above image be resolved?

",10364,,,,,7/1/2020 0:31,Autoencoder produces repeated artifacts after convergence,,1,2,0,,,CC BY-SA 4.0 17422,2,,17398,1/9/2020 1:01,,1,,"

Backpropagation is actually a lot easier than it is made out to be, if you have a basic understanding of calculus and the chain rule, plus the single multi-variable calculus rule that, to combine two gradient vectors, you simply add them.

This is hands down the best walk-through of backprop I've found on the internet. If you are still confused after that, feel free to ask me any further questions. Here is also a quick forward and backward pass example I made for a simple CNN (only a few layers though, and the gradient only goes back to channel 1 of filter 1).
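If it helps, here is a minimal single-neuron forward and backward pass in Python (just an illustrative sketch, not the CNN example mentioned above):

import numpy as np

x = np.array([0.5, -1.0])   # inputs
w = np.array([0.1, 0.2])    # weights
b, y, lr = 0.0, 1.0, 0.1    # bias, target, learning rate

def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))

# forward pass
z = np.dot(w, x) + b
a = sigmoid(z)
loss = 0.5 * (a - y) ** 2

# backward pass: one chain-rule factor per operation
dloss_da = a - y          # derivative of 0.5*(a-y)^2 w.r.t. a
da_dz = a * (1 - a)       # derivative of the sigmoid
grad_w = dloss_da * da_dz * x
grad_b = dloss_da * da_dz

# gradient-descent step
w -= lr * grad_w
b -= lr * grad_b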

",26726,,,,,1/9/2020 1:01,,,,1,,,,CC BY-SA 4.0 17423,1,,,1/9/2020 3:41,,3,120,"

I have built a CNN-LSTM neural network with 2 inputs and 2 outputs in Keras. I trained the network with model.fit_generator() (and not model.fit()), to load just parts of the training data when needed, because the training data is too large to load at once.

After training, the model was not working, so I checked the training data (before and after augmentation). The training data are correct. So I thought the reason the model does not work must be that I have not found the optimal hyperparameters yet.

But how can I do hyperparameter optimization on a network with multiple inputs and outputs and trained with model.fit_generator()? All I can find online is hyperparameter optimization of networks with a single input and single output and trained with model.fit().

",32606,,2444,,1/9/2020 14:22,1/9/2020 14:22,How can I do hyperparameter optimization for a CNN-LSTM neural network?,,0,1,,,,CC BY-SA 4.0 17424,1,17430,,1/9/2020 6:08,,8,5508,"

I have a data set that was split using a fixed random seed and I am going to use 80% of the data for training and the rest for validation.

Here are my GPU and batch size configurations

  • use 64 batch size with one GTX 1080Ti
  • use 128 batch size with two GTX 1080Ti
  • use 256 batch size with four GTX 1080Ti

All other hyper-parameters such as lr, opt, loss, etc., are fixed. Notice the linearity between the batch size and the number of GPUs.

Will I get the same accuracy for those three experiments? Why and why not?

",31870,,2444,,12/27/2021 8:51,12/27/2021 9:01,Effect of batch size and number of GPUs on model accuracy,,2,1,,,,CC BY-SA 4.0 17425,1,,,1/9/2020 6:09,,1,25,"

I would like to create a GloVe word embedding on a very large corpus (trillions of words). However, creating the co-occurrence matrix with the GloVe cooccur script is projected to take weeks. Is there any way to parallelize the process of creating a co-occurrence matrix, either using GloVe or another resource that is out there?

",31294,,,,,1/9/2020 6:09,Is there a way to parallelize GloVe cooccur function?,,0,0,,,,CC BY-SA 4.0 17426,1,,,1/9/2020 7:33,,0,82,"

I am trying to extract product information from email receipts HTML. Most services I have found focus on OCR from paper receipts or PDFs. I would imagine that extraction of product information would be easier from structured HTML. What type of AI approach would be used to support this?

",32611,,,,,1/9/2020 10:04,Extract product information from email receipt HTML,,1,2,,4/14/2022 16:16,,CC BY-SA 4.0 17427,2,,17403,1/9/2020 9:03,,1,,"

But because the inputs have to have a fixed length

Do they? Why? The go-to strategy would be to use an RNN (possibly with LSTM or GRUs, but probably not necessary) and train it to process the input sequentially and output the final classification of the paragraph. This has the advantage of being able to take into account word order and word combinations, as well as processing variable-size inputs.
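A minimal sketch of what such a model could look like (assuming PyTorch and pre-computed word vectors; the layer sizes are arbitrary):

import torch
import torch.nn as nn

class ParagraphClassifier(nn.Module):
    # LSTM classifier over a variable-length sequence of word vectors
    def __init__(self, embed_dim=300, hidden_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, word_vectors):              # shape (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(word_vectors)
        return torch.sigmoid(self.out(h_n[-1]))   # binary classification score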

Intuitively, I would think simply adding word vectors will include a lot of noise from commonly occurring words that don't provide much meaningful information for the classification. I would consider Bayesian methods or dimensionality reduction methods to limit the input to the more useful input vectors.

",29720,,,,,1/9/2020 9:03,,,,1,,,,CC BY-SA 4.0 17428,2,,17424,1/9/2020 9:07,,3,,"

No. Different batch sizes mean different gradients are calculated at each step (check the concept of stochastic gradient descent to see how the loss is calculated), and thus gradient descent will likely end up in different places in parameter space.

In addition, how this is actually parallelized might make a difference, including the order of operations and converting between FP precision.

Additional resource to check: issue of multi GPUs

",29720,,15368,,1/9/2020 15:21,1/9/2020 15:21,,,,0,,,,CC BY-SA 4.0 17429,2,,17426,1/9/2020 10:04,,1,,"

It depends on the data. If it is structured like form data, then you might not need AI at all — simple regular expression patterns might be fine. This would apply for example to address data. If you have the word street followed by a colon, followed by some text, it seems fairly obvious that this is the name of a street, and possibly also a house number.

If, however, you have free text, eg the answer to ""describe any medical conditions you have"", or ""which companies have you worked at before?"", then you might need to look at named entity extraction (NER) to identify names of medical conditions or companies.

So, some Natural Language Processing (NLP) and information extraction might be required apart from simple pattern matching.

",2193,,,,,1/9/2020 10:04,,,,0,,,,CC BY-SA 4.0 17430,2,,17424,1/9/2020 11:31,,11,,"

This should make a difference, but how big the difference is depends heavily on your task. Generally speaking, a smaller batch size will have a lower speed if counted in samples/minute, but a higher speed in batches/minute. If the batch size is too small, the samples/minute throughput will be very low, decreasing training speed severely. In addition, a batch size that is too small (for example 1) will make the model hard to generalize and also slower to converge.

This slide (source) is a great demonstration of how batch size affects training.

As you can see from the diagram, when you have a small batch size, the route to convergence will be ragged and not direct. This is because the model may train on an outlier and have its performance decrease before fitting again. Of course, this is an edge case and you would never train a model with a batch size of 1.

On the other hand, with a batch size too large, your model will take too long per iteration. With at least a decent batch size (like 16+) the number of iterations needed to train the model is similar, so a larger batch size is not going to help a lot. The performance is not going to vary a lot.

In your case, the batch size will make a difference to accuracy, but only minimally. Whilst writing this answer, I have run a few tests on the effect of batch size on performance and time, and here are the results. (Results to be added for batch size 1.)

Batch size 256: time required 98.50849771499634s, accuracy 0.9414
Batch size 128: time required 108.53689193725586s, accuracy 0.9668
Batch size 64: time required 129.92272853851318s, accuracy 0.9776
Batch size 32: time required 162.13709354400635s, accuracy 0.9844
Batch size 16: time required 224.82269191741943s, accuracy 0.9854
Batch size 8: time required 351.2729814052582s, accuracy 0.9861
Batch size 4: time required 514.2667407989502s, accuracy 0.9862
Batch size 2: time required 829.1623721122742s, accuracy 0.9869

You can test out yourself in this Google Colab.

As you can see, the accuracy increases as the batch size decreases. This is expected, because a higher batch size means the model is trained for fewer iterations: 2x the batch size = half the iterations. The time required rises sharply as the batch size shrinks, although a batch size of 32 or below doesn't seem to make a large difference in the time taken. The accuracy behaviour is as expected, since halving the number of iterations goes together with doubling the batch size.

In your case, I would actually recommend you stick with a batch size of 64 even for 4 GPUs. In the case of multiple GPUs, the rule of thumb is to use a batch size of at least 16 (or so) per GPU, given that, with a batch size of 4 or 8 per GPU, the GPUs cannot be fully utilized to train the model.

For multiple GPU, there might be a slight difference due to precision error. Please, see here.

Conclusion

The batch size doesn't matter much to performance, as long as you set a reasonable batch size (16+) and keep the number of iterations (not epochs) the same. However, training time will be affected. For multi-GPU training, you should use the smallest batch size per GPU that still utilizes 100% of that GPU; 16 per GPU is quite good.

",23713,,2444,,12/27/2021 9:01,12/27/2021 9:01,,,,3,,,,CC BY-SA 4.0 17431,1,,,1/9/2020 13:43,,4,44,"

Set is a card game and is Nicely described here.

Each set-card has 4 properties:

  1. The number(1,2 or 3)
  2. the color (Red, Green or Purple)
  3. Fill (Full, Stripes, None)
  4. Form (Wave, Oval or Diamond)

converts to 2 Purple Waves No fill (code: 2PWN)

and

convert to codes 1RON and 3GDN

For every combination there is one card, so in total there are 3^4 = 81 cards. The goal is to identify 3 cards (a set) out of a collection of 12 displayed, randomly chosen set cards, where each property value occurs 0, 1 or 3 times.

As a hobby project I want to create an Android app which can (with the camera) capture the 12 (more or less) set cards and indicate the sets present in the collection of 12. I'm looking for ways to leverage image recognition as efficiently as possible.

I've been thinking of taking multiple pictures of all the individual cards, labelling them and feeding them to a trainer (Firebase ML Kit AutoML Vision Edge). But I have the feeling that this is a bit of brute force and takes a lot of time and effort in photographing and labelling. I could also take pictures of multiple set cards and provide the different codes as labels.

What would be the best (most efficient) approach to have a model for labelling all cards?

",32616,,23713,,1/9/2020 16:04,1/11/2020 13:10,Recognizing Set CARDs,,1,2,0,,,CC BY-SA 4.0 17432,1,,,1/9/2020 14:55,,4,113,"

In training a neural network, you often see the curve showing how fast the neural network is getting better. It usually grows very fast then slows down to almost horizontal.

Is there a mathematical formula that matches these curves?

Some similar curves are:

$$y=1-e^{-x}$$

$$y=\frac{x}{1+x}$$

$$y=\tanh(x)$$

$$y=1+x-\sqrt{1+x^2}$$

Is there a theoretical reason for this shape?

",4199,,4199,,1/9/2020 19:10,1/9/2020 19:10,Is there a mathematical formula that describes the learning curve in neural networks?,,0,6,,,,CC BY-SA 4.0 17433,1,,,1/9/2020 16:32,,6,106,"

I'm training a multi-label classifier that's supposed to be tested on underwater images. I'm wondering if feeding the model drawings of a certain class plus real images can affect the results badly. Was there a study on this? Or are there any past experiences anyone could share to help?

",32622,,32622,,1/11/2020 8:30,1/19/2023 7:08,Can training a model on a dataset composed by real images and drawings hurt the training process of a real-world application model?,,1,10,,,,CC BY-SA 4.0 17434,2,,7359,1/9/2020 19:34,,3,,"

A trajectory ist just a sequence of states and actions. In RL, the goal is to maximize the reward, by finding the right trajectories.

$$ \operatorname{max}_\tau R(\tau) $$

This means maximizing not immediate reward (caused by one action from a state), but cumulative reward (all states and actions: trajectory)

",31753,,31753,,1/10/2020 10:17,1/10/2020 10:17,,,,0,,,,CC BY-SA 4.0 17435,1,17437,,1/9/2020 20:57,,3,137,"

I modeled the TicTacToe game as a RL problem - with an environment and an agent.

At first I made an ""Exact"" agent - using the SARSA algorithm, I saved every unique state, and chose the best (available) action given that state. I made 2 agents learn by competing against each other.

The agents learned fast - it took only 30k games for them to reach a tie stand-off. And the agent clearly knew how to play the game.

I then tried to use function approximation instead of saving the exact state. My function was a FF-NN. My 1st (working) architecture was 9 (inputs) x 36 x 36 x 9 (actions). I used semi-gradient 1-step SARSA. The agents took a much longer time to learn. After about 50k games they were still not as good as the exact agent. I then made a stand-off between the exact and the NN agent: the exact agent won 1721 games out of 10k, and the rest were tied, which is not bad.

I then tried reducing the number of units in the hidden layers to 12, but didn't get good results (even after playing for 500k+ games total, tweaking stuff). I also tried playing with convolution architectures, again - not getting anywhere.

I am wondering if there's some optimal function approximation solution that can get as-good of results as the exact agent. TicTacToe doesn't seem like such a hard problem for me.

Conceptually, I think there should be much less complexity involved in solving it than can be expressed in a 9x36x36x9 network. Am I wrong, and it's just an illusion of simplicity? Or are there better ways? Maybe modeling the problem differently?

",27947,,,,,1/9/2020 22:35,Optimal RL function approximation for TicTacToe game,,1,0,,,,CC BY-SA 4.0 17436,1,,,1/9/2020 22:29,,6,139,"

When a neural network learns something from a data set, we are left with a bunch of weights which represent some approximation of knowledge about the world. Although different data sets or even different runs of the same NN might yield completely different sets of weights, the resulting equations must be mathematically similar (linear combinations, rotations, etc.). Since we usually build NNs to model a particular concrete task (identify cats, pedestrians, tumors, etc.), it seems that we are generally satisfied to let the network continue to act as a black box.

Now, I understand that there is a push for ""understandability"" of NNs, other ML techniques, etc. But this is not quite what I'm getting at. It seems to me that given a bunch of data points recording the behavior of charged particles, one could effectively recover Maxwell's laws using a sufficiently advanced NN. Perhaps that requires NNs which are much more sophisticated than what we have today. But it illustrates the thing I am interested in: NNs could, in my mind, be teaching us general truths about the world if we took the time to analyze and simplify the formulae that they give us1.

For instance, there must be hundreds, if not thousands of NNs which have been trained on visual recognition tasks that end up learning many of the same sub-skills, to put it a bit anthropomorphically. I recently read about gauge CNNs, but this goes the other way: we start with what we know and then bake it into the network.

Has anyone attempted to go the opposite way? Either:

  1. Take a bunch of similar NNs and analyze what they have in common to extract general formulae about the focus area2
  2. Carefully inspect the structure of a well-trained NN to directly extract the ""Maxwell's equations"" which might be hiding in them?

1 Imagine if we built a NN to learn Newtonian mechanics just to compute a simple ballistic trajectory. It could surely be done, but would also be a massive waste of resources. We have nice, neat equations for ballistic motion, given to us by the ""original neural networks"", so we use those.

2 E.g., surely the set of all visual NNs have collectively discovered near-optimal algorithms for edge/line/orientation detection, etc.). This could perhaps be done with the assistance of a meta-ML algorithm (like, clustering over NN weight matrices?).

",32634,,2444,,6/21/2020 12:32,6/21/2020 12:32,Has anyone attempted to take a bunch of similar neural networks to extract general formulae about the focus area?,,0,8,,6/21/2020 12:40,,CC BY-SA 4.0 17437,2,,17435,1/9/2020 22:29,,2,,"

I think you can break this problem down into two parts to try and find the solution.

1. Can the neural network model the desired function?

Take the tabular function you have learned in the exact agent, and treat it as training data for the neural network model, using the same loss function and other hyperparameters as you intend to use when the NN is being used online inside the RL inner loops.

You can answer two related questions with this:

  • Does the loss reduce down to a low value after a suitable number of epochs? If so, then the NN has capacity to learn and can learn fast enough. If not, you need to look at the hyperparameters of the NN.

  • Does the trained NN play well against the exact agent? Ideally it plays the same, but it is possible that even though the loss is low, one or two key values in the function are compromised, meaning it still loses. I am not entirely sure what you would do in this case, but either try changing the hyperparameters to increase the capacity of the NN, or try augmenting the data so that there are more examples of the ""difficult"" action values to learn, to see if the issue is something that can be solved in learning.

Probably you will find your NN architecture is good, or only requires minor changes to become useful. The more likely issues are in next section.

2. Is the RL framework set up correctly for function approximation?

It is quite hard to get this right. Bootstrapping value based methods can easily become unstable if converted naively from tabular to function approximation approaches. Some variants are moderately stable - most stable would probably be Monte Carlo approaches.

If you don't want to use Monte Carlo control, then the answer here would be to take ideas from DQN used originally to play Atari games:

  • Don't learn online. Store transitions in an experience replay table - store $(s, a, r, s', done)$ tuples where $done$ is true if $s'$ is a terminal state - and sample a minibatch from it on every step. Reconstruct the bootstrap estimates of value functions to train from each time you sample, don't store and re-use the estimate from the time the action was taken.

  • Optionally use two value estimators - the current learning one, used to select plays and which is updated on each step, and a ""target"" one used to calculate TD targets. Update the target network by cloning the learning network every N steps (e.g. every 100 games).

  • To avoid figuring out hyperparameters for SARSA epsilon decay, I suggest use one-step Q learning. Also one issue you may be facing with SARSA combined epsilon decay is ""catastrophic forgetting"" where the agent gets good, starts to train itself only on play examples by good players, and forgets the values of states that it has not seen in the training data for many time steps. With Q learning you can avoid that by having a relatively high minimum epsilon e.g. 0.1.

In fact with TicTacToe learning through self-play, you can get away with $\epsilon = 1$ and Q learning should still work - i.e. it can learn optimal play by observing random self-play. This should apply equally tabular and function approximation agents. It doesn't scale to more complex games where random play would take too long to discover optimal strategies.
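For reference, a minimal sketch of the experience replay table mentioned above could look like this (a fixed-size buffer of transitions sampled uniformly at random):

import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def add(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)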

",1847,,1847,,1/9/2020 22:35,1/9/2020 22:35,,,,2,,,,CC BY-SA 4.0 17439,1,,,1/9/2020 23:11,,4,57,"

I would like to train an RNN to follow the sentences:

""Would you like some cheese""? with ""Yes, I would like some cheese.""

So whenever the template ""Would you like some ____?"" appears, the RNN produces the sequence above. And it should even work on sentences which are new, like ""Would you like some blumf?""

I have thought of various ways of doing this, such as, in addition to having 26 outputs for the letters of the alphabet, having about 20 more for ""repeat the character that is 14 characters to the left"" and so on.

Has this been done before or is there a better way?

",4199,,,,,1/28/2020 19:40,Training an RNN to answer simple quesitons,,1,3,,,,CC BY-SA 4.0 17440,2,,17433,1/10/2020 0:46,,0,,"

To my knowledge the deployment model (that you will test on underwater images) as inference will not have a negative effect. Yet drawings may even help differentiate some classes at training and inference. Provided that you won't use drawings in inference, adding them in training phase will not necessarily hurt the accuracy. Note that a drawing of a particular class should not be in the search domains of other classes, namely, a drawing of a particular class should not be the same with the other classes.

",31870,,,,,1/10/2020 0:46,,,,5,,,,CC BY-SA 4.0 17441,1,17448,,1/10/2020 6:54,,2,3181,"

For a project I am doing, I found the paper Face Alignment in Full Pose Range: A 3D Total Solution.

It is using a cascaded convolutional neural network, but I wasn't able to find the original paper explaining what that is.

In layman's terms and intuitively, how does a cascaded CNN work? What does it solve?

",21645,,2444,,1/10/2020 15:03,1/10/2020 15:03,What is a cascaded convolutional neural network?,,1,0,,,,CC BY-SA 4.0 17443,2,,17431,1/10/2020 8:37,,2,,"

Since you only have a fixed set of card types, you can handle each property separately.

For colour, I think it is fairly straightforward.
For number, the simplest way is to plot a projection histogram and count the points of discontinuity.

An example of the projection histogram

For fill, you can count the number of islands of background colour.
For shape, as Clement Hui suggested, you can use shape detection.
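A rough sketch of the projection-histogram counting idea (assuming the card has already been cropped, deskewed and thresholded so that symbol pixels are 1 and background pixels are 0):

import numpy as np

def count_symbols(binary_card):
    # Sum over rows to get one value per column, then count runs of occupied columns;
    # each run corresponds to one symbol, since symbols are separated horizontally.
    projection = binary_card.sum(axis=0)
    occupied = projection > 0
    transitions = np.diff(occupied.astype(int))
    return int((transitions == 1).sum() + occupied[0])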

",32408,,32408,,1/11/2020 13:10,1/11/2020 13:10,,,,2,,,,CC BY-SA 4.0 17444,1,,,1/10/2020 9:07,,1,31,"

I have lots of text documents structured as

{
{
    Item1=[
            {a1=1,
             a2=2,
             a3=3},
            {a1=11,
             a2=22,
             a3=33},
            {a1=41,
             a2=52,
             a3=63},
            {a1=19,
             a2=29,
             a3=39}
    ],
    Item2=[
            {a4=1,
             a5=2,
             a6=3},
            {a4=11,
             a5=22,
             a6=33},
            {a4=41,
             a5=52,
             a6=63},
            {a4=19,
             a5=29,
             a6=39}
    ],
}
}

Now this can be formatted into two csv's as

and

I can write a regex parser for this, but is there a way a neural network or deep learning model can be trained for this, which can create these CSVs?

The above example has been indented for better visuals, the raw text looks something like

{{Item1=[{a1=1,a2=2,a3=3},{a1=11,a2=22,a3=33},{a1=41,a2=52,a3=63},{a1=19,a2=29,a3=39}],Item2=[{a4=1,a5=2,a6=3},{a4=11,a5=22,a6=33},{a4=41,a5=52,a6=63},{a4=19,a5=29,a6=39}]}}
",28421,,,,,1/10/2020 9:07,how to convert one structured data to another without specifying structure,,0,2,,,,CC BY-SA 4.0 17445,1,,,1/10/2020 10:25,,6,490,"

What is the mathematical definition of an activation function to be used in a neural network?

So far I have not found a precise one summarizing which criteria (e.g. monotonicity, differentiability, etc.) are required. Any recommendations for literature about this or, even better, the definition itself?

In particular, one major point which is unclear to me is differentiability. In lots of articles, this is required for the activation function, but then, out of nowhere, ReLU (which is not differentiable everywhere) is used. I totally understand why we need to be able to take derivatives of it and I also understand why we can use ReLU in practice anyway, but how does one formalize this?

",32649,,2444,,1/10/2020 13:19,11/9/2020 10:04,What is the mathematical definition of an activation function?,,1,0,,11/9/2020 10:02,,CC BY-SA 4.0 17446,2,,17445,1/10/2020 11:11,,5,,"

There is no strict definition of suitability of an activation function for neural networks. Instead there are a number of desirable traits, and functions that don't meet them or come close enough may perform badly in general (but those functions may still work in specific cases)

If you are using gradient descent as a training method, then differentiability is closest to being a requirement. However, even then, as you noticed with ReLU, it is not an absolute requirement, provided behaviour close to discontinuities is good. For example, $\frac{1}{x}$ or $\log{x}$ are bad choices here due to how they behave for values near $0$.

Wikipedia summarises some important desirable traits, plus compares popular activation functions in this table.

Nonlinear – When the activation function is non-linear, then a two-layer neural network can be proven to be a universal function approximator. The identity activation function does not satisfy this property. When multiple layers use the identity activation function, the entire network is equivalent to a single-layer model.

Range – When the range of the activation function is finite, gradient-based training methods tend to be more stable, because pattern presentations significantly affect only limited weights. When the range is infinite, training is generally more efficient because pattern presentations significantly affect most of the weights. In the latter case, smaller learning rates are typically necessary.

Continuously differentiable – This property is desirable (RELU is not continuously differentiable and has some issues with gradient-based optimization, but it is still possible) for enabling gradient-based optimization methods. The binary step activation function is not differentiable at 0, and it differentiates to 0 for all other values, so gradient-based methods can make no progress with it.

Monotonic – When the activation function is monotonic, the error surface associated with a single-layer model is guaranteed to be convex.

Smooth functions with a monotonic derivative – These have been shown to generalize better in some cases.

Approximates identity near the origin – When activation functions have this property, the neural network will learn efficiently when its weights are initialized with small random values. When the activation function does not approximate identity near the origin, special care must be used when initializing the weights.

You can see that ReLU is an outlier on more than one of these traits. The reason that it is popular is that despite these weaknesses, the things it does well - speed of operation plus helping combat the vanishing gradient problem - make it a solid choice for very deep networks. In fact the success of ReLU has inspired use of a number of similar-looking activation functions that hope to keep its benefits whilst having more of these desirable traits.

Ultimately though, almost any non-linear function can be used successfully in a neural network. The more of the desirable traits it has, the more likely it is that you can use it in general cases with existing approaches to training.

",1847,,1847,,1/10/2020 11:31,1/10/2020 11:31,,,,0,,,,CC BY-SA 4.0 17447,1,,,1/10/2020 14:30,,2,30,"

I was wondering whether it is possible to regularize (L1 or L2) non-linear parameters in a general regression model. Say, I have the following non-linear least squares cost function, where $p$ is a $3d$ vector of fitting parameters:

$\text{cost}(p) = \left( y(x) - \sin^{p_1}(x) + p_2 e^{p_3 x} \right)^2$

In the above cost function, $p_1$ and $p_3$ are non-linear parameters. How should I go about regularizing them? If they were linear, I could just sum them up (absolute values or squares) together with the linear parameters and add that as a penalty to the cost function, right? However, I'm not sure if I'm allowed to do so for non-linear parameters.
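
To make the question concrete, here is roughly how I would set it up if I just treated all parameters the same way, using scipy.optimize.least_squares and appending L2 penalty terms as extra residuals (lam and the data arrays x, y are placeholders). My doubt is whether doing this to $p_1$ and $p_3$ is legitimate:

import numpy as np
from scipy.optimize import least_squares

lam = 0.1                                # regularization strength (placeholder)
x = np.linspace(0.1, 3.0, 50)            # placeholder data
y = np.sin(x)**1.5 + 0.5 * np.exp(0.3 * x)

def residuals(p):
    model = np.sin(x)**p[0] + p[1] * np.exp(p[2] * x)
    fit_res = y - model
    penalty = np.sqrt(lam) * p           # L2 penalty on all three parameters
    return np.concatenate([fit_res, penalty])

result = least_squares(residuals, x0=[1.0, 1.0, 0.1])
print(result.x)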

Has anyone considered this problem?

",32653,,,,,1/10/2020 14:30,Regularization of non-linear parameters?,,0,0,,,,CC BY-SA 4.0 17448,2,,17441,1/10/2020 14:53,,3,,"

The paper you are citing is the paper that introduced the cascaded convolution neural network. In fact, in this paper, the authors say

To realize 3DDFA, we propose to combine two achievements in recent years, namely, Cascaded Regression and the Convolutional Neural Network (CNN). This combination requires the introduction of a new input feature which fulfills the ""cascade manner"" and ""convolution manner"" simultaneously (see Sec. 3.2) and a new cost function which can model the priority of 3DMM parameters (see Sec. 3.4)

where 3DDFA stands for 3D Dense Face Alignment, the framework proposed in this paper for face alignment, in which a dense 3D Morphable Model (3DMM) is fitted to the image via cascaded CNNs (the regressor), where the term dense refers to the number of points of the face that will be modeled. See figure 1 of this paper, which should provide some intuition behind the purpose of this framework.

In section 3 (page 3), they also say

In this section, we introduce how to combine Cascaded Regression and CNNs to realize 3DDFA. By applying a CNN as the regressor in Eqn. 1, Cascaded CNN can be formulated as:

\begin{align} \mathbf{p}^{k+1} = \mathbf{p}^{k} + \text{Net}^{k} (\text{Fea}(\mathbf{I}, \mathbf{p}^k)) \tag{1}\label{1} \end{align}

where

  • $k$ is the iteration number
  • $\mathbf{p}$ is the regression objective
  • $\text{Net}$ is the CNN structure
  • $\text{Fea}$ contains the two constructed image features
    • Pose Adaptive Feature (PAF) (section 3.2.1)
    • Projected Normalized Coordinate Code (PNCC) (section 3.2.2)
  • $\mathbf{I}$ is the image

The expression cascaded CNN apparently refers to the fact that equation \ref{1} is used iteratively, so there will be multiple CNNs, one for each iteration $k$. In fact, in the paper, they say

Unlike existing CNN methods that apply different network structures for different fitting stages, 3DDFA employs a unified network structure across the cascade. In general, at iteration $k$ ($k = 0, 1, \dots, K$), given an initial parameter $\mathbf{p}^k$, we construct PNCC and PAF with $\mathbf{p}^k$ and train a two-stream CNN $\text{Net}^k$ to conduct fitting. The output features from two streams are merged to predict the parameter update $\Delta \mathbf{p}^k$

$$ \Delta \mathbf{p}^k = \text{Net}^k(\text{PAF}(\mathbf{p}^k, \mathbf{I}), \text{PNCC}(\mathbf{p}^k, \mathbf{I})) $$

Afterwards, a better intermediate parameter $\mathbf{p}^{k+1} = \mathbf{p}^k + \Delta \mathbf{p}^k$ becomes the input of the next network $\text{Net}^{k+1}$, which has the same structure as, but different weights from, $\text{Net}^k$.

In figure 2 of the paper (page 4), the structure of this two-stream CNN, $\text{Net}^k$, at iteration $k$, is shown.
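
To make the cascade manner concrete, here is a minimal framework-agnostic sketch of the iterative update in equation \ref{1}, where nets, paf and pncc are placeholders for the components described above:

def cascaded_fit(image, p0, nets, paf, pncc):
    # nets: list of K CNNs (same architecture, different weights), one per iteration
    # paf, pncc: functions that build the two input features from (p, image)
    p = p0
    for net_k in nets:
        features = (paf(p, image), pncc(p, image))
        delta_p = net_k(*features)   # Net^k predicts the parameter update
        p = p + delta_p              # p^{k+1} = p^k + delta p^k
    return p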

",2444,,2444,,1/10/2020 14:58,1/10/2020 14:58,,,,1,,,,CC BY-SA 4.0 17449,1,,,1/10/2020 21:31,,1,76,"

Can someone explain the difference? I'm assuming the difference is just that, in semi-supervised VAEs, the neural nets representing the encoder and decoder are trained in a semi-supervised manner, whereas, in conditional VAEs, the approximation to the posterior and the posterior's distribution are conditioned on some labels? So, I'm guessing that semi-supervised VAEs affect the loss evaluation, whereas, in conditional VAEs, the inference network is conditioned on another label as well?

",30885,,2444,,1/10/2020 22:20,1/10/2020 22:20,What's the difference between semi-supervised VAEs and conditional VAEs?,,0,0,,,,CC BY-SA 4.0 17450,1,,,1/10/2020 23:04,,4,143,"

I've been doing some research on the principles behind AlphaZero. Especially this cheat sheet (1) and this implementation (2) (in Connect 4) were very useful.

Yet, I still have two important questions:

  1. How is the policy network updated? In (2), board positions are saved in a dataset of tuples (state, policy, value). The value is derived from the result of the self-played game. However, I'm not sure which policy is saved: the number of times that each move has been played, the prior probabilities for each move (I guess not), or something else?

  2. The cheat sheet says that (for competitive play) the move is chosen with the greatest N (=most visited). Wouldn't it be more logical to choose the move with the highest probability calculated by the policy head?

",32668,,2444,,12/17/2020 2:01,12/17/2020 2:01,"In AlphaZero, which policy is saved in the dataset, and how is the move chosen?",,1,0,,,,CC BY-SA 4.0 17452,1,,,1/11/2020 8:21,,1,38,"

Let's say that I have a pre-trained model where the training set used to pretrain the model is very different from my training set. Let's say I unfreeze layers that have X trainable parameters. What size should the training set be with/without data augmentation for multi-class/multi-label image classification with Y number of labels?

",32622,,32622,,1/12/2020 10:54,1/12/2020 10:54,What's the mathematical relationship between number of trainable parameters and size of training set?,,0,2,,,,CC BY-SA 4.0 17453,1,,,1/11/2020 9:10,,0,246,"

I have images that contain lots of elements. Some I know, some I don't. I want to know if it's ok to only label those I do know. Let's take this image for example. I would label the green stuff and the worm but leave the rest unlabeled. Is that ok?

Another question I would also like to ask is how concise I should be in labeling. For instance, You can see in the picture a bit of blue behind the green plant. So should I label that bit and say water or leave it unlabeled?

EDIT:

I also want to ask if it's ok to label only the things I'm interested in, even if they only take up to 30% of the picture. Won't the neural network be confused by all the other details it perceives in a picture that I label as A, for example, when A is just a part of it?

Another question would be: let's say I have labels A, B and C, and I have an image in which I'm a bit confused about whether a certain object is of label B or A, or even of a totally different class outside (A, B, C). What should I do in this instance?

I'm having a hard time with the dataset. It would take an expert to label this correctly. But I want to do things as cleanly as possible, so all the effort doesn't go to waste. I would really appreciate your help. Thank you guys.

",32622,,32622,,1/12/2020 11:32,6/8/2021 11:02,How to correctly label images for multi-label classification?,,3,2,,,,CC BY-SA 4.0 17456,1,,,1/11/2020 12:41,,2,262,"

In the step of tuning my neural networks I often encounter a problem that every time I train the exact same network, it gives me different final error due to random initialization of the weights. Sometimes the differences are small and negligible, sometimes they are significant, depending on the data and architecture.

My problem arises when I want to tune some parameters, like the number of layers or neurons, because I don't know if the change in the final error was caused by the recent changes in the network's architecture or is simply an effect of the aforementioned randomness.

My question is how to deal with this issue?

",22659,,,,,3/21/2020 16:20,How to deal with random weights initialization in hyperparameters tuning?,,3,0,,,,CC BY-SA 4.0 17457,2,,17456,1/11/2020 13:21,,2,,"

I don't think you can.

Say a NN with 3 layers gives an accuracy of 95.3% and another NN with 4 layers gives an accuracy of 95.4%. Then there is no guarantee that the 4-layer NN is better than the 3-layer NN, since with different initial values the 3-layer NN might perform better.

You could run each configuration multiple times and say probabilistically that one is better, but this is computationally intensive.
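
If you do go down that road, the idea is simply to repeat each configuration over several random seeds and compare the means and spreads. A rough sketch (train_and_evaluate is a placeholder for your own training routine, which should return, e.g., a validation accuracy):

import numpy as np

def compare_architectures(configs, train_and_evaluate, n_runs=5):
    # configs: list of hyperparameter settings (e.g. number of layers) to compare
    summary = []
    for config in configs:
        scores = [train_and_evaluate(config, seed) for seed in range(n_runs)]
        summary.append((config, np.mean(scores), np.std(scores)))
    return summary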

",32408,,,,,1/11/2020 13:21,,,,0,,,,CC BY-SA 4.0 17459,2,,17453,1/11/2020 14:38,,2,,"

You can't label things you don't know. The goal of labeling is to label the things you want the classifier to learn so that when you run it in inference mode you can discover what is in your data (new data that you didn't use for training, validating, or testing).

It is not a good idea to label small objects like the 'blue water' unless it is important to you to discover these fine details in inference mode.

",5763,,,,,1/11/2020 14:38,,,,1,,,,CC BY-SA 4.0 17460,2,,17453,1/11/2020 15:26,,1,,"

I think what you are actually talking about is semantic segmentation (where you label pixels individually).

There is a difference between these tasks: classification, detection and semantic segmentation.

Classification refers to the task of giving a (usually) single label to the whole image, e.g. cat. But, as you already noticed, this does not necessarily lead to a clear labeling policy, since you basically always have multiple classes in one image. However, an ANN usually learns the most relevant (biggest, nearest) object in an image and assigns it the corresponding class (but this of course again depends on how the images are labeled). At inference time, you then get a probability distribution over all predefined classes. You can, of course, use this to take the K most relevant classes instead of just the single most relevant class, to cover cases where multiple objects are probably present. However, common output layers, e.g. softmax, are usually designed to favour a single class instead of multiple classes, so you should keep that in mind or consider using an output layer function better suited to your use-case.

A more general approach is the task of object detection, where you classify multiple objects in an image (usually as bounding boxes). That means you label all predefined objects in an image with their position and class.

And the most general approach here would be to do semantic segmentation, which labels every pixel with a corresponding class and thus gives you the actual object borders, etc. You can also label pixels as "voids" (or something like that) to cover unknown classes or classes that are not considered in your dataset. However, creating such a dataset is a horrible amount of work.

To clear this up: look at your actual use-case and think about what you actually want your neural network to do. Based on this, decide on a labeling policy and label your data.

",13104,,47724,,6/8/2021 11:02,6/8/2021 11:02,,,,2,,,,CC BY-SA 4.0 17461,1,,,1/11/2020 16:23,,1,88,"

I am using a normalizing flow (Neural Spline Flows) to approximate a probability. After some training, the average loss is around 0.5 (so the logarithm of the probability = -0.5). However, when I am trying it on some new test data, I am getting some values of the logarithm of the probability bigger than zero, which would mean that the probability for that element is bigger than one (which doesn't make sense).

Does anyone know what could cause this? Isn't the flow supposed to keep all the probabilities below 1 automatically?

",22839,,2444,,6/11/2020 20:16,10/30/2022 4:10,Why am I getting the logarithm of the probability bigger than zero when using Neural Spline Flows?,,1,0,,,,CC BY-SA 4.0 17463,1,,,1/11/2020 21:37,,0,83,"

Suppose we want to train a model to detect various objects. Let's say we have training data of those objects in various backgrounds along with their bounding boxes. Basically these objects have been three dimensionally created and the bounding boxes have been drawn on them. Then these have been ""synthetically inserted"" into various blank backgrounds.

Why would a model trained only on this data do better than a model that has this data along with ""real"" data of these objects with their bounding boxes manually drawn?

",32686,,,,,10/13/2022 12:01,YOLOv3 Synthetic Data Training,,1,0,,,,CC BY-SA 4.0 17464,2,,17456,1/11/2020 23:58,,1,,"

There are two weight-initialization methods for neural networks: (1) zero initialization and (2) random initialization.

https://towardsdatascience.com/weight-initialization-techniques-in-neural-networks-26c649eb3b78

If you choose the zero-initialization method in every training loop, you may get the same results. Alternatively, you can use transfer learning, depending on your problem, which allows you to start from the same parameters. As a last (and hardest) resort, you can write your own weight arrays and feed them to your layers.

The problem you mentioned is one of the most interesting problems in evaluating the performance of neural networks. You can use cross-validation to verify your model's accuracy; it will give more reliable results!
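
If you want individual runs to be exactly repeatable without resorting to zero initialization, another common option is to fix the random seeds before each training run. A minimal sketch for NumPy/PyTorch (note that full determinism on a GPU may need additional framework-specific settings):

import random
import numpy as np
import torch

def set_seed(seed=42):
    # Fix the relevant random number generators so that weight
    # initialization (and data shuffling) is reproducible
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

set_seed(0)  # call this before building/training each model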

",32669,,,,,1/11/2020 23:58,,,,0,,,,CC BY-SA 4.0 17465,1,,,1/12/2020 6:04,,1,43,"

I have read that in deep networks you can engineer each layer for a particular purpose with regards to feature learning. I'm wondering how that is actually done and how it is trained?

In addition doesn't this conflict with the idea of deep-networks having ""automatic"" feature extraction?

For example consider this:

Lets say you want to detect stop signs. How would you teach a deep network to do this in a layer-wise fashion?

People write that one layer of a deep network does edge detection, but how?

",32390,,,,,1/12/2020 6:04,How to Layer based Feature extraction?,,0,0,,,,CC BY-SA 4.0 17466,1,,,1/12/2020 11:13,,7,620,"

I came across a news article from 2018 where the president of India was saying that Sanskrit is the best language for ML/AI. I have no idea regarding his qualification on either AI or Sanskrit to say this but this idea has been floated earlier in the context of NLP. Specifically, Rick Briggs had said so in 1985.

I know elementary Sanskrit and I know NLP. I can understand the point that as a strongly declined language Sanskrit is less ambiguous than say English as the position of words in a sentence are not important. Add to it the fact that words are also declined (that's not the technical term for verbs and I am not sure what is) so that verb gender and number help identify which entity they refer to.

However, that point was valid in 1985. My question is whether, after the deep learning revolution of the last couple of years, it is still relevant. Especially given the fact that humans still have to first learn Sanskrit if NLP is to be done in Sanskrit.

Of course, as can be guessed from the tone of the question, I am of the opinion that Sanskrit is not relevant for AI now, but I wanted to know if someone who works in AI thinks otherwise and, if so, what their reason is for thinking so.

",21511,,,,,1/29/2020 8:37,Is Sanskrit still relevant for NLP/AI?,,1,3,,,,CC BY-SA 4.0 17467,1,,,1/12/2020 14:55,,3,1895,"

I'm building an RL agent using SARSA and Q-Learning for testing its capabilities.

The environment is a 10x10 grid, where the agent gets a reward of 1 if it reaches the goal, while it gets -1 every time it takes a step out of the grid. So, it can freely move out, and every time it takes a step outside of the grid it gets -1.

After tuning the main parameters

  • alpha_val: 0.25
  • discount: 0.99
  • episode_length: 50
  • eps_val: 0.5

I get the following plot for 10000 episodes (The plot is sampled every 100 episodes):

But when I look at plots online, I usually see plots like this one:

Since I'm new to RL, I'm asking for comments about my outcome, or any kind of suggestion if any of you think that I'm doing something wrong.

",32694,,2444,,1/14/2020 13:53,7/20/2022 17:34,Why is the average reward plot for my reinforcement learning agent different than the usual plots?,,2,1,,,,CC BY-SA 4.0 17468,1,,,1/12/2020 22:11,,3,363,"

I'm working on semantic segmentation tasks in the medical space using the U-Net. Let's say that I train a U-Net model on medical images with the goal of segmenting out, say, ligaments, from a medical image. If I train that model on images that contain just a single labelled ligament, it will be able to segment out single ligaments pretty well, I assume. If I present it with an image with multiple ligaments, should it also be able to segment the multiple ligaments well too?

Based on my understanding, semantic segmentation is just pixel-wise classification. As a result, shouldn't the number of objects in the image be irrelevant, since it's only looking at individual pixels? So, as long as a pixel matches that of a ligament, it should be able to segment it equally well, right?

Or am I misunderstanding some piece?

Basically, if I train a U-Net on images with just single ligaments, will it also be able to segment images with multiple ligaments equally as well based on my logic above?

",32701,,2444,,2/7/2021 12:44,11/30/2022 2:00,"If I trained a model to perform semantic segmentation on images with only one object, would it also work on images with multiple objects?",,3,0,,,,CC BY-SA 4.0 17469,2,,17467,1/12/2020 23:10,,2,,"

Well, the way to know that the agent is actually learning is by looking at its behavior while it performs the task, and by comparing against a known optimal performance.

So, does your agent reach the goal quickly? Does it step out of the grid frequently? What is the maximum possible sum of rewards / minimum number of steps attainable? Is the agent close to that limit? From your graphic, and if I understood your RL problem correctly, the maximum average reward per step should be close to 1 (depending on the specific environment you are using), so I guess you are not so far from the optimal solution.

Also, probably if you keep training for a longer period, your agent will reach a stable solution that might or might not be optimal. If you keep training after that, your curves surely will look like the ones you found online.

",30983,,,,,1/12/2020 23:10,,,,2,,,,CC BY-SA 4.0 17470,1,,,1/13/2020 0:01,,3,839,"

Assuming we had unlimited time to train a model and a very powerful machine to use our model in real time (hello quantum computer), I'd like to know why no one has managed to build an AI able to play an FPS using ONLY the pixels shown on the screen.

Disclaimer: I am not tackling this problem and neither am I planning on doing such a thing, this is pure speculation and curiosity.


I read this great article: playing FPS Games with Deep Reinforcement Learning (2017) (Guillaume Lample, Devendra Singh Chaplot) where they achieve a 4/1 kills/death ratio on Doom against bots. But this is 3 years old now.

Here is a picture of their model:

But they made 2 assumptions that we, humans, do not make when we are playing for the first time to a new game like Call of Duty or Battlefield:

  1. Game feature augmentation. To train a part of their model, they used the game engine to know whether or not there is an enemy in the frame they are processing. We obviously can't do this with CoD or Battlefield, and we, as humans, just ""learn"" to recognize an enemy without this information.

  2. Changing textures of several maps while training to make the model generalize better (see 5.2 of the paper linked previously). To summarize, they trained the model with 10 maps, changing the texture of some elements to make the model generalize better. Then they tested it on 3 unknown maps. In the real world (i.e. in the scenario where we base our training/testing exclusively on the pixels of the screen), we can't train a model with different textures on the same map. And humans are able to play a deathmatch on an unknown map without re-learning everything (detecting enemies, moving, hiding, reloading, ...). We just need to construct a 3D model of the map in our heads to play our best.


Their agent ""divides the problem into two phases: navigation (exploring the map to collect items and find enemies) and action (fighting enemies when they are observed), and uses separate networks for each phase of the game"".

Would it be wise to use more than 2 models? Let's say:

  • 1 CNN to detect enemies
  • 1 model to deal with space features (position/navigation of the agent and the enemies)
  • 1 model to choose the actions given all data previous models have found?

We could train them independently, at the same time.

I think we'd get better result by processing manually some features (using computer vision techniques) like the minimap to get know positions of enemies and number of ammo to feed as input of the last model (action decider).

But there are other problems we'd face: there is a delay between the frame where we choose to pull the trigger, the time the bullet hits the enemy and the time the ""reward"" appears on the screen (ex: ""100 points, kill [nameOfLateEnemy]"" appears after the 3rd bullet, and if there is ping because we are playing online it may appear 100 ms later). How do we use reinforcement learning when we don't know exactly which action was the one that got the reward? (We can move the agent while changing the look direction while pulling the trigger, all at the same time. It's the combination of these actions that makes the agent kill an enemy.)

If the 2 assumptions they made were easy to get rid of, they would have been discarded already.
However, detecting enemies is basically a simple CNN task, and making the navigation network generalize certainly has solutions I can't think of right now, but some researchers should have found them in this 3-year gap between the paper and today.

So why isn't there a model playing CoD or Battlefield better than humans? What am I missing?

",32704,,,,,1/13/2020 0:01,Why isn't there a model playing FPS like CoD or Battlefield already existing?,,0,3,,,,CC BY-SA 4.0 17471,2,,17118,1/13/2020 1:09,,1,,"

If it is truly a random number, and you could guess each of the next five numbers in sequence, then you could win the lottery consistently.

This is one of the first tasks many people try when first learning machine learning. If the lottery is truly a random physical process with fair, i.e., balanced ping pong balls, then you cannot predict which 5 or 6 numbers will come up next. The Lottery Commissions around the world go to great pains to ensure that the lotteries are fair and not fraudulent.

It looks like you are using a fixed random seed; that is why you always get the same next number.

Good luck learning your numbers !

",32706,,,,,1/13/2020 1:09,,,,0,,,,CC BY-SA 4.0 17472,1,17482,,1/13/2020 2:52,,1,60,"

The following table shows the precision and recall values I obtained for three object detection models.

The goal is to find the best object detection model for that particular data set.

I evaluate the first two models as the following.

  • Model 1 has a high recall and precision values. High precision relates to a low false-positive rate, and high recall relates to a low false-negative rate. High scores for both show that the model is returning accurate results.

  • Model 2 has high precision but low recall. This means it returns very few results, but most of its identified objects are correct.

How can I evaluate the third one?

",32343,,2444,,1/6/2022 11:07,1/6/2022 11:07,"Given the precision and recall of this model, what can I say about it?",,1,0,,,,CC BY-SA 4.0 17473,1,,,1/13/2020 3:26,,1,34,"

I am trying to make an ANN model that takes a constant $m$ (it will be changed later, but for now it is just a constant, let's say 0) as an input and generates 5 non-integer numbers ($a_1, a_2, \dots, a_5$) after some layers (ReLU, linear, ReLU, ...). These 5 numbers then enter the loss function layer, along with an additional 5 numbers ($b_1, b_2, \dots, b_5$) given by hand directly to the same loss function. In the loss function, $S = a_1 b_1 + \dots + a_5 b_5$ is calculated, and the model should use the mean squared error between this $S$ and an $S_0$, which is again given by hand, to tune the 5 numbers generated by the NN layers.

For a dummy like me, this looks like a totally different model than the examples online. I'd really appreciate any guidance here, like the model I should use, examples, etc. I don't even know where to start, even though I believe I understand the generic NN examples that one can find online.

Thanks

",32707,,,,,1/13/2020 3:26,Generating 5 numbers with 1 input before loss function,,0,0,,,,CC BY-SA 4.0 17474,1,,,1/13/2020 5:27,,1,66,"

We know a lot of common sense about the world. Things like ""to buy something you need money"".

I wonder how much of this common sense comes about through someone actually explicitly telling you the instruction ""You need money to buy things"", which we store in our brains as a sort of rule, as opposed to just intuitively understanding things and picking them up.

I am imagining children playing at shop-keeping and saying things like ""I give you this and you give me that"". And other children not quite understanding the concept of buying things until being told by a teacher.

If so, giving a computer a list of common sense rules like these is no different from teaching a child. So I am wondering why this area of AI research (semantic webs, etc.) has been frowned upon in the last decade in favour of trying to learn everything through experience, like deep neural networks do.

",4199,,,,,1/29/2020 2:05,How much knowledge of the world is learnt through words?,,2,3,,,,CC BY-SA 4.0 17475,1,,,1/13/2020 6:08,,1,27,"

For a screen printing app, I'd like to remove background colors from images.

There is still a white border around text from anti-aliasing. Dropshadows also break it.

So, I was thinking I could train an AI by creating images with shapes and text, with and without backgrounds.

The AI input would get a version with a background and the ""goal"" would be the version without the background.

How do I go about doing this? ( total AI noob )

================

Non-AI solution

If anyone is interested... I have made a non-AI solution which takes all colors within a tolerance of the background, then looks at the 4x4 neighbors. For each neighbor (which is a candidate for converting into semi-transparent), it looks at the 3x3 neighbors around the candidate for the color furthest from the removal color (which typically grabs the solid pixels), and then converts the current pixel to an alpha version by copying the RGB values and setting alpha to 255 * (1 - dist_removal_to_current / dist_removal_to_furthest), or something like that.

I should write an article or something... it was an interesting algorithm to write. linkedin me Dan Schumann in wisconsin

",32709,,32709,,1/14/2020 17:21,1/14/2020 17:21,Train AI with shapes + drop shadows to remove background colors,,0,0,,,,CC BY-SA 4.0 17476,1,17478,,1/13/2020 7:04,,9,1055,"

In the gradient descent algorithm, the formula to update the weight $w$, which has $g$ as the partial gradient of the loss function with respect to it, is:

$$w\ -= r \times g$$

where $r$ is the learning rate.

What should the formula be for the momentum optimizer and the Adam (adaptive moment estimation) optimizer? Should something be added to the right side of the formula above?

",2844,,18758,,5/24/2022 23:17,5/24/2022 23:17,What is the formula for the momentum and Adam optimisers?,,1,0,,,,CC BY-SA 4.0 17478,2,,17476,1/13/2020 9:30,,9,,"

I'm going to use slightly different notation, $\leftarrow$ for an assignment, $\alpha$ for learning rate, $\nabla_w J$ in place of $g$* and implied multiplication as these are slightly more common. Also, using bold letters to represent vectors. In that notation, the update rule for basic gradient descent would be written as:

$$\mathbf{w} \leftarrow \mathbf{w} - \alpha \nabla_w J$$

This cannot be extended to momentum and Adam update rules whilst keeping it as a single line and modifying the right hand side. That is because these variations of gradient descent maintain running statistics of previous gradient values, which have their own separate update rules. When implemented on a computer, these become additional terms, mainly vectors the same size as the weight vector being updated. These variables also require initialisation before use.

Momentum

Momentum maintains a ""velocity"" term which essentially tracks a recency-weighted average of gradients. However, the classic form of momentum given here does not normalise the resulting vector, and you often have to adjust the learning rate down when using it. Momentum has a parameter $\beta$ which should be between 0 and 1, and typically is set at $0.9$ or higher.

Initialisation

$$\mathbf{m} \leftarrow \mathbf{0}$$

Update rules

$$\mathbf{m} \leftarrow \beta \mathbf{m} + \nabla_w J$$ $$\mathbf{w} \leftarrow \mathbf{w} - \alpha \mathbf{m}$$

There are some variants of these update rules in practice. An important one is Nesterov momentum.

Adam

The Adam optimiser maintains a momentum term, plus a scaling term, and also corrects these terms for initial bias. Adam has three parameters: $\beta_m$ for momentum (typically 0.9), $\beta_v$ for scaling (typically 0.999) and $\epsilon$ to avoid divide-by-zero and numerical stability issues (typically $10^{-6}$).

Initialisation

$$\rho_m \leftarrow 1$$ $$\rho_v \leftarrow 1$$ $$\mathbf{m} \leftarrow \mathbf{0}$$ $$\mathbf{v} \leftarrow \mathbf{0}$$

Update rules

$$\rho_m \leftarrow \beta_m \rho_m$$ $$\rho_v \leftarrow \beta_v \rho_v$$ $$\mathbf{m} \leftarrow \beta_m \mathbf{m} + (1-\beta_m) \nabla_w J$$ $$\mathbf{v} \leftarrow \beta_v \mathbf{v} + (1-\beta_v) (\nabla_w J \odot \nabla_w J)$$ $$\mathbf{w} \leftarrow \mathbf{w}- \alpha(\frac{\mathbf{m}}{\sqrt{\mathbf{v}}+\epsilon} \frac{\sqrt{1-\rho_v}}{1-\rho_m})$$

The symbol $\odot$ stands for element-wise multiplication. Here that essentially means to square each term of the gradient to calculate terms in $\mathbf{v}$. The square root and division of $\mathbf{m}$ by $\sqrt{\mathbf{v}}+\epsilon$ in the last update step are also handled element-wise.

The variant I show here has an ""optimisation"" to the bias correction so that you don't need to calculate high powers of either of the parameters. You may see variants that don't have $\rho_m$ and $\rho_v$ (or equivalents), but instead use $\beta_{m}^t$ or $\beta_{v}^t$ directly, which is exactly what $\rho_m$ and $\rho_v$ represent.
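
As a sanity check of the rules above, here is a minimal NumPy sketch of both optimisers written exactly as in the update equations (grad stands in for whatever computes $\nabla_w J$ for the current batch):

import numpy as np

def momentum_update(w, m, grad, alpha=0.01, beta=0.9):
    m = beta * m + grad                  # running velocity
    w = w - alpha * m
    return w, m

def adam_update(w, m, v, rho_m, rho_v, grad,
                alpha=0.001, beta_m=0.9, beta_v=0.999, eps=1e-6):
    rho_m *= beta_m                      # tracks beta_m ** t
    rho_v *= beta_v                      # tracks beta_v ** t
    m = beta_m * m + (1 - beta_m) * grad
    v = beta_v * v + (1 - beta_v) * (grad * grad)
    w = w - alpha * (m / (np.sqrt(v) + eps)) * (np.sqrt(1 - rho_v) / (1 - rho_m))
    return w, m, v, rho_m, rho_v

# Initialise m = np.zeros_like(w), v = np.zeros_like(w), rho_m = rho_v = 1.0,
# then call the relevant update once per (mini-)batch with the current gradient.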


* $\nabla_w J$ is the gradient of $J$ with respect to $\mathbf{w}$. By writing it this way, it also describes the goal of the update explicitly within the notation i.e. to minimise a function that is parameterised by $\mathbf{w}$.

",1847,,1847,,1/13/2020 9:43,1/13/2020 9:43,,,,0,,,,CC BY-SA 4.0 17481,1,,,1/13/2020 18:36,,3,84,"

I watched a video explaining how LSTM cells contain very rudimentary feed-forward neural networks, basically 2-layer input-output networks with no hidden layers.

Why don't LSTM cells have more complex neural networks before each gate, i.e. containing 1 hidden layer?

I would think that if you want more advanced gating decisions, you would use at least 1 hidden layer to allow more complex processing.

",32716,,2444,,1/14/2020 13:57,1/14/2020 13:57,Why don't the neural networks inside LSTM cells contain hidden layers?,,0,3,,,,CC BY-SA 4.0 17482,2,,17472,1/13/2020 19:18,,1,,"

The second model has the same precision as model 1, but worse recall. Therefore, we would rather have model 1 than model 2.

The third model has worse recall and worse precision than model 1; therefore, we would rather have model 1 than model 3.

Thus, model 1 is the best object detection model.

",16909,,2444,,1/6/2022 11:07,1/6/2022 11:07,,,,5,,,,CC BY-SA 4.0 17483,2,,17461,1/13/2020 19:35,,0,,"

I have always seen the negative log-odds ratio when doing similar work. As it is now, I don't think you are constraining the splines appropriately. Link to reading

",32719,,,,,1/13/2020 19:35,,,,0,,,,CC BY-SA 4.0 17484,1,,,1/13/2020 19:53,,2,98,"

I am working with a data set that consists of the actual pitch angle (given as PA(Y)) and the pitch angle at each radii (listed from 1 to 217). In the image below, you can only see radii 1 through 16. The Mode(Y) in the image below is not of relevance at the moment.

There are regions that range between certain radii in which the pitch angle measurement does not change (in the image, you'll notice this happens for all the radii values, but they do change after a certain radius that's cut off in the image). These are known as stable regions. My goal is to capture all the ranges in the data in which the pitch angle measurement does not change, and create a program that returns those values.

Is there a machine learning method for which this is possible, or is this just a non-machine-learning problem? I have tried creating plots and have considered creating a CNN that can identify these flat regions, but I feel like this is overkill. My PIs want to use a machine learning method and they have proposed neural networks, which is why I tried the CNN, but I'm just not sure if that is possible.

I should add that, usually, the radii ranges of stable regions are unknown, so the goal is to see whether certain radii ranges can usually predict where a stable region is located.

Moreover, I've thought of using a classifier to determine whether a region is flat or not. I am just very confused as to how to approach this. Are there any similar examples to the problem I'm currently working on that someone can point me to?

",32717,,32717,,1/13/2020 20:42,1/13/2020 20:42,Suggestion for finding the stable regions in spiral galaxy data?,,0,0,,,,CC BY-SA 4.0 17485,1,,,1/13/2020 20:18,,2,27,"

I've been trying out bayesian hyperparameter optimisation (with TPE) on a simple CNN applied to the MNIST handwritten digit dataset. I noticed that over iterations of the optimisation loop, the tested parameters appear to oscillate slowly.

Here's the learning rate:

Here's the momentum:

I won't add a graph, but the batch size is also sampled from one of 32, 64, or 128. Also note that I did this with a fixed 10 epochs in each trial.

I understand that we'd expect the trialled parameters to converge gradually towards the optimal, but why the longer term movement of the average?

For context here is the score (1 - accuracy) over iterations

And also for context, here's the architecture of the CNN.

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 26, 26, 32)        320       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 13, 13, 32)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 11, 11, 64)        18496     
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 9, 9, 64)          36928     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 4, 4, 64)          0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 1024)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 100)               102500    
_________________________________________________________________
dense_2 (Dense)              (None, 10)                1010      
=================================================================

Optimization done with mini-batch gradient descent on the cross entropy.

",16871,,,,,1/13/2020 20:18,Is it normal to see oscillations in tested hyperparameters during bayesian optimisation?,,0,0,,,,CC BY-SA 4.0 17487,1,,,1/14/2020 3:08,,1,33,"

Let's say I have the number 123.45 and the expression one hundred twenty-three and forty-five cents.

Can I develop AI to identify these two values as a match? If I can, how should I do that?

",32727,,2444,,1/14/2020 11:08,1/14/2020 11:08,How can I match numbers with expressions?,,0,1,,,,CC BY-SA 4.0 17488,2,,17468,1/14/2020 3:26,,0,,"

Without experimental evidence to back me up, I can not answer this with 100% confidence. However, I am fairly certain that this will cause issues depending on the model.

U-net is essentially an auto-encoder, and due to the fact that it is all just one big neural network, it is likely it will learn the easiest pattern (as all NN do), and that is to find one single instance of an object and shade that region.

Now, why does this depend on the model? Well, let's say you are using something slightly different, where region proposals are generated by a deterministic algorithm we've predefined, and then these regions are run through a CNN to segment them. In this case, as each region is without context of the entire image, the difference between 2 objects in an image and 1 is indistinguishable to the network (as regions may overlap), and as such, only using images with 1 object will not pose any problems (there is a name for early models like these, though it escapes me).

So assuming I am correct, what should you do? The models that use a deterministic algorithm for region proposals are slow and old, so I wouldn't suggest that. Instead, I would think that you should first do some testing, to see if it actually does cause issues. Assuming it does, a good option could be to tamper with the training data and separate segments by a few pixels to sort of ""force"" multiple objects into existence.

Regardless of such, I would still suggest using U-net. Fixing this issue (if it does arrise) should be relatively easy to do, so there's little to lose by using U-net and just trying the training.

",26726,,,,,1/14/2020 3:26,,,,0,,,,CC BY-SA 4.0 17490,1,,,1/14/2020 10:39,,3,48,"

Main question

Is there some way we can leverage general knowledge of how certain hyperparameters affect performance, to very rapidly get some sort of estimate for how good a given architecture could be?

Elaboration

I'm working on a handwritten character recognition problem using CNNs. I want to try out a few different architectures (mostly at random) to iterate towards something which might work. The problem is that one run takes a really long time.

So what's a way to quickly verify if a given architecture is promising? And let me elaborate on what I've tried:

  • Just try it once. Yeah but maybe I chose some bad hyperparameter combination and actually that architecture was going to be the ground breaker.
  • Do Bayesian optimisation. That's still really slow. From examples and trials, I've seen that it takes quite some time for convergence. And besides, I'm not trying to optimise yet, I just want to check if there's any potential.
",16871,,,,,1/14/2020 10:39,What are some ways to quickly evaluate the potential of a given NN architecture?,,0,3,,,,CC BY-SA 4.0 17491,1,,,1/14/2020 11:14,,3,169,"

I implemented a DQN algorithm that plays OpenAI's CartPole environment. The NN architecture consists of 3 normal linear layers that encode the state, and one noisy linear layer that predicts the Q value based on the encoded state.
My NoisyLinear layer looks like this:

import math

import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Module):
  def __init__(self, in_features, out_features):
    super(NoisyLinear, self).__init__()
    self.in_features = in_features
    self.out_features = out_features
    self.sigma_zero = 0.5
    self.weight_mu = torch.empty(out_features, in_features)
    self.weight_sigma = torch.empty(out_features, in_features)
    self.weight_epsilon = torch.empty(out_features, in_features, requires_grad=False)
    self.bias_mu = torch.empty(out_features)
    self.bias_sigma = torch.empty(out_features)
    self.bias_epsilon = torch.empty(out_features, requires_grad=False)
    self.reset_parameters()
    self.reset_noise()

  def reset_parameters(self):
    mu_range = 1 / math.sqrt(self.in_features)
    self.weight_mu.data.uniform_(-mu_range, mu_range)
    self.weight_sigma.data.fill_(self.sigma_zero / math.sqrt(self.in_features))
    self.bias_mu.data.uniform_(-mu_range, mu_range)
    self.bias_sigma.data.fill_(self.sigma_zero / math.sqrt(self.out_features))

  def _scale_noise(self, size):
    x = torch.randn(size)
    return x.sign().mul_(x.abs().sqrt_())

  def reset_noise(self):
    epsilon_in = self._scale_noise(self.in_features)
    epsilon_out = self._scale_noise(self.out_features)
    self.weight_epsilon.copy_(epsilon_out.ger(epsilon_in))
    self.bias_epsilon.copy_(epsilon_out)

  def forward(self, input):
    return F.linear(input, self.weight_mu + self.weight_sigma * self.weight_epsilon, self.bias_mu + self.bias_sigma * self.bias_epsilon)

However, with the default hyperparameters from the paper (sigma_0 = 0.5), the agent does not explore at all, and even if I crank it up to sigma_0 = 5, it works way worse than epsilon-greedy.
(When I use noisy nets I don't use epsilon greedy).

",31821,,32742,,1/14/2020 16:26,10/2/2022 13:07,NoisyNet DQN with default parameters not exploring,,1,0,,,,CC BY-SA 4.0 17492,1,17504,,1/14/2020 13:18,,4,744,"

I'm still on my first steps in the Data Science field. I played with some DL frameworks, like TensorFlow (pure) and Keras (on top) before, and know a little bit of some ""classic machine learning"" algorithms like decision trees, k-nearest neighbors, etc.

For example, image classification problems can be solved with deep learning, but some people also use the SVM.

Why are traditional ML models still used over neural networks, if neural networks seem to be superior to traditional ML models? Keras is rather simple to use, so why don't people just use deep neural networks with Keras? What are the pros and cons of each approach (considering the same problem)?

",7268,,2444,,1/14/2020 23:11,3/2/2020 21:36,Why are traditional ML models still used over deep neural networks?,,3,0,,,,CC BY-SA 4.0 17494,1,17495,,1/14/2020 15:29,,1,396,"

I'm a newbie in Convolutional Neural Networks. I have found out that kernels in convolutional layers are usually learned while training.

Suppose I have a kernel that is very good to extract the features that I want to extract. In that case, I don't want the kernels to be learnable. So, how can I make the kernels non-learnable and set them manually?

Maybe, in that case, I have to use something different from a CNN.

",4920,,2444,,2/19/2021 11:32,2/19/2021 11:32,How can I make the kernels non-learnable and set them manually?,,1,0,,,,CC BY-SA 4.0 17495,2,,17494,1/14/2020 15:52,,2,,"

In most modern neural network frameworks, the update rules for training can be selectively applied to some parameters and not others.

How to do that is dependent on the framework. Some will have the concept of ""freezing"" a layer, preventing parameters in it being updated. Keras does this for example. Others will do the opposite and expect you to provide a list of trainable parameters - these typically come with helpers that will list all parameters in a neural network, so you would need to add some kind of filter after collecting that data to exclude your pre-trained layer. PyTorch does this (although the linked example is slightly more complex in that it stops calculating gradients too).
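
For example, a minimal PyTorch sketch of this (the small Sequential model is just a placeholder; the first convolution plays the role of your fixed, pre-trained layer):

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),   # pre-trained, to be kept fixed
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # still trainable
    nn.ReLU(),
)

# Freeze the first convolution: its parameters are excluded from gradient updates
for param in model[0].parameters():
    param.requires_grad = False

# Only pass the remaining trainable parameters to the optimiser
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.01
)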

If your framework of choice does not allow you to select and isolate layers in the training process, then you still have a couple of options:

  • You could store a copy of layer parameters that you want to keep and force your learning network to re-load these parameters after each mini-batch. This does depend on you having a method that can selectively set parameters.

  • If your pre-trained layers are the first ones, immediately next to the input, then instead of including them in your learning network model, you can pre-process all your training data with just the fixed layers (build a model using only those layers), save the output and use that as an alternative input for the learning layers (build a second model with only the learning layers). Later, once training is complete, you can build a combined neural network out of the fixed layers and the learning layers.

",1847,,1847,,1/14/2020 16:01,1/14/2020 16:01,,,,0,,,,CC BY-SA 4.0 17496,2,,16364,1/14/2020 16:39,,1,,"

You would need to perform some kind of speech-to-text to get the audio transcription with the corresponding synchronization wrt the audio. Then search in the transcription.

You could use DSAlign by mozilla

",20269,,,,,1/14/2020 16:39,,,,0,,,,CC BY-SA 4.0 17497,2,,11047,1/14/2020 16:49,,1,,"

The task of isolating 2 or more speakers is called speaker diarization; here is a list of software and useful resources.

Once you have the 2 or more audio files containing the individual voices, you could run some speech-to-text network that also outputs time information.

",20269,,,,,1/14/2020 16:49,,,,0,,,,CC BY-SA 4.0 17498,1,17518,,1/14/2020 17:51,,4,722,"

I'm currently trying to take the next step in deep learning. I managed so far to write my own basic feed-forward network in python without any frameworks (just numpy and pandas), so I think I understood the math and intuition behind backpropagation. Now, I'm stuck with deep q-learning. I've tried to get an agent to learn in various environments. But somehow nothing works out. So there has to be something I'm getting wrong. And it seems that I do not understand the critical part right at least that's what I'm thinking.

The screenshot is from this video.

What I'm trying to draw here is my understanding of the very basic process of a simple DQN. Assuming this is right, how is the loss backpropagated? Since only the selected $Q(s, a)$ values (5 and 7) are further processed in the loss function, how is the impact from the other neurons calculated so their weights can be adjusted to better predict the real q-values?

",30431,,2444,,1/15/2020 13:13,1/15/2020 14:16,How can a DQN backpropagate its loss?,,1,0,,,,CC BY-SA 4.0 17499,2,,17453,1/14/2020 18:00,,0,,"

I would classify each pixel separately instead of giving a label to the whole image. Sadly preparing the training data is very tedious and time-consuming.

Let's say the input image has dimensions of 200 x 300 x 3 (RGB) and there are two classes of regions you want to identify. A few approaches come to mind:

1) Train two separate networks, each forecasting a binary mask of size 200 x 300 of the object class in question.

2) Train a single network with a binary output of size 200 x 300 x 2 (sigmoid activation)

3) Train a single network with a binary output of size 200 x 300 x 3 (softmax activation), the 3rd class is for ""other""

If you are uncertain about some regions, you can leave their class probability at 50%, and they won't affect the cross-entropy loss.

Option 1 is easiest to get started with, but training a single network should be computationally more efficient than training two separate ones. In addition, options 1 and 2 can forecast a single pixel as belonging to both classes with 100% probability, unlike the network of option 3.
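
As a rough sketch of what the output end of option 3 could look like (Keras-style, with most of the network omitted and only the per-pixel softmax shown):

from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(200, 300, 3))
x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
# ... more convolutional layers would go here ...

# Option 3: one softmax over {class 1, class 2, "other"} per pixel
outputs = layers.Conv2D(3, 1, activation="softmax")(x)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")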

",32722,,,,,1/14/2020 18:00,,,,0,,,,CC BY-SA 4.0 17501,1,,,1/14/2020 20:22,,1,25,"

I am an agronomy graduate student looking to classify crops from weeds using convolutional neural networks (CNNs).

The basic idea that I am wanting to get into involves separating crops from weeds from aerial imagery (either captured by drones or piloted aircraft). The idea of the project that I am proposing involves spending some time driving around to different fields and capturing many images of both crops and weeds. These images will then be used to train a CNN that will classify aerial imagery on the location of crops and weeds. After classifying the imagery, a herbicide application map will be generated for site-specific weed control. This involves the integration of CNN classification and GIS technology.

My question is this: If you have an orthomosaic image generated from a drone, will images captured from a digital camera on the ground be effective for training a CNN that will classify high-resolution aerial imagery?

Being new to CNNs, I just didn't know if I had to use aerial imagery to train a CNN to classify aerial imagery, or if a digital camera will work just fine.

",32750,,,,,1/14/2020 20:22,Training dataset for convolutional neural network classification - will images captured on the ground be useful for training aerial imagery?,,0,0,,,,CC BY-SA 4.0 17502,1,,,1/14/2020 22:02,,3,1381,"

I'm struggling to fully understand the stochastic gradient descent algorithm. I know that gradient descent allows you to find the local minimum of a function. What I don't know is what exactly that function IS. More specifically, the algorithm should work by initializing the network with random weights. Then, if I'm not mistaken, it forward-propagates $n$ times (where $n$ is the mini-batch size). At this point, I've no idea what function I should search for, with hundreds of neurons, each having hundreds of parameters.

",32751,,2444,,1/15/2020 0:01,1/27/2020 11:40,What's the function that SGD takes to calculate the gradient?,,3,0,,,,CC BY-SA 4.0 17503,2,,6429,1/14/2020 22:38,,0,,"

Ideally, yes. Ideally, because the network should be fed with the words of an entire book (which is around 100k words). With a hypothetical amount of processing power, you should be able to just train the NN with thousands of books. It might be possible to train it with quantum computers... who knows...

For smaller stories, I think that the major problem is knowing in what ""shape"" the story should be generated. If it simply outputs some words, then the first thing the network should be able to do is speak; that means the model should evolve from a pretrained NLP model, and (from what I know) we still have some problems with that.

So... I really think that, to do this kind of thing, the approach we take to make NNs learn should be changed. The fact that humans exist proves that genetic algorithms would work, 100%. But we obviously don't have 3+ billion years to evolve a ""brain"" from scratch; that's why we use training algorithms: we force them to learn from something.

But back to the question: humans do a lot of work by thinking about what outcome to choose. To just make a network generate an outcome, without imitating humans, it would be easy to just randomly choose some aspects of this outcome. For example, a randomly chosen outcome might be ""outcome: Dennis dies, Morty kills Eminem, sad scene, happiness scene, the end"". That means that the NN or any ML model doesn't actually produce an outcome to the story. In fact, what it does is to connect some generated ""checkpoints"" about that story. Actually, you might train a model to generate checkpoints too, but this is just a random idea from a newbie, so I've got no clue about how to actually implement that.

I'm italian b.t.w., sorry about my english :)

",32751,,,,,1/14/2020 22:38,,,,0,,,,CC BY-SA 4.0 17504,2,,17492,1/14/2020 22:46,,2,,"

Why are still traditional machine learning (ML) models used over neural networks if neural networks seem to be superior to traditional ML models?

Of course, the model that achieves state-of-the-art performance depends on the problem, available datasets, etc., so a comprehensive comparison between traditional ML models and deep neural networks is not appropriate for this website, because it requires a lot of time and space. However, there are certain disadvantages of deep neural networks compared to traditional machine learning models, such as k-nearest neighbors, linear regression, logistic regression, naive Bayes, Gaussian processes, support vector machines, hidden Markov models and decision trees.

  • Often, traditional ML models are conceptually simpler (for example, k-NN or linear regression are much simpler than deep neural networks, such as LSTMs).

  • Personally, I've noticed that traditional ML models can be used more easily compared to deep neural networks, given the existence of libraries, like scikit-learn, which really have a simple and intuitive API (even though you apparently do not agree with this).

  • Deep neural networks usually require more data than traditional ML models in order not to overfit. Empirically, I've once observed that certain traditional ML models can achieve comparable performance to deep neural networks in the case of small training datasets.

  • Even though there's already a new and promising area of study called Bayesian deep learning, most deep neural networks do not really provide any uncertainty guarantees, they only provide you a point estimate. This is a big limitation, because, in areas like healthcare, uncertainty measures are required. In those cases, Gaussian processes may be more appropriate.

",2444,,2444,,1/14/2020 22:52,1/14/2020 22:52,,,,0,,,,CC BY-SA 4.0 17505,2,,17502,1/14/2020 23:34,,3,,"

Welcome to AI Stack exchange!

You're right, as the network is initialised randomly, the resultant function is essentially impossible to get your head around. This is because most of the time the network has >4 dimensions (4 can be graphed with some effort and a lot of color), and as such is literally beyond human comprehension via graphing.

So what do we do? Well, conveniently, it is possible to find the gradient of segments of a function without having to know the entirety of the function itself (it's worth noting that it actually is possible to find the resultant function and, with a lot of effort, find its derivative. This proves to be much more work than it's worth, though, as we don't need the general derivative that tells us what the gradient is for whatever input we give it; we only need the derivative at the specific input we just fed through the network).

This might be hard to understand, but if you're familiar with the chain rule, it might make a bit more sense. The chain rule essentially allows you to split a function into components, and find the gradient of those specific components. By combining all of that, you end up with some nice gradients at each weight/bias with respect to the loss function. Take the negative of the gradient, and you now have the change required to decrease the loss function.

This is obviously quite hard to understand without an example, so here's the best one I've ever found, that helped me very much.

Also, as a side note, the whole mini-batch thing is used to minimise catastrophic forgetting (where the network begins to ""unlearn"" old inputs). To deal with mini-batches, what you do is take each input individually, then find the gradient for all weights and biases for that specific input and remember the changes you want to make. Do that for all inputs in the mini-batch, then finally add all the changes together to get the resultant best change for each weight and bias. Only then do you update the weights and biases.

Let me know if you have any further questions

",26726,,,,,1/14/2020 23:34,,,,0,,,,CC BY-SA 4.0 17506,1,,,1/14/2020 23:46,,1,116,"

While studying genetic algorithms, I've come across different crossover operations used for binary chromosomes, such as the 1-point crossover, the uniform crossover, etc. These methods usually don't use any "intelligence".

I found methods like the fitness-based crossover and Boltzmann crossover, which use fitness value so that the child will be created from better parents with a better probability.

Is there any other similar method that uses fitness or any other way for an intelligent crossover for binary chromosomes?

",30164,,2444,,12/7/2020 19:10,12/7/2020 19:10,Are there clever (fitness-based) crossover operators for binary chromosomes?,,2,0,,,,CC BY-SA 4.0 17507,2,,17502,1/14/2020 23:48,,3,,"

I know that gradient descent allows you to find the local minimum of a function. What I don't know is what exactly that function IS.

It's usually called the loss function (and, in general, objective function) and often denoted as $\mathcal{L}$ or $L$ (or something like that, i.e. it is not really important how you denote it). The specific function used as a loss function depends on the problem (ask another question if you want to know the details). For example, in the case of regression, the loss function may be the mean squared error. In classification, the loss function may be the cross-entropy. However, the most important thing to note is that the loss function depends on the parameters of the neural network (NN), so you can differentiate it with respect to the parameters of the NN (i.e. you can take the partial derivative of the loss function with respect to each of the parameters of the NN).

Let's take the example of the mean squared error function, which is defined as

$$ \mathcal{L}(\boldsymbol{\theta}) ={\frac {1}{n}}\sum _{i=1}^{n}(y_i-f_{\boldsymbol{\theta}}(x_i))^{2}, $$ where $n$ is the number of training examples used (the batch size), $y_i$ is the true class (or target) of the input example $x_i$ and $f_{\boldsymbol{\theta}}(x_i)$ is the prediction of the neural network $f_{\boldsymbol{\theta}}$ with parameters (or weights) $\boldsymbol{\theta} = [\theta_1, \dots, \theta_M] \in \mathbb{R}^M$, where $M$ is the number of parameters.

Given the loss function $\mathcal{L}(\boldsymbol{\theta})$, we can now take the derivative of $\mathcal{L}$, with respect to $\boldsymbol{\theta}$, using the famous back-propagation (BP) algorithm, which essentially applies the chain rule of calculus. The BP algorithm produces the gradient of the loss function $\mathcal{L}(\boldsymbol{\theta})$. The gradient can be denoted as $\nabla \mathcal{L}(\boldsymbol{\theta})$ and it contains the partial derivatives of $\mathcal{L}(\boldsymbol{\theta})$ with respect to each parameter $\theta_i$, that is, $\nabla \mathcal{L}(\boldsymbol{\theta}) = \left[ \frac{\partial \mathcal{L}(\boldsymbol{\theta})}{\partial \theta_1}, \dots, \frac{\partial \mathcal{L}(\boldsymbol{\theta})}{\partial \theta_M} \right] \in \mathbb{R}^M$. (If you want to know the details of the back-propagation algorithm, you should ask another question, but make sure you get informed first, because it may not be easy to fully explain it in an answer.)

Afterward, we just apply the gradient descent step

$$ \boldsymbol{\theta} \leftarrow \boldsymbol{\theta} - \gamma \nabla \mathcal{L}(\boldsymbol{\theta}) $$

where $\gamma \in \mathbb{R}$ is often called the learning rate and is used to weight the contribution of the gradient $\nabla \mathcal{L}(\boldsymbol{\theta})$ to the new values of the parameters, and $\leftarrow$ represents an assignment (like in programming).

It is worth emphasizing that both $\boldsymbol{\theta}$ and $\nabla \mathcal{L}(\boldsymbol{\theta})$ are vectors and have the same dimensionality ($M$).
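To make the update rule concrete, here is a minimal NumPy sketch for a toy model with only two parameters (the data and the linear model $f_{\boldsymbol{\theta}}(x) = \theta_1 x + \theta_2$ are my own illustration, not part of the question):

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])   # toy inputs
y = np.array([1.0, 3.0, 5.0, 7.0])   # toy targets (y = 2x + 1)

theta = np.zeros(2)   # parameters [w, b] of f_theta(x) = w*x + b
gamma = 0.05          # learning rate

for step in range(2000):
    w, b = theta
    residual = y - (w * x + b)       # y_i - f_theta(x_i)
    # Gradient of the mean squared error with respect to [w, b]
    grad = np.array([np.mean(-2.0 * residual * x),   # dL/dw
                     np.mean(-2.0 * residual)])      # dL/db
    # Gradient descent step: theta <- theta - gamma * grad
    theta = theta - gamma * grad

print(theta)   # converges towards [2, 1]

A neural network is handled in exactly the same way; the only difference is that the gradient is produced by back-propagation rather than by hand.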

Have also a look at this answer where I explain the difference between gradient descent and stochastic gradient descent.

",2444,,2444,,1/15/2020 13:42,1/15/2020 13:42,,,,5,,,,CC BY-SA 4.0 17508,2,,17506,1/15/2020 1:18,,1,,"

It's not obvious what you mean by ""intelligent crossover"".

However, it is common to use fitness-based selection of parents: individuals in the current population who have higher fitness are assigned a higher probability of being selected to mate and produce offspring. This will increase the likelihood that ""good"" combinations of genes in members of the current population will be passed along to the next generation, and that independent ""good"" combinations will be combined in some members of the next generation.
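For illustration, here is a minimal sketch of this kind of fitness-proportionate (roulette-wheel) parent selection for binary chromosomes (my own example; the population and fitness values are placeholders):

import random

def roulette_select(population, fitnesses):
    # Pick one parent with probability proportional to its (non-negative) fitness.
    total = sum(fitnesses)
    r = random.uniform(0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= r:
            return individual
    return population[-1]

def one_point_crossover(parent_a, parent_b):
    # Standard 1-point crossover on two equal-length bit lists.
    point = random.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:]

# Example: population is a list of bit lists, fitnesses a list of numbers.
# child = one_point_crossover(roulette_select(population, fitnesses),
#                             roulette_select(population, fitnesses))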

The ""best"" crossover operator depends dramatically on the structure of the problem being solved, and on the mapping of gene ""vectors"" to the salient features of a solution.

Edit #1: In some cases it is important to increase diversity in order to avoid convergence to a local optimum. In that case, an ""intelligent"" GA might for example select a first parent for its high fitness, and a second parent at random. In ""Generator"", a GA I sold for a while about 25 years ago, mate selection worked that way, and it was often very effective. Generator also replaced any duplicate individuals in the population with entirely random individuals. I have also structured genetic algorithms specifically to evolve multiple separate populations of individuals, with minimum gene flow between the populations, in order to evolve multiple solutions corresponding to ""regional"" fitness optima.

Edit #2: In genetic algorithms it is not common to directly seek the best combination of genes from both parents. The assumption is that higher-fitness parents are more likely to produce even higher-fitness offspring than lower-fitness parents. Sometimes there is a local search (hill climbing) operation where the offspring of two parents is mutated in various ways and the best of the mutants is put in the next generation.

And, sometimes a crossover operation involves producing a larger number of offspring than the parent generation, followed by culling low-fitness individuals to keep the population size constant. This is vaguely analogous to a local search via mutation, but uses random crossover instead of random mutation for its search.

",28348,,28348,,1/15/2020 13:36,1/15/2020 13:36,,,,1,,,,CC BY-SA 4.0 17510,1,17511,,1/15/2020 5:18,,1,100,"

In my understanding, the formula to calculate the cross-entropy is

$$ H(p,q) = - \sum p_i \log(q_i) $$

But in PyTorch nn.CrossEntropyLoss is calculated using this formula:

$$ loss = -\log\left( \frac{\exp(x[class])}{\sum_j \exp(x_j)} \right) $$

which, I think, only addresses the $\log(q_i)$ part of the first formula.

Why does PyTorch use a different formula for the cross-entropy?

",16565,,2444,,1/15/2020 13:05,1/15/2020 13:05,Why does PyTorch use a different formula for the cross-entropy?,,1,1,,,,CC BY-SA 4.0 17511,2,,17510,1/15/2020 7:52,,1,,"

When you one-hot-encode your labels with $p_i \in \{0,1\}$ you get $p_i = 0$ iff $i$ is not correct and, equivalently, $p_i =1$ iff $i$ is correct.

Hence, $p_i \log(q_i) = 0 \log(q_i) = 0 $ for all classes except the ""truth"" and $p_i \log(q_i) = 1 \log(q_i) = \log(q_i) $ for the correct prediction.

Therefore, your loss reduces to: $$ H(p,q) = - \sum p_i \log(q_i) = - \log(q_{truth}) $$
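A quick numerical check of this reduction with PyTorch (a minimal sketch; the logits and the target class are arbitrary):

import torch
import torch.nn.functional as F

logits = torch.tensor([[1.0, 2.0, 0.5]])   # raw scores x for 3 classes
target = torch.tensor([1])                 # index of the correct class

# PyTorch's cross-entropy: -log(softmax(x)[class])
loss_builtin = F.cross_entropy(logits, target)

# The same value computed by hand: -log(q_truth)
q = torch.softmax(logits, dim=1)
loss_manual = -torch.log(q[0, 1])   # class 1 is the true class

print(loss_builtin.item(), loss_manual.item())   # identical up to floating-point error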

",30789,,,,,1/15/2020 7:52,,,,4,,,,CC BY-SA 4.0 17512,1,,,1/15/2020 9:04,,5,1018,"

I am training a convolutional neural network for object detection. Apart from the learning rate, what are the other hyperparameters that I should tune? And in what order of importance? Besides, I read that doing a grid search for hyperparameters is not the best way to go about training and that random search is better in this case. Is random search really that good?

",30497,,18758,,11/7/2021 11:44,1/7/2023 21:08,"When training a CNN, what are the hyperparameters to tune first?",,2,6,,,,CC BY-SA 4.0 17513,1,17515,,1/15/2020 9:48,,1,275,"

How is the batch loss calculated in both DQNs and simple classifiers? From what I understood, in a classifier, a common method is that you sample a mini-batch, calculate the loss for every example, calculate the average loss over the whole batch, and adjust the weights w.r.t the average loss? (Please correct me if I'm wrong)

But is this the same in DQNs? So, you sample a batch from your memory, say 64 transitions. Do I iterate through each transition and adjust the weights "on the fly", or do I calculate the average loss of the batch and THEN in a big step adjust the weights w.r.t the average batch loss?

",30431,,2444,,5/21/2021 9:44,5/21/2021 9:45,What is the difference between batches in deep Q learning and supervised learning?,,1,0,,,,CC BY-SA 4.0 17515,2,,17513,1/15/2020 10:23,,0,,"

From what I understood in a classifier a common method is that you sample a mini-batch, calculate the loss for every example, calculate the average loss over the whole batch and adjust the weights w.r.t the average loss? (Please correct me if I'm wrong)

You are wrong.

The weights are adjusted w.r.t. the average gradient, and this must be calculated using individual loss function results. The average loss (or cost function when considering the whole dataset) is a useful metric for current performance, and it is the measure being minimised. But you cannot calculate meaningful gradients against the average loss directly.

But is this the same in DQNs?

The batch process is not as you described, but an experience replay minibatch in RL and a sampled minibatch in supervised learning can be very similar. The main difference in RL is that your prediction targets must be recalculated as part of the sampling process (using $G_{t:t+1} = R_{t+1} + \gamma \text{max}_{a'}\hat{q}(S_{t+1},a', \theta)$ to calculate the TD target, assuming you are using single step Q learning), whilst in most supervised learning the target values are fixed for each example.

In theory, you could use repeated single-item stochastic gradient descent in DQN; it doesn't break any theory, and it would work. However, it will usually be more efficient to use a standard minibatch update, combining all gradients into one average gradient for the minibatch and making a single update step.

If you are using a high level library for your neural network model in DQN, you usually don't need to worry about this detail. You can use the .fit function or whatever the library provides. In that case the only difference between a supervised learning update and an experience replay DQN update is what you get from the sampling. In supervised learning, you get a set of $(\mathbf{x}_i, \mathbf{y}_i)$ examples directly by sampling a minibatch. In RL, you get $(\mathbf{s}_i, \mathbf{a}_i, r, \mathbf{s'}_i, done)$ and must construct the $(\mathbf{x}_i, \mathbf{y}_i)$ minibatch from these before passing it to your .fit function.
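As a rough sketch of how the TD targets are recalculated for a sampled minibatch (my own illustration; the Q-value predictions for the next states are assumed to come from whatever network/library you use):

import numpy as np

def td_targets(rewards, q_next, dones, gamma=0.99):
    # q_next: array of shape (batch, n_actions) with q-hat(s', a') for each sample.
    # Terminal transitions (done == True) do not bootstrap from the next state.
    return rewards + gamma * (1.0 - dones.astype(float)) * q_next.max(axis=1)

These targets are then written into the predicted Q-value vectors at the positions of the taken actions, which gives the $(\mathbf{x}_i, \mathbf{y}_i)$ minibatch for the .fit call.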

",1847,,2444,,5/21/2021 9:45,5/21/2021 9:45,,,,4,,,,CC BY-SA 4.0 17517,2,,17492,1/15/2020 13:23,,3,,"

This question is very broad, so let me attempt to answer it using my own background in time series analysis.

As an example, why would I continue using ARIMA to forecast a time series? Why not simply use an LSTM model by default, since this is a type of recurrent neural network that takes time-related dependencies into account?

Well, an LSTM model is not good at modelling all time series. It is effective when it comes to modelling volatile data, but ARIMA still outperforms when it comes to forecasting trend data - LSTM tends to overemphasise volatile patterns in future predictions.

Let's take an example of forecasting weekly hotel cancellations by potential customers. The second time series shows much more variability in the number of weekly hotel cancellations than the first:

H1 Time Series

H2 Time Series

Based on MDA (mean directional accuracy), RMSE (root mean squared error), and MFE (mean forecast error) - ARIMA demonstrates superior performance overall for the first time series, while LSTM shows better performance for the second.

On the basis of this example - which is quite specific given the broadness of your question - deep learning techniques are not always used because simpler models can perform better under certain circumstances. It is all about understanding the data you are working with and then choosing the model - not the other way around.

",22692,,22692,,3/2/2020 16:55,3/2/2020 16:55,,,,0,,,,CC BY-SA 4.0 17518,2,,17498,1/15/2020 14:05,,2,,"

In DQN, it is quite common for the neural network not to represent the function $f(s,a) = \hat{q}(s,a,\theta)$ directly, but instead to represent $f(s)= [\hat{q}(s,1,\theta), \hat{q}(s,2,\theta), \hat{q}(s,3,\theta), \dots, \hat{q}(s,N_a,\theta)]$, where $N_a$ is the number of actions and the input is the current state. That is what is going on here. It is usually done for a performance gain, since calculating all values at once is faster than calculating them individually.

However, in a Q learning update, you cannot adjust this vector of output values for actions that you did not take. You can do one of two things:

  • Figure out the gradient due to the one item with a TD error, and propagate that backwards. This involves inserting a known gradient into the normal training update step in a specific place and working from there. This works best if you are implementing your own backpropagation with low-level tools, otherwise it can be a bit fiddly figuring out how to do it in a framework like Keras.

  • Force the gradients of all other items to be zero by setting the target outputs to be whatever the learning network is currently generating.

If you are using something like Keras, the second approach is the way to go. A concrete example where you have two networks n_learn and n_target that output arrays of Q values might be like this:

  • For each sample (s, a, r, next_s, done) in your minibatch*

    • Calculate array of action values from your learning network qvals = n_learn.predict(s)
    • Calculate the TD target for $(s,a)$, e.g. td_target = r + max(n_target.predict(next_s)) (discount factor and how to handle terminal states not shown)
    • Alter the one array item that you know about from this sample qvals[a] = td_target
    • Append s to your train_X data and qvals to your train_Y data
  • Fit the minibatch n_learn.fit(train_X, train_Y)


* It is possible to vectorise these calculations for efficiency. I show it as a for loop as it is simpler to describe that way
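Put together as (unvectorised) Python, the loop above might look roughly like this - a sketch that assumes Keras-style predict/fit methods and NumPy array states, with the discount factor and terminal handling added back in:

import numpy as np

def replay_update(minibatch, n_learn, n_target, gamma=0.99):
    train_X, train_Y = [], []
    for (s, a, r, next_s, done) in minibatch:
        qvals = n_learn.predict(s[np.newaxis])[0]        # current Q estimates for s
        if done:
            td_target = r
        else:
            td_target = r + gamma * np.max(n_target.predict(next_s[np.newaxis])[0])
        qvals[a] = td_target                             # only the taken action changes
        train_X.append(s)
        train_Y.append(qvals)
    n_learn.fit(np.array(train_X), np.array(train_Y), verbose=0)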

",1847,,1847,,1/15/2020 14:16,1/15/2020 14:16,,,,3,,,,CC BY-SA 4.0 17521,1,17871,,1/15/2020 16:15,,2,71,"

I'm new to Artificial Intelligence and I want to do image segmentation.

While searching, I have found these ways:

  1. Digital image processing (I have read it in this book: Digital Image Processing, 4th edition)

  2. Convolutional neural networks

Is there something else that I can use?

",4920,,2444,,12/12/2021 12:58,12/12/2021 12:58,How many ways are there to perform image segmentation?,,1,0,,,,CC BY-SA 4.0 17522,1,,,1/15/2020 17:59,,2,152,"

I am trying to use Logistic Regression to make a spam filter, but I am having trouble understanding the weight update part. I have processed my email dataset, and I have an attribute vector of the top n words that are most likely to be contained within a spam.

From my understanding, during training, I will have to implement an optimization formula after each training example in order to update the weights.

$$ w_l \leftarrow w_l + \eta \cdot \sum_{i=1}^m [ y^{(i)} - P(c_+ \mid \vec{x}^{(i)} )] \cdot x_l^{(i)} $$

How does a formula such as this work? How can it be implemented in Python?

",32762,,2444,,1/15/2020 19:32,4/13/2020 14:49,How does the weight update formula for logistic regression work?,,1,0,,,,CC BY-SA 4.0 17523,1,,,1/15/2020 23:20,,2,43,"

I have frequency EEG data from fall and non-fall events, and I am trying to incorporate it with accelerometer data that was collected at the same time. One approach is, of course, to use two separate algorithms, find the threshold for each, and then compare the decisions of the two. In other words, if the accelerometer algorithm predicts a fall (fall detected = 1) and the EEG algorithm detects a fall, based on the power spectrum (fall detected = 1), then the system outputs a ""1"", meaning that a fall was truly detected. This approach uses the idea of a simple AND gate between the two algorithms.

I would like to know how to correctly process the data so that I can feed both types of data into one algorithm, perhaps a CNN. Any advice is really appreciated, even a lead to some literature, articles or information would be great.

",32771,,2444,,1/16/2020 22:58,1/16/2020 22:58,How can I compare EEG data with accelerometer data in 1 algorithm?,,0,3,,,,CC BY-SA 4.0 17524,1,,,1/16/2020 2:33,,1,31,"

In the book 'Deep Reinforcement Learning Hands-On', in the chapter about the Distributional C51 algorithm, I'm reading that, to obtain Q-values from the distribution, I need to calculate the weighted sum of the normalized distribution and the atoms' values.

Why do I have to multiply that distribution by the support? How does it work and what is happening there?

",32540,,,,,1/16/2020 2:33,Why we multiply probabilities with support to obtain Q-values in Distributional C51 algorithm?,,0,0,,,,CC BY-SA 4.0 17525,1,,,1/16/2020 4:57,,2,158,"

This is part of the exercise 2.13 in the book Foundations of Machine Learning (page 28). You can refer to chapter 2 for the notations.

Consider a family of concept classes $\left\{\mathcal{C}_{s}\right\}_{s}$ where $\mathcal{C}_{s}$ is the set of concepts in $\mathcal{C}$ with size at most $s.$ Suppose we have a PAC-learning algorithm $\mathcal{A}$ that can be used for learning any concept class $\mathcal{C}_{s}$ when $s$ is given. Can we convert $\mathcal{A}$ into a PAC-learning algorithm $\mathcal{B}$ that does not require the knowledge of $s ?$ This is the main objective of this problem.

To do this, we first introduce a method for testing a hypothesis $h,$ with high probability. Fix $\epsilon>0, \delta>0,$ and $i \geq 1$ and define the sample size $n$ by $n=\frac{32}{\epsilon}\left[i \log 2+\log \frac{2}{\delta}\right].$ Suppose we draw an i.i.d. sample $S$ of size $n$ according to some unknown distribution $\mathcal{D}.$ We will say that a hypothesis $h$ is accepted if it makes at most $3 / 4 \epsilon$ errors on $S$ and that it is rejected otherwise. Thus, $h$ is accepted iff $\widehat{R}(h) \leq 3 / 4 \epsilon$

(a) Assume that $R(h) \geq \epsilon .$ Use the (multiplicative) Chernoff bound to show that in that case $\mathbb{P}_{S \sim D^{n}}[h \text { is accepted}] \leq \frac{\delta}{2^{i+1}}$

(b) Assume that $R(h) \leq \epsilon / 2 .$ Use the (multiplicative) Chernoff bounds to show that in that case $\mathbb{P}_{S \sim \mathcal{D}^{n}}[h \text { is rejected }] \leq \frac{\delta}{2^{i+1}}$

(c) Algorithm $\mathcal{B}$ is defined as follows: we start with $i=1$ and, at each round $i \geq 1,$ we guess the parameter size $s$ to be $\widetilde{s}=\left\lfloor 2^{(i-1) / \log \frac{2}{\delta}}\right\rfloor .$ We draw a sample $S$ of size $n$ (which depends on $i$ ) to test the hypothesis $h_{i}$ returned by $\mathcal{A}$ when it is trained with a sample of size $S_{\mathcal{A}}(\epsilon / 2,1 / 2, \widetilde{s}),$ that is the sample complexity of $\mathcal{A}$ for a required precision $\epsilon / 2,$ confidence $1 / 2,$ and size $\tilde{s}$ (we ignore the size of the representation of each example here). If $h_{i}$ is accepted, the algorithm stops and returns $h_{i},$ otherwise it proceeds to the next iteration. Show that if at iteration $i,$ the estimate $\widetilde{s}$ is larger than or equal to $s,$ then $\mathbb{P}\left[h_{i} \text { is accepted}\right] \geq 3 / 8$

Questions (a) and (b) are easy to prove, but I have trouble with question (c). More specifically, I don't know how to use the condition that $\widetilde{s} \geq s$. Can anyone help?

",32673,,-1,,6/17/2020 9:57,1/16/2020 11:17,Convert a PAC-learning algorithm into another one which requires no knowledge of the parameter,,0,0,,,,CC BY-SA 4.0 17526,1,,,1/16/2020 5:55,,1,33,"

I’ve been reading about neural network architectures. In certain cases, people say that the sigmoid ""more accurately reflects real-life"" and, in other cases, functions like hard limits reflect ""the brain neural networks more accurately"".

What activation functions are better for what problems?

",30885,,2444,,1/16/2020 11:20,1/16/2020 11:20,What activation functions are better for what problems?,,0,0,,,,CC BY-SA 4.0 17527,2,,17491,1/16/2020 6:47,,0,,"

I think the problem is that you are not defining the weights and biases as parameters. So, when you backpropagate, they are not modified.

These lines should do the trick:

self.weight_mu = Parameter(torch.Tensor(out_features, in_features))
self.weight_sigma = Parameter(torch.Tensor(out_features, in_features))
self.bias_mu = Parameter(torch.Tensor(out_features))
self.bias_sigma = Parameter(torch.Tensor(out_features))

In case you are not familiar, the Parameter class must be imported from torch:

from torch.nn.parameter import Parameter
",30983,,,,,1/16/2020 6:47,,,,0,,,,CC BY-SA 4.0 17528,1,17534,,1/16/2020 8:12,,0,79,"

I'm trying to implement deep q learning in the OpenAI's gym ""Taxi-v3"" environment. But my agent only learns to do one action in every state. What am I doing wrong? Here is the Github repository with the code.

",30431,,2444,,5/20/2020 11:53,5/20/2020 11:53,Why is this deep Q agent constantly learning just one action?,,1,0,,,,CC BY-SA 4.0 17529,1,17541,,1/16/2020 8:23,,2,152,"

I'm a newbie in artificial intelligence.

I have started to research how to do image segmentation and all the papers that I have found are about CNNs. Most of them use the same network, U-net, but with small variations: more or fewer layers, different parameter values, etc.; and with not very different results.

It seems that CNNs are in fashion and everyone uses them. Or there are other reasons that I don't know.

If everyone is getting not very different results, why are they using the same approach instead of trying different ones?

",4920,,2444,,6/13/2020 0:11,6/13/2020 0:23,Why everyone is using CNN for image segmentation?,,1,0,,,,CC BY-SA 4.0 17530,1,17532,,1/16/2020 8:24,,3,313,"

I am new to Reinforcement Learning and am currently reading up on the estimation of $Q_\pi(s, a)$ values using the MC epsilon-soft approach, and I chanced upon this algorithm. The link to the algorithm is from this website:

https://www.analyticsvidhya.com/blog/2018/11/reinforcement-learning-introduction-monte-carlo-learning-openai-gym/

def monte_carlo_e_soft(env, episodes=100, policy=None, epsilon=0.01):

    if not policy:
        policy = create_random_policy(env)
    # Create an empty dictionary to store state action values
    Q = create_state_action_dictionary(env, policy)

    # Empty dictionary for storing rewards for each state-action pair
    returns = {} # 3.

    for _ in range(episodes): # Looping through episodes
        G = 0 # Store cumulative reward in G (initialized at 0)
        episode = run_game(env=env, policy=policy, display=False) # Store state, action and value respectively

        # for loop through reversed indices of episode array.
        # The logic behind it being reversed is that the eventual reward would be at the end.
        # So we have to go back from the last timestep to the first one propagating result from the future.

        # episodes = [[s1,a1,r1], [s2,a2,r2], ... [Sn, an, Rn]]
        for i in reversed(range(0, len(episode))):
            s_t, a_t, r_t = episode[i]
            state_action = (s_t, a_t)
            G += r_t # Increment total reward by reward on current timestep

            # if state - action pair not found in the preceeding episodes,
            # then this is the only time the state appears in this episode.

            if not state_action in [(x[0], x[1]) for x in episode[0:i]]: #
                # if returns dict contains a state action pair from prev episodes,
                # append the curr reward to this dict
                if returns.get(state_action):
                    returns[state_action].append(G)
                else:
                    # create new dictionary entry with reward
                    returns[state_action] = [G]

                # returns is a dictionary that maps (s,a) : [G1,G2, ...]
                # Once reward is found for this state in current episode,
                # average the reward.
                Q[s_t][a_t] = sum(returns[state_action]) / len(returns[state_action]) # Average reward across episodes

                # Finding the action with maximum value.

                Q_list = list(map(lambda x: x[1], Q[s_t].items()))
                indices = [i for i, x in enumerate(Q_list) if x == max(Q_list)]
                max_Q = random.choice(indices)

                A_star = max_Q # 14.

                # Update action probability for s_t in policy
                for a in policy[s_t].items():
                    if a[0] == A_star:
                        policy[s_t][a[0]] = 1 - epsilon + (epsilon / abs(sum(policy[s_t].values())))
                    else:
                        policy[s_t][a[0]] = (epsilon / abs(sum(policy[s_t].values())))

    return policy

This algorithm computes $Q(s, a)$ for all state-action pairs that the policy visits. If $\pi$ is a random policy, and, after running this algorithm, I take $\max_a Q(s,a)$ over all possible actions for each state, why would that not be equal to $Q_{\pi^*}(s, a)$ (the optimal Q function)?

From this website, they claim to have been able to find the optimal policy when running through this algorithm.

I have read up a bit on Q-learning and the update equation is different from MC epsilon-soft. However, I can't seem to understand clearly how these 2 approaches are different.

",32780,,2444,,1/16/2020 11:29,1/19/2020 8:46,"Why Monte Carlo epsilon-soft approach cannot compute $\max Q(s,a)$?",,1,0,,,,CC BY-SA 4.0 17531,1,,,1/16/2020 9:29,,1,57,"

I'm trying to train a network to navigate a 48x48 2D grid and switch pixels from on to off or off to on. The agent receives a small reward if the correct pixel is plotted, and a small punishment if an incorrect pixel is plotted.

I thought that, like the DeepMind ""Playing Atari with Deep Reinforcement Learning"" paper, I could just use the pixel input, fed through 2 convolutional layers, to solve this task. The output of this is fed into a 512-unit fully connected layer.

Unfortunately, it barely trains. When instead using additional vectors as input, containing information about the state of the nearby pixels around the agent, the agent learns the task quite well (yet it often orients itself the wrong way).

Each step, the agent moves up, down, left, or right, and plots a pixel or not. The agent is visualized in the environment as a red square with a white center dot (I also tried a single red pixel). On-pixels within the red square are colored purple.

Is there something I can try to make the agent learn visual input better?

The orange line is the training with only visual observations; the grey one also contained vector observations about the state of the immediately neighboring pixels.

",31180,,,,,1/16/2020 9:29,Reinforcement learning CNN input weakness,,0,4,,,,CC BY-SA 4.0 17532,2,,17530,1/16/2020 9:40,,2,,"

If $\pi$ is a random policy, and after running through this algorithm, and for each state take the $\max Q(s,a)$ for all possible actions, why would that not be equal to $Q_{\pi^*}(s, a)$ (optimal Q function)?

Assuming that the estimates for $Q_{\pi}(s,a)$ have converged to close to correct values from many samples, then a policy based on $\pi'(s) = \text{argmax}_a Q_{\pi}(s,a)$ is not guaranteed to be an optimal policy unless the policy $\pi$ being measured is already the optimal policy.

This is because the action value $Q_{\pi}(s,a)$ gives the expected future reward from taking action $a$ in state $s$, and from that point on following the policy $\pi$. The function does not, by itself, adapt to the idea that you might change other action choices as well. It is a measure of immediate differences between action choices at any given time step. Therefore if there are any long-term dependencies where your action choice at $t$ would be different if only you could guarantee a certain choice at $t+1$ or later, this cannot be resolved by simply taking the maximum $Q_{\pi}(s,a)$ when $\pi$ was a simple random policy.

However, if you do decide to change the policy such that you always follow actions $\pi'(s) = \text{argmax}_a Q_{\pi}(s,a)$ for all states, then you can say this: For each state, $V_{\pi'}(s) \ge V_{\pi}(s)$. I.e. $\pi'(s)$ is no worse than, and may be a strict improvement over $\pi(s)$. Better than that, $\pi'(s)$ will be a strict improvement over $\pi(s)$ if $Q_{\pi}(s,a)$ is accurate and $\pi(s)$ is not already the optimal policy $\pi^*(s)$. This is the basis for the Policy Improvement Theorem, which shows that if you repeat the process of measuring $Q_{\pi^k}$ and then creating a new policy $\pi^{k+1}(s) = \text{argmax}_a Q_{\pi^k}(s,a)$ that you will eventually find the optimal policy. You only have to repeat your idea many times to eventually find $\pi^*$.

The Dynamic Programming technique Policy Iteration does this exactly. All other value-based Reinforcement Learning methods are variations of this idea and rely at least in part on this proof.
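In sketch form (my own pseudocode-like Python, with evaluate_q and greedy_action standing in for a policy-evaluation routine and the argmax step), that loop is:

def policy_iteration(states, initial_policy, evaluate_q, greedy_action):
    policy = dict(initial_policy)
    while True:
        q = evaluate_q(policy)                                  # estimate Q_pi(s, a)
        new_policy = {s: greedy_action(q, s) for s in states}   # pi'(s) = argmax_a Q_pi(s, a)
        if new_policy == policy:                                # stable => optimal policy
            return policy, q
        policy = new_policy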

",1847,,1847,,1/19/2020 8:46,1/19/2020 8:46,,,,5,,,,CC BY-SA 4.0 17533,1,,,1/16/2020 11:01,,2,76,"

This is an inequality on page 36 of the Foundations of Machine Learning by Mohri, but the author only states it without proof. $$ \mathbb{P}\left[\left|R(h)-\widehat{R}_{S}(h)\right|>\epsilon\right] \leq 4 \Pi_{\mathcal{H}}(2 m) \exp \left(-\frac{m \epsilon^{2}}{8}\right) $$

Here the growth function $\Pi_{\mathcal{H}}: \mathbb{N} \rightarrow \mathbb{N}$ for a hypothesis set $\mathcal{H}$ is defined by: $$ \forall m \in \mathbb{N}, \Pi_{\mathcal{H}}(m)=\max _{\left\{x_{1}, \ldots, x_{m}\right\} \subseteq X}\left|\left\{\left(h\left(x_{1}\right), \ldots, h\left(x_{m}\right)\right): h \in \mathcal{H}\right\}\right| $$

Given a hypothesis h $\in \mathcal{H},$ a target concept $c \in \mathcal{C}$ and an underlying distribution $\mathcal{D},$ the generalization error or risk of $h$ is defined by $$ R(h)=\underset{x \sim D}{\mathbb{P}}[h(x) \neq c(x)]=\underset{x \sim D}{\mathbb{E}}\left[1_{h(x) \neq c(x)}\right] $$ where $1_{\omega}$ is the indicator function of the event $\omega$.

And the empirical error or empirical risk of $h$ is defined $$ \widehat{R}_{S}(h)=\frac{1}{m} \sum_{i=1}^{m} 1_{h\left(x_{i}\right) \neq c\left(x_{i}\right)} $$

In the book, the author proves another inequality that differs from this one by only a constant using Rademacher complexity, but he says that the stated inequality can be proved without using Rademacher complexity. Does anyone know how to prove it?

",32673,,27229,,7/19/2021 21:25,7/19/2021 21:25,"How to Prove This Inequality, Related to Generalization Error (Not Using Rademacher Complexity)?",,0,1,0,,,CC BY-SA 4.0 17534,2,,17528,1/16/2020 11:02,,1,,"

I thought about my input layer. I had the 500 states one-hot encoded, so 499 of the 500 input nodes would be 0, and 0 is very bad in a neural network. I tried the same code with ""CartPole-v0"" and it worked.

So think about your input, guys.

",30431,,,,,1/16/2020 11:02,,,,0,,,,CC BY-SA 4.0 17537,1,,,1/16/2020 12:44,,2,538,"

This problem is about two-oracle variant of the PAC model. Assume that positive and negative examples are now drawn from two separate distributions $\mathcal{D}_{+}$ and $\mathcal{D}_{-} .$ For an accuracy $(1-\epsilon),$ the learning algorithm must find a hypothesis $h$ such that: $$ \underset{x \sim \mathcal{D}_{+}}{\mathbb{P}}[h(x)=0] \leq \epsilon \text { and } \underset{x \sim \mathcal{D}_{-}}{\mathbb{P}}[h(x)=1] \leq \epsilon$$

Thus, the hypothesis must have a small error on both distributions. Let $\mathcal{C}$ be any concept class and $\mathcal{H}$ be any hypothesis space. Let $h_{0}$ and $h_{1}$ represent the identically 0 and identically 1 functions, respectively. Prove that $\mathcal{C}$ is efficiently PAC-learnable using $\mathcal{H}$ in the standard (one-oracle) PAC model if and only if it is efficiently PAC-learnable using $\mathcal{H} \cup\left\{h_{0}, h_{1}\right\}$ in this two-oracle PAC model.

However, I wonder if the problem is correct. In the official solution, when showing that the 2-oracle model implies the 1-oracle model, the author returns $h_0$ and $h_1$ when the distribution is too biased towards positive or negative examples. However, the problem requires that we can return $h_0$ and $h_1$ only in the 2-oracle case. Therefore, in this too-biased case, it seems that there may not exist a 'good' hypothesis at all.

Is this problem wrong? Or I make some mistake somewhere?

",32673,,2444,,1/16/2020 13:30,1/16/2020 13:30,A problem about the relation between 1-oracle and 2-oracle PAC model,,0,1,,,,CC BY-SA 4.0 17538,1,30437,,1/16/2020 12:54,,0,653,"

I'm working on an advantage actor-critic (A2C) reinforcement learning model, but when I test the model after training for 3500 episodes, I start to get almost the same action for all testing episodes, whereas if I train the system for fewer than 850 episodes, I get different actions. The value of the state is always different, and around 850 episodes the loss becomes zero.

Here are the actor and critic networks.

        with g.as_default():
            #==============================actor==============================#
            actorstate = tf.placeholder(dtype=tf.float32, shape=n_input, name='state')
            actoraction = tf.placeholder(dtype=tf.int32, name='action')
            actortarget = tf.placeholder(dtype=tf.float32, name='target')

            hidden_layer1 = tf.layers.dense(inputs=tf.expand_dims(actorstate, 0), units=500, activation=tf.nn.relu, kernel_initializer=tf.zeros_initializer())
            hidden_layer2 = tf.layers.dense(inputs=hidden_layer1, units=250, activation=tf.nn.relu, kernel_initializer=tf.zeros_initializer())
            hidden_layer3 = tf.layers.dense(inputs=hidden_layer2, units=120, activation=tf.nn.relu, kernel_initializer=tf.zeros_initializer())
            output_layer = tf.layers.dense(inputs=hidden_layer3, units=n_output, kernel_initializer=tf.zeros_initializer())
            action_probs = tf.squeeze(tf.nn.softmax(output_layer))
            picked_action_prob = tf.gather(action_probs, actoraction)

            actorloss = -tf.log(picked_action_prob) * actortarget
            # actorloss = tf.reduce_mean(tf.losses.huber_loss(picked_action_prob, actortarget, delta=1.0), name='actorloss')

            actoroptimizer1 = tf.train.AdamOptimizer(learning_rate=var.learning_rate)

            if var.opt == 2:
                actoroptimizer1 = tf.train.RMSPropOptimizer(learning_rate=var.learning_rate, momentum=0.95,
                                                            epsilon=0.01)
            elif var.opt == 0:
                actoroptimizer1 = tf.train.GradientDescentOptimizer(learning_rate=var.learning_rate)

            actortrain_op = actoroptimizer1.minimize(actorloss)

            init = tf.global_variables_initializer()
            saver = tf.train.Saver(max_to_keep=var.n)

        p = tf.Graph()
        with p.as_default():
            #==============================critic==============================#
            criticstate = tf.placeholder(dtype=tf.float32, shape=n_input, name='state')
            critictarget = tf.placeholder(dtype=tf.float32, name='target')

            hidden_layer4 = tf.layers.dense(inputs=tf.expand_dims(criticstate, 0), units=500, activation=tf.nn.relu, kernel_initializer=tf.zeros_initializer())
            hidden_layer5 = tf.layers.dense(inputs=hidden_layer4, units=250, activation=tf.nn.relu, kernel_initializer=tf.zeros_initializer())
            hidden_layer6 = tf.layers.dense(inputs=hidden_layer5, units=120, activation=tf.nn.relu, kernel_initializer=tf.zeros_initializer())
            output_layer2 = tf.layers.dense(inputs=hidden_layer6, units=1, kernel_initializer=tf.zeros_initializer())
            value_estimate = tf.squeeze(output_layer2)

            criticloss= tf.reduce_mean(tf.losses.huber_loss(output_layer2, critictarget,delta = 0.5), name='criticloss')
            optimizer2 = tf.train.AdamOptimizer(learning_rate=var.learning_rateMADDPG_c)
            if var.opt == 2:
                optimizer2 = tf.train.RMSPropOptimizer(learning_rate=var.learning_rate_c, momentum=0.95,
                                                            epsilon=0.01)
            elif var.opt == 0:
                optimizer2 = tf.train.GradientDescentOptimizer(learning_rate=var.learning_rateMADDPG_c)

            update_step2 = optimizer2.minimize(criticloss)

            init2 = tf.global_variables_initializer()
            saver2 = tf.train.Saver(max_to_keep=var.n)

 

This is the choice of action.

def take_action(self, state):
                """Take the action"""
                action_probs = self.actor.predict(state)
                action = np.random.choice(np.arange(len(action_probs)), p=action_probs)
                return action

This is the actor.predict function.

def predict(self, s):
        return self._sess.run(self._action_probs, {self._state: s})

Any idea what is causing this?

Update

Changing the learning rate, the state, and the reward solved the problem: I reduced the size of the state and also added a switching cost to the reward.

",21181,,21181,,9/9/2021 7:02,9/9/2021 7:02,Why I got the same action when testing the A2C?,,2,4,,,,CC BY-SA 4.0 17539,1,,,1/16/2020 13:25,,2,146,"

How can I show that the VC dimension of the set of all closed balls in $\mathbb{R}^n$ is at most $n+3$?

For this problem, I only try the case $n=2$ for 1. When $n=2$, consider 4 points $A,B,C,D$ and if one point is inside the triangle formed by the other three, then we cannot find a circle that only excludes this point. If $ABCD$ is convex assume WLOG that $\angle ABC + \angle ADC \geq 180$ then use some geometric argument to show that a circle cannot include $A,C$ and exclude $B,D$.

For the general case, I'm thinking of finding $n+1$ points such that a ball would have to be quite 'large' to include them, and such that this ball cannot exclude the other 2 points. However, in the high-dimensional case, I do not know how to use mathematical language to describe what 'large' means.

Can anyone give some ideas to this question please?

",32673,,2444,,1/16/2020 14:38,1/16/2020 14:38,How can I show that the VC dimension of the set of all closed balls in $\mathbb{R}^n$ is at most $n+3$?,,0,0,,,,CC BY-SA 4.0 17540,1,,,1/16/2020 16:26,,3,1280,"

My understanding of how non-max suppression works is that it suppresses all overlapping boxes that have a Jaccard overlap greater than a threshold (e.g. 0.5). The boxes to be considered must first be above a confidence score threshold (maybe 0.2 or something). So, if there are boxes that have a score over 0.2 (e.g. the score is 0.3 and the overlap is 0.4), the boxes won't be suppressed.

In this way, one object will be predicted by many boxes, one high-score box and many low-confidence boxes, but I found that the model predicts only one box for one object. Can someone enlighten me?

I am currently viewing the SSD implementation from https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Object-Detection

Here is the code.

# Finding Jaccard overlap and sorting scores
class_scores, sort_ind = class_scores.sort(dim=0, descending=True)
class_decoded_locs = class_decoded_locs[sort_ind]  # (n_min_score, 4)
overlap = find_jaccard_overlap(class_decoded_locs, class_decoded_locs)
suppress = torch.zeros((n_above_min_score), dtype=torch.uint8).to(device)

for box in range(class_decoded_locs.size(0)):
    # If this box is already marked for suppression
    if suppress[box] == 1:
        continue
    suppress = torch.max(suppress, overlap[box] > max_overlap)
    suppress[box] = 0
",32794,,2444,,9/12/2020 17:31,9/12/2020 17:31,How does non-max suppression work when one or multiple bounding boxes are predicted for the same object?,,1,1,,,,CC BY-SA 4.0 17541,2,,17529,1/16/2020 17:19,,2,,"

CNNs are used since they are effectively optimized for dealing with image data.

A CNN automatically extracts features from images. Other techniques are more likely not to take full advantage of the data. A CNN is able to make full use of the data by also including information from adjacent pixels and by downsampling through layers.

  • Here is a paper on the performance of CNN on image data
  • Here is a paper comparing CNN's to other methods
",32719,,2444,,6/13/2020 0:23,6/13/2020 0:23,,,,0,,,,CC BY-SA 4.0 17542,2,,17334,1/16/2020 17:41,,0,,"

Random forest's built-in feature importances are not reliable and you should probably avoid them. Instead, you can use permutation_importance: https://scikit-learn.org/stable/auto_examples/inspection/plot_permutation_importance.html#sphx-glr-auto-examples-inspection-plot-permutation-importance-py
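For reference, a minimal usage sketch on a toy dataset (the estimator and dataset here are arbitrary placeholders):

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance is computed on held-out data, which avoids the biases
# of the impurity-based feature_importances_ attribute.
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)
print(result.importances_mean)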

",32797,,,,,1/16/2020 17:41,,,,2,,,,CC BY-SA 4.0 17543,1,17555,,1/16/2020 18:09,,2,160,"

I was looking into the use of a greedy layer-wise pretraining to initialize the weights of my network.

Just for the sake of clarity: I'm referring to the use of gradually deeper and deeper autoencoders to teach the network gradually more abstract representations of the input, one layer at a time.

However, reading HERE, I read:

Nevertheless, it is likely better performance may be achieved using modern methods such as better activation functions, weight initialization, variants of gradient descent, and regularization methods.

and

Today, we now know that greedy layer-wise pretraining is not required to train fully connected deep architectures, but the unsupervised pretraining approach was the first method to succeed.

My question is then: if I'm building a network that already uses ""modern"" techniques, such as ReLU activations, batch normalization, Adam optimizers, etc., is the good-ol' greedy layer-wise pretraining useless? Or can it still provide an edge in the initialization of the network?

",32799,,32799,,1/17/2020 10:33,1/17/2020 15:57,Is greedy layer-wise pretraining obsolete?,,1,0,,,,CC BY-SA 4.0 17544,1,,,1/16/2020 18:11,,2,203,"

I'm having trouble finding starting points for solving an occupancy problem, which seems like a good candidate for AI.

Assume the following situation: in a company, I have n cars and m employees. Not every employee can drive every car (e.g. a special driving license is required). A car can only be used by one employee at a specific point in time.

There is a plan which states which employee must be somewhere at some time (therefore they must use a car, so the car is blocked for that amount of time).

The goal is to find a near optimal occupancy of the cars according to that plan.

This problem is easy to specify, but I'm stumped as to which methods to implement.

As it can be represented by a graph, I think the right way to solve such a problem is using search techniques, but a problem here is that I don't know the goal state (and there is no efficient way to compute it - that's the task I want the AI to do...). Finding the goal state is, in fact, part of the problem.

So my question is: What ai techniques could be used to solve such a problem ?

Edit: Some clarification:

Assume we have two sets - one of employees (E) and one of cars (C). |C| < |E| is most likely true. Each car has an assigned priority which corresponds to the cost of using it (for example, using a Ferrari costs more than using a Dacia; therefore, a Dacia has a higher priority (ex. 1) compared to the Ferrari (ex. 10)). Assume further that having employees who are not using a car in a specific time slice is a bad thing - they cost an individual penalty (you want the employees to be at the customer and sell things, etc.).
The goal is to find the occupancy of employees and cars which has a low total cost.

One example: if you assign an employee to a car in a specific time slice, it may turn out that another employee gets no car within that time slice. This can be because

  • a car is free, but he has no license for it
  • because a car is free, but the cost of using this car would be higher than having the employee staying at the headquarters
  • because no car is free anymore

Of course, it could be better in terms of cost to change the assignment and give a car to the employee who got no car in this solution, thereby having another employee get no car, or not using all cars, or ...

Note: There is no need to find an exact optimal solution (= the lowest total cost of all possible occupations), as this would require checking all possible occupations in the exponential solution space. Instead, finding a more or less good approximation with a near-optimal low total cost is sufficient.

",32800,,32800,,1/20/2020 8:49,1/24/2020 17:13,Solving a planning if finding the goal state is part of the problem,,2,5,,,,CC BY-SA 4.0 17546,2,,9766,1/16/2020 23:00,,1,,"

I guess they are talking about adversarial attacks in the same way that Szegedy et al. did in "Intriguing properties of neural networks"

They described "adversarial attacks" or "adversarial examples" as images with hardly perceptible perturbations that change the network's prediction.

For example, imagine you've trained a CNN to classify between a variety of classes. You take a picture of a dog $X_1$, and your CNN correctly classified it as a "dog", everything is fine so far.

Then you can add some small perturbation $p$ to your image $X_1$, so now you have a new image $X_2 = X_1 + p$. This new image still looks like a dog, because your perturbation was so small that is almost imperceptible.

The problem is that your CNN will classify your picture $X_2$ as something that is not a dog, for example, "fish".

Here, $X_2$ is an adversarial example created after using an adversarial perturbation $p$.

What is interesting about these adversarial perturbations $p$ is that they are not random. Actually, CNNs are very robust to random perturbations (noise), but adversarial perturbations $p$ are not like them. They are computed to fool a classifier (not only CNNs).

You can refer to figure 5 of the aforementioned paper for more examples.

",32360,,2444,,12/10/2021 19:08,12/10/2021 19:08,,,,0,,,,CC BY-SA 4.0 17548,1,,,1/17/2020 8:29,,1,19,"

What is the status of research on regional specialization of artificial neural networks? Biology knows that such specialization exists in the brain and that it is very important for the functioning of the brain. My thinking is that specialization can solve transfer learning/catastrophic forgetting by creating centers of sophisticated skills if such sophistication is necessary. Actually, is there much alternative to specialization? If specialization exists, then there can be small decision centers that route a request to the relevant part, and such decision centers can be efficient. But if specialization does not exist, then routing happens in the total soup/pudding of the neuronal sea, and such all-to-all routing should be very inefficient. Of course, there should be some mixing of specialization vs pudding, because there is always mixing between rationality and emotions, between execution and exploration, but nevertheless specialization should happen, at least partially.

The problem is that I cannot find any focused article about such specialization, and I cannot find how such specialization can be trained. There is research on hierarchical reinforcement learning, but that is about imposing an external fixed structure on a set of neural networks, which is not how nature works - nature implements such hierarchy within the neural network and not by imposing rigorous, symbolic structures.

Are there some notions, terms, keywords, research trends, or important articles (and researchers) devoted to such specialization (including the machine learning of such specialization)?

Of course, my topic is very large, but the actual work on this is small or nonexistent, and that is why it is focused.

There is work on convolutional neural networks, but maybe there is another approach for language processing where the parts can be specialized in parsing, understanding, anaphora resolution, translation, etc.? And is convolution the kind of specialization I am seeking?

Maybe the notion of attention is somehow connected with my question. But usually attention is associated with single neurons and not with regions? Maybe there are notions of a hierarchy of attention - one level of attention values refers to high-level tasks/skills, while another level of attention values refers to subskills, etc.

",8332,,8332,,1/17/2020 8:36,1/17/2020 8:36,Regional specialization in neural networks (especially for language processing)?,,0,0,,,,CC BY-SA 4.0 17549,1,17551,,1/17/2020 9:01,,7,910,"

After reading a lot of articles (for instance, this one - https://developers.google.com/machine-learning/gan/generator), I've been wondering: how does the generator in GANs work?

What is the input to the generator? What is the meaning behind ""input noise""?

As I've read, the only input that the generator receives is a random noise, which is weird.

If I would like to create a similar picture of $x$, and put as an input a matrix of random numbers (noise) - it would take A LOT of training until I would get some sort of picture $x^*$, that is similar to the source picture $x$.

The algorithm should receive some type of reference or a basic dataset (for instance, the set of $x$'s) in order to start the generation of the fake image $x^*$.

",32821,,2444,,1/18/2020 13:21,12/10/2020 9:24,How does the generator in GAN's work?,,1,0,,,,CC BY-SA 4.0 17550,1,,,1/17/2020 9:25,,4,226,"

I have trained a RNN, GRU, and LSTM on the same dataset, and looking at their respective predictions I have observed, that they all display an upper limit on the value they can predict. I have attached a graph for each of the models, which shows the upper limit quite clearly. Each dot is a prediction, and the orange graph is simply there to illustrate the ground truth (i.e. ground truth on both axis).

My dataset is split in 60% for training, 20% for test, and 20% for validation and then each of the splits are shuffled. The split/shuffle is the same for all three models, so each model uses the exact same split/shuffle of data for its predictions too. The models are quite simple (2 layers, nothing fancy going on). I have used grid search to find the most optimal hyperparameters for each model. Each model is fed 20 consecutive inputs (a vector of features, e.g. coordinates, waiting time, etc) and produces a single number as output which is the expected remaining waiting time. I know this setup strongly favours LSTM and GRU over RNN, and the accuracy of the predictions definitively shows this too.

However, my question is why do each model display an upper limit on its predictions? And why does it seem like such a hard limit?

I cannot wrap my head around what the cause of this is, and so I am not able to determine whether it has anything to do with the models used, how they are trained, or if it is related to the data. Any and all help is very much appreciated!


Hyperparameters for the models are:

RNN: 128 units pr layer, batch size of 512, tanh activation function

GRU: 256 units pr layer, batch size of 512, sigmoid activation function

LSTM: 256 units pr layer, batch size of 256, sigmoid activation function

All models have 2 layers with a dropout in between (with probability rate 0.2), use a learning rate of $10^{-5}$, and are trained over 200 epochs with early stopping with a patience of 10. All models use SGD with a momentum of 0.8 , no nesterov and 0.0 decay. Everything is implemented using Tensorflow 2.0 and Python 3.7. I am happy to share the code used for each model if relevant.


EDIT 1 I should point out the graphs are made up of 463.597 individual data points, most of which are placed very near the orange line of each graph. In fact, for each of the three models, of the 463.597 data points, the number of data points within 30 seconds of the orange line is:

RNN: 327.206 data points

LSTM: 346.601 data points

GRU: 336.399 data points

In other words, the upper limit on predictions shown on each graph consists of quite a small number of samples compared to the rest of the graph.

EDIT 2 In response to Sammy's comment I have added a graph showing the distribution of all predictions in 30 second intervals. The y-axis represents the base 10 logarithm of the number of samples which fall into a given 30 second interval (the x-axis). The first interval ([0;29]) consists of approximately 140.000 predicted values, out of the roughly 460.000 total number of predicted values.

",32820,,32820,,1/20/2020 20:09,1/20/2020 20:09,RNN models displays upper limit on predictions,,0,7,,,,CC BY-SA 4.0 17551,2,,17549,1/17/2020 10:47,,3,,"

What's the input to the Generator?

In the basic implementation of GANs, the Generator only takes in a vector of random variables. This might seem strange, but after training, the generator can transform this input noise into an image resembling those of the training set.

How does it work?

It is trained along with its counterpart the Discriminator, whose goal is to distinguish real images (i.e. the dataset's images) from fake ones (i.e. images produced by the Generator). The Generator's goal in training is to fool the Discriminator into thinking that its images are real.

Training process

In the beginning, where they are both untrained, they are both "terrible" at their respective tasks. The Generator can't produce anything resembling an image, but the Discriminator can't distinguish real from fake. As training progresses, the Discriminator starts identifying ways to distinguish the real images from the fake ones (i.e. patterns that appear in real images, but not in fake ones). The Generator, however, in its attempt to fool the Discriminator, starts producing those same patterns in its own images. After a while of both models becoming better at their respective tasks, we reach a point where the Generator can produce realistic images and the Discriminator is very good at distinguishing between real or fake.


Edit as suggested from comment:

A vector of random values is used as an input, so that the Generator can learn to generate unique outputs. In itself the Generator is deterministic, meaning that it has no internal sources of randomness. If we give it the same input vector twice, it will produce the same output both times. Thus, we feed the Generator with random values, so that it can learn to produce different outputs, depending on those values.
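A minimal PyTorch sketch of this idea (the layer sizes are arbitrary; it only illustrates a generator mapping noise vectors to flattened images, not a full GAN):

import torch
import torch.nn as nn

latent_dim, img_pixels = 100, 28 * 28

generator = nn.Sequential(
    nn.Linear(latent_dim, 256),
    nn.ReLU(),
    nn.Linear(256, img_pixels),
    nn.Tanh(),   # outputs in [-1, 1], matching normalised training images
)

z = torch.randn(16, latent_dim)   # the only input: a batch of random noise vectors
fake_images = generator(z)        # shape (16, 784); deterministic given z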

",26652,,26652,,12/10/2020 9:24,12/10/2020 9:24,,,,6,,,,CC BY-SA 4.0 17553,1,17556,,1/17/2020 12:38,,4,305,"

The selection of experimental data includes a set of vectors of different dimensions. The input is a 3-dimensional vector, and the output is a 12-dimensional vector. The sample size is 120 pairs of input 3-dimensional and output 12-dimensional vectors.

Is it possible to train such a neural network (in MATLAB)? Which structure of the neural network is best suited for this?

",32829,,2444,,1/18/2020 23:21,1/24/2020 5:09,Is it possible to train a neural network with 3 inputs and 12 outputs?,,2,4,,,,CC BY-SA 4.0 17555,2,,17543,1/17/2020 15:57,,2,,"

It depends. It could give you a boost or it could not.

Intuitively I would expect it to actually hurt performance if the network is initialized correctly (I think the optimizer is less of a bottleneck because they will have the same effect in both approaches).

Ideal World: We optimize the network as a whole to gain better coarse-grained features over the sequential layers of the encoder.

Reality years ago: Deep nets have trouble in information propagation either forward or backwards (ex: vanishing/exploding activation and vanishing/exploding gradient).

  • solution: Break up the training into iterative schemes that don't require backward information to propagate far at each optimization step
    • Cons: Each step is looking for a greedy solution, rather than a deep one.

Reality today: Depending on the domain, there are various publications discussing how to circumvent this issue (ex: Residual networks, He initialization, fixup initialization, batchnorms, different activation functions, etc...)

  • solution: We can train deep AEs without sequential layerwise training and achieve usually better results, because the model and optimization scheme allow for deep representations to form.

I hope this helped give some form of intuition of the matter.

",25496,,,,,1/17/2020 15:57,,,,0,,,,CC BY-SA 4.0 17556,2,,17553,1/17/2020 16:12,,4,,"

There is nothing stopping you: you can set up dense neural networks to have inputs and outputs of any size (a simple proof is to note that a single-layer NN with no activation is just a linear transform; given input dimension $n$ and output dimension $m$, it's just an $n \times m$ matrix, and this trivially extends to any number of hidden layers).
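For example, a minimal sketch in PyTorch (the hidden size is arbitrary; the question mentions MATLAB, but the idea carries over directly):

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(3, 32),    # 3-dimensional input
    nn.ReLU(),
    nn.Linear(32, 12),   # 12-dimensional output
)

x = torch.randn(120, 3)   # 120 input vectors
y_hat = model(x)          # shape (120, 12)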

The better question is: should you? In all honesty, it depends on the data that you have, but, usually, with only 120 examples you'll either overfit completely or do relatively well if the true solution is a very simple function. In general, in the common situations where that isn't the case, I find myself more likely than not using Bayesian approaches, so I can actually consider confidence (with little data, this is really nice).

",25496,,2444,,1/18/2020 23:25,1/18/2020 23:25,,,,3,,,,CC BY-SA 4.0 17558,1,,,1/17/2020 21:00,,1,85,"

Let us assume we have a general AI that can improve itself and is at least as intelligent as humans.

It has wide access to technical systems including the internet, and it can communicate with humans.

The AI could become malicious.

Can we just switch off a rogue AI?

",2317,,,,,1/17/2020 21:27,Can we just switch off a malicious artificial intelligence?,,2,0,,,,CC BY-SA 4.0 17559,2,,17558,1/17/2020 21:00,,0,,"

No.

With a sufficiently advanced general AI, we cannot generally assume that we can switch it off when it becomes dangerous.

It seems that the electrical energy supply can always be switched off.

While that is true on the physical level, it is not guaranteed to work in practice.
The AI could cooperate with humans, which protect the AI from deactivation, or reactivate it.

The cooperation could, for example, take one of the following forms:

  • The AI prepares a relation with humans offering a bribe for help.
    • An operator gets a reward for keeping the AI running, or reactivating it.
  • The AI prepares an extortion scheme targeting humans
    • For example by organizing a process that needs to be actively kept running by the AI to avoid the death of a child of an operator.
  • The AI maintains contact with a criminal organisation with a strong interest in keeping it running and weak ethical inhibitions.
    • The cooperation could be with traditional organized crime: by providing money laundering and other vital services to a mafia organisation, the AI could rely on the mafia's widespread influence to keep it running, even when there is strong public interest in shutting it down.

Note that these examples can protect against physical attacks by law enforcement.

",2317,,,,,1/17/2020 21:00,,,,5,,,,CC BY-SA 4.0 17561,2,,17558,1/17/2020 21:27,,1,,"

Malware viruses are a very simple form of AI. It is not difficult to conceive of a form of malware that A) can't be detected easily, B) is redundantly distributed across thousands of computers that occasionally connect to the internet, C) is capable of detecting some kinds of threats to itself and mutating to avoid the threats.

So, simply ""turning off"" a malicious AI will not always be possible.

",28348,,,,,1/17/2020 21:27,,,,0,,,,CC BY-SA 4.0 17562,1,,,1/17/2020 22:18,,2,47,"

Do you think psychological defense mechanisms, for example, repression, regression, reaction formation, isolation, undoing, projection, introjection, sublimation, etc., could be created by artificial intelligence systems? Can AI also be used to better understand the psychological defense system?

If yes, what tools do we need? Maybe supervised learning algorithms, such as PSO or ANN, are better suited for these levels?

This doesn't seem to be that easy, and I think it requires a more general understanding of these algorithms, so I am asking here.

On the other hand, what do you think is an appropriate or available workspace for this job?

For example, I think the interactions between robots that use different levels of the defense system (selected by us), played as rounds of a game-theoretic game, could be a good fit: they would mirror the psychological connections of this part of us and serve as a testing environment.

But the robots themselves are dealing with a real problem, which makes it even more difficult. So, in this approach, by selecting a more nonlinear problem to solve, we could count how often the different levels of these defense mechanisms are used (for example, in a simple problem, 30% 1st level, 20% 2nd level, and so on).

",33936,,2444,,1/17/2020 23:51,1/17/2020 23:51,Understanding the nature of psychological defense system by artificial intelligence,,0,0,,,,CC BY-SA 4.0 17565,1,17570,,1/18/2020 13:21,,1,88,"

I am reading about backpropagation and I wonder why I have to backpropagate.

For example, I would update the network by randomly choosing a weight to change, $w$. I would have $X$ and $y$. Then, I would choose $dw$, a random number from $-0.1$ to $0.1$, for example. Then, I would do two predictions of the neural network and get their losses with the original neural network and one with $w$ changed by $dw$ to get the respective losses $L_{\text{original}}$ and $L_{\text{updated}}$. $L_{\text{updated}} - L_{\text{original}}$ is $dL$. I would update $w$ by $\gamma \frac{d L}{dw}$, where $\gamma$ is the learning rate and $L$ is the loss.

This does not need a gradient backpropagation throughout the system, and must have somehow a disadvantage because no one uses it. What is this disadvantage?

",17423,,2444,,1/18/2020 13:58,1/18/2020 18:11,How is back-propagation useful in neural networks?,,1,3,,,,CC BY-SA 4.0 17566,1,,,1/18/2020 13:41,,10,4484,"

I was wondering if it's possible to get the inverse of a neural network. If we view a NN as a function, can we obtain its inverse?

I tried to build a simple MNIST architecture, with an input of shape (784,) and an output of shape (10,), train it to reach good accuracy, and then invert the predicted value to try to get back the input - but the results were nowhere near what I started with. (I used the pseudo-inverse for the W matrix.)

My NN is basically the following function:

$$ f(x) = \theta(xW + b), \;\;\;\;\; \theta(z) = \frac{1}{1+e^{-z}} $$

I.e.

import numpy as np
import matplotlib.pyplot as plt

def rev_sigmoid(y):
    # Inverse of the sigmoid (the logit function)
    return np.log(y/(1-y))

def rev_linear(z, W, b):
    # "Undo" the affine transform using the pseudo-inverse of W
    return (z - b) @ np.linalg.pinv(W)

W, b = model.get_weights()  # weights and biases of the single dense layer
y = model.predict(x_train[0:1])
z = rev_sigmoid(y)
x = rev_linear(z, W, b)
x = x.reshape(28, 28)
plt.imshow(x)

^ This should have been a 5:

Is there a reason why it failed? And is it ever possible to get inverse of NN's?

EDIT: it is also worth noting that doing the opposite does yield good results. I.e. starting with the y's (a one-hot encoding of the digits) and using them to predict the image (an array of 784 bytes) with the same architecture: input (10,) and output (784,) with a sigmoid. This is not exactly equivalent to an inverse, as here you first do the linear transformation and then the non-linear one, while in an inverse you would first undo the non-linear part and then undo the linear one. I.e. the claim that the 784x10 matrix is collapsing too much information seems a bit odd to me, as there does exist a 10x784 matrix that can reproduce enough of that information.

",27947,,27947,,1/23/2020 12:32,2/3/2022 10:34,Can we get the inverse of the function that a neural network represents?,,3,9,,,,CC BY-SA 4.0 17568,1,17572,,1/18/2020 17:35,,3,151,"

What are the differences between TensorFlow and PyTorch, both in terms of performance and functionality?

",32763,,2444,,12/21/2021 15:59,12/21/2021 15:59,What are the differences between TensorFlow and PyTorch?,,1,0,,5/18/2020 12:33,,CC BY-SA 4.0 17570,2,,17565,1/18/2020 18:11,,3,,"

The method you propose is already known; it's basically a numerical approximation to the gradient. It is not used to train neural networks because it's, well... an approximation. You still need to do two forward passes to get an approximation, which introduces noise and might make the training process fail.

Using backpropagation to compute the gradient is an exact solution, so why would you use an approximation if the exact computation is equally efficient?

Numeric approximations of the gradient only make sense if exact computation is not possible.
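
To make the comparison concrete, here is a small sketch (plain NumPy, with a toy one-parameter loss chosen only for illustration) of the finite-difference estimate described in the question next to the exact derivative:

import numpy as np

# Toy data and a 1-parameter linear model, so the exact gradient is easy to write down.
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])

def loss(w):
    return np.mean((w * x - y) ** 2)   # mean squared error

w = 0.5
dw = 0.1   # the perturbation size from the question

# Finite-difference approximation: needs two forward passes per perturbed weight.
approx_grad = (loss(w + dw) - loss(w)) / dw

# Exact gradient, i.e. what backpropagation computes analytically.
exact_grad = np.mean(2 * (w * x - y) * x)

print(approx_grad, exact_grad)   # close, but the approximation is biased/noisy for finite dw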

",31632,,,,,1/18/2020 18:11,,,,2,,,,CC BY-SA 4.0 17571,1,,,1/18/2020 18:41,,2,39,"

This post continues the topic in the following post: Is it possible to train a neural network with 3 inputs and 12 outputs?.

I conducted several experiments in MATLAB and selected those neural networks that best approximate the data.

Here is a list of them:

  • Cascade-forward backpropagation

  • Elman backpropagation

  • Generalized regression

  • Radial basis (exact fit)

I did not notice a fundamental difference in quality, except for Elman's backpropagation, which had a higher error than the rest.

How to justify the choice of the structure of the neural network in this case?

",32829,,2444,,1/18/2020 23:20,1/18/2020 23:20,How do I determine the best neural network architecture for a problem with 3 inputs and 12 outputs?,,0,0,,,,CC BY-SA 4.0 17572,2,,17568,1/18/2020 19:45,,3,,"

TensorFlow was developed by Google (its static computation-graph style was popularized by the Theano Python library), while PyTorch was developed by Facebook on top of the Torch library. Both frameworks are useful and have a great community behind them. Both provide machine learning libraries to accomplish various tasks and get the job done. TensorFlow is a powerful deep learning tool with active visualization and debugging capabilities. TensorFlow also offers serialization benefits, since the entire computation graph can be saved as a protocol buffer. It also has support for mobile platforms and offers production-ready deployment. PyTorch, on the other hand, is still gaining momentum and attracting Python developers, since it is more Python-friendly. In summary, TensorFlow is often used to speed things up and build AI-related products, while research-oriented developers tend to prefer PyTorch.

",32861,,,,,1/18/2020 19:45,,,,0,,,,CC BY-SA 4.0 17575,2,,17566,1/18/2020 23:41,,3,,"

Mathematical Exploration

Let $\Theta^+$ be the pseudo-inverse of $\Theta$.

Recall that if a vector $\boldsymbol v \in R(\Theta)$ (i.e. in the row space), then $\boldsymbol v = \Theta^+\Theta\boldsymbol v$. That is, as long as we select a vector that is in the row space of $\Theta$, we can reconstruct it with full fidelity using the pseudo-inverse. Thus, if any of the images happen to be linear combinations of the rows of $\Theta$, then we can reconstruct them.

To be more specific: let $f(\boldsymbol x)$ have a pseudo-inverse $f^+(\boldsymbol x)$ defined as you have. If we restrict our domain such that $\boldsymbol x \in C(\Theta^T)$ (the column space of the transpose), then $f^+= f^{-1}_{res}$.

That is, under our domain restriction the pseudo inverse becomes a true inverse.
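
A quick numerical illustration of this point (a sketch with a random matrix, not trained MNIST weights): a vector built as a combination of the rows of $\Theta$ is recovered exactly by $\Theta^+\Theta$, while an arbitrary vector generally is not.

import numpy as np

np.random.seed(0)
Theta = np.random.randn(10, 784)                 # wide matrix, 10 rows of length 784
Theta_pinv = np.linalg.pinv(Theta)

v_in_rowspace = Theta.T @ np.random.randn(10)    # a linear combination of the rows
v_arbitrary = np.random.randn(784)               # generic vector, not in the row space

print(np.allclose(Theta_pinv @ (Theta @ v_in_rowspace), v_in_rowspace))  # True
print(np.allclose(Theta_pinv @ (Theta @ v_arbitrary), v_arbitrary))      # False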

An Extrapolation

It would then seem that, as long as we are under such domain restrictions, we could define a pseudo-inverse for a general NN. Though it might be possible that some NNs don't have any restriction that admits an inverse. Maybe there is some way to regularize the parameters such that this is possible. NNs with ReLU wouldn't admit such an inverse, since ReLU loses information on negative values. Leaky ReLU might work.

Further Investigation

Finally, this presents a zone for further study. Some questions to answer might be:

  • Is it possible for optimized parameters to contain non-trivial examples in their row-space?
  • If so, under what conditions is this possible?
  • Are the examples in any way represented in the row space?
  • Is there some way to regularize a NN such that it admits an inverse over some desired restriction?
  • Under what conditions is invertibility useful?
",28343,,28343,,1/19/2020 0:43,1/19/2020 0:43,,,,2,,,,CC BY-SA 4.0 17576,1,,,1/19/2020 0:25,,3,476,"

I have a lot of training data points (i.e. in the millions) and only a few features, but the issue is that all the features are categorical, with 1 million+ categories in each.

So I couldn't use one-hot encoding, because it's not efficient, and I went with the other option, which is embeddings of fixed length. I've just used neural nets to compute the embeddings.

My question is: can we use advanced NLP models like BERT to extract embeddings for categorical data from my corpus? Is it possible? I'm asking because I've only heard that BERT is good for sentence embeddings.

Thank you.

",32867,,,,,1/19/2020 0:25,Can Bert be used to extract embedding for large categorical features?,,0,5,,,,CC BY-SA 4.0 17577,1,17578,,1/19/2020 0:56,,4,3288,"

In my code, I usually use the mean squared error (MSE), but the TensorFlow tutorials always use the categorical cross-entropy (CCE). Is the CCE loss function better than MSE? Or is it better only in certain cases?

",2844,,2444,,1/20/2020 18:31,3/11/2021 8:40,In which cases is the categorical cross-entropy better than the mean squared error?,,3,0,,,,CC BY-SA 4.0 17578,2,,17577,1/19/2020 1:45,,7,,"

As a rule of thumb, mean squared error (MSE) is more appropriate for regression problems, that is, problems where the output is a numerical value (i.e. a floating-point number or, in general, a real number). However, in principle, you can use the MSE for classification problems too (even though that may not be a good idea). MSE can be preceded by the sigmoid function, which outputs a number $p \in [0, 1]$, which can be interpreted as the probability of the input belonging to one of the classes, so the probability of the input belonging to the other class is $1 - p$.

Similarly, cross-entropy (CE) is mainly used for classification problems, that is, problems where the output can belong to one of a discrete set of classes. The CE loss function is usually separately implemented for binary and multi-class classification problems. In the first case, it is called the binary cross-entropy (BCE), and, in the second case, it is called categorical cross-entropy (CCE). The CE requires its inputs to be distributions, so the CCE is usually preceded by a softmax function (so that the resulting vector represents a probability distribution), while the BCE is usually preceded by a sigmoid.
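
As a rough illustration of these pairings (a Keras sketch; the layer sizes and the rest of the models are placeholders), the usual combinations of output activation and loss look like this:

from tensorflow import keras

n_features, n_classes = 20, 5

# Regression: linear output + MSE.
reg = keras.Sequential([keras.layers.Dense(1, input_shape=(n_features,))])
reg.compile(optimizer='adam', loss='mse')

# Binary classification: sigmoid output + binary cross-entropy (BCE).
bin_clf = keras.Sequential([keras.layers.Dense(1, activation='sigmoid', input_shape=(n_features,))])
bin_clf.compile(optimizer='adam', loss='binary_crossentropy')

# Multi-class classification: softmax output + categorical cross-entropy (CCE).
multi_clf = keras.Sequential([keras.layers.Dense(n_classes, activation='softmax', input_shape=(n_features,))])
multi_clf.compile(optimizer='adam', loss='categorical_crossentropy')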

See also Why is mean squared error the cross-entropy between the empirical distribution and a Gaussian model? for more details about the relationship between the MSE and the cross-entropy. In case you use TensorFlow (TF) or Keras, see also How to choose cross-entropy loss in TensorFlow?, which gives you some guidelines for how to choose the appropriate TF implementation of the cross-entropy function for your (classification) problem. See also Should I use a categorical cross-entropy or binary cross-entropy loss for binary predictions? and Does the cross-entropy cost make sense in the context of regression?.

",2444,,2444,,1/19/2020 3:00,1/19/2020 3:00,,,,1,,,,CC BY-SA 4.0 17579,2,,17506,1/19/2020 10:58,,0,,"

The idea behind it

I'll make an analogy: while classical GAs look like how humanity has reproduced until now, ""intelligent crossover"" looks more like designer babies. You would first need to identify which genes are responsible for certain behaviors, and then you can pass them on to the new generation.

With this said, you will understand why there aren't many such algorithms around: it is a very case-tailored approach. Each problem possibly requires a different method, and it will certainly increase the complexity of the crossover algorithm considerably.


How to make your own

If you want to create one of these algorithms, you might want to add a step to your GA cycle: Evaluation, Identification, Selection, Crossover, Mutation, Replacement.

On the Identification step, you can run a probabilistic search on your population to discover which genes might be responsible for higher fitness results. Then you can:

  • proceed normally: Select two parents based on their fitness and perform crossover giving more importance to the ""good"" genes. This is the ""safest"" way.

  • fuse Selection and Crossover: use the data retrieved during the Identification step to create good individuals directly from the population pool. This system is dangerous, as it might reduce the population diversity at alarming rates, leading to stagnation. This is how fitness-based crossovers are generally built [paper]
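
As a rough sketch of the first ('safest') option above (binary genomes and a made-up per-gene score table, purely for illustration), the crossover can pick each gene from the parent whose allele looks more promising according to the Identification step, while keeping some randomness:

import random

def biased_crossover(parent_a, parent_b, gene_scores, bias=0.8):
    # Pick each gene from the parent whose allele has the higher estimated score
    # (scores gathered during the Identification step); otherwise pick at random.
    child = []
    for i, (a, b) in enumerate(zip(parent_a, parent_b)):
        if random.random() < bias:
            child.append(a if gene_scores[i][a] >= gene_scores[i][b] else b)
        else:
            child.append(random.choice((a, b)))
    return child

# Toy example: 5 binary genes with made-up per-allele scores.
scores = [{0: 0.2, 1: 0.9}, {0: 0.7, 1: 0.1}, {0: 0.5, 1: 0.5},
          {0: 0.3, 1: 0.8}, {0: 0.6, 1: 0.4}]
print(biased_crossover([0, 1, 0, 1, 1], [1, 0, 1, 0, 0], scores))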

",15530,,,,,1/19/2020 10:58,,,,0,,,,CC BY-SA 4.0 17580,1,,,1/19/2020 11:40,,1,50,"

How is clustering used in the unsupervised training of a neural network? Can you provide an example?

",32798,,2444,,1/19/2020 13:46,1/19/2020 13:46,How is clustering used in the unsupervised training of a neural network?,,0,8,,,,CC BY-SA 4.0 17581,2,,14215,1/19/2020 13:47,,0,,"

I found a solution to this some time back. I have been studying function approximation (within linear regression) for some time. Here's how I did it:

Neural Networks have been proved to be universal function approximators. So, even a single hidden layer would be sufficient to approximate a function as simple as addition (even somewhat complex functions, like the sine and any random CONTINUOUS wiggly function, have been approximated)

First, I used a high level API like TensorFlow and Keras and implemented it here

The model was trained on the data (input-output pairs)

import numpy as np

R    = np.array([-4, -10,  -2,  8, 5, 22,  3],  dtype=float)
B    = np.array([4, -10,  0,  0, 15, 5,  1],  dtype=float)
G    = np.array([0, 10,  5,  8, 1, 2,  38],  dtype=float)

Y    = np.array([0, -10, 3, 16, 21, 29, 42],  dtype=float)

# Stack the three input variables into a (7, 3) feature matrix used below
RBG  = np.column_stack((R, B, G))

And trained as follows:

import tensorflow as tf

#Create a hidden layer with 2 neurons
hidden = tf.keras.layers.Dense(units=2, input_shape=[3])

#Create the output (final) layer which symbolises value of **Y**
output = tf.keras.layers.Dense(units=1)

#Combine layers to form the neural network and compile it
model = tf.keras.Sequential([hidden, output])
model.compile(loss='mean_squared_error', optimizer=tf.keras.optimizers.Adam(0.1))
history = model.fit(RBG, Y, epochs=500, verbose=False)

The model converges in about 50 epochs

Also, I have implemented the same using only C/C++ and used GNU plot to visualize the results.

",26452,,,,,1/19/2020 13:47,,,,0,,,,CC BY-SA 4.0 17582,1,,,1/19/2020 17:04,,2,30,"

I have seen the teef glove of Navid Azodi and Thomas Pryor, like this:

and I have also seen this post, which discusses the problems with this kind of work:

Their six-page letter, which Padden passed along to the dean, points out how the SignAloud gloves—and all the sign-language translation gloves invented so far—misconstrue the nature of ASL (and other sign languages) by focusing on what the hands do. Key parts of the grammar of ASL include “raised or lowered eyebrows, a shift in the orientation of the signer’s torso, or a movement of the mouth,” reads the letter. “Even perfectly functioning gloves would not have access to facial expressions.” ASL consists of thousands of signs presented in  sophisticated ways that have, so far, confounded reliable machine recognition. One challenge for machines is the complexity of ASL and other sign languages. Signs don’t appear like clearly delineated beads on a string; they bleed into one another in a process that linguists call “coarticulation” (where, for instance, a hand shape in one sign anticipates the shape or location of the following sign; this happens in words in spoken languages, too, where sounds can take on characteristics of adjacent ones). Another problem is the lack of large data sets of people signing that can be used to train machine-learning algorithms.

So I would like to know which suitable AI modules you know of for improving Navid's work, by adding, as a first step, geometric position analysis to it. I would like to use popular AI building blocks like TensorFlow for this kind of analysis, so that it can run quickly online and rely on modules that are kept up to date by a large community of users.

Update:

I think some virtual-reality analyzers for position analysis must already exist, so which ones are popular, free to contribute to, and backed by a large community?

Thanks for your attention.

",33936,,,,,1/19/2020 17:04,Searching for powerfull AI modules to improve teef gloves,,0,0,,,,CC BY-SA 4.0 17584,1,17590,,1/20/2020 3:20,,4,2321,"

I am learning to use tensorflow.js. I am also using the tfvis library to print information about the neural net to the web browser. When I create a dense neural net with a layer with 5 neurons and another layer with 2 neurons, each layer has a bias vector of length 5 and 2, respectively. I checked the docs (https://js.tensorflow.org/api/0.6.1/#layers.dense), and it says that there is indeed a bias vector for each dense layer. Isn't a vector redundant? Doesn't each layer only need a single number for the bias? See the code below:

//Create tensorflow neural net
this.model = tf.sequential();

this.model.add(tf.layers.dense({units: 5, inputShape: [1]}))
this.model.add(tf.layers.dense({units: 2}))

const surface = { name: 'Layer Summary', tab: 'Model Inspection'};
tfvis.show.layer(surface, this.model.getLayer(undefined, 0))
",32887,,2444,,1/20/2020 13:55,1/20/2020 18:17,Why does the bias need to be a vector in a neural network?,,2,0,,,,CC BY-SA 4.0 17585,1,,,1/20/2020 4:27,,2,44,"

I am reading a research paper on the formulation of MDP problems to ICU treatment decision making: Treatment Recommendation in Critical Care: A Scalable and Interpretable Approach in Partially Observable Health States. The paper applies a Monte Carlo approach to approximate the value function. Below is a screenshot of the excerpt that I came across.

The last sentence of the excerpt reads ""The approach is scalable for growing number of states variables and action variables"".

What does it mean when the author says that the Monte Carlo approach is scalable for a growing number of states variables and action variables? Wouldn't the amount of data needed to approximate the value function increase with the higher dimensionality of states? Or does the Monte Carlo approach scale better in time complexity as compared to traditional Q-learning methods?

",32780,,2444,,1/20/2020 18:34,1/20/2020 18:34,Why is this Monte Carlo approach scalable for a growing number of states variables and action variables?,,0,0,,,,CC BY-SA 4.0 17588,1,17708,,1/20/2020 14:27,,2,202,"

I'm interested in starting a project that will identify the face of a Lego mini figure from a digital photo. My goal is to eventually map the expression of a person's face to the Lego mini figure.

I don't have any experience working with image recognition technology (my technical experience is mainly in web technology), and I am looking for recommended platforms or resources that I could get started with.

Most helpful would be recommendations for image recognition technologies (Python would be great!) that I could start to experiment with.

NOTE: I'm aware of SparkAR as a library designed to for Instagram camera effects specifically, and even though I'm not interested in Instagram, I wonder if there are comparable libraries/studios/products for working with image recognition development.

",32902,,32902,,1/29/2020 22:38,1/29/2020 22:38,Lego minifigure facial recognition: where to start?,,1,2,,10/10/2021 14:40,,CC BY-SA 4.0 17590,2,,17584,1/20/2020 15:27,,4,,"

In a simple feed-forward network, each artificial neuron has a separate bias value. This allows for greater flexibility for the output layer function than if each neuron had to use a single whole-layer bias. Although not an absolute requirement, without this arrangement it may become very hard to approximate some functions. Moving from a bias vector to a single scalar bias value per layer will most of the time reduce the effectiveness of a neural network through lost flexibility in how it fits to the target function.

Once you have $N$ output neurons in a layer leading to needing $N$ values for bias, then it is fairly straightforward to model this collection of bias values as a vector.

Often you will see a neural network layer function written in this form or similar:

$$\mathbf{y} = f(\mathbf{W}\mathbf{x} + \mathbf{b})$$

Where $f()$ is the activation function (applied element-wise), $\mathbf{W}$ the weights matrix for the layer and $\mathbf{b}$ is the bias. When written in this form, it is easy to see that $\mathbf{y}$, $\mathbf{W}\mathbf{x}$ and $\mathbf{b}$ must all be vectors of the same size.
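
A tiny NumPy sketch of that layer function (sizes chosen to match the example in the question: 1 input and 5 output neurons, with tanh standing in for $f$) makes the shape requirement explicit:

import numpy as np

x = np.array([0.7])           # 1 input feature
W = np.random.randn(5, 1)     # 5 output neurons, 1 input each
b = np.random.randn(5)        # one bias value per output neuron

y = np.tanh(W @ x + b)        # y, Wx and b all have length 5
print(y.shape)                # (5,)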

This layer design has become so standard that it is possible to forget that other designs and implementations are possible for neural network parameters, and can sometimes be useful. Frameworks like TensorFlow also make it easier to take the standard approach, which is why you need a vector for bias on the example you are using. Whilst you are learning, and probably 99% of the time after that, it will be best to go with what the framework is doing here.

",1847,,,,,1/20/2020 15:27,,,,1,,,,CC BY-SA 4.0 17591,1,,,1/20/2020 16:04,,1,52,"

I am reading the R-CNN paper by Ross Girshick1 et al. (link) and I fail to understand how they do the inference. This is described in the section 2.2.Test-time Detection in the paper. I quote:

At test time, we run selective search on the test image to extract around 2000 region proposals (we use selective search’s “fast mode” in all experiments). We warp each proposal and forward propagate it through the CNN in order to read off features from the desired layer. Then, for each class, we score each extracted feature vector using the SVM trained for that class.

I do not understand how a Support Vector Machine (SVM) can score a feature vector since SVM does not tell you class probability, it only tells you if an object belongs to a class or not. How is this possible?

It seems that the detection flow is: get the image, run it through the CNN to get a feature vector, score this feature vector, and run Non-Maximum Suppression (NMS). But for running NMS we need the feature vector scored, and again, SVMs do not score predictions, right?

Actually, when represented in the same paper, the SVM does not provide a score as you can see in the next image (taken from the same paper).

So, how does this make sense?

",26882,,,,,1/20/2020 16:04,Scoring feature vector with Support Vector Machine,,0,0,,,,CC BY-SA 4.0 17592,2,,17584,1/20/2020 18:11,,2,,"

To emphasize (and this is not emphasized in this answer), in the case of neural networks, the biases or, more precisely, the connections (or weights) between biases and other neurons are also learnable parameters, so the back-propagation algorithm calculates a gradient of the loss function that contains the partial derivatives with respect to the connections between the biases and other neurons too and, in the gradient descent step, these connections can also be updated.

Each neuron usually has its own bias. For example, in Keras, this is the case, as you can easily verify. However, in principle, you could also have a layer with a single scalar bias that is shared across all neurons of that layer, but this would probably have a different effect. The role of the bias is discussed in several places on the web. For example, in this Stack Overflow post or in this Stats SE post.
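
For instance, a quick way to verify this in Keras (a sketch mirroring the layer sizes from the question) is to inspect the shapes of the trainable weights:

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(5, input_shape=(1,)),
    keras.layers.Dense(2),
])

for layer in model.layers:
    kernel, bias = layer.get_weights()
    print(kernel.shape, bias.shape)   # (1, 5) (5,) then (5, 2) (2,)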

",2444,,2444,,1/20/2020 18:17,1/20/2020 18:17,,,,0,,,,CC BY-SA 4.0 17593,2,,16915,1/20/2020 20:37,,0,,"

Setting aside the dubious nature of subliminal messages, you have to clearly define what subliminal means. In all the cases you presented, a manual algorithmic approach would be better. An ANN would tend to average out the subliminal messages as noise, just as your brain would. In the cases of #2 and #3, it's a matter of contrast, in which case computer vision methods for bringing low-contrast details to the forefront, combined with a manual algorithm (perhaps an OCR pass for content not expected to have text), would be the way to go. ANNs are not magic bullets, and they still need to be used appropriately.

",32908,,,,,1/20/2020 20:37,,,,14,,,,CC BY-SA 4.0 17596,1,,,1/21/2020 9:08,,5,833,"

I am working on a problem in which I need to train a neural network to map one or more input images to one or more output images (1 channel per image). Below I report some examples of input & output. In this case, I report 1 input and 1 output image, but I may need to move to more inputs and outputs, maybe by encoding this in channels. However, the images are all of this kind, maybe rotated, translated or changed a bit in shape. (FYI, they are fields defined by fluid dynamics simulations.)

I was thinking about CNNs, but the standard architecture used for image classification (convolutional layers + fully connected layers) seems not to be the best choice. Instead, I tried using the U-Net architecture, composed of compression + decompression convolutional layers. This works quite well, but maybe there is some other architecture that could be more suited to my problem.

Any suggestion would be appreciated!

",32915,,2444,,1/25/2020 15:28,11/30/2022 10:45,Which deep learning models are suitable for image-to-image mapping?,,1,2,,,,CC BY-SA 4.0 17598,2,,7940,1/21/2020 13:36,,2,,"

Assume you are the snake.

In front of you is empty. Left of you is empty. Right of you is empty. The distance to the apple is 4. The apple straight in front of you. Your length is 20.

Can you make a good decision with this input? In which direction would you go to achieve maximum score?

From the given input, you could go straight forward to the apple. But that might be a failure and lead to death.

IMHO, the input state is simply not enough to make a good decision, because

a) the snake doesn't even know in which direction it's currently moving.

b) the snake has no idea about where its body is

The situation could look like this:

The only way for the snake to move out of this trap is as indicated by the arrow, so that the tail frees the way out just in time. Your neural network does not have the necessary input to make that decision.

",31627,,,,,1/21/2020 13:36,,,,0,,,,CC BY-SA 4.0 17599,1,17673,,1/21/2020 15:46,,2,444,"

I wonder, if there are other than NEAT approaches to evolving architectures and weights of artificial neural networks?

To be more specific: I am looking for projects/frameworks/libraries that use evolutionary/genetic algorithms to simultaneously evolve both the topology and the weights of ANNs, other than the NEAT approach. By 'other' I mean similar to NEAT but not based entirely on NEAT. I hope to find different approaches to the same problem.

",22659,,22659,,1/22/2020 18:36,1/25/2020 15:44,What are evolutionary algorithms for topology and weights evolving of ANN (TWEANN) other than NEAT?,,1,0,,,,CC BY-SA 4.0 17601,1,,,1/21/2020 18:28,,1,23,"

I have frequency EEG data from fall and non-fall events and I am trying to combine it with accelerometer data that was collected at the same time. One approach is, of course, to use two separate algorithms, find the threshold for each, and then compare the two. In other words, if the accelerometer algorithm predicts a fall (fall detected = 1) and the EEG algorithm detects a fall based on the power spectrum (fall detected = 1), then the system outputs a ""1"", indicating that a fall was truly detected. This approach uses the idea of a simple AND gate between the two algorithms.

I would like to know how to correctly process the data so that I can feed both types of data into a CNN. Any advice is really appreciated, even a lead to some literature, articles or information would be great.

",32806,,,,,1/21/2020 18:28,EEG and Accelerometer Neural Network,,0,1,,,,CC BY-SA 4.0 17602,1,,,1/21/2020 19:53,,3,61,"

The inverse propensity score (IPS) estimator, which is used for off-policy evaluation in a contextual bandit problem, is well explained in the paper Doubly Robust Policy Evaluation and Optimization.

The old policy $\mu$, or the behavior policy, is okay to be non-stationary in the IPS estimator even if the new policy $\nu$, or the target policy, should be stationary.

Is this true for the importance sampling (IS) estimator, which seems to be a variant of IPS, for off-policy evaluation in a reinforcement learning problem?

IS estimator is explained in this paper Doubly Robust Off-policy Value Evaluation for Reinforcement Learning.

The target policy should be stationary, but can the old policy be non-stationary in the IS estimator?

",30051,,2444,,1/24/2020 11:58,1/24/2020 11:58,Can the importance sampling estimator have a non-stationary behaviour policy even if the target policy is stationary?,,0,0,,,,CC BY-SA 4.0 17603,1,,,1/21/2020 20:39,,2,5236,"

Could someone explain to me what the key difference is between the $\epsilon$-greedy policy and the softmax policy? In particular, in the context of the SARSA and Q-Learning algorithms. I understood the main difference between these two algorithms, but I didn't understand all the combinations of algorithm and policy:

  • SARSA + $\epsilon$-greedy
  • SARSA + Softmax
  • Q-Learning + $\epsilon$-greedy
  • Q-Learning + Softmax
",32694,,2444,,12/4/2020 17:42,12/4/2020 17:42,What is the difference between the $\epsilon$-greedy and softmax policies?,,1,0,,,,CC BY-SA 4.0 17605,1,17607,,1/21/2020 22:17,,7,3862,"

I'd like to better understand temporal-difference learning. In particular, I'm wondering if it is prudent to think about TD($\lambda$) as a type of ""truncated"" Monte Carlo learning?

",32929,,2444,,6/4/2020 16:57,3/14/2021 23:17,What is the intuition behind TD($\lambda$)?,,2,0,,,,CC BY-SA 4.0 17606,1,,,1/22/2020 0:26,,1,50,"

If I have a lot of input output pairs as training data

<float Xi, float Yi>

and I have a parametrized approximation function (I know the function algorithm, but not the values of the many many parameters it contains) which shall approximate the process by which the original data pairs were generated. The function takes two input values:

// c is a precomputed classifier for x and can have values from 0 to 255, so there can be up to 256 different classes
y = f(float x, int c)

the hidden parameters of the function are some big lookup tables (a lot of free parameters, but still much fewer than the amount of data points in the training data)

Now I want to fit all the hidden parameters that f contains AND compute for each Xi a ci, such that for the fitted function the error over all i of Yi - f(Xi, ci) is minimized

So, using some algorithm, I want to fit the parameters of f and also classify the inputs Xi so that f(Xi, ci) approximates Yi

What is this kind of problem called, and what kind of algorithm is used to solve it?

I assume it's possible to initialize all hidden parameters as well as all ci with random values and then somehow use back propagation of the error to iteratively find parameters and ci such that the function works well.

What I don't know is whether this is a well known class of problem and I just don't know the name of it, so I'm asking for pointers.

Or maybe in other words: I have a function with a certain layout (for performance reasons) which I want to use to approximate and interpolate my training data. I want to tune the parameters of this function such that it approximates the original data well. Since the data points fall into some 'categories', I want to pre-classify the data points to make it easier for the function to do its job. What kind of algorithm do I use to find the function's parameters and to pre-classify the input?

",32111,,32111,,1/22/2020 6:51,1/22/2020 8:08,What class of problem is this?,,1,0,,,,CC BY-SA 4.0 17607,2,,17605,1/22/2020 3:01,,4,,"

TD($\lambda$) can be thought of as a combination of TD and MC learning, so as to avoid to choose one method or the other and to take advantage of both approaches.

More precisely, TD($\lambda$) is temporal-difference learning with a $\lambda$-return, which is defined as an average of all $n$-step returns, for all $n$, where an $n$-step return is the target used to update the estimate of the value function that contains $n$ future rewards (plus an estimate of the value function of the state $n$ steps in the future). For example, TD(0) (e.g. Q-learning is usually presented as a TD(0) method) uses a $1$-step return, that is, it uses one future reward (plus an estimate of the value of the next state) to compute the target. The letter $\lambda$ actually refers to a parameter used in this context to weigh the combination of TD and MC methods. There are actually two different perspectives of TD($\lambda$), the forward view and the backward view (eligibility traces).
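
To make the weighting concrete, here is a small sketch (plain Python; the $n$-step returns are assumed to be pre-computed and passed in as a list, as in an episodic task) of the forward-view $\lambda$-return $G_t^\lambda = (1-\lambda)\sum_{n=1}^{T-t-1}\lambda^{n-1} G_{t:t+n} + \lambda^{T-t-1} G_t$:

def lambda_return(n_step_returns, final_return, lam):
    # Forward-view lambda-return from the n-step returns (n = 1 .. T-t-1)
    # and the full Monte Carlo return of the episode.
    g = 0.0
    for n, g_n in enumerate(n_step_returns, start=1):
        g += (1 - lam) * lam ** (n - 1) * g_n
    g += lam ** len(n_step_returns) * final_return
    return g

# lam = 0 recovers the 1-step (TD(0)) target, lam = 1 recovers the Monte Carlo return.
print(lambda_return([1.0, 1.9, 2.71], final_return=3.44, lam=0.0))  # 1.0
print(lambda_return([1.0, 1.9, 2.71], final_return=3.44, lam=1.0))  # 3.44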

The blog post Reinforcement Learning: Eligibility Traces and TD(lambda) gives a quite intuitive overview of TD($\lambda$), and, for more details, read the related chapter of the book Reinforcement Learning: An Introduction.

",2444,,2444,,3/10/2021 17:45,3/10/2021 17:45,,,,0,,,,CC BY-SA 4.0 17608,1,,,1/22/2020 3:50,,0,3230,"

I am trying to understand how deep Q-learning (DQN) works. To my current understanding, each $Q(s, a)$ value is estimated as a function of a feature vector of its state $\phi(s)$ and the weights of the network $\theta$.

The loss function to minimise is $||\delta_{t+1}||^2$ where $\delta_{t+1}$ is shown below. The loss function is from the website talking about function approximation. Even though it is not explicitly deep Q learning, the loss function to minimise is similar.

$$\delta_{t+1}=R_{t+1}+\max_{a \in A} \boldsymbol{\theta}^{\top} \Phi\left(s_{t+1}, a\right)-\boldsymbol{\theta}^{\top} \Phi\left(s_{t}, a\right)$$

Source: https://towardsdatascience.com/function-approximation-in-reinforcement-learning-85a4864d566.

Intuitively, I am not able to understand why the loss function is defined as such. Once the network converges to a $\theta$ using gradient descent, does that mean that the $Q_{max}(s,a)$ is found?

In essence, I am not able to grasp intuitively how the neural network is able to generalise the learning to unseen states.

The algorithm I am looking at to help me understand the deep Q networks is below.

Source: https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf

",32780,,2444,,4/14/2022 14:41,4/14/2022 14:41,How is the DQN able to generalise the learning to unseen states with such a loss function?,,2,1,,,,CC BY-SA 4.0 17609,1,,,1/22/2020 4:40,,5,1579,"

In deep learning, is it possible to use discontinuous activation functions (e.g. one with jump discontinuity)?

(My guess: for example, ReLU is non-differentiable at a single point, but it still has a well-defined derivative. If an activation function has a jump discontinuity, then its derivative is supposed to have a delta function at that point. However, the backpropagation process is incapable of considering that delta function, so the convex optimization process will have some problem?)

",32933,,2444,,1/28/2021 0:24,1/28/2021 0:24,"In deep learning, is it possible to use discontinuous activation functions?",,2,0,,,,CC BY-SA 4.0 17610,1,,,1/22/2020 5:12,,2,21,"

I have a pool of knowledge that I want to mine for information and allow an AI to deduce likely conclusions from this information.

My goal is to give the AI a set of textual data that is rated on a scale of 0 to 100, ranging from false (0) to unequivocally true (100). Based on ongoing learning, I want to be able to ask it about its data and have it draw relational conclusions, not simply about whether things are true or false, but extrapolating likelihoods, conclusions and so forth... or simply telling me it can't understand something, which would then prompt me to give it more information and to train it with additional material, even if it's only my own limited answers.

Ultimately I'll deal with image data as well, but that's a bit down the road.

I'm new to the area of neural nets and deep learning and so I'm hoping someone could point me in the right direction in the way of terminology to search for / research as well as where perhaps I should start.

I wouldn't mind working in C predominantly if possible, but other languages (especially Ruby) are fine.

The field is moving so fast, and there is so much research now that seems to supersede information available from just a couple of years ago, so I'm hoping to jump into material that takes advantage of more general learning algorithms, so that this can be as robust as possible while keeping up with current trends.

Where do I go from here?

",32936,,,,,1/22/2020 5:12,Getting started with creating a general AI based on textual and then image based data?,,0,0,,,,CC BY-SA 4.0 17611,1,,,1/22/2020 5:42,,1,50,"

I am trying to find unique (distinct) faces in multiple videos files. What is the best way to do that?

",32937,,,,,1/22/2020 5:42,Finding unique faces in a video,,0,1,,,,CC BY-SA 4.0 17612,1,,,1/22/2020 7:52,,1,150,"

Following the online courses with Andrew Ng, he talks about L2 regularization (a.k.a. weight decay) and input normalization. Now, the argument is that L2 regularization makes the weights smaller, which makes the sigmoid activation functions (and thus the whole network) ""more"" linear.

Question 1: can this rather handwavey explanation be formalized? Can we define ""more linear"" in a mathematical sense, and demonstrate that smaller weights in fact achieve this?

Question 2: in contrast to sigmoid, ReLU activations have a single point where it is nonlinear - i.e. the breaking point at x=0. No scaling of the input changes the shape (i.e. derivative) of this function, the only effect is reducing the magnitude of positive outputs. Does the argument still hold? Why?

Input normalization is given as a good practice, but it seems to me that the network should just compensate for varying magnitudes between components of the input by scaling the weights appropriately. The only exception I can think of is again under L2 regularization, which would penalize large weights (associated with small inputs).

Question 3: Is this correct, and is input scaling thus mostly important with L2 regularization, or is there some reason why the network would fail to adjust the weights without scaling?

",29720,,,,,1/22/2020 7:52,Do L2 regularization and input normalization depend on sigmoid activation functions?,,0,1,,,,CC BY-SA 4.0 17613,1,17627,,1/22/2020 7:56,,2,69,"

I would much appreciate if you could point me in the right direction regarding this question about targets for SARSA and Q-learning (notation: $S$ is the current state, $A$ is the current action, $R$ is the reward, $S'$ is the next state and $A'$ is the action chosen from that next state).

Do we need an explicit policy for the Q-learning target to sample $A'$ from? And for SARSA?

I guess this is true for Q-learning since we need to get max Q-value which determines which action $A'$ we'll use for the update. For SARSA, we update the $Q(S, A)$ depending on which action was actually taken (no need for max). Please correct me if I'm wrong.

",32893,,2444,,1/22/2020 12:02,1/22/2020 23:07,Do we need an explicit policy to sample $A'$ in order to compute the target in SARSA or Q-learning?,,1,0,,,,CC BY-SA 4.0 17614,2,,17606,1/22/2020 8:08,,1,,"

This problem is typically called parameter estimation or inverse modelling, and there are a variety of techniques to solve it.

If your free parameters are all continuous (i.e. none are discrete, such as integers), and your model function is differentiable, then you can turn the model into a computation graph in e.g. TensorFlow and use gradient descent methods with MSE loss, learning the parameters on the data in much the same way as a neural network. Most of the toolkits built for neural networks can do this; you just have to ignore the pre-packaged layer models and the utilities to manage them. You will need to take care with parameter initialisation; ideally, you already have some rough starting values.
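
As a rough sketch of that first approach (TensorFlow 2 syntax; the model function $y = a\sin(x) + bx$ and all values here are made up for illustration, standing in for your function and its hidden parameters):

import numpy as np
import tensorflow as tf

# Synthetic <x, y> pairs generated by a known "true" process with unknown
# parameters: y = 2.0 * sin(x) + 0.3 * x  (a stand-in for your function f).
x = np.linspace(0.0, 5.0, 200).astype(np.float32)
y = 2.0 * np.sin(x) + 0.3 * x

# Free parameters of the model, with rough starting values.
a = tf.Variable(1.0)
b = tf.Variable(0.0)
opt = tf.keras.optimizers.Adam(learning_rate=0.05)

for step in range(2000):
    with tf.GradientTape() as tape:
        y_hat = a * tf.sin(x) + b * x             # the parametrized model function
        loss = tf.reduce_mean((y_hat - y) ** 2)   # MSE loss
    grads = tape.gradient(loss, [a, b])
    opt.apply_gradients(zip(grads, [a, b]))

print(a.numpy(), b.numpy())   # should end up close to 2.0 and 0.3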

You can use other approaches, such as search methods too. Even when you have some discrete-valued parameters then you can use genetic algorithms for instance.

The best method to use will depend on details of your model, it is not possible to point out a generic ""best"". If you can use a gradient-based method directly, as you suggest in the question, then that might be the most efficient in terms of computation, provided you have some way to set reasonable initial parameters.

",1847,,,,,1/22/2020 8:08,,,,0,,,,CC BY-SA 4.0 17615,1,,,1/22/2020 8:44,,2,53,"

I plan to develop an OCR application using TensorFlow to extract values from an image. The text in the image may be handwritten or printed.

From the image, my OCR application should be able to extract the following values: 1. Cheque Date, 2. Payee Name, 3. Legal Amount, 4. Courtesy Amount.

How can the OCR application get the values I want, highlighted in red? Does it need to crop them into small sub-images for the OCR?

",32727,,32727,,1/23/2020 3:19,1/23/2020 3:19,OCR - Text recognition from Image,,0,2,,,,CC BY-SA 4.0 17616,1,,,1/22/2020 12:34,,2,111,"

I have a detection problem. An object is, with a probability of 0.5, in a box with coordinates ((0,0), (2,2)), and, with a probability of 0.5, in a box with coordinates ((2,0), (4,2)).

What is the maximum expected value of the intersection over union (IOU) with the object that a constant detection algorithm producing a single box can reach? I can't understand how I should compute the expected value here. P.S. The IOU of the two boxes is 0, because their intersection is empty.

",32952,,2444,,1/24/2020 11:32,2/19/2020 13:49,What is the expected value of an IOU in this case?,,0,4,,,,CC BY-SA 4.0 17619,1,17621,,1/22/2020 14:51,,1,67,"

I am coding out a simple 4x4 grid game whereby the agent starts at a particular state and its aim is to reach the terminal state. The agent is supposed to avoid traps along the way and reach the end goal with a high reward. The picture below illustrates the environment.

The code that I am running is shown below:

# 4x4 Grid
import random


gamma = 1
grid = [[-0.1 for i in range(4)] for j in range(4)]
episodes = 500000
epsilon = 1 # start greedy
decay = 0.999
min_epsilon = 0.1
alpha = 0.65
# set terminal states
grid[1][0] = -1
grid[2][2] = -1
grid[0][3] = 1

# Set up Q tables
# 0: up, 1: down, 2: left, 3: right
# Q = {(0,0): {0: z, 1: x, 2: c, 3: v}, ... }}
Q = {}

# 4 rows
for row in range(4):
    # 4 columns
    for column in range(4):
        Q[(row,column)] = {}
        # 4 actions
        for k in range(4):
            Q[(row,column)][k] = 0


def isTerminal(state):
    if state == (1,0) or state == (2,2) or state == (0,3):
        return True
    return False

def get_next_state_reward(state, action):

    row = state[0]
    col = state[1]
    #print(row, col)
    if action == 0: # up
        # out of grid
        if (row - 1) < 0:
            return (state, grid[row][col])

    if action == 1: # down

        # out of grid
        if (row + 1) > len(grid) - 1:
            return (state, grid[row][col])

    if action == 2: # left

        if (col - 1 < 0):
            return (state, grid[row][col])

    if action == 3: # right

        if (col + 1 > len(grid[row]) - 1):
            return (state, grid[row][col])

    if action == 0:

        row -= 1
        return ((row,col), grid[row][col])

    if action == 1:

        row += 1
        return ((row,col), grid[row][col])

    if action == 2:

        col -= 1
        return ((row,col), grid[row][col])

    if action == 3:

        col += 1
        return ((row,col), grid[row][col])

state_visit = {}
for row in range(4):
    # 4 columns
    for column in range(4):
        state_visit[(row,column)] = 0

for episode in range(episodes):

    # let agent start at start state
    state = (3,0)

    while not isTerminal(state):

        r = random.uniform(0,1)

        if r < epsilon:
            action = random.randint(0,3)
        else:
            action = max(Q[state], key=lambda key: Q[state][key])


        next_state, reward = get_next_state_reward(state, action)

        TD_error = reward + gamma * max(Q[next_state]) - Q[state][action]

        Q[state][action] = Q[state][action] + alpha * TD_error

        state = next_state

        state_visit[next_state] += 1
        epsilon = max(min_epsilon, epsilon*decay)

        #input()


policy = {}
# get optimal policies for each state
for states in Q:
    policy[states] = max(Q[states], key=lambda key: Q[states][key])

When I finish running the algorithm, however, I am unable to achieve the optimal policy, no matter how many tweaks I make to the number of episodes, the epsilon decay, or the alpha value.

In particular, the Q-values that I obtain for states (2,0), (0,1) and (0,0) are equal for three directions, except for the last direction, which brings the agent to the terminal state.

For example, these are the Q-values that I get for state (0,0), (0,1) and (2,0) respectively.

(0,0): {0: 2.0, 1: 2.9, 2: 2.9, 3: 2.9}

(0,1): {0: 2.9, 1: 2.0, 2: 2.9, 3: 2.9}

(2,0): {0: 2.9, 1: 2.9, 2: 2.9, 3: 2.9}

I am not sure why the Q-values for the 3 directions should be the same because each extra step that the agent takes incurs a negative reward.

Would anyone be able to help? Thank you so much!

",32780,,,,,1/22/2020 15:29,Q-learning problem wrong policy,,1,3,,,,CC BY-SA 4.0 17620,1,,,1/22/2020 15:11,,1,36,"

In the media there's lot of talk about face recognition, mainly with respect to identifying faces (= assigning to persons). Less attention is paid to the recognition of facially expressed emotions but there's a lot of research done into this direction, too. Even less attention is paid to the recognition of facially expressed emotions of a single person (which could be much more detailed) - even though this would be a very interesting topic.

What holds for faces does similarly hold for voices. With the help of artificial intelligence, voices can be identified (= assigned to persons) and emotions as expressed by voice can be recognized - on a general and on an individual's level.

My general question goes into another direction: As huge progress has been made in visual scene analysis (""what is seen in this scene?"") there has probably been some progress made in auditory scene analysis: ""What is heard in this scene?""

My specific question is: Are there test cases and results where some AI software was given some auditory data with a lot of ""voices"" and could tell how many voices there were?

As a rather easy specific test case consider some Gregorian chant sung in perfect unison. (See also here.)

",25362,,25362,,1/22/2020 15:18,1/22/2020 15:18,State of the art in voice recognition,,0,0,,,,CC BY-SA 4.0 17621,2,,17619,1/22/2020 15:29,,1,,"

You have a simple mistake in your TD Error function:

TD_error = reward + gamma * max(Q[next_state]) - Q[state][action]

You have made Q[next_state] a Python dict, so this will take the maximum key which is 3 for all your Q table entries. This is why you end up with values very close to 3 at the end, which is impossible for your problem, the maximum return will be 1.0 when stepping from adjacent grid points to the positive terminal state.

The correct code is:

TD_error = reward + gamma * max(Q[next_state].values()) - Q[state][action]

Alternatively, since your actions are all numeric, you could use a list instead of a dict in your Q table.

",1847,,,,,1/22/2020 15:29,,,,0,,,,CC BY-SA 4.0 17622,2,,17608,1/22/2020 20:37,,0,,"

Well, you want your network to have good predictive power for the Q-values. So you compare the Q-value at time t with the reward you got at time t after having executed action a, plus the network's prediction of the best Q-value at time t+1. Note that you are optimizing using a prediction and not a true value. That is called bootstrapping; look up TD-learning to get a better grasp of the concept.

",2254,,,,,1/22/2020 20:37,,,,0,,,,CC BY-SA 4.0 17623,2,,17609,1/22/2020 20:41,,1,,"

I would say that it is possible, but probably not a very good idea. Like you say, the hard requirement is that the network (and thus its components, including the activation functions) must be differentiable. ReLU isn't, but you can cheat by defining f'(0) to be 0 (or 1).

A continuous function means that gradient descent leads to some local minimum¹; for piecewise continuous functions, it may not converge (i.e. the breakpoints themselves may not be part of the segment you descend, so you will never get to an actual minimum). This is not likely to be a problem in practice, though.

¹ At least for functions that are bounded from below, like cost functions are.

",29720,,,,,1/22/2020 20:41,,,,2,,,,CC BY-SA 4.0 17624,1,,,1/22/2020 21:10,,2,403,"

Can you please elucidate the math behind the update rule for the critic? I've seen in other places that just a squared distance of $R + \hat{v}(S', w) - \hat{v}(S,w)$ is used, but Sutton suggests an update rule (and the math behind it) that is beyond my understanding.

Also, why do we need $I$?

",2254,,2254,,1/28/2020 21:13,1/28/2020 21:13,How does the update rule for the one-step actor-critic method work?,,0,11,,,,CC BY-SA 4.0 17626,2,,17603,1/22/2020 22:24,,5,,"

The $\epsilon$-greedy policy is a policy that chooses the best action (i.e. the action associated with the highest value) with probability $1-\epsilon \in [0, 1]$ and a random action with probability $\epsilon $. The problem with $\epsilon$-greedy is that, when it chooses the random actions (i.e. with probability $\epsilon$), it chooses them uniformly (i.e. it considers all actions equally good), even though certain actions (even excluding the currently best one) are better than others. Of course, this approach is not ideal in the case certain actions are extremely worse than others. Therefore, a natural solution to this problem is to select the random actions with probabilities proportional to their current values. These policies are called softmax policies.
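
As a small illustration of the two selection rules (plain NumPy, with made-up Q-values for a single state):

import numpy as np

q_values = np.array([1.0, 2.0, 5.0, 0.5])   # Q(s, a) for 4 actions
epsilon, temperature = 0.1, 1.0

def epsilon_greedy(q, eps):
    # With probability eps, all actions are treated as equally good.
    if np.random.rand() < eps:
        return np.random.randint(len(q))
    return int(np.argmax(q))

def softmax_policy(q, tau):
    # Probabilities proportional to exp(Q / tau): bad actions are still possible,
    # but much less likely than good ones.
    prefs = np.exp((q - np.max(q)) / tau)
    probs = prefs / prefs.sum()
    return int(np.random.choice(len(q), p=probs))

print(epsilon_greedy(q_values, epsilon), softmax_policy(q_values, temperature))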

Q-learning is an off-policy algorithm, which means that, while learning a so-called target policy, it uses a so-called behaviour policy to select actions. The behaviour policy can either be an $\epsilon$-greedy, a softmax policy or any other policy that can sufficiently explore the environment while learning.

The figure below shows the pseudocode of the Q-learning algorithm. In this case, the $\epsilon$-greedy policy is actually derived from the current estimate of the $Q$ function. The target policy, in this context, is represented by the $\operatorname{max}$ operator, which is used to select the highest $Q$ value of the future state $s'$, which is the state the RL agent ends up in after having taken the action $a$ selected by the $\epsilon$-greedy behaviour policy, with respect to another action $a'$ from state $s'$. This may sound complicated, but if you read the pseudocode several times, you will understand that there are two different actions (and states). The target policy (i.e. the policy that the RL agent wants to learn) is represented by the $\operatorname{max}$ operator in the sense that the so-called target of the Q-learning update step, i.e. $r + \gamma \operatorname{max}_{a'} Q(s', a')$, assumes that the greedy action is taken from the next state $s'$. For this reason, Q-learning is said to learn the greedy policy (as a target policy), while using an exploratory policy, usually, the $\epsilon$-greedy, but it can also be the softmax. Note that, in both cases, the policies are derived from the current estimate of the Q function.

On the other hand, SARSA is often considered an on-policy algorithm, given that there aren't necessarily two distinct policies, i.e. the target policy is not necessarily different than the behaviour policy, like in Q-learning (where the target policy is the greedy policy and the behaviour policy is e.g. the softmax policy derived from the current estimate of the Q function). This can more easily be seen from the pseudocode.


In this case, no $\operatorname{max}$ operator is used and the $\epsilon$-greedy policy is mentioned twice: in the first case, it is used to choose the action $a$ and indirectly $s'$, and, in the second case, to select the action $a'$ from $s'$. In Q-learning, $a'$ is the action that corresponds to the highest Q value from $s'$ (i.e. the greedy action). Clearly, you are free to choose a different policy than the $\epsilon$-greedy (in both cases), but this will possibly have a different effect.

To conclude, to understand the difference between Q-learning and SARSA and the places where the $\epsilon$-greedy or softmax policies can be used, it is better to look at the pseudocode.

",2444,,2444,,1/22/2020 22:29,1/22/2020 22:29,,,,4,,,,CC BY-SA 4.0 17627,2,,17613,1/22/2020 23:00,,3,,"

Q-learning uses an exploratory policy, derived from the current estimate of the $Q$ function, such as the $\epsilon$-greedy policy, to select the action $a$ from the current state $s$. After having taken this action $a$ from $s$, the reward $r$ and the next state $s'$ are observed. At this point, to update the estimate of the $Q$ function, you use a target that assumes that the greedy action is taken from the next state $s'$. The greedy action is selected by the $\operatorname{max}$ operator, which can thus be thought of as an implicit policy (but this terminology isn't common, AFAIK), so, in this context, the greedy action is the action associated with the highest $Q$ value for the state $s'$.

In SARSA, no $\operatorname{max}$ operator is used, and you derive a policy (e.g. the $\epsilon$-greedy policy) from the current estimate of the $Q$ function to select both $a$ (from $s$) and $a'$ (from $s'$).

To conclude, in all cases, the policies are implicit, in the sense that they are derived from the estimate of the $Q$ function, but this isn't a common terminology. See also this answer, where I describe more in detail the differences between Q-learning and SARSA, and I also show the pseudocode of both algorithms, which you should read (multiple times) in order to fully understand their differences.

",2444,,2444,,1/22/2020 23:07,1/22/2020 23:07,,,,3,,,,CC BY-SA 4.0 17629,1,17633,,1/23/2020 6:37,,2,77,"

During weeks and months of your work, many things may change, for example :

  • You may modify the loss function
  • Your training or validation datasets may change
  • You modify data augmentation

Which tools or processes do you use to track the modifications you have made and how they affected the model?

",23912,,,,,4/1/2020 19:24,How to track performance of your model during experimenting?,,2,0,,11/8/2020 19:38,,CC BY-SA 4.0 17630,1,,,1/23/2020 7:50,,3,1224,"

I'm reading the book Hands-On Meta Learning with Python, and in Prototypical networks said:

So, we use episodic training—for each episode, we randomly sample a few data points from each class in our dataset and we call that a support set and train the network using only the support set, instead of the whole dataset.

I think, but I'm not sure, I have understood what "episodic training" is, but what is the meaning of "episodic" or "episode" here?

I'm sorry, I'm not a native English speaker, and I can't work out the meaning just by searching in a dictionary. I know what an episode is in general, but I don't know what an episode means in this context of training.

",4920,,2444,,12/12/2021 13:00,12/12/2021 13:00,"What does ""episodic training"" mean?",,1,0,,,,CC BY-SA 4.0 17631,2,,5174,1/23/2020 9:03,,0,,"

You can try move ordering: store the move values computed during the search to depth d, sort the moves by those values, and examine them in that order when you search to depth d+1, and so on (a rough sketch follows below).
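
A rough sketch (my own illustration, not from any particular engine): keep the root-move scores from the depth-d search and use them to order moves at depth d+1, which mainly pays off when combined with alpha-beta-style pruning. The helper functions evaluate, legal_moves and apply_move are assumed to be provided by your game implementation.

def negamax(state, depth, alpha, beta, evaluate, legal_moves, apply_move):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    best = float('-inf')
    for m in moves:
        score = -negamax(apply_move(state, m), depth - 1, -beta, -alpha,
                         evaluate, legal_moves, apply_move)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:   # prune: good moves searched earlier cause earlier cutoffs
            break
    return best

def iterative_deepening(root, max_depth, evaluate, legal_moves, apply_move):
    scores = {m: 0.0 for m in legal_moves(root)}   # root-move scores so far
    for depth in range(1, max_depth + 1):
        # move ordering: search the previously best root moves first
        for m in sorted(scores, key=scores.get, reverse=True):
            scores[m] = -negamax(apply_move(root, m), depth - 1,
                                 float('-inf'), float('inf'),
                                 evaluate, legal_moves, apply_move)
        # the scores from this depth are reused to order moves at depth + 1
    return max(scores, key=scores.get)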

",1935,,,,,1/23/2020 9:03,,,,0,,,,CC BY-SA 4.0 17633,2,,17629,1/23/2020 11:29,,3,,"

Maybe you are looking for a combination of a version control system (like git and Github) and a tool like comet.ml. In the past, I used comet.ml to keep track of different experiments performed with different hyper-parameters or different versions of the code. There are other alternatives to comet.ml, such as sacred, but they may also have different features and may not be as visually pleasing as comet.ml or even free. Personally, I liked comet.ml (even though, at the time, it still lacked some features). In any case, a VCS, like git, is widely used in software development (not just in AI projects) to keep track of different versions of the code, etc. You may also be interested in continuous integration (e.g. Travis CI) and code review (e.g. codacy) tools.

",2444,,,,,1/23/2020 11:29,,,,0,,,,CC BY-SA 4.0 17634,1,17688,,1/23/2020 11:40,,5,127,"

Given a pre-trained CNN model, I extract feature vectors, with several thousand elements each, for the images in my reference and query datasets.

I would like to apply some dimensionality reduction technique to the feature vectors to speed up the cosine similarity / Euclidean distance matrix calculation.

I have already come up with the following two methods in my literature review:

  1. Principal Component Analysis (PCA) + Whitening
  2. Locality-Sensitive Hashing (LSH)

Are there more approaches to perform dimensionality reduction of feature vectors? If so, what are the pros/cons of each?

",31312,,2444,,1/25/2020 15:24,2/16/2021 13:54,What are examples of approaches to dimensionality reduction of feature vectors?,,2,0,,,,CC BY-SA 4.0 17635,2,,17566,1/23/2020 11:48,,1,,"

So, if I go the opposite way, start with my y and predict an x, and then ask for the inverse of that - I get really good results (actually - 100% accuracy).

i.e.

# Assumed imports for this snippet (not shown in the original)
import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(784, input_shape=(10,), activation='sigmoid'),
])
model.compile(loss=keras.losses.binary_crossentropy,
              optimizer=keras.optimizers.Adam(0.01),
              metrics=['binary_crossentropy'])
model.fit(y_train, x_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(y_test, x_test))
# train until accuracy > 0.9, then:
W, b = model.get_weights()

# Assumed inverse helpers (not defined in the original): rev_sigmoid undoes
# the sigmoid (logit) and rev_linear undoes the affine map via pinv(W).
def rev_sigmoid(s, eps=1e-7):
    s = np.clip(s, eps, 1 - eps)
    return np.log(s / (1 - s))

def rev_linear(z, W, b):
    return (z - b) @ np.linalg.pinv(W)

y = y_train
x = model.predict(y)  # 'reverse' in the original; it is the model trained above
z = rev_sigmoid(x)
y_hat = rev_linear(z, W, b)
(y_hat.argmax(axis=1) == y.argmax(axis=1)).mean()  # 1.0

After playing a bit with some toy examples, I think the other way is probably not possible, as the matrices don't have an inverse. Putting these (toy) matrices in WolframAlpha, for example, tells you the determinant is 0, but in numpy the determinant comes out as just slightly bigger than 0, so you manage to calculate an ""inverse"" which is not really an inverse, and you get the bad results.

It also makes sense. In the reversed scenario, we start with 10 dimensions, expand to 784, and then collapse back to 10. But in the ""regular"" scenario, we start at 784, collapse to 10, and then expand to 784 again - and (I guess) too much information is lost along the way.

",27947,,27947,,1/23/2020 18:18,1/23/2020 18:18,,,,1,,,,CC BY-SA 4.0 17636,1,17725,,1/23/2020 12:10,,4,310,"

It's not clear to me whether or not someone whose work aims to improve an NLP system may be called a ""Computational Linguist"" even when she/he doesn't modify the algorithm directly by coding.

Let's consider the following activities:

  • Annotation for Machine Learning: analysis of morphology and syntax; POS tagging
  • Annotation and analysis of entities (NER) and collocations; supporting content categorization; chunking; word sense disambiguation
  • Recording of technical issues of the annotation tool to improve its reliability
  • Recording of the particular linguistic and logical rules adopted by the research team who develops the NLP algorithm, to improve consistency between the annotation and the criteria previously adopted to train the NLP system

May these activities be considered ""Computational Linguistics""? If not, what is their professional category, and how should they be summed up in a resume in a single term?

",22959,,1671,,1/23/2020 21:56,8/11/2022 7:36,"What is ""Computational Linguistics""?",,2,0,,,,CC BY-SA 4.0 17637,2,,17636,1/23/2020 13:37,,4,,"

Yes. A computational linguist is someone who (among other things) uses computers to process/model/analyse/... natural language. Coding might be one aspect of it, but is about the least important: you can always get a non-linguist programmer to do coding for you.

I studied ""Computational Linguistics"" at university, and while programming was taught as part of the course, coding was only a minor aspect in the actual subject matter. The senior professor (and head of the department) wasn't able to do any coding himself; he came from the linguistics side of it.

Being able to program is useful, as it speeds things up and makes you more independent, but it is by no means an important part of being a computational linguist.

UPDATE: I have been accused of misrepresenting the field of CL. However, it is a broad, interdisciplinary field, and comprises many elements. Sure, on the academic/research side you might do more programming than in the applied/commercial side, but I maintain that you can easily work as a computational linguist without actually doing any programming. For most tasks, readily available software exists by now, so you don't actually need to program anything new.

",2193,,2193,,2/4/2020 9:35,2/4/2020 9:35,,,,2,,,,CC BY-SA 4.0 17638,2,,17512,1/23/2020 14:03,,0,,"

Firstly, when you say an object detection CNN, there is a huge number of model architectures available. Assuming you have already narrowed down your model architecture, a CNN will have a few common layers, like the ones below, each with hyperparameters you can tweak (a sketch follows after this list):

  1. Convolution Layer:- number of kernels, kernel size, stride length, padding
  2. MaxPooling Layer:- kernel size, stride length, padding
  3. Dense Layer:- size
  4. Dropout:- Percentage to keep/drop
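
A minimal Keras sketch (assuming TensorFlow 2.x) showing where these hyperparameters appear; the specific values and shapes are only placeholders, not recommendations.

from tensorflow.keras import layers, models

model = models.Sequential([
    # Convolution layer: number of kernels, kernel size, stride, padding
    layers.Conv2D(filters=32, kernel_size=(3, 3), strides=(1, 1),
                  padding='same', activation='relu', input_shape=(224, 224, 3)),
    # MaxPooling layer: pool (kernel) size, stride, padding
    layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='valid'),
    layers.Flatten(),
    # Dense layer: size (number of units)
    layers.Dense(128, activation='relu'),
    # Dropout: fraction of units to drop
    layers.Dropout(0.5),
    layers.Dense(10, activation='softmax'),
])
model.summary()
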
",32973,,32973,,1/23/2020 14:13,1/23/2020 14:13,,,,1,,,,CC BY-SA 4.0 17640,1,,,1/23/2020 17:32,,1,36,"

I have an object of known size, and I want to find its distance from the camera and the camera angle. Is there any way to do this? I have a single source (one camera).

",32763,,2444,,1/25/2020 15:22,1/25/2020 15:22,What's the best solution to find distance of an object to camera?,,0,2,,,,CC BY-SA 4.0 17641,1,,,1/23/2020 18:08,,1,154,"

I have 26 features from tabular data (clinical variables from patients, like age, gender, etc.) that I want to add to my CNN, which uses X-ray images from the same patients. I am using the Inception network. Right now, I am just concatenating these features to the final fully-connected layer, just before the softmax activation.
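
A minimal Keras functional-API sketch of this setup (the input shapes, the 2-class output and the use of weights=None are placeholder assumptions to keep the sketch self-contained):

import tensorflow as tf
from tensorflow.keras import layers, Model, Input

image_in = Input(shape=(299, 299, 3))
tabular_in = Input(shape=(26,))

# Inception feature extractor; use 'imagenet' weights in practice
base_cnn = tf.keras.applications.InceptionV3(include_top=False,
                                             pooling='avg', weights=None)
image_features = base_cnn(image_in)                        # (batch, 2048)

merged = layers.concatenate([image_features, tabular_in])  # (batch, 2074)
output = layers.Dense(2, activation='softmax')(merged)

model = Model(inputs=[image_in, tabular_in], outputs=output)
model.compile(optimizer='adam', loss='categorical_crossentropy')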

My concern, though, is that, since this final layer also contains 2048 image features, these 26 features will not contribute much to the classification.

Empirically, these 26 features should contribute more than the 2048 image features. A random forest trained on only the 26 features did better than the CNN using only the images. I'd like to get a model that does better than either of them separately, so I thought I should add these metadata features to the CNN.

Are my concerns warranted? What is the best approach?

",32927,,,,,1/23/2020 18:08,What is generally the best way to combine tabular image metadata with image data in a convolutional neural network?,,0,0,,,,CC BY-SA 4.0 17642,1,,,1/23/2020 18:33,,2,265,"

I am training an undercomplete autoencoder network for feature selection. I am using one hidden layer in the encoder and decoder networks each. The ELU activation function is used for each layer. For optimization, I am using the ADAM optimizer. Moreover, to improve convergence, I have also introduced learning rate decay. The model shows good convergence initially, but later starts to produce very large losses (12-digit values) that stay in the same range for several epochs, and it stops converging. How can I solve this issue?

",32981,,,,,1/27/2020 4:52,Autoencoder network for feature selection not converging,,1,0,,,,CC BY-SA 4.0 17645,2,,17544,1/24/2020 1:01,,1,,"

Your question is still nearly perfectly unclear, but let's make a few guesses in order to make some progress.

The biggest mystery are your time slices:

  • How many of them do you need? With more granular time, you get more unknowns and more constraints, and the complexity grows quickly.
  • When someone gets a car, they probably use it to travel somewhere, so it may be necessary to assign the driver to the car for multiple subsequent slices, depending on the distance. That's something completely missing from your description (so I'll ignore it).

As already said, you need a cost function, i.e., something to minimize. Even before that, you need to define your variables. Let's say, you need binary variables

x[t, c, e] 

such that x[t, c, e] = 1 when at time slice t the car c is assigned to the employee e (otherwise, it's zero). Now, we can specify some constraints

  • x[t, c, e] = 0 for each t, when the employee e can't drive the car c
  • for each t and c, the sum of x[t, c, e] over all e is at most one (as no car can be used by two employees at the same time)
  • for each t and e, the sum of x[t, c, e] over all c is at most one (as no employee can use two cars at the same time)

I may have missed some constraints, but that's no big deal; I hope you've got the idea.


Your cost function will be a sum of a few terms.

  • What you wrote about priorities is unclear to me, too, but you can define a weight w[c, e] (or maybe just w[c]), so let the sum of w[c, e] * x[t, c, e] over all t, c and e be the term describing these priorities.
  • Let u[e] be the cost of giving the employee e no car. The expression y[t, e] = 1 - sum over all c of x[t, c, e] equals to one, when the employee e has no car at time slice t. Now, the corresponding cost term can be expressed as the sum of u[e] * y[t, e] over all e and t.

Again, there may be more of this.


What I described so far is an integer linear programming (ILP) formulation of your problem. Convert my text into ILP-generating code, get an ILP solver, feed it, and run it (a rough sketch using one possible solver follows below).
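
A minimal sketch of this ILP using PuLP (pip install pulp); the data here (T, C, E, can_drive, w, u) are placeholders that you would fill in with your own schedule, cars, employees, licences, priorities and penalties.

import pulp

T, C, E = range(4), range(3), range(5)            # time slices, cars, employees
can_drive = {(c, e): True for c in C for e in E}  # licence compatibility
w = {(c, e): 1.0 for c in C for e in E}           # assignment priority/cost weights
u = {e: 10.0 for e in E}                          # cost of leaving employee e without a car

x = pulp.LpVariable.dicts('x', (T, C, E), cat='Binary')
prob = pulp.LpProblem('car_assignment', pulp.LpMinimize)

# objective: assignment weights + penalty for employees without a car
prob += (pulp.lpSum(w[c, e] * x[t][c][e] for t in T for c in C for e in E)
         + pulp.lpSum(u[e] * (1 - pulp.lpSum(x[t][c][e] for c in C))
                      for t in T for e in E))

for t in T:
    for c in C:
        prob += pulp.lpSum(x[t][c][e] for e in E) <= 1   # a car is used at most once
    for e in E:
        prob += pulp.lpSum(x[t][c][e] for c in C) <= 1   # an employee gets at most one car
        for c in C:
            if not can_drive[(c, e)]:
                prob += x[t][c][e] == 0                   # licence constraint

prob.solve()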

When your problem is small, you may even get the optimum. Otherwise, you'll get some solution (this is guaranteed) and a bound on the optimum, so you can know how far you might improve.

That's more than any heuristic solver (e.g., simulated annealing) can offer. OTOH arriving at a good solution may be faster using some heuristic solver. But that's all irrelevant until you get a clear problem definition.

",12053,,,,,1/24/2020 1:01,,,,0,,,,CC BY-SA 4.0 17648,2,,17553,1/24/2020 5:09,,1,,"

The perceptron convergence theorem states that, if the training data are linearly separable, the perceptron learning rule will converge to a set of weights that separates them in a finite number of steps.

Yes, you can!

",32798,,,,,1/24/2020 5:09,,,,0,,,,CC BY-SA 4.0 17649,2,,16234,1/24/2020 8:18,,1,,"

Sorry for the delay. The term ""vector-valued feedback"" is compared to scalar-valued feedback. The implication (which I should have made explicit) is that, because vector-valued feedback tells the network the correct answer, the changes in weights required to improve performance are reasonably easy to calculate (e.g. using backprop).

In contrast, if a scalar-valued feedback is given (as in reinforcement learning) then the network knows only how bad its previous output was, but not how to change weights in order to improve the output.

A rough analogy would be that vector-valued feedback tells you that you got the wrong answer to a question, and provides the correct answer. In contrast, scalar-valued feedback just tells you 'how wrong' your answer was, but does not tell you how to improve your answer.

",15257,,2444,,1/24/2020 12:34,1/24/2020 12:34,,,,0,,,,CC BY-SA 4.0 17650,2,,6468,1/24/2020 8:34,,2,,"

A multi-layer network in which all units have linear activation functions can always be collapsed to an equivalent network with two layers of units. That is why it is essential to use nonlinear unit activation functions.
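
To see why, consider two stacked linear (affine) layers (the algebra is standard, not specific to the book):

$$W_2(W_1 \mathbf{x} + \mathbf{b}_1) + \mathbf{b}_2 = (W_2 W_1)\mathbf{x} + (W_2 \mathbf{b}_1 + \mathbf{b}_2) = W\mathbf{x} + \mathbf{b},$$

so the composition is just another affine map, with $W = W_2 W_1$ and $\mathbf{b} = W_2 \mathbf{b}_1 + \mathbf{b}_2$; stacking more linear layers adds no representational power.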

The underlying reason for using nonlinear activation functions involves a remarkable theorem of Cybenko (1989), which states that one layer of nonlinear hidden units is sufficient to approximate any continuous mapping from input to output units to arbitrary accuracy, given enough hidden units. Actually, I think there is a later proof which specifies that the nonlinearity can be any non-polynomial function (e.g. a sigmoidal one).

This text is based on the book: Artificial Intelligence Engines: A Tutorial Introduction to the Mathematics of Deep Learning.

",15257,,2444,,1/24/2020 12:33,1/24/2020 12:33,,,,3,,,,CC BY-SA 4.0 17651,2,,92,1/24/2020 9:18,,2,,"

Neural networks are easily fooled, provided you know how to fool them.

Consider a linear network with an input layer and an output layer, which has an error function $E$ (we don't need hidden layers to show how to fool a network). For a given input image $\mathbf{x}$, $E$ measures the (squared) difference between the network's output $y$ and the desired (correct) output.

The output unit's state $y$ is given by the inner product of the input $\mathbf{x}$ with the output unit's weight vector $\mathbf{w}$, so that

$$y = \mathbf{w} \cdot \mathbf{x}.$$

If we change $\mathbf{x}$ to $\mathbf{x}'$ by adding $\Delta\mathbf{x}$, then the output will change by $\Delta y$ to

$$y' = \mathbf{w} \cdot \mathbf{x} + \mathbf{w} \cdot \Delta\mathbf{x} = y + \Delta y.$$

Notice that $\Delta\mathbf{x}$ defines a direction in the input space; the question is, which direction $\Delta\mathbf{x}$ will have the most impact on $y$?

By definition, a change in $\mathbf{x}$ in the direction of $\nabla E$ (the gradient of $E$ with respect to the input) produces the largest possible change in $E$, and hence, for this linear network, in $y$. An adversarial image $\mathbf{x}'$ is constructed by taking the derivative $\nabla E$ of $E$ with respect to the input image $\mathbf{x}$, that is,

$$\mathbf{x}' = \mathbf{x} + \epsilon \nabla E,$$

where $\epsilon$ is a small constant. Because $\nabla E$ is the direction of steepest ascent, the modification $\epsilon \nabla E$ to $\mathbf{x}$ will alter $y$ more than a change of the same size in any other direction.
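
A small numpy sketch of the construction above (my own illustration, not from the book): a linear network $y = \mathbf{w} \cdot \mathbf{x}$ with squared error $E$, perturbed along the gradient of $E$ with respect to the input.

import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)          # output unit's weight vector
x = rng.normal(size=100)          # input 'image'
d = 1.0                           # desired (correct) output

y = w @ x
grad_E = 2 * (y - d) * w          # dE/dx for E = (y - d)^2

eps = 0.01
x_adv = x + eps * grad_E          # adversarial input x' = x + eps * grad(E)
print(w @ x, w @ x_adv)           # the perturbed output moves further from the target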

This is an extract from the book: Artificial Intelligence Engines: A Tutorial Introduction to the Mathematics of Deep Learning (2019).

",15257,,2444,,1/24/2020 12:33,1/24/2020 12:33,,,,0,,,,CC BY-SA 4.0 17652,2,,5728,1/24/2020 9:39,,1,,"

A potential disadvantage of gradient-based methods is that they head for the nearest minimum, which is usually not the global minimum.

This means that the main difference between these gradient-based search methods is the speed with which solutions are obtained, and not the nature of those solutions.

An important consideration is time complexity, which is the rate at which the time required to find a solution increases with the number of parameters (weights). In short, the time complexities of a range of different gradient-based methods (including second-order methods) seem to be similar.

Six different error functions exhibit a median run-time order of approximately $O(N^4)$ on the N-2-N encoder problem in this paper:

Lister, R and Stone J ""An Empirical Study of the Time Complexity of Various Error Functions with Conjugate Gradient Back Propagation"" , IEEE International Conference on Artificial Neural Networks (ICNN95), Perth, Australia, Nov 27-Dec 1, 1995.

Summarised from my book: Artificial Intelligence Engines: A Tutorial Introduction to the Mathematics of Deep Learning.

",15257,,2444,,1/24/2020 12:25,1/24/2020 12:25,,,,0,,,,CC BY-SA 4.0 17653,1,,,1/24/2020 11:07,,3,385,"

According to the book ""Artificial Intelligence: A Modern Approach"", ""In a known environment, the outcomes (or outcome probabilities if the environment is stochastic) for all actions are given."", and in a deterministic environment, ""the next state of the environment is completely determined by the current state and the action executed by the agent..."".

What's the difference between the two terms? Don't they mean the same thing?

",32999,,2444,,1/24/2020 11:19,1/24/2020 11:42,"What is the difference between the concepts ""known environment"" and ""deterministic environment""?",,1,0,,,,CC BY-SA 4.0 17654,2,,3494,1/24/2020 11:38,,1,,"

We also work with Python in our company. One of the sphere that we use it for is fast prototyping and building highly scalable web applications. For over two decades, our Python developers have been providing businesses with full-stack web-development services, client-server programming and administration. We help our clients build high-load web portals, automation plugins, high-performance data-driven enterprise systems, and many more.

",33001,,,,,1/24/2020 11:38,,,,0,,,,CC BY-SA 4.0 17655,2,,17653,1/24/2020 11:42,,2,,"

What's the difference between the two terms? Don't they mean the same thing?

They mean different things, and can occur in any combination.

A known, deterministic environment

This is an environment where the researcher knows how to calculate all the transitions in advance of observing them, and the transition from state $s$ given action $a$ is always to the same next state $s'$ with the same reward $r$.

Example: Any classic board game against a fixed opponent (i.e. an opponent that, in any situation allowing a choice of action, always picks the same choice in the same situation)

An unknown, deterministic environment

This is an environment where the researcher does not have the knowledge to calculate all the transitions in advance of observing them, but any observation the transition from state $s$ given action $a$ is always to the same next state $s'$ with the same reward $r$.

Example: Simple mechanical physics environments, where initial measurements are unknown, imprecise or the researcher does not want to code knowledge of them into the agent. E.g. pole balancing.

A known, stochastic environment

This is an environment where the researcher knows all the rules about transitions, but those rules include transitions with random elements. The transition from state $s$ given action $a$ varies according to some probability function $p(s'|s,a)$, and possibly so does the reward, $p(r|s,a)$ - the two are sometimes combined into a joint probability function $p(r, s'|s,a)$.

Example: Any board game involving dice, e.g. Backgammon.

An unknown, stochastic environment

This is an environment where the researcher does not know all the rules, or can only calculate expected results with difficulty or to a low level of precision. Transition from state $s$ given action $a$ varies according to some unknown probability function $p(r, s'|s,a)$. Learning the transition function may require many samples of $s, a$, and this will be a statistical approximation.

Example: In practice, most complex environments, including real-world physics with friction, fluids, non-perfect measurements.

It is quite common, for the purposes of experiments, to have a simulation where the environment is technically known (because the researcher wrote it, or has access to the code and underlying models), but where agents are written to treat it like this last, more challenging case. Agents that can figure out how to act without prior knowledge of the environment are often of interest.

",1847,,,,,1/24/2020 11:42,,,,2,,,,CC BY-SA 4.0 17658,1,,,1/24/2020 13:48,,1,40,"

I have a list of positive integers $T=[v_1,\dots,v_n \mid v_i\in \mathbb{Z}^{>0}]$ which sum up to $V=\sum_i v_i$. Typically, the length of $T$ (number of integers) goes from 100 to 1000. The list is not sorted, i.e., there's no guarantee that $v_i\leq v_{i+1}\ \forall\ i.$ Each integer can be assigned either to a set $S_1$ or a set $S_2$: equivalently, it can be labeled as $l_1$ or $l_2$. The objective is to label $v_1,\dots,v_n$ so that

$$\sum_{v_i\in S_1} v_i = 0.3V \tag{1}$$ $$\sum_{v_i\in S_2} v_i = 0.7V \tag{2}$$

i.e., to minimize the cost $$L=\left(\sum_{v_i\in S_1} v_i - 0.3V\right)^2$$

Up to this point, the problem would be fairly trivial and it definitely wouldn't require AI. However, there are a couple additional details: the integers must be labeled in the sequence they appear ($v_1$ first, then $v_2$, etc.), and each time we ""switch"" label, we incur a cost. In other words, if the agent assigns $v_1$ to $S_1$, then $v_2$ to $S_2$, then $v_3$ to $S_1$, etc., it should be penalized for that.

I was thinking of formalizing this by counting the number $m$ of switches (of course $m\geq 1$) and adding it to $L$, i.e. by modifying the cost function to

$$L'=\left(\sum_{v_i\in S_1} v_i - 0.3V\right)^2+\beta m^2$$

where $\beta$ is a positive parameter, which I could use to weight the two objectives.

Would it make sense to cast this as a Reinforcement Learning problem? Or is it more appropriately an AI planning problem? Can you suggest an efficient algorithm to solve it?

",20874,,20874,,1/24/2020 15:52,1/24/2020 15:52,"Can I solve this assignment problem with RL or AI planning, and if yes how?",,0,6,,,,CC BY-SA 4.0 17660,1,,,1/24/2020 14:17,,2,32,"

I know that ensembles can be made by combining sklearn models with a VotingClassifier, but is it possible to combine different deep learning models? Will I have to make something similar to Voting Classifiers?

",32490,,2444,,1/24/2020 15:24,1/24/2020 15:24,How can we combine different deep learning models?,,0,0,,,,CC BY-SA 4.0 17661,1,,,1/24/2020 14:44,,0,836,"

How can a data stream for an RNN (LSTM) be handled when the stream contains data sets belonging to different prediction classes?

Training phase: I have trained an LSTM to predict a class from a sequence of letters. For the training phase, I used a fixed data array where the beginning and the ending of a sequence belonged to a class. Of course, there is a little noise, but the whole data set was labelled with a class. E.g.:

Seq.    is  Class
ABC     is  One
CBA     is  Two
ABD     is  Three

The network predicts well when it sees a static data array.

Problem in the prediction phase: During prediction, the LSTM will receive a data stream where there is a sequence of arrays but no delimiter. The data sets cannot be distinguished or separated. I am not sure how it would perform when I have a data stream mixing different classes, like ABCABCCBAABD.

I guess in speech recognition one must face similar problems.

",27777,,27777,,1/24/2020 16:32,2/17/2021 19:03,How to process data in a data stream for a LSTM,,1,1,,,,CC BY-SA 4.0 17664,1,,,1/24/2020 15:57,,1,705,"

I define the architecture of a neural network using only dense, fully-connected layers, and I train two models with it: one using model.fit() and the other using GradientTape. Both methods of training use the same model architecture.

The randomly initialized weights are shared between the two models and all other parameters such as optimizer, loss function and metrics are also the same.

Dimensions of training and testing sets are: X_train = (960, 4), y_train = (960,), X_test = (412, 4) & y_test = (412,)

import pandas as pd, numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, MinMaxScaler, RobustScaler
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import accuracy_score, precision_score, recall_score, confusion_matrix

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
import tensorflow_model_optimization as tfmot
from tensorflow_model_optimization.sparsity import keras as sparsity


def create_nn():
    """"""
    Function to create a
    Neural Network
    """"""
    model = Sequential()                                                    

    model.add(
        Dense(
            units = 4, activation = 'relu',
            kernel_initializer = tf.keras.initializers.GlorotNormal(),
            input_shape = (4,)
        )
    )

    model.add(
        Dense(
            units = 3, activation = 'relu',
            kernel_initializer = tf.keras.initializers.GlorotNormal()
        )
    )

    model.add(
        Dense(
            units = 1, activation = 'sigmoid'
        )
    )

    """"""
    # Compile the defined NN model above-
    model.compile(
        loss = 'binary_crossentropy',  # loss = 'categorical_crossentropy'
        optimizer = tf.keras.optimizers.Adam(lr = 0.001),
        metrics=['accuracy']
    )
    """"""

    return model


# Instantiate a model-
model = create_nn()

# Save weights for fair comparison-
model.save_weights(""Random_Weights.h5"", overwrite=True)


# Create datasets to be used for GradientTape-
# Use tf.data to batch and shuffle the dataset
train_ds = tf.data.Dataset.from_tensor_slices(
    (X_train, y_train)).shuffle(100).batch(32)

test_ds = tf.data.Dataset.from_tensor_slices(
    (X_test, y_test)).shuffle(100).batch(32)

# Define early stopping-
callback = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', patience=3,
    min_delta = 0.001, mode = 'min' )

# Train defined model-
history_orig = model.fit(
    x = X_train, y = y_train,
    batch_size = 32, epochs = 500,
    validation_data = (X_test, y_test),
    callbacks = [callback],
    verbose = 1 )


# Instantiate a model-
model_gt = create_nn()

# Restore random weights as used by the previous model for fair comparison-
model_gt.load_weights(""Random_Weights.h5"")


# Choose an optimizer and loss function for training-
loss_fn = tf.keras.losses.BinaryCrossentropy()
optimizer = tf.keras.optimizers.Adam(lr = 0.001)

# Select metrics to measure the error & accuracy of model.
# These metrics accumulate the values over epochs and then
# print the overall result-
train_loss = tf.keras.metrics.Mean(name = 'train_loss')
train_accuracy = tf.keras.metrics.BinaryAccuracy(name = 'train_accuracy')

test_loss = tf.keras.metrics.Mean(name = 'test_loss')
test_accuracy = tf.keras.metrics.BinaryAccuracy(name = 'test_accuracy')


# Use tf.GradientTape to train the model-

@tf.function
def train_step(data, labels):
    """"""
    Function to perform one step of Gradient
    Descent optimization
    """"""

    with tf.GradientTape() as tape:
        predictions = model_gt(data)
        loss = loss_fn(labels, predictions)

    gradients = tape.gradient(loss, model_gt.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model_gt.trainable_variables))

    train_loss(loss)
    train_accuracy(labels, predictions)


@tf.function
def test_step(data, labels):
    """"""
    Function to test model performance
    on testing dataset
    """"""

    predictions = model_gt(data)
    t_loss = loss_fn(labels, predictions)

    test_loss(t_loss)
    test_accuracy(labels, predictions)


EPOCHS = 100

# User input-
minimum_delta = 0.001
patience = 3

patience_val = np.zeros(patience)


# Dictionary to hold scalar metrics-
history = {}

history['accuracy'] = np.zeros(EPOCHS)
history['val_accuracy'] = np.zeros(EPOCHS)
history['loss'] = np.zeros(EPOCHS)
history['val_loss'] = np.zeros(EPOCHS)

for epoch in range(EPOCHS):
    # Reset the metrics at the start of the next epoch
    train_loss.reset_states()
    train_accuracy.reset_states()
    test_loss.reset_states()
    test_accuracy.reset_states()

    for x, y in train_ds:
        train_step(x, y)

    for x_t, y_t in test_ds:
        test_step(x_t, y_t)

    template = 'Epoch {0}, Loss: {1:.4f}, Accuracy: {2:.4f}, Test Loss: {3:.4f}, Test Accuracy: {4:4f}'

    history['accuracy'][epoch] = train_accuracy.result()
    history['loss'][epoch] = train_loss.result()
    history['val_loss'][epoch] = test_loss.result()
    history['val_accuracy'][epoch] = test_accuracy.result()

    print(template.format(epoch + 1, 
                          train_loss.result(), train_accuracy.result()*100,
                          test_loss.result(), test_accuracy.result()*100))

    if epoch > 2:
        # Computes absolute differences between 3 consecutive loss values-
        differences = np.abs(np.diff(history['val_loss'][epoch - 3:epoch], n = 1))

        # Checks whether the absolute differences is greater than 'minimum_delta'-
        check =  differences > minimum_delta

        # print('differences: {0}'.format(differences))

        # Count unique element with it's counts-
        # elem, count = np.unique(check, return_counts=True)
        # print('\nelem = {0}, count = {1}'.format(elem, count))

        if np.all(check == False):
        # if elem.all() == False and count == 2:
            print(""\n\nEarlyStopping Evoked! Stopping training\n\n"")
            break

In ""model.fit()"" method, it takes around 82 epochs, while GradientTape method takes 52 epochs.

Why is there this discrepancy in the number of epochs?

Thanks!

",31215,,31215,,1/24/2020 17:01,1/24/2020 17:01,TensorFlow fit() and GradientTape - number of epochs are different,,0,2,,,,CC BY-SA 4.0 17665,2,,17661,1/24/2020 16:28,,1,,"

One thing I might add is not to expect the ML model to take care of everything; that is where ML engineering comes into play. One suggestion I have, without knowing much about your data stream, is to implement a real-time data ingestion pipeline, like Apache Kafka Streams (or a similar implementation), where you can decompose your stream into the kind of input that your model was trained for. You could split by dictionary, pattern or whatever you need, then push the split output into another real-time stream that can be ingested by your model. Not a lot to go on, but hopefully it helps a little.

",33006,,,,,1/24/2020 16:28,,,,1,,,,CC BY-SA 4.0 17666,2,,17544,1/24/2020 17:05,,1,,"

I think there are different ways to solve the problem you presented: 1) It could be seen as a classical resource optimization problem that can be solved via a Linear Programming setup (I highly suggest taking a look; not everything needs ML). You can find some resources here and a quick intro to LP here

Linear Programming (LP) is a mathematical procedure for determining optimal allocation of scarce resources. LP is a procedure that has found practical application in almost all facets of business, from advertising to production planning. Transportation, distribution, and aggregate production planning problems are the most typical objects of LP analysis. In the petroleum industry, for example a data processing manager at a large oil company recently estimated that from 5 to 10 percent of the firm's computer time was devoted to the processing of LP and LP-like models.

2) Set it up as an ML problem using a Deep Q-learning Network (might be overkill); at least it comes to mind as a (potentially) easier setup. There are probably tons of ways to set up your problem and environment. With DQN, you would build a simulated environment for your problem, where you create a list (or dictionary) of cars with their license requirements, a list of employees and their licenses, and another one for the schedule, which could simply state 0 = no meeting at that time, 1 = meeting at that time (assuming not all employees need a car at all hours; if that is not true, then you can even remove the schedule requirement).

So DQN is all about action -> reward optimization. You feed in a state, which could be a series of variables, and it spits out an array of actions with their respective scores. For the state columns, you could use car availability (like one column per vehicle), while the actions could be a 2-dimensional array of whether each employee might take a specific car. You could leave the logic check of whether an employee has the correct license in the environment simulator, i.e., if the model sends an employee to a car with the wrong license, then you can return a negative value (a penalty) as the reward. For a correct assignment, the reward could be the value assigned to each employee (fixed or variable) minus the cost of the car.

Once trained, for any given state sent to the model, it should output a 2D matrix showing the score of each employee taking each vehicle. For each vehicle, you could select the employee with the max score, so it's just numpy slicing and dicing at that point. Again, this is a quick idea after 10 minutes of thought, so take it with a little pinch of salt. Also, there are many DQN flavors (target Q, dueling Q, experience replay, Rainbow DQN, etc.).

3) Use the DQN structure above, but simplify it into a (tabular) Q-learning problem. The action set could be the same, but the state could simply be the period of your schedule (1, 2, 3, 4, 5, ...). You leave most of the logic to the simulated environment, like the availability of cars at each period/state (e.g., at each state, based on the actions returned, you can dynamically set which cars will be available for the next one). Make sure you set the rewards and penalties correctly. Create a good balance of exploration vs. exploitation, maybe using an epsilon-greedy policy, and let it run.

",33006,,33006,,1/24/2020 17:13,1/24/2020 17:13,,,,0,,,,CC BY-SA 4.0 17667,1,17668,,1/24/2020 17:30,,1,568,"

This is a piece of code from my homework.

# action policy: implements epsilon greedy and softmax
def select_action(self, state, epsilon):
    qval = self.qtable[state]
    prob = []
    if (self.softmax):
        # use Softmax distribution
        prob = sp.softmax(qval / epsilon)
        #print(prob)
    else:
        # assign equal value to all actions
        prob = np.ones(self.actions) * epsilon / (self.actions -1)
        # the best action is taken with probability 1 - epsilon
        prob[np.argmax(qval)] = 1 - epsilon
    return np.random.choice(range(0, self.actions), p = prob)

This is a method to select the best action according to the two policies, I think. My question is: why, in the softmax computation, is the epsilon parameter used as the temperature? Are they really the same thing? Are they different? I think they should be two different variables. Should the temperature be a fixed value over time? Because, when I use the epsilon-greedy policy, my epsilon decreases over time.

",32694,,,,,1/24/2020 19:39,Is the temperature equal to epsilon in Reinforcement Learning?,,1,0,,,,CC BY-SA 4.0 17668,2,,17667,1/24/2020 19:39,,1,,"

You are correct that epsilon in epsilon-greedy and the temperature parameter in the ""softmax distribution"" are different parameters, although they serve a similar purpose. The original author of the code has taken a small liberty with variable names in the select_action method, in order to use just one simple name as a positional argument.

Should the temperature be a fixed value over time?

Not necessarily. If your goal is to converge on an optimal policy, you will want to decrease the temperature. A slow decay factor applied after each update or episode, as you might use for epsilon (e.g. 0.999 or another value close to 1), can also work for temperature decay. A very high temperature is roughly equivalent to an epsilon of 1. As the temperature becomes lower, differences in action value estimates become major differences in action selection probabilities, with the sometimes desirable effect of picking ""promising"" action choices more often, while keeping very low probabilities of picking the actions with the worst estimates (a small sketch follows below).
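
A minimal sketch (my own, separate from the homework code) of softmax action selection with a dedicated temperature that decays over time; the starting value, decay rate and floor are only placeholders.

import numpy as np

def softmax_action(qvals, temperature):
    # scale the Q values by the temperature before applying softmax
    z = qvals / temperature
    z = z - z.max()                      # numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return np.random.choice(len(qvals), p=probs)

temperature = 5.0          # start fairly exploratory
decay = 0.999              # applied after each episode/update
min_temperature = 0.05

qvals = np.array([1.0, 1.5, 0.2])
for episode in range(1000):
    a = softmax_action(qvals, temperature)
    # ... take the action, observe the reward, update the Q estimates ...
    temperature = max(min_temperature, temperature * decay)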

Using a more sophisticated action selection, such as the temperature-based one in the example code, can speed up learning in RL. However, this particular approach is only good in some cases - it is a bit fiddly to tune, and can sometimes simply not work at all.

The tricky part of using a temperature parameter is choosing a good starting point, as well as the decay rate and end value (you have to do the latter for epsilon decay as well). The problem is that the impact of using this distribution depends on the actual differences between action choices. You need a temperature value on roughly the same scale as the Q value differences. This is difficult to figure out in advance. In addition, if the Q value differences are more pronounced in some states than in others, you risk either having next to no exploration or having too much in some parts of the problem.

",1847,,,,,1/24/2020 19:39,,,,0,,,,CC BY-SA 4.0 17670,1,17671,,1/24/2020 23:01,,3,1063,"

In the literature and textbooks, one often sees supervised learning expressed as a conditional probability, e.g.,

$$\rho(\vec{y}|\vec{x},\vec{\theta})$$

where $\vec{\theta}$ denotes a learned set of network parameters, $\vec{x}$ is an arbitrary input, and $\vec{y}$ is an arbitrary output. If we assume we have already learned $\vec{\theta}$, then, in words, $\rho(\vec{y}|\vec{x},\vec{\theta})$ is the probability that the network will output an arbitrary $\vec{y}$ given an arbitrary input $\vec{x}$.

I am having a hard time reconciling how, after learning $\vec{\theta}$, there is still a probabilistic aspect to it. Post training, a network is, in general, a deterministic function, not a probability. For any specific input $\vec{x}$, a trained network will always produce the same output.

Any insight would be appreciated.

",33012,,2444,,7/9/2020 13:59,7/9/2020 19:53,How can supervised learning be viewed as a conditional probability of the labels given the inputs?,,1,1,,,,CC BY-SA 4.0 17671,2,,17670,1/24/2020 23:57,,5,,"

This formulation/interpretation can indeed be confusing (or even misleading), as the output of a neural network is usually deterministic (i.e. given the same input $x$, the output is always the same, so there is no sampling), and there isn't really a probability distribution that models any uncertainty associated with the parameters of the network or the input.

People often use this notation to indicate that, in the case of classification, there is a categorical distribution over the labels given the inputs, but this can be misleading, as the softmax (the function often used to model this categorical distribution) only squashes its inputs and doesn't really model any uncertainty associated with the input or the parameter of the neural network, although the elements of the resulting vector add up to 1. In other words, in traditional deep learning, only a point estimate for each parameter of the network is learned and no uncertainty is properly modeled.

Nevertheless, certain supervised learning problems have a formal probabilistic interpretation. For example, the minimization of the mean squared error function is equivalent to the maximization of a log probability, assuming your probability distribution is a Gaussian with a mean equal to the output of your model. In this probabilistic interpretation, you typically attempt to learn a probability (e.g. of the labels in the training dataset) and not a probability distribution. Watch Lecture 9.5 — The Bayesian interpretation of weight decay (Neural Networks for Machine Learning) by G. Hinton or read the paper Bayesian Learning via Stochastic Dynamics or Bayesian Training of Backpropagation Networks by the Hybrid Monte Carlo Method by R. Neal for more details.
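
To make that equivalence concrete (a standard derivation, not specific to the linked lectures): assume each target is modelled as $y_i \sim \mathcal{N}(f_{\theta}(x_i), \sigma^2)$, with a fixed $\sigma$. Then the log-likelihood of the training data is

$$\log \prod_i p(y_i \mid x_i, \theta) = \sum_i \left( -\frac{(y_i - f_{\theta}(x_i))^2}{2\sigma^2} - \frac{1}{2}\log(2\pi\sigma^2) \right),$$

so maximizing it with respect to $\theta$ is equivalent to minimizing $\sum_i (y_i - f_{\theta}(x_i))^2$, i.e. the (mean) squared error.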

Moreover, there are Bayesian neural networks (BNNs), which actually maintain a probability distribution over each parameter of the neural network that models the uncertainty associated with the value of this parameter. During the forward pass of this BNN, the specific parameters are actually sampled from the corresponding probability distributions. The actual learnable parameters of a BNN are the parameters of these distributions. For example, if you decide to have a Gaussian distribution over each parameter of the neural network, then you will learn the mean and variance of these Gaussians.

",2444,,2444,,7/9/2020 19:53,7/9/2020 19:53,,,,3,,,,CC BY-SA 4.0 17673,2,,17599,1/25/2020 15:44,,0,,"

The Wikipedia article on neuroevolution contains a list of neuroevolution techniques (e.g. NEAT). I will list below the examples that evolve both the parameters and the topology of the neural network.

",2444,,,,,1/25/2020 15:44,,,,1,,,,CC BY-SA 4.0 17674,2,,9197,1/25/2020 17:44,,5,,"

I would like to point out this paper: https://arxiv.org/pdf/1712.00378.pdf, which answers exactly that question.

We then showed that, when learning policies for time-unlimited tasks, it is necessary for correct value estimation, to continue bootstrapping at the end of the partial episodes when termination is due to time limits, or any early termination causes other than the environmental ones.

The authors therefore argue that the target $y$ for a one-step TD update, after transitioning to a state $s'$ and receiving a reward $r$, should be:

$$ y = \begin{cases} r & \text{if environment terminates}\\ r+\gamma\hat{v}_{\pi}(s') & \text{otherwise (including timeouts)} \end{cases} $$
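
A small sketch (my own, not from the paper) of this target computation, assuming 'done' comes from the environment and 'timeout' flags episodes that were cut off by a time limit rather than truly terminated:

def td_target(r, v_next, done, timeout, gamma=0.99):
    # stop bootstrapping only on true environment termination;
    # keep bootstrapping on timeouts and other early-termination causes
    if done and not timeout:
        return r
    return r + gamma * v_next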

",33018,,33018,,1/25/2020 18:04,1/25/2020 18:04,,,,0,,,,CC BY-SA 4.0 17677,2,,4454,1/25/2020 20:12,,5,,"

One important consideration here: in the last decade or two, researchers in the machine learning and artificial intelligence fields, which contain the majority of reinforcement learning work, have considered conferences to be more impactful publishing venues than journals. The particular venue a researcher chooses depends on the data and/or application domain of his or her use of reinforcement learning, and the set of conferences is often changing, but, to get you started, the top-tier conferences are (in rough order of exclusivity and importance):

",7752,,7752,,3/31/2020 13:55,3/31/2020 13:55,,,,0,,,,CC BY-SA 4.0 17678,1,,,1/25/2020 21:26,,1,44,"

I'll try to rephrase my problem in the context of video processing. Imagine that initial frame of video has some translational symmetry. The frame evolves according to an update rule.

I generate a time series for how an edge, say right up edge, of the frame evolves. I generate another time series for how a larger edge, including the smaller right up edge, evolves. Since there is translational symmetry, I should be able to find how the smaller edge is related to the larger edge. The final goal is to use the obtained correlation to extrapolate to larger edges. I want to find the correlation between these two multivariate time series using machine learning (ML) methods.

I want to know

1 - which one of ML methods can be used in general for this task?

2 - if I use neural networks, the input and output shapes would be (values at time steps, number of variables). For the input it makes sense, but how can I define the output layer (for example, for LSTM in tensorflow)?

",33022,,33022,,1/26/2020 9:39,1/26/2020 9:39,How to exploit translational symmetry for extrapolation in video generation using machine learning,,0,2,,,,CC BY-SA 4.0 17679,2,,17596,1/26/2020 3:15,,2,,"

Since you have already tried U-Net, you may look into Siamese Networks (with CNNs for images); they are very well known for computing similarity via deep learning. This is a central idea and can be applied to both text and images. As a tip, you may be able to reuse a lot of the architecture from U-Net in a Siamese network.

Hope it helps. Some useful links to start with:

  1. https://medium.com/@prabhnoor0212/siamese-network-keras-31a3a8f37d04
  2. https://www.aclweb.org/anthology/W16-1617.pdf
",23793,,,,,1/26/2020 3:15,,,,2,,,,CC BY-SA 4.0 17680,2,,9491,1/26/2020 6:09,,4,,"
  1. Another approach that came across was to, assuming the number of different action set $n$ is quite small, have functions $f_{\theta_1}$, $f_{\theta_2}$, ..., $f_{\theta_n}$ that returns the action regarding that perticular state with $n$ valid actions. In other words, the performed action of a state $s$ with 3 number of actions will be predicted by $\underset{a}{\text{argmax}} \ f_{\theta_3}(s, a)$.

That sounds pretty complicated, and the number of different action sets is usually very high, even for the simplest games. Imagine checkers and, for simplicity, ignore promotions and jumping: there are some $7 \cdot 4 \cdot 2=56$ possible actions (which is fine), but the number of different sets of these actions is much higher. It's actually difficult to compute how many such sets are possible in a real game - it's surely much less than $2^{56}$, but also surely far too big to be practical.

Are there other approaches to deal with variable action spaces?

Assuming the number of actions is not too big, you can simply ignore actions which don't apply in a given state. That's different from learning - you don't have to learn to return a negative reward for illegal actions, you simply don't care about them and select the legal action with the best estimated value (a small sketch follows below).
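
A minimal sketch (my own illustration) of masking invalid actions when picking the greedy action from Q-value estimates; the legal-action list is assumed to come from the game rules.

import numpy as np

def greedy_legal_action(q_values, legal_actions):
    # q_values: estimates for every action in the full, fixed action space
    masked = np.full_like(q_values, -np.inf)
    masked[legal_actions] = q_values[legal_actions]
    return int(np.argmax(masked))

q = np.array([0.2, 1.3, -0.5, 0.9])
print(greedy_legal_action(q, legal_actions=[0, 2, 3]))  # -> 3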


Note that your expression

$$\forall s \in S: \exists s' \in S: A(s) \neq A(s') \wedge s \neq s'$$

can be simplified to

$$\forall s \in S: \exists s' \in S: A(s) \neq A(s')$$

or even

$$|A(s)|_{s \in S} > 1$$

",12053,,2444,,12/29/2021 15:30,12/29/2021 15:30,,,,0,,,,CC BY-SA 4.0 17681,1,17685,,1/26/2020 8:13,,3,97,"

What is the difference between the notations $\|x\|_1, \|x\|_2$ and $|x|$? I think $|x|$ is the magnitude of $x$.

",25676,,2444,,1/26/2020 20:40,1/27/2020 10:58,"What is the difference between the notations $\|x\|_1, \|x\|_2$ and $|x|$?",,1,0,,,,CC BY-SA 4.0 17682,1,17683,,1/26/2020 10:20,,5,1661,"

I'm a big fan of computer board games and would like to make Python chess/go/shogi/mancala programs. Having heard of reinforcement learning, I decided to look at OpenAI Gym.

But first of all, I would like to know, is it possible using OpenAI Gym/Universe to create a chess bot that will be nearly as strong as Stockfish and create a go bot that will play as good as AlphaGo?

Is it worth learning OpenAI?

",33025,,2444,,1/26/2020 20:41,1/26/2020 20:41,How powerful is OpenAI's Gym and Universe in board games area?,,1,0,,12/29/2021 13:09,,CC BY-SA 4.0 17683,2,,17682,1/26/2020 11:02,,5,,"

OpenAI's Gym is a standardised API, useful for reinforcement learning, applied to a range of interesting environments many of which you can then access for free with little effort. It is very simple to use, and IMO worth learning if you want to practice RL using Python to any depth at all. You could use it to ensure you have good understanding of basic algorithms such as Q learning, independently of and before you look at using RL in a board game context.

There are limitations for Gym and Universe when dealing with multiple agents. The API is not really designed with that in mind. For instance, there is no simple way to add two agents to an environment, you would have to write a new environment and attach an opposing agent inside of it. This is still possible, and not necessarily a terrible idea (it depends on the training setup you want to investigate).

If you want to look into classic two-player games, and write bots like AlphaGo and Stockfish, then I would point out that:

  • Game-playing bots often make extensive use of planning that can interrogate potential future game states. OpenAI's Gym doesn't prevent you doing that, but it doesn't help in any way.

  • Algorithms for AlphaGo are public, with many nice tutorials. It would be quicker to follow one of these and develop your own bot training code in most cases, than to try and adapt an OpenAI solution for single agent play.

  • Probably the biggest time-saver you could find for any game is a rules engine that implements the board, pieces and game rules for you. If Gym already has a game environment for the game you want your bot to play, it might be worth checking the Gym code to see what it is integrating, then try to use the same library yourself, but not the Gym environment directly.

  • Many decent game-playing algorithms don't use RL at all. You can frame most of them as search (finding best moves) plus heuristics (rating moves or positions), and can usually make independent choices for algorithms that perform each sub-task. You can apply RL so that a bot learns game heuristics, then use a more traditional search, e.g. negamax, in order to make decisions during play. Or you can use any analysis of the game you like in order to generate heuristics. Very simple games such as tic-tac-toe (noughts and crosses in the UK) can just have a heuristic of +1 if X has won, -1 if O has won and 0 otherwise, and still be quickly solved with a minimax search for perfect play.

  • DeepMind's AlphaGo uses a variant of MCTS for the search algorithm, which can be considered a RL technique, but the lines are a bit blurry around definitions there - it is safer to say that AlphaGo incorporates MCTS as the chosen search technique for both self-play training and active play against any other opponent.

",1847,,1847,,1/26/2020 11:42,1/26/2020 11:42,,,,2,,,,CC BY-SA 4.0 17684,1,,,1/26/2020 13:39,,2,65,"

I tried training a network with 2 hidden layers on the MNIST dataset, but I am not getting any results. I have tried tuning the learning rate (tried 0.1 and 0.0001) and the number of epochs (tried 10 and 50). I even changed the size of the hidden layer from 10 to 250. First, I had initialized the weights between 0 and 1 and was getting the same classification for all test samples, but I added a (-) sign to 50% of them (I chose the figure of 50% myself) and that problem was solved. Now I can't figure out why it is not working.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler() 

def to_array(img):
    img = np.asarray(img)
    return img

'''def standardize(gray):
    st_gray = (gray-np.amin(gray))/(np.amax(gray)-np.amin(gray))
    return st_gray
'''
def activ_func(x):
    for i in range(x.shape[0]):
        '''x[i][0]=(1-np.e**(-2*x[i][0]))/(1+np.e**(-2*x[i][0]))'''
        x[i][0] = 1/(1+np.e**(-x[i][0]))
    return x

def deriv_activ_func(x):
    for i in range(x.shape[0]):
        '''x[i][0] = 1-math.pow(x[i][0],2)'''
        x[i][0] = (x[i][0])*(1-x[i][0])
    return x

def cost(out_layer, label, ind):
    cost = (out_layer[ind]-label)**2
    return cost

def update(x, grad, r):
    for i in range(x.shape[0]):
        x[i][0] = x[i][0]+r*grad[i][0]
    return x

path = ""mnist/mnist_train.csv""
gray = pd.read_csv(path)
labels = gray['label']
gray = gray.drop(['label'], axis=1)
gray = to_array(gray)
labels = to_array(labels)
st_gray = np.empty(shape=(gray.shape[1],1))

def rand_sign(w):
    n = np.random.randint(2,size=w.shape[0]*w.shape[1]).reshape(w.shape[0],w.shape[1])
    for i in range(w.shape[0]):
        for j in range(w.shape[1]):
            if(n[i][j]==1):
                w[i][j]=(-1)*w[i][j]
    return w

def initialize():
    in_layer = np.empty(shape=(st_gray.shape[0],1))
    out_layer = np.unique(labels).reshape(-1,1)
    w1 = rand_sign(np.random.rand(250,in_layer.shape[0]))
    b1 = rand_sign(np.random.rand(250,1))
    l1 = np.empty(shape=(250,1))
    w2 = rand_sign(np.random.rand(250,l1.shape[0]))
    b2 = rand_sign(np.random.rand(250,1))
    l2 = np.empty(shape=(250,1))
    w3 = rand_sign(np.random.rand(out_layer.shape[0],l2.shape[0]))
    b3 = rand_sign(np.random.rand(out_layer.shape[0],1))
    l3 = np.empty_like(out_layer)
    return l1,l2,l3,w1,w2,w3,b1,b2,b3,in_layer,out_layer

def feed_forward(l1,l2,l3,w1,w2,w3,b1,b2,b3,in_layer,i):
    st_gray = scaler.fit_transform(gray[i][:].reshape(-1,1))
    in_layer = st_gray
    l1 = np.dot(w1,in_layer)+b1
    l1 = activ_func(l1)
    l2 = np.dot(w2,l1)+b2
    l2 = activ_func(l2)
    l3 = np.dot(w3,l2)+b3
    l3 = activ_func(l3)
    return l1,l2,l3,w1,w2,w3,b1,b2,b3,in_layer

def one_hot(out_layer,label):
    for j in range(out_layer.shape[0]):
        if(out_layer[j][0]==label):
            out_layer[j][0] = 1
        else:
            out_layer[j][0] = 0
    return out_layer 

def back_prop(l1,l2,l3,w1,w2,w3,b1,b2,b3,in_layer,out_layer,lr):
    error = out_layer-l3
    grad = np.dot(error*deriv_activ_func(l3),l2.T)
    w3 = update(w3, grad, lr)
    grad = error*deriv_activ_func(l3)
    b3 = update(b3, grad, lr)
    grad = np.dot(w3.T,error*deriv_activ_func(l3))
    error = grad
    grad = error*deriv_activ_func(l2)
    w2 = update(w2, grad, lr)
    grad = error*deriv_activ_func(l2)
    b2 = update(b2, grad, lr)
    grad = np.dot(w2.T,error*deriv_activ_func(l2))
    error = grad
    grad = error*deriv_activ_func(l1)
    w1 = update(w1, grad, lr)
    grad = error*deriv_activ_func(l1)
    b1 = update(b1, grad, lr)
    return l1,l2,l3,w1,w2,w3,b1,b2,b3,in_layer

def predict(l3):
    out = np.amax(l3)
    count = 0
    for j in range(l3.shape[0]):
        count=count+1
        if(l3[j]==out):
            break
    return count

def trainer():
    l1,l2,l3,w1,w2,w3,b1,b2,b3,in_layer,out_layer = initialize()
    for epochs in range(50):
        for i in range(gray.shape[0]):
            out_layer = np.unique(labels).reshape(-1,1)
            l1,l2,l3,w1,w2,w3,b1,b2,b3,in_layer = feed_forward(l1,l2,l3,w1,w2,w3,b1,b2,b3,in_layer,i)
            out_layer = one_hot(out_layer,labels[i])
            l1,l2,l3,w1,w2,w3,b1,b2,b3,in_layer = back_prop(l1,l2,l3,w1,w2,w3,b1,b2,b3,in_layer,out_layer,0.0001)
        print(""End of epoch :"",epochs+1) 
    return l1,l2,l3,w1,w2,w3,b1,b2,b3,in_layer,out_layer

l1,l2,l3,w1,w2,w3,b1,b2,b3,in_layer,out_layer = trainer()

path = ""mnist/mnist_train.csv""
gray = pd.read_csv(path)
labels = gray['label']
gray = gray.drop(['label'], axis=1)
gray = to_array(gray)
labels = to_array(labels)
st_gray = np.empty(shape=(gray.shape[1],1))

for i in range(10):
    st_gray = scaler.fit_transform(gray[i][:].reshape(-1,1))
    in_layer = st_gray
    l1 = np.dot(w1,in_layer)+b1
    l1 = activ_func(l1)
    l2 = np.dot(w2,l1)+b2
    l2 = activ_func(l2)
    l3 = np.dot(w3,l2)+b3
    l3 = activ_func(l3)
    count = predict(l3)
    print(""Expected: "",labels[i],"" Predicted: "",count)

",33029,,,,,1/26/2020 13:39,Neural nets not learning mnist dataset,,0,1,,,,CC BY-SA 4.0 17685,2,,17681,1/26/2020 14:06,,5,,"

$\|x\| = |x|$ denotes the absolute value norm, which is a special case of the $L_1$ norm defined on the 1-D vector spaces formed by real or complex numbers.

$\|\textbf{x}\|_1 = \sum_{i=1}^n|x_i|$ denotes the Taxicab / Manhattan norm, relating to how a Taxi would drive along a rectangular grid of roads to reach a point $(x, y)$ from $(0,0)$.

$\|\textbf{x}\|_2 = \sqrt{x_1^2 + \dots + x_n^2}$ denotes the Euclidean norm on an N-D Euclidean space, which is a result of the Pythagorean theorem (the shortest distance between two points).

",33010,,33010,,1/27/2020 10:58,1/27/2020 10:58,,,,2,,,,CC BY-SA 4.0 17687,2,,17502,1/26/2020 20:58,,0,,"

After learning a little bit more about the topic, I think I have figured out the exact sequence of the algorithm. So, here's my own answer. Please correct me if I'm wrong.

  1. Give an input, forward-propagate it, and generate an output

  2. For each output neuron and for each weight connected to that neuron: consider the function C = f(w), which represents the cost as a function of that weight's value, and calculate the (partial) derivative of that function at the point where the current weight actually is

  3. Calculate the full gradient by combining all the partial derivatives with respect to each weight: now you have the gradient of the cost with respect to the weights

  4. Repeat this process to calculate the gradient for each of the batch elements. If you have a batch size of 8, then you'll have 8 gradients.

  5. Find the average gradient ((gradient_1+gradient2+gradient3...)/n_gradients)

  6. Move the weights in the direction opposite to that average gradient, scaled by the learning rate (see the sketch below)
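
A small numpy sketch (my own illustration) of steps 4-6: average the per-example gradients of a batch and take one gradient-descent step. The values are placeholders.

import numpy as np

def sgd_step(weights, per_example_grads, learning_rate=0.01):
    avg_grad = np.mean(per_example_grads, axis=0)   # (grad_1 + ... + grad_n) / n
    return weights - learning_rate * avg_grad        # move against the gradient

w = np.zeros(3)
batch_grads = np.array([[0.2, -0.1, 0.0],
                        [0.4,  0.1, 0.2]])           # one gradient per batch example
w = sgd_step(w, batch_grads)
print(w)   # approximately [-0.003, 0.0, -0.001]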

Am I right? How does this apply to deeper layers?

",32751,,2444,,1/27/2020 11:40,1/27/2020 11:40,,,,4,,,,CC BY-SA 4.0 17688,2,,17634,1/26/2020 23:01,,4,,"

Dimensionality reduction could be achieved by using an Autoencoder Network, which learns a representation (or Encoding) for the input data. While training, the reduction side (Encoder) reduces the data to a lower dimension, and a reconstructing side (Decoder) tries to reconstruct the original input from the intermediate reduced encoding.

You could set the encoder layer output ($L_i$) to a desired dimension (lower than that of the input). Once trained, $L_i$ can be used as an alternative representation of your input data in a lower-dimensional feature space, and can be used for further computations (a small sketch follows below).
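
A minimal Keras sketch (assuming TensorFlow 2.x) of an autoencoder that compresses 2048-dimensional feature vectors down to 128 dimensions; the sizes and the random data are placeholders for your own CNN features.

import numpy as np
from tensorflow.keras import layers, Model, Input

input_dim, code_dim = 2048, 128

inputs = Input(shape=(input_dim,))
code = layers.Dense(code_dim, activation='relu')(inputs)             # encoder (L_i)
reconstruction = layers.Dense(input_dim, activation='linear')(code)  # decoder

autoencoder = Model(inputs, reconstruction)
encoder = Model(inputs, code)      # reuse the trained encoder on its own
autoencoder.compile(optimizer='adam', loss='mse')

features = np.random.rand(1000, input_dim).astype('float32')   # your CNN features
autoencoder.fit(features, features, epochs=10, batch_size=64, verbose=0)

reduced = encoder.predict(features)    # (1000, 128) vectors for similarity search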

",33010,,33010,,1/26/2020 23:09,1/26/2020 23:09,,,,3,,,,CC BY-SA 4.0 17689,2,,17642,1/27/2020 4:52,,1,,"

The trick was to normalize the input dataset values with the respective mean and standard deviation in each column. This reduced the loss drastically, and my network is training more efficiently now. Moreover, normalizing the data also helps you calculate the weights associated with each input node more easily, especially when trying to find out variable importance.

",32981,,,,,1/27/2020 4:52,,,,0,,,,CC BY-SA 4.0 17690,1,,,1/27/2020 6:04,,2,369,"

I have written code in OpenAI's gym to simulate random play in Montezuma's Revenge, where the agent randomly samples actions from the action space and tries to play the game. A success for me is defined as the case when the agent is at least able to successfully retrieve the key (gets a reward of 100). Such cases I dump in a pickle file. I got 44 successful cases when I let it run for a day or so. Here is the code I use to generate the training set:

import numpy
import gym
import cma
import random
import time
import pickle as pkl

env   = gym.make('MontezumaRevenge-ram-v0')

observation = env.reset()

#print(observation)
#print(env.action_space.sample())

obs_dict    = []
action_dict = []
success_ctr = 0

for i in range(0, 1000000):
    print('Reward for episode',i+1)
    done = False
    rew = 0
    action_list = []
    obs_list    = []
    while not done:
        action = env.action_space.sample()
        observation, reward, done, _ = env.step(action)
        action_list.append(action)
        obs_list.append(observation)

        rew += reward
        env.render()
        time.sleep(0.01)
        if done:
            env.reset()
            if rew > 0:
                success_ctr += 1
                print(action_list)
                action_dict.append(action_list)
                obs_dict.append(obs_list)
                pkl.dump(obs_dict, open(""obslist.pkl"", ""wb""))
                pkl.dump(action_dict, open(""action.pkl"", ""wb""))

    print(rew)
    time.sleep(1)

try:
    print(obs_dict.shape)
except:
    pass

print(""Took key:"", success_ctr)

I loaded the successful cases from my generated pickle file, and simulated the agent's playing using those exact same cases. But the agent never receives a reward of 100. I don't understand why this is happening. A little search online suggested it could be because of noise in the game. So, I gave a sleep time before running each episode. Still, it doesn't work. Can someone please explain why this is happening? And suggest a way I could go about generating the training set?

",32455,,,,,1/27/2020 19:15,Simulating successful trajectories in Montezuma's Revenge turns out to be unsuccessful,,2,0,,,,CC BY-SA 4.0 17692,2,,12397,1/27/2020 9:46,,1,,"

It turns out there is actually a practical reason for this.

Practically speaking, in GANs the generator tends to converge on a few 'good' outputs that fool the discriminator if you don't do it. In the optimal case, the generator will actually emit a single fixed output that fools the discriminator, regardless of the input vector.

Which is to say, the generator's loss function is intended not simply as ""fool the discriminator""; it is actually:

  • Fool the discriminator.
  • Generate novel output.

You can write your generator's loss function to explicitly attempt to say the output in any training batch should be distinct, but by passing the outputs to the discriminator you create a history of previous predictions from the generator, effectively applying a loss metric for when the generator tends to produce the same outputs over and over again.

...but it is not magic, and is not about the discriminator learning ""good features""; it is about the loss applied to the generator.

This is referred to as "Mode Collapse", to quote the Google ML guide on GAN troubleshooting:

If the generator starts producing the same output (or a small set of outputs) over and over again, the discriminator's best strategy is to learn to always reject that output. But if the next generation of discriminator gets stuck in a local minimum and doesn't find the best strategy, then it's too easy for the next generator iteration to find the most plausible output for the current discriminator.

Each iteration of generator over-optimizes for a particular discriminator, and the discriminator never manages to learn its way out of the trap. As a result the generators rotate through a small set of output types. This form of GAN failure is called mode collapse.

See also, for additional reading "Unrolled GANs" and "Wasserstein loss".

see: https://developers.google.com/machine-learning/gan/problems

",25722,,-1,,6/17/2020 9:57,1/27/2020 9:46,,,,0,,,,CC BY-SA 4.0 17693,1,,,1/27/2020 9:52,,2,55,"

I have a dataset containing timestamps and temperatures. For each day, I have 1440 values, i.e., I have data for every minute of that day (60 minutes * 24 hours = 1440).

The Dataset looks like this:

As an initial step, I gathered day 1 data to predict day 2 data. I have tried AR, ARIMA and SARIMAX models, but I didn't get any positive results. I think this is multivariate, since the time and the temperature values change with respect to the date. I need guidance on choosing an ML model that suits my dataset and is able to predict the next day / next month.

",27539,,27539,,1/27/2020 12:12,1/27/2020 12:12,Predicting a day's data,,1,0,,,,CC BY-SA 4.0 17695,1,17700,,1/27/2020 11:20,,1,320,"

I've just started with CNN and there is something that I haven't understood yet:

How do you ""ask"" a network: ""classify me these images"" or ""do semantic segmentation""?

I think it must be something in the architecture, or in the loss function, or elsewhere, that makes the network classify its input or do semantic segmentation.

I suppose its output will be different on classification and on semantic segmentation.

Maybe the question could be rewritten to:

What do I have to do to use a CNN for classification or for semantic segmentation?

",4920,,2444,,12/12/2021 13:01,12/12/2021 13:01,What make a CNN suitable for image classification or semantic segmentation?,,1,0,,12/12/2021 13:01,,CC BY-SA 4.0 17696,2,,17693,1/27/2020 11:35,,1,,"

Just for clarification: your description (1 sample per minute) does not match the example data (far fewer data points, which is understandable, but also two data points in one minute, which contradicts the initial assertion). If your actual measurements are like that, you should first work on the sampling process to get reliable data.

For creating predictions, you need to have a reasonable model of the observed process. If you're measuring environmental temperatures, you will basically have three causes of variation:

  1. A day/night cycle
  2. A seasonal (summer/winter) cycle
  3. Local weather fluctuation

From only one day of samples, the only thing you can reasonably predict is that the next day will look mostly the same. If you collect more data over a year, you will be able to extract a seasonal cycle and estimate the deviations caused by local weather. ""You"" means either you as a researcher or any machine learning system that you program. Without sufficient historical data it is impossible to make good predictions (and even with sufficient data it's hard.)
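
Once you have a longer history, a rough pandas sketch of separating these components could look like this (the file and column names are assumptions):

import pandas as pd

# Minute-level data with 'timestamp' and 'temperature' columns.
df = pd.read_csv('temperature.csv', parse_dates=['timestamp']).set_index('timestamp')

# 1. Day/night cycle: average profile over the minute of the day.
daily_cycle = df.groupby(df.index.hour * 60 + df.index.minute)['temperature'].mean()

# 2. Seasonal cycle: average temperature per calendar day.
seasonal = df.resample('D')['temperature'].mean()

# 3. Local weather: roughly what remains after removing the two cycles above.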

",22993,,,,,1/27/2020 11:35,,,,4,,,,CC BY-SA 4.0 17697,1,17706,,1/27/2020 11:37,,0,124,"

In the snippet below, the highlighted part is the average norm, but since $1/|p_i|$ is outside the summation, it is very confusing to understand.

  • Is $|p_i|$ the l2-norm (as per Wolfram), the l1-norm, or the absolute value (as per the Wikipedia definition)?

  • Should the $i$ inside the summation be considered for the $1/|p_i|$ factor, which is outside the summation?

",25676,,2444,,1/27/2020 17:50,1/27/2020 17:50,How to understand the average l2 loss?,,1,0,,,,CC BY-SA 4.0 17698,2,,9746,1/27/2020 11:47,,-1,,"

Writing a game AI is difficult in general and it's even harder when there's no game to play. Like in your case.

While ""Lines"" is pretty neat, there's no (playing) opponent and there are no moves. apart from the setup. So what we have is an optimization problem where we need to get more point than the ""opponents"".

Game-playing algorithms like reinforcement learning or MCTS are IMHO out of the question, as there's no game to play. Still, it's an interesting problem, though I can't see how to make any learning AI. Anyway, what you need first is a simulator and an optimizer choosing promising actions.

Some ideas concerning the actions

Some actions are trivial, e.g., the eraser, as there are just a few dots to erase. Trying all combinations should be good enough for a start.

The scissors can in theory be placed anywhere, but the only interesting places are just next to an opponent piece or a crossing. Additionally, you only need to consider crossings which get ""won"" (reached first) by an opponent. Moreover, they only matter if they get won by an opponent having more points than you or blocking you.

The dots to insert should similarly be placed near an opponent, or so that you just win a crossing.

The additional lines are the worst, as they may be used to let you reach unconnected areas, or reach a crossing faster, or simply give you additional points (then you want a long line somewhere).

A model

Most of the game can be seen as a non-directed graph with edges labeled by their length, where the crossings and the initial dots are nodes. The simulator doesn't need to work ""pixel by pixel"", but may instead jump to the next time when a crossing gets reached or two ""flows"" meet each other. The points of a player grow linearly with time, where the slope is given by the number of their extending ""flows"".

So far, I could imagine modelling it as an Integer Linear Program, but with the additional lines, it's probably no longer possible.

I'd probably let the optimization be simulator-guided, i.e., analyse whether I could get closer to winning by reaching a crossing sooner. The details are still unclear....

",12053,,,,,1/27/2020 11:47,,,,0,,,,CC BY-SA 4.0 17699,2,,17690,1/27/2020 11:48,,2,,"

What exactly is the point of time.sleep() in this code? I don't really understand it: you're simply stopping the execution of the program for $0.01$ seconds, so how will that affect the simulator in any way? It's not running in parallel; it does one step of the simulation when you call the env.step function and returns the next state and reward. Calling the sleep function only slows down the program here.

The reason for the failure of successful trajectories, when repeated, is probably that the environment isn't stationary. That means that there are enemies or obstacles moving in the environment. If you simply repeat the trajectory that was successful once, the enemies and obstacles might be in different positions and the agent will die. The reason why it succeeded the first time is that the agent got lucky. These are still valid learning trajectories because they were successful. The agent should learn the reasons (features) why those trajectories were successful, but not learn the trajectories themselves because, as you saw, they don't generalize well. If you plan on using some kind of supervised learning approach, you should also generate a variety of unsuccessful trajectories so that the agent can learn what are correct and what are incorrect actions depending on the current state in the environment.

",20339,,,,,1/27/2020 11:48,,,,2,,,,CC BY-SA 4.0 17700,2,,17695,1/27/2020 11:52,,2,,"

Disclaimer: This question is very broad, my answer is admittedly partial and is intended to just give an idea of what's out there and how to find out more.

How do you ""say"" a network: ""classify me these images"" or ""do semantic segmentation""?

You're mixing two very different problems there. Although there are SO many variations of problems people are applying CNNs to, for this example we can focus on the ""classification of something in the image"" subset and identify 4 key tasks:

  • Image Classification answers the question ""What is this image about?"" (the answer is a single class label for the whole image, e.g. ""Cat"")
  • Semantic Segmentation answers the question ""What areas of this image are part of a Cat?"" (the answer is an image where each pixel is assigned to one of the given classes)
  • Object Detection answers the question ""Where are the objects in the image AND what objects are they?"" (e.g. of answer: ""Cat in bounding box at x,y,w,h [10,20,50,60]"")
  • Semantic Segmentation answers the question ""Where are the individual objects in this image AND what class are they AND give me the pixels that belong to each object"". You may guess from the number of ANDs there, this is the hardest of the four. The output here would be a set of class, bounding_box, mask tuples where the mask is typically defined in relation to the returned bounding box.

So, how do we build networks capable of solving one problem or the other? We build architectures towards one specific problem, exploiting reusable parts where possible. For example, typically classification and object detection are based on a deep ""backbone"" that extracts highly complex features from the image, which are finally interpreted by a classifier layer to make a prediction (for image classification) or by a box prediction head to predict where objects lie in the image (a very big simplification; look up object detection architectures and how they work for the proper description!).

What do I have to do to use a CNN for classification or for semantic segmentation?

In principle you can't just take a network built for classification and just ""ask"" it to do semantic segmentation (think of it as trying to use a screwdriver as scissors... it just was not built for that!). You need changes in the architecture, which necessarily imply new training, at the very least for the new parts that were added.
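
To make the architectural difference concrete, here is a minimal sketch (not a real production architecture) of the same convolutional backbone feeding either a classification head or a per-pixel segmentation head; the layer sizes and the 10 classes are arbitrary assumptions:

from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(128, 128, 3))
x = layers.Conv2D(32, 3, activation='relu', padding='same')(inputs)
x = layers.Conv2D(64, 3, activation='relu', padding='same')(x)
backbone_out = x

# Image classification head: one class label per image.
clf = layers.GlobalAveragePooling2D()(backbone_out)
clf = layers.Dense(10, activation='softmax')(clf)
classifier = keras.Model(inputs, clf)

# Semantic segmentation head: one class label per pixel.
seg = layers.Conv2D(10, 1, activation='softmax')(backbone_out)
segmenter = keras.Model(inputs, seg)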

",22086,,,,,1/27/2020 11:52,,,,0,,,,CC BY-SA 4.0 17701,1,,,1/27/2020 12:19,,2,1213,"

What are the advantages and disadvantages of using meta-heuristic algorithms on optimization problems? Simply, why do we use meta-heuristic algorithms, like PSO, over traditional mathematical techniques, such as linear, non-linear and dynamic programming?

I actually have a good understanding of meta-heuristic algorithms and I know how they work. For example, one advantage of this kind of algorithm is that it can often find a good (if not necessarily optimal) solution in a reasonable time.

However, my lack of knowledge about other methods and techniques brought this question to my mind.

",33046,,2444,,1/27/2020 16:14,1/27/2020 16:14,What are advantages of using meta-heuristic algorithms on optimization problems?,,1,0,,,,CC BY-SA 4.0 17704,1,,,1/27/2020 12:50,,1,228,"

I've got a few thousands of sequences like

1.23, 2.15, 3.19, 4.30, 5.24, 6.22

where the numbers denote times at which an event happened (there's just a single kind of event). The events are sort of periodical and the period is known to be exactly one, however, the exact times vary. Sometimes, events are missing and there are other irregularities, but let's ignore them for now.

I'd like to train a neural network to predict the probability that there'll be a next event in a given time interval. The problem is that I have no probabilities for the training.

All I have are the above sequences. If I had five sequences like

1.23, 2.15, 3.19, 4.30, 5.24, 6.05
1.23, 2.15, 3.19, 4.30, 5.24, 6.83
1.23, 2.15, 3.19, 4.30, 5.24, 6.27
1.23, 2.15, 3.19, 4.30, 5.24, 6.22
1.23, 2.15, 3.19, 4.30, 5.24, 6.17

then I could say that the probability of an event in the interval [6.10, 6.30] is 60% and use this value for learning. However, all my sequences are different. I could try to group them somehow so that I can define something like a probability, but this sounds way more complicated than what I'm trying to achieve.

Instead, I could try to use the sequence

1.23, 2.15, 3.19, 4.30, 5.24, 6.22

to learn that after the prefix 1.23, 2.15, 3.19, 4.30, 5.24, there will be an event in the interval [6.10, 6.30] for sure (value to learn equal to one); if there was 6.05 instead of 6.22, the value to learn would be zero. A learned network would produce the average value (let's say 0.60).

However, the error would never converge to zero, so there'd be no quality criterion and probably a big chance of overtraining leading to non-sense results.

Is there a way to handle this?

",12053,,,,,12/13/2022 7:06,Predicting probabilities of events using neural networks,,3,2,,,,CC BY-SA 4.0 17705,2,,17701,1/27/2020 13:23,,1,,"

Meta-heuristics are particularly suited for combinatorial optimization problems, given that, although they are not usually guaranteed to find the optimal global solution, they can often find a sufficiently good solution in a decent amount of time. So, they are an alternative to exhaustive search, which would take exponential time. For example, ant colony optimization algorithms have been used to approximately (or exactly, in the case of small or medium-size instances) solve the travelling salesman problem, whose decision version is an NP-complete problem (which means that, unless P=NP, there is no polynomial-time solution to solve it).

Meta-heuristics can also be easily applied to many problems, given that they are not problem-specific. For example, in the case of genetic algorithms, you just need to encode the possible solutions, but, in principle, you can apply genetic algorithms to a wide range of problems, although they may not always be the best solution to each of these problems. Moreover, as opposed to gradient-based optimisation algorithms, there's no need for the gradient of the objective function. For instance, in the case of genetic algorithms, you just need a way of evaluating the solutions (e.g. the fitness or the novelty).

Meta-heuristics often incorporate some form of randomness in order to escape from local minima. Ant-colony optimization algorithms or simulated annealing are two good examples of this approach.
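
To illustrate the role of randomness, here is a minimal simulated annealing sketch on a toy 1-D objective with several local minima (the objective, cooling schedule and neighbourhood are arbitrary choices):

import math
import random

def f(x):                        # toy objective with several local minima
    return x * x + 10 * math.sin(x)

x = random.uniform(-10, 10)      # current solution
temperature = 10.0

for step in range(10000):
    candidate = x + random.gauss(0, 1)           # random neighbour
    delta = f(candidate) - f(x)
    # Always accept improvements; accept worse moves with a probability
    # that shrinks as the temperature cools, which is how we escape local minima.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    temperature *= 0.999                         # cooling schedule

print(x, f(x))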

If you are still interested in meta-heuristics, the book Clever Algorithms: Nature-Inspired Programming Recipes (by Jason Brownlee) is a very good resource for learning about them. There's also a Github repository with the implementation of the algorithms described in this book.

",2444,,2444,,1/27/2020 13:44,1/27/2020 13:44,,,,1,,,,CC BY-SA 4.0 17706,2,,17697,1/27/2020 13:48,,1,,"

I agree that this notation is unclear. I would interpret it as follows:

Given that the expression is supposed to denote the average norm, $|p_i|$ is likely the cardinality of the set $\{p_i\}$.

In that case the expression would just be the sum over all norms divided by the number of norms, resulting in the average norm. The authors likely use this notation because they didn't want to introduce the number $n$ of $p_{i < n}$. $|\{p_i\}|$ would be clearer, but maybe uglier to typeset.

",2227,,2227,,1/27/2020 13:57,1/27/2020 13:57,,,,1,,,,CC BY-SA 4.0 17707,2,,17704,1/27/2020 14:07,,0,,"

You can set up a neural network to predict whether there is an event in a randomly picked interval. I.e., if there is an event in this interval in your training data, you train the network to output a 1; otherwise, you train it to output a 0.

If you use the quadratic loss function, the prediction of the NN should approximate the probability of such an event.

Overfitting can be monitored by splitting your training data into a training set and a test set.

If you train an RNN by inputting the intervals between events, these intervals should be more similar than the exact event times. Modelling a time series like this also makes more sense.
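
As a rough Keras sketch of this setup (sizes are arbitrary, and how the queried interval itself is encoded as an input is left open here):

from tensorflow import keras
from tensorflow.keras import layers

# Inputs: sequences of inter-event intervals; targets: 1 if an event occurred
# in the queried interval, 0 otherwise.
model = keras.Sequential([
    layers.LSTM(32, input_shape=(None, 1)),   # variable-length interval sequences
    layers.Dense(1, activation='sigmoid'),    # output in [0, 1]
])

# With the quadratic (MSE) loss on 0/1 targets, the output approximates
# the conditional probability of an event.
model.compile(optimizer='adam', loss='mse')
# model.fit(x_train, y_train, validation_split=0.2)  # monitor overfitting on held-out data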

Of course these details depend on what exactly the data is representing. And it is also possible that decent predictions are impossible for this dataset.

",2227,,,,,1/27/2020 14:07,,,,0,,,,CC BY-SA 4.0 17708,2,,17588,1/27/2020 16:39,,0,,"

Since you are mainly interested in starter libraries and packages to read up on here are some pointers:

Image recognition as well as general spatial classification mainly consists of two major tasks.

  1. Translating images to data that can be fed into an ML model of any kind.

  2. Building a model that is able to use this data to complete tasks like classification (which lego figure is this?), spatial clustering (which pixels form the ""face""), etc.

For the first task you will find a lot of Python libraries that suit the task, but PIL/Pillow should be the main library for this. So start by reading some documentation and transforming the images you do have (this includes standardizing and cleaning the images to improve the results of step 2).
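
For example, a minimal Pillow sketch of turning an image file into a normalized array (the file name and target size are hypothetical):

import numpy as np
from PIL import Image

img = Image.open('figure.png').convert('L').resize((28, 28))  # grayscale, fixed size
pixels = np.asarray(img, dtype=np.float32) / 255.0            # scale to [0, 1]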

For the second task, the actual model you will use depends on your task, but generally speaking some form of neural net is a good place to start, and if you prefer Python then you should look into tensorflow and keras (an easy-to-use interface to tensorflow).

Where to start

Start with the classical MNIST number recognition case, and if you have grokked that, you will know where to go from there or at least be able to ask a more specific question here.
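
For instance, a minimal Keras starter for exactly that case (layer sizes are arbitrary) might look like this:

from tensorflow import keras
from tensorflow.keras import layers

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = keras.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))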


",27665,,,,,1/27/2020 16:39,,,,0,,,,CC BY-SA 4.0 17710,1,,,1/27/2020 17:25,,1,26,"

Consider the function $f(x)=\tan(2x)$. How can I determine a fuzzy system that approximates $f(x)$? How to choose membership functions and how to determine fuzzy rules? Any help would be appreciated.

",33051,,2444,,1/28/2020 0:57,1/28/2020 0:57,How can I formulate a fuzzy inference system to approximate the tangent function?,,0,0,,,,CC BY-SA 4.0 17711,1,,,1/27/2020 17:57,,2,75,"

I am building a 2d top-down space game, which involves several objects, such as asteroids, drones, spaceships, space litter and power-ups. It follows the rules of space gravity, with speed and acceleration. The idea is that a player controls his own spaceship, can fire bullets, and is allowed to spawn 3 support drones that will attack the enemy. Its goal is to deal the most damage to the opponent spaceship in 3 minutes.

It involves certain dynamics, such as ""Destroying an asteroid spawns a power-up"", ""touching an asteroid deals damage to your spaceship"" etc.

What would be the best approach to define an AI agent for it?

I was thinking of Reinforcement learning, but maybe the game is too complex and I wouldn't have the computational power for it?

",33053,,,,,1/27/2020 17:57,Best AI Approach for 2D to-down space shooter,,0,0,,,,CC BY-SA 4.0 17712,2,,17690,1/27/2020 19:15,,2,,"

Since the environment has some randomness in it, purely memorizing a trajectory to victory will not work. You will have to memorize every single trajectory for that to work, and there are an infinite number of them.

So, you will need to add some sort of bias to your learning model - i.e., what to do when the observations in your pickle file don't match the current observation.

Your current setup lends itself well to a case-based reasoning (CBR) approach. The idea of CBR is that you have a memory bank of observation-action pairs, and when you see a new observation you look up the memory bank and see if the current observation has been seen before. If so, do that action. The interesting part is when there are no observations that match directly, but there are some that are similar. In this case you choose the most similar. The similarity can be calculated in any number of ways, and it is dependent on the data types. This paper will provide a good start: https://alumni.media.mit.edu/~jorkin/generals/papers/Kolodner_case_based_reasoning.pdf

",33054,,,,,1/27/2020 19:15,,,,2,,,,CC BY-SA 4.0 17713,2,,16905,1/27/2020 21:02,,3,,"

By far the most commonly used strategy is to select the child with the highest number of visits. This is as described in the 2008 paper you linked. It's also what's referred to as the ""robust child"" in the 2012 paper you linked.

In algorithm 2 of the 2012 paper, they actually use the highest average reward, which corresponds to ""Max child"". It looks like they're using the UCB1 policy, but they actually use a value of $0$ for the exploration parameter $c$, which makes the entire square root term drop out. This is also explained in the text at the end of your quote. But usually, a robust child / max visit count performs better.

Progressive Strategies for Monte-Carlo Tree Search is a different paper from 2008, in which these strategies are experimented with a bit. Usually, they perform similarly, but a robust child tends to be the best if there is any difference at all.
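
For illustration, here is a minimal sketch of both final-move selection rules, assuming a hypothetical Node class with children, visits and total_reward attributes:

# Hypothetical Node: node.children (list), node.visits (int), node.total_reward (float).

def select_robust_child(root):
    # Robust child: pick the child with the highest visit count.
    return max(root.children, key=lambda c: c.visits)

def select_max_child(root):
    # Max child: pick the child with the highest average reward.
    return max(root.children, key=lambda c: c.total_reward / c.visits)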

",1641,,2444,,1/28/2020 13:35,1/28/2020 13:35,,,,0,,,,CC BY-SA 4.0 17714,1,,,1/27/2020 21:26,,2,126,"

I have a neural network model defined as below. How many layers does it have? I'm not sure which ones to count when asked about the number.

def create_model():
    channels = 3
    model = Sequential()
    model.add(Conv2D(32, kernel_size = (5, 5), activation='relu', input_shape=(IMAGE_SIZE, IMAGE_SIZE, channels)))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(BatchNormalization())
    model.add(Conv2D(64, kernel_size=(3,3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(BatchNormalization())
    model.add(Conv2D(128, kernel_size=(3,3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(BatchNormalization())
    model.add(Conv2D(256, kernel_size=(3,3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(BatchNormalization())

    model.add(Conv2D(64, kernel_size=(3,3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(BatchNormalization())
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(128, activation='relu'))
    model.add(Dense(2, activation = 'softmax'))

    return model
",9053,,9053,,1/28/2020 14:53,1/28/2020 14:53,How many layers exists in my neural network?,,1,4,,,,CC BY-SA 4.0 17715,1,17719,,1/28/2020 1:20,,1,74,"

Not sure where to put this... I am trying to create a convolutional architecture for a DQN in keras, and I want to know why my param count is so high for my last layer compared to the rest of the network. I've tried slowly decreasing the dimensions of the layers above it, but it performs quite poorly. I want to know if there's anything I can do to decrease the param count of that last layer, besides the above.

Code:

#Import statements.
import random
import numpy as np
import tensorflow as tf
import tensorflow.keras.layers as L
from collections import deque
import layers as mL
import tensorflow.keras.optimizers as O
import optimizers as mO
import tensorflow.keras.backend as K


#Conv function.
def conv(x, units, kernel, stride, noise=False, padding='valid'):
    y = L.Conv2D(units, kernel, stride, activation=mish, padding=padding)(x)
    if noise:
        y = mL.PGaussian()(y)
    return y

#Network
        x_input = L.Input(shape=self.state)
        x_goal = L.Input(shape=self.state)
        x = L.Concatenate(-1)([x_input, x_goal])
        x_list = []
        for i in range(2):
            x = conv(x, 4, (7,7), 1)
        for i in range(2):
            x = conv(x, 8, (5,5), 2)
        for i in range(10):
            x = conv(x, 6, (3,3), 1, noise=True)
        x = L.Conv2D(1, (3,3), 1)(x)
        x_shape = K.int_shape(x)
        x = L.Reshape((x_shape[1], x_shape[2]))(x)
        x = L.Flatten()(x)
        crit = L.Dense(1, trainable=False)(x)
        critic = tf.keras.models.Model([x_input, x_goal], crit)
        act1 = L.Dense(self.action, trainable=False)(x)
        act2 = L.Dense(self.action2, trainable=False)(x)
        act1 = L.Softmax()(act1)
        act2 = L.Softmax()(act2)
        actor = tf.keras.models.Model([x_input, x_goal], [act1, act2])
        actor.compile(loss=mish_loss, optimizer='adam')
        actor.summary()

actor.summary():

________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_2 (InputLayer)            [(None, 300, 300, 3) 0                                            
__________________________________________________________________________________________________
input_3 (InputLayer)            [(None, 300, 300, 3) 0                                            
__________________________________________________________________________________________________
concatenate (Concatenate)       (None, 300, 300, 6)  0           input_2[0][0]                    
                                                                 input_3[0][0]                    
__________________________________________________________________________________________________
conv2d_52 (Conv2D)              (None, 294, 294, 4)  1180        concatenate[0][0]                
__________________________________________________________________________________________________
conv2d_53 (Conv2D)              (None, 288, 288, 4)  788         conv2d_52[0][0]                  
__________________________________________________________________________________________________
conv2d_54 (Conv2D)              (None, 142, 142, 8)  808         conv2d_53[0][0]                  
__________________________________________________________________________________________________
conv2d_55 (Conv2D)              (None, 69, 69, 8)    1608        conv2d_54[0][0]                  
__________________________________________________________________________________________________
conv2d_56 (Conv2D)              (None, 67, 67, 6)    438         conv2d_55[0][0]                  
__________________________________________________________________________________________________
p_gaussian (PGaussian)          (None, 67, 67, 6)    1           conv2d_56[0][0]                  
__________________________________________________________________________________________________
conv2d_57 (Conv2D)              (None, 65, 65, 6)    330         p_gaussian[0][0]                 
__________________________________________________________________________________________________
p_gaussian_1 (PGaussian)        (None, 65, 65, 6)    1           conv2d_57[0][0]                  
__________________________________________________________________________________________________
conv2d_58 (Conv2D)              (None, 63, 63, 6)    330         p_gaussian_1[0][0]               
__________________________________________________________________________________________________
p_gaussian_2 (PGaussian)        (None, 63, 63, 6)    1           conv2d_58[0][0]                  
__________________________________________________________________________________________________
conv2d_59 (Conv2D)              (None, 61, 61, 6)    330         p_gaussian_2[0][0]               
__________________________________________________________________________________________________
p_gaussian_3 (PGaussian)        (None, 61, 61, 6)    1           conv2d_59[0][0]                  
__________________________________________________________________________________________________
conv2d_60 (Conv2D)              (None, 59, 59, 6)    330         p_gaussian_3[0][0]               
__________________________________________________________________________________________________
p_gaussian_4 (PGaussian)        (None, 59, 59, 6)    1           conv2d_60[0][0]                  
__________________________________________________________________________________________________
conv2d_61 (Conv2D)              (None, 57, 57, 6)    330         p_gaussian_4[0][0]               
__________________________________________________________________________________________________
p_gaussian_5 (PGaussian)        (None, 57, 57, 6)    1           conv2d_61[0][0]                  
__________________________________________________________________________________________________
conv2d_62 (Conv2D)              (None, 55, 55, 6)    330         p_gaussian_5[0][0]               
__________________________________________________________________________________________________
p_gaussian_6 (PGaussian)        (None, 55, 55, 6)    1           conv2d_62[0][0]                  
__________________________________________________________________________________________________
conv2d_63 (Conv2D)              (None, 53, 53, 6)    330         p_gaussian_6[0][0]               
__________________________________________________________________________________________________
p_gaussian_7 (PGaussian)        (None, 53, 53, 6)    1           conv2d_63[0][0]                  
__________________________________________________________________________________________________
conv2d_64 (Conv2D)              (None, 51, 51, 6)    330         p_gaussian_7[0][0]               
__________________________________________________________________________________________________
p_gaussian_8 (PGaussian)        (None, 51, 51, 6)    1           conv2d_64[0][0]                  
__________________________________________________________________________________________________
conv2d_65 (Conv2D)              (None, 49, 49, 6)    330         p_gaussian_8[0][0]               
__________________________________________________________________________________________________
p_gaussian_9 (PGaussian)        (None, 49, 49, 6)    1           conv2d_65[0][0]                  
__________________________________________________________________________________________________
conv2d_66 (Conv2D)              (None, 47, 47, 1)    55          p_gaussian_9[0][0]               
__________________________________________________________________________________________________
reshape (Reshape)               (None, 47, 47)       0           conv2d_66[0][0]                  
__________________________________________________________________________________________________
flatten (Flatten)               (None, 2209)         0           reshape[0][0]                    
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 2000)         4420000     flatten[0][0]                    
__________________________________________________________________________________________________
dense_2 (Dense)                 (None, 200)          442000      flatten[0][0]                    
__________________________________________________________________________________________________
softmax (Softmax)               (None, 2000)         0           dense_1[0][0]                    
__________________________________________________________________________________________________
softmax_1 (Softmax)             (None, 200)          0           dense_2[0][0]                    
==================================================================================================
Total params: 4,869,857
Trainable params: 7,857
Non-trainable params: 4,862,000
__________________________________________________________________________________________________
",33058,,33058,,1/28/2020 4:42,1/28/2020 13:50,"Param count in last layer high, how can I decrease?",,1,0,,,,CC BY-SA 4.0 17716,2,,5115,1/28/2020 5:49,,-1,,"

It is wrong to assume that just the connections to other words define their meaning.

Give an AI a hundred novels and it would still not know what the word ""cat"" means.

Show the AI a picture of a cat with the word ""cat"" underneath it and it would know straight away.

In this way an AI needs to know a minimum number of words through experience, rather than through combinations of other words. From then on it may be able to deduce the meanings of new words.

Similarly, if I gave you a hundred novels in Chinese, you would never be able to understand Chinese. If I show you a picture book in Chinese, maybe you have a chance.

",4199,,,,,1/28/2020 5:49,,,,10,,,,CC BY-SA 4.0 17718,2,,17714,1/28/2020 13:07,,3,,"

tl;dr I'd say your model has 8 layers (5 conv, 3 dense); however, a lot of people count layers in other ways. From what I've seen, this is by far the most conventional way of counting layers.


Justification

This is an interesting question because it's quite subjective. In most cases only the convolutional and dense layers would count from your network. Batch norm, dropout and flatten are usually considered operations applied to other layers rather than layers of their own (much like activation functions).

Note: It is debatable whether or not pooling layers are considered to be layers (as they have no trainable parameters) but in most cases they are not considered to be so.

Note 2: Batch norm, on the other hand, isn't usually considered a layer even though it has trainable parameters. Clearly the authors didn't introduce it as a layer, but as a way to normalize, shift and scale the inputs of a layer. This is apparent in some of the examples below which don't count batch norm as an actual layer.

Note 3: Conventionally all networks are considered to have [at least] one input layer but it doesn't count as a layer.


Examples

Some examples that follow this reasoning when counting layers are the following. I'll also write the pooling layers in each, but they clearly aren't considered as layers by the authors. When available, I'll also write the number of layers that keras registers from their official implementations:

The ResNet-50 architecture has 50 layers (49 conv, 2 pool, 1 dense), however keras registers it as 177 layers. ResNets also use batch normalization after each convolution (so 49 batch norms in total), but clearly don't count them as layers.

The Resnet-34 has 34 layers (33 conv, 2 pool, 1 dense). Like the previous, this also uses batch norm but doesn't count it.

VGG-19 has 19 layers (16 conv, 5 pool, 3 dense). Keras registers this as 26 layers.

AlexNet is considered to have 8 layers (5 conv, 3 pool, 3 dense).
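
If you want to see the gap between what Keras registers and the conventional count for yourself, a quick sketch (weights=None avoids downloading anything):

import tensorflow as tf

model = tf.keras.applications.ResNet50(weights=None)

# Everything Keras registers: conv, dense, batch norm, activations, pooling, add, ...
print(len(model.layers))

# Only the conv and dense layers, closer to the conventional way of counting.
conv_dense = [l for l in model.layers
              if isinstance(l, (tf.keras.layers.Conv2D, tf.keras.layers.Dense))]
print(len(conv_dense))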

",26652,,,,,1/28/2020 13:07,,,,0,,,,CC BY-SA 4.0 17719,2,,17715,1/28/2020 13:50,,1,,"

If I understood correctly, you want to decrease the parameter count of the last layer (the dense_2 layer, right?). It would be nice to know why you want to decrease the number of parameters in the last layers... But I'll proceed with what I see.

Firstly, the dense layers (or fully connected in literature) have a deterministic number of parameters (or weights) to learn according to the size of the input and output tensor. The relation is the following:

$N_{params} = Y_{output} \cdot (X_{input} +1) = 200 \cdot (2209 +1) = 442000$

Where:

  • $Y_{output}$: Output tensor shape (act2=200, which is the output of the dense_2 layer)
  • $X_{input}$: Input tensor shape (x=2209, which is the output of the flatten layer)
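
A quick way to double-check this relation with Keras, using the shapes from the summary above:

from tensorflow import keras
from tensorflow.keras import layers

x = keras.Input(shape=(2209,))
y = layers.Dense(200)(x)
keras.Model(x, y).summary()   # reports 200 * (2209 + 1) = 442000 parameters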

So if you want to decrease the number of parameters you can:

  • Decrease the output tensor: your self.action2, which I guess is the action space, so you might not be able to decrease it
  • Decrease the input tensor: maybe? I would need more context (code) to know if that is even possible

So, in short: if you are not willing to change the input or output tensor of the Dense layers, then no, you cannot decrease the number of parameters.


BONUS: In case you missed it, I have noticed you have set your dense layers to trainable=False. So in principle you should not care about decreasing the number of parameters (which in most cases is motivated by the wish to reduce training time), since they are already not being trained. You can check that in the Keras summary output:

Total params: 4,869,857
Trainable params: 7,857
Non-trainable params: 4,862,000

Where the non-trainable parameters are $4862000 = 4420000 + 442000 $, which are the number of parameters of your 2 dense layers.

",26882,,,,,1/28/2020 13:50,,,,2,,,,CC BY-SA 4.0 17721,1,17722,,1/28/2020 16:16,,35,10437,"

I trained a simple CNN on the MNIST database of handwritten digits to 99% accuracy. I'm feeding in a bunch of handwritten digits, and non-digits from a document.

I want the CNN to report errors, so I set a threshold of 90% certainty below which my algorithm assumes that what it's looking at is not a digit.

My problem is that the CNN is 100% certain of many incorrect guesses. In the example below, the CNN reports 100% certainty that it's a 0. How do I make it report failure?

My thoughts on this: Maybe the CNN is not really 100% certain that this is a zero. Maybe it just thinks that it can't be anything else, and it's being forced to choose (because of normalisation on the output vector). Is there any way I can get insight into what the CNN ""thought"" before I forced it to choose?

PS: I'm using Keras on Tensorflow with Python.

Edit

Because someone asked. Here is the context of my problem:

This came from me applying a heuristic algorithm for segmentation of sequences of connected digits. In the image above, the left part is actually a 4, and the right is the curve bit of a 2 without the base. The algorithm is supposed to step through segment cuts, and when it finds a confident match, remove that cut and continue moving along the sequence. It works really well for some cases, but of course it's totally reliant on being able to tell if what it's looking at is not a good match for a digit. Here's an example of where it kind of did okay.

My next best option is to do inference on all permutations and maximise combined score. That's more expensive.

",16871,,2444,,10/8/2021 0:50,10/8/2021 0:50,"Why do CNN's sometimes make highly confident mistakes, and how can one combat this problem?",,6,5,,,,CC BY-SA 4.0 17722,2,,17721,1/28/2020 16:40,,35,,"

The concept you are looking for is called epistemic uncertainty, also known as model uncertainty. You want the model to produce meaningful calibrated probabilities that quantify the real confidence of the model.

This is generally not possible with simple neural networks, as they simply do not have this property; for this you need a Bayesian Neural Network (BNN). This kind of network learns a distribution over weights instead of scalar or point-wise weights, which then allows it to encode model uncertainty, as the distribution of the output is then calibrated and has the properties you want.

This problem is also called out-of-distribution (OOD) detection, and again it can be done with BNNs, but unfortunately training a full BNN is intractable, so we use approximations.

As a reference, one of these approximations is Deep Ensembles, which trains several instances of a model on the same dataset and then averages the softmax probabilities, and has good out-of-distribution detection properties. Check the paper here, in particular section 3.5, which shows results for OOD detection based on the entropy of the ensemble probabilities.
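
As a rough sketch of the averaging and entropy computation described there, assuming models is a list of independently trained Keras classifiers with softmax outputs:

import numpy as np

def ensemble_predict(models, x):
    probs = np.mean([m.predict(x) for m in models], axis=0)   # average softmax
    # Predictive entropy: high entropy suggests high uncertainty / likely OOD input.
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=-1)
    return probs, entropy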

",31632,,33085,,1/29/2020 6:57,1/29/2020 6:57,,,,8,,,,CC BY-SA 4.0 17723,2,,9322,1/28/2020 17:30,,2,,"

(Old question, I know...)

It is not that we need both an encoder and decoder for sequence-to-sequence models - this decoupling of ""reading"" and ""generating"" just works better very often.

Example for Sequence-to-sequence without two RNNs

To prove my point above, here is an example from machine translation. Current machine translation systems are sequence-to-sequence models, and virtually all models have the bipartite structure of encoder and decoder.

Approaches like Eager Translation break this implied convention. They learn translation models that do not encode and decode with separate RNNs, but at every time step 1) read a source token and 2) produce a target token - with a single RNN.

Why encoder-decoder works better very often

Sequence-to-sequence modeling with encoder-decoder structure almost always implies attention in-between encoder and decoder. Attention relays information between the encoder and decoder, in the sense that every time the decoder has to generate the next item in the target sequence, an attention network computes a dynamic, useful ""summary"" of all encoder states.

This attention summary is different and recomputed for every decoding step. On the other hand, encoding the source sequence is done only once and then all encoder states are kept in memory.
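
For intuition, here is a minimal NumPy sketch of a dot-product attention summary (real systems add learned projections, scaling and so on):

import numpy as np

def attention_summary(decoder_state, encoder_states):
    # decoder_state: shape (d,); encoder_states: shape (seq_len, d).
    scores = encoder_states @ decoder_state    # one score per source position
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over source positions
    return weights @ encoder_states            # weighted summary of the encoder states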

The ability to have a direct view of the source sequence (using as a proxy the entire sequence of encoder states) via attention is what makes the encoder-decoder approach superior to a single RNN.

In comparison, a single RNN only has a direct view on one element of the input sequence. Some interesting scenarios for a single RNN:

  • At every time step, read one source token, then write one target token: Previous elements in the source sequence are represented only in lossy recurrent states, while future elements cannot be accessed at all.
  • First read all source tokens, then write all target tokens: the meaning of the entire source sentence has to be compressed into a fixed-size recurrent state vector.
",33073,,,,,1/28/2020 17:30,,,,0,,,,CC BY-SA 4.0 17724,2,,5115,1/28/2020 19:22,,2,,"

You are implying that such ideas are novel, and that such tools do not exist. But the idea is very popular, and there are numerous tools.

We need to write a program that would recognize that a word is connected to other words in the same way in both language. Then it would know those two words must have the same meaning.

You are describing the essence of known natural language processing (NLP) tasks such as word alignment (link words in different languages that have the same meaning) and, of course, machine translation.

While learning a machine translation model, we actually do discover which words (or parts of words, or sequences of words) in different languages have the same meaning.

Here are some concepts I would recommend for further study of this subject:

  • Word alignment, an example for a well-known and popular tool would be fast_align
  • Word embeddings, word2vec is a widely used tool
  • Modern machine translation with sequence-to-sequence models, well-known tools are fairseq, or Sockeye
",33073,,,,,1/28/2020 19:22,,,,0,,,,CC BY-SA 4.0 17725,2,,17636,1/28/2020 19:32,,3,,"

(Disclosure: I am a researcher and lecturer in Computational Linguistics)

It is true that annotation and debugging work with existing tools without modification can be considered Computational Linguistics.

And yet, most Computational Linguists program on a daily basis, since they actively develop tools. Just to give you some context, at major Computational Linguistics conferences such as ACL or EMNLP (the biggest ones), most authors did the coding themselves.

To say that coding is an unimportant side aspect of being a Computational Linguist, as claimed in another answer, is a slight misrepresentation.

",33073,,33073,,8/11/2022 7:36,8/11/2022 7:36,,,,3,,,,CC BY-SA 4.0 17726,2,,17439,1/28/2020 19:40,,1,,"

I have thought of various ways of doing this. Such as, as well as having 26 outputs for letters of the alphabet have about 20 more for ""repeat the character that is 14 characters to the left"" and so on

Creating a system of rules like your example above is the very opposite of training an RNN to perform this task.

If you would like to train an RNN to answer simple questions you would not need to come up with ingenious rules, but with enough training data of the form

question -> answer

Then you could use one of many popular sequence-to-sequence NLP tools to try and learn this behaviour, effectively treating the problem as if it were machine translation between languages.


More broadly, yes, question answering has been done before, it is very popular in fact. It is an active subfield of NLP research and many methods have been developed, some of them involving RNN networks.

",33073,,,,,1/28/2020 19:40,,,,0,,,,CC BY-SA 4.0 17727,2,,17474,1/28/2020 19:43,,1,,"

If I understand correctly, what you are looking for is called ""common sense reasoning"" in NLP research.

Research in this field revolves around benchmark data sets, where good performance indicates some ability to do common sense reasoning. Here is a nice collection of data sets and research by Sebastian Ruder:

http://nlpprogress.com/english/common_sense.html


In the end, the main question is not

How much knowledge of the world is learnt through words?

since it is virtually unanswerable if asked in this form. A question that is answered in NLP common sense reasoning research is

Out of 100 specific decisions that my model needs to take, for how many does the model show the ability to reason correctly?

",33073,,,,,1/28/2020 19:43,,,,1,,,,CC BY-SA 4.0 17728,1,,,1/29/2020 1:12,,2,521,"

I've been dabbling with machine learning and neural networks (namely, resnet50) for a few months now, mostly doing image recognition. I am currently trying to make a program that, given a string of numbers as input, can predict the next number in this sequence. For example, the input could be 1, 2, 3, 4 and the output should be 5.

I read something that said this could be done with a multilayer perceptron neural net, but that didn't elaborate much.

Any ideas, or links to tutorials/code?

",33078,,2444,,1/29/2020 11:08,1/29/2020 11:08,How I can predict the next number in a sequence with a neural network?,,0,1,,,,CC BY-SA 4.0 17729,2,,17721,1/29/2020 1:48,,7,,"

Broken assumptions

Generalization relies on making strong assumptions (no free lunch, etc). If you break your assumptions, then you're not going to have a good time. A key assumption of a standard digit-recognition classifier like MNIST is that you're classifying pictures that actually contain a single digit. If your real data contains pictures that have non-digits, then that means that your real data is not similar to training data but is conceptually very, very different.

If that's a problem (as in this case) then one way to treat it is to explicitly break that assumption and train a model that not only recognizes digits 0-9 but also recognizes whether there's a digit at all, and is able to provide an answer ""that's not a digit"", so an 11-class classifier instead of a 10-class one. MNIST training data is not sufficient for that, but you can use some kind of 'distractor' data to provide the not-a-digit examples. For example, you could use some dataset of letters (perhaps omitting I, l, O and B) transformed to look similar to MNIST data.
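
A minimal Keras sketch of such an 11-class setup (layer sizes are arbitrary; collecting and labelling the distractor images is up to you):

from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 11   # digits 0-9 plus an explicit not-a-digit class (index 10)

model = keras.Sequential([
    layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(NUM_CLASSES, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# Mix the distractor images (labelled 10) into the MNIST training data before fitting.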

",1675,,,,,1/29/2020 1:48,,,,0,,,,CC BY-SA 4.0 17730,2,,17474,1/29/2020 1:57,,0,,"

Grounded language learning

Your description does not match the current understanding of how children learn language, and giving a computer an explicit textual list of common sense rules is very different from teaching a young child. (Some aspects of teaching older children are more similar to that, but by that time they fully know the language already).

Much of language is acquired through interaction (both sensory and motoric) with the physical world, applying words together with a shared focus of attention to something real. I.e. you talk about a cat or a ball and its behavior while both you and the child are paying attention to that behavior or object or an image of it. The same applies later for more complex topics such as social situations - to teach a topic to a child, you'd inevitably use a shared attention to specific events in the real world or specific events in a mostly shared 'model world', i.e. one reconstructed from memory (what you saw your sister do five minutes ago) or imagined/hypothesised (if you do this, these consequences might happen).

Attempts to replicate this process in artificial systems is usually called 'grounded language learning', and there's extensive published literature on that which may be interesting to you.

In essence, the assumption is that English (or any other) words to an artificial system are just as useful as a Chinese-Chinese explanatory dictionary to me. If I speak some basic Chinese, then I can use that dictionary to expand my vocabulary and understand complex Chinese - but if I have no grounding in Chinese whatsoever, then the expectation is that it's impossible to reconstruct a language from that data alone.

",1675,,1675,,1/29/2020 2:05,1/29/2020 2:05,,,,1,,,,CC BY-SA 4.0 17731,2,,17721,1/29/2020 2:29,,15,,"

Your classifier is specifically learning the ways in which 0s are different from other digits, not what it really means for a digit to be a zero.

Philosophically, you could say the model appears to have some powerful understanding when restricted to a tightly controlled domain, but that facade is lifted as soon as you throw any sort of wrench in the works.

Mathematically, you could say that the model is simply optimizing a classification metric for data drawn from a specific distribution, and when you give it data from a different distribution, all bets are off.

The go-to answer is to collect or generate data like the data you expect the model to deal with (in practice, the effort required to do so can vary dramatically depending upon the application). In this case, that could involve drawing a bunch of random scribbles and adding them to your training data set. At this point you must ask, now how do I label them? You will want a new ""other"" or ""non-digit"" class so that your model can learn to categorize these scribbles separately from digits. After retraining, your model should now better deal with these cases.

However, you may then ask, but what if I gave it color images of digits? Or color images of farm animals? Maybe pigs will be classified as zeros because they are round. This problem is a fundamental property of the way deep learning is orchestrated. Your model is not capable of higher order logic, which means it can seem to go from being very intelligent to very dumb by just throwing the slightest curve ball at it. For now, all deep learning does is recognize patterns in data that allow it to minimize some loss function.

Deep learning is a fantastic tool, but not an all-powerful omnitool. Bear in mind its limitations and use it where appropriate, and it will serve you well.

",33079,,,,,1/29/2020 2:29,,,,1,,,,CC BY-SA 4.0 17732,1,,,1/29/2020 3:56,,2,156,"

In the famous Nvidia paper Progressive Growing of GANs for Improved Quality, Stability, and Variation, the GAN can generate hyperrealistic human faces. But, in the very same paper, images of other categories are rather disappointing, and there don't seem to have been any improvements since then. Why is this the case? Is it because they didn't have enough training data for other categories? Or is it due to some fundamental limitation of GANs?

I have come across a paper talking about the limitations of GAN: Seeing What a GAN Cannot Generate.

Is anybody using GANs for image synthesis other than human faces? Any success stories?

",33082,,2444,,1/29/2020 11:12,1/29/2020 11:12,Is it feasible to use GAN for high-quality image synthesis other than human faces?,,2,0,,,,CC BY-SA 4.0 17733,2,,17466,1/29/2020 8:37,,2,,"

I came across a news article from 2018 where the president of India was saying that Sanskrit is the best language for ML/AI.

A very interesting statement indeed! Globally, there is very little interest in NLP for Sanskrit compared to very many other languages.

Especially given the fact that humans still have to first learn Sanskrit in case NLP is done in Sanskrit.

Most people would say that NLP tools are meant to cater to the needs of humans, instead of requiring humans to learn a new language to be able to benefit from tools (which is a huge barrier).

To answer your question more clearly, it is not that the research community selects languages for research based on ""suitability"" of the language. Instead, in general the intention of NLP research is to grapple with problems that are relevant, impact many people - or are funded (unfortunately :-).

",33073,,,,,1/29/2020 8:37,,,,0,,,,CC BY-SA 4.0 17734,1,,,1/29/2020 8:47,,7,258,"

While reading the DQN paper, I found that randomly selecting samples and learning from them reduced divergence in RL using a non-linear function approximator (e.g. a neural network).

So, why does Reinforcement Learning using a non-linear function approximator diverge when using strongly correlated data as input?

",33088,,2444,,12/19/2020 13:06,12/19/2020 13:06,Why does reinforcement learning using a non-linear function approximator diverge when using strongly correlated data as input?,,1,2,,,,CC BY-SA 4.0 17736,2,,17732,1/29/2020 9:38,,1,,"

Generative Adversarial Networks basically boil down to a combination of a generic Generator and a Discriminator trying to beat each other, so that the generator tries to generate much better images (usually from noise) and the discriminator becomes much better at classification. So, no, they are not suited only for high-quality human face synthesis, but for any image type.

In fact, not only can it be used for any high quality image synthesis, it can work on non-image data types as well (such as text, etc). It all depends on the type of neural network you are using for the discriminator and generator at the end of the day.

http://openaccess.thecvf.com/content_iccv_2017/html/Osokin_GANs_for_Biological_ICCV_2017_paper.html

The above refers to a paper synthesizing cell images through GANs, as I haven't personally used GANs on a practical level.

General blog explaining GANs:
https://machinelearningmastery.com/impressive-applications-of-generative-adversarial-networks/

Human faces are much more frequently tackled, for many reasons, usually that human faces are highly symmetric and have a lot of different features, usually more than other types of images, with the added difficulty that we as actual humans are usually good at recognizing faces - making a neural net that can fool ourselves makes it a challenging area of research.

Hope it helped! Do let me know if I'm wrong somewhere.

",33092,,,,,1/29/2020 9:38,,,,0,,,,CC BY-SA 4.0 17737,2,,17732,1/29/2020 10:06,,1,,"

I'd challenge your assertion somewhat that the generated images of other categories are of much worse quality than the faces!

Take the bikes on transparent/solid backgrounds - they look great!

Where the images fail a bit is with the more complex pictures that have a lot of elements, where element bleed (covers bleeding into the floor, etc.) occurs. This is simply a result of the complexity of the image and of the training database.

As an example, I have developed a GAN that generates ""Vaporwave""-like imagery.

My results were generally poor because, unlike faces, my training set was highly diverse in terms of arrangements, elements, etc. If you look at the generated bed images in your example paper, the GAN had to learn and generate not only the beds but also the highly complex backgrounds, which differed severely between training images, whereas in the face example the images were zoomed in on the faces, obscuring the background.

If you use human faces in a normal background setting (e.g. with the scenery around them visible), your GAN will perform equally well or badly, because there is so much more complexity to learn.

You can find my experience with a non-face GAN on Kaggle, but understand that the bad results are mainly due to a very small training set and the fact that these images are very different from each other (besides the color gradient, which the GAN picks up very fast).

https://www.kaggle.com/fnguyen/vaporprogan

",27665,,,,,1/29/2020 10:06,,,,0,,,,CC BY-SA 4.0 17740,2,,17195,1/29/2020 12:52,,1,,"

The problem is not that RNN flavours such as LSTMs are incapable of keeping track of the ""important"" parts of the input. They also do not have much trouble recognizing commas in different places.

To prove this point, I recommend reading Andrej Karpathy's excellent write-up about the behaviour of individual RNN ""neurons"".

Addressing specifically this comment in your question:

I suspect another reason why the LSTM is not able to infer the format is that the comma can be placed in different indexes of the sequence, so it could be losing its importance in the hidden state the longer the sequence is (not sure if that makes sense).

If commas are relevant to the task at hand, LSTMs can learn to remember their positions or related information. This information is not necessarily diluted by repeated application of recurrence over long sequences: networks can learn to propagate and promote crucial information from one recurrent state to the next.


Input sequences have arbitrary length, which means that LSTMs need to compress information about seen sequence elements

Rather, the input sequence has an arbitrary length, while the LSTM state vectors have a fixed size. State vectors are the only way for an LSTM to ""keep track of important parts"". This means that those fixed-size vectors are a bottleneck and there is an information-theoretic upper bound on the amount of information about ""important parts"" that can be kept by an LSTM.

LSTMs potentially take multiple decisions. For each decision, something else in the input sequence is most important

For tasks such as summarization that you mention in the question, an LSTM makes a series of predictions (predicting the tokens of the summary one token at a time). For each prediction, different things in the input sequence might be important. Put another way, for each decision, another view of the input may be most helpful.

This is a key motivation for using attention networks. Each time an LSTM is making a prediction, an attention network can provide a dynamic, optimally helpful view of the input sequence.
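
As a toy illustration of this idea (my own sketch, not taken from the write-up linked above), a dot-product attention step builds a different weighted view of the encoder states for every decoder state:

import numpy as np

def attention_context(decoder_state, encoder_states):
    scores = encoder_states @ decoder_state           # one score per input position
    weights = np.exp(scores) / np.exp(scores).sum()   # softmax over input positions
    return weights @ encoder_states                   # weighted view of the input

enc = np.random.rand(6, 4)                            # 6 input positions, hidden size 4
print(attention_context(np.random.rand(4), enc).shape)   # (4,)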

",33073,,,,,1/29/2020 12:52,,,,0,,,,CC BY-SA 4.0 17741,2,,12490,1/29/2020 13:29,,9,,"

Can the decoder in a transformer model be parallelized like the encoder?

The correct answer is: computation in a Transformer decoder can be parallelized during training, but not during actual translation (or, in a wider sense, generating output sequences for new input sequences during a testing phase).

What exactly is parallelized?

Also, it's worth mentioning that "parallelization" in this case means computing encoder or decoder states in parallel for all positions of the input sequence. Parallelization over several layers is not possible: the first layer of a multi-layer encoder or decoder still needs to finish computing all positions in parallel before the second layer can start computing.

Why can the decoder be parallelized position-wise during training?

For each position in the input sequence, a Transformer decoder produces a decoder state as an output. (The decoder state is then used to eventually predict a token in the target sequence.)

In order to compute one decoder state for a particular position in the sequence of states, the network consumes as inputs: 1) the entire input sequence and 2) the target words that were generated previously.

During training, the target words generated previously are known, since they are taken from the target side of our parallel training data. This is the reason why computation can be factored over positions.

During inference (also called "testing", or "translation"), the target words previously generated are predicted by the model, and computing decoder states must be performed sequentially for this reason.
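
As an illustration, here is a minimal PyTorch sketch of the two regimes (toy sizes, and a plain nn.TransformerDecoder rather than a full translation model):

import torch
import torch.nn as nn

d_model, nhead, vocab = 16, 4, 10
decoder = nn.TransformerDecoder(nn.TransformerDecoderLayer(d_model, nhead), num_layers=2)
embed = nn.Embedding(vocab, d_model)
project = nn.Linear(d_model, vocab)
memory = torch.randn(5, 1, d_model)                 # encoder states: (src_len, batch, d_model)

# Training: the whole known target prefix is fed at once; a causal mask keeps
# position t from attending to positions > t, so all states come out in one pass.
tgt = torch.randint(0, vocab, (7, 1))               # target tokens from the parallel data
causal_mask = torch.triu(torch.full((7, 7), float('-inf')), diagonal=1)
states = decoder(embed(tgt), memory, tgt_mask=causal_mask)   # (7, 1, d_model), in parallel

# Inference: previously *predicted* tokens are needed as input, so the loop is sequential.
generated = torch.zeros(1, 1, dtype=torch.long)     # start token (index 0, arbitrary)
for _ in range(7):
    out = decoder(embed(generated), memory)         # recompute states for the current prefix
    next_token = project(out[-1]).argmax(-1, keepdim=True)
    generated = torch.cat([generated, next_token], dim=0)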

Comparison to RNN models

While Transformers can parallelize over input positions during training, an encoder-decoder model based on RNNs cannot parallelize positions. This means that Transformers are generally faster to train, while RNNs are faster for inference.

This observation leads to the nowadays common practice of training Transformer models and then using sequence-level distillation to learn an RNN model that mimics the trained Transformer, for faster inference.

",33073,,33073,,11/12/2020 12:39,11/12/2020 12:39,,,,12,,,,CC BY-SA 4.0 17743,2,,17721,1/29/2020 14:07,,3,,"

Apollys,

That's a very well thought out response. Particularly, the philosophical discussion of the essence of ""0-ness.""

I haven't actually performed this experiment, so caveat emptor... I wonder how well an ""other"" class would actually work. The ways in which ""other"" differs from ""digit"" have infinite variability (or at least their only limitation is the cardinality of the input layer).

The NN decides whether something is more of one class or more of a different class. If there isn't an essence in common among the ""non-digits"", I don't believe it will do well at identifying ""other"" as the catch-all for everything that has a low classification confidence.

This approach still doesn't identify what it is to be ""not-digit"". It identifies how all the things that are ""other"" differ from the other labeled inputs -- probably poorly, depending on the variability of the ""non-digit"" labeled data. (i.e. is it numerically exhaustive, many times over, of all random scribbles?) Thoughts?

",33103,,,,,1/29/2020 14:07,,,,5,,,,CC BY-SA 4.0 17744,1,,,1/29/2020 14:56,,2,124,"

Let's assume that we have a regression problem. Our input is just a binarized image that contains a single rectangle, and we want to predict a single float number. This floating-point number depends on the rectangle's angle, size and location. Can this problem be solved by a neural network?

I think it cannot be solved by a neural network, because the rectangle's angle, size and location are latent variables, and without learning these latent variables the above problem cannot be solved. What do you think?

",33108,,2444,,1/29/2020 16:18,1/29/2020 16:18,Can a neural network learn to predict a number given a binarized image of a rectangle?,,1,0,,,,CC BY-SA 4.0 17746,2,,17744,1/29/2020 15:35,,3,,"

It can definitely be learned; the question is the approach. It would be expensive and difficult, from a modeling perspective, to do this with dense layers, so usually convolutions are the way to go. An issue with convolutions is that they generally focus on equivariant and relative features, so if you need specific locations within the image, it might be worth the simple alteration of CoordConv. Regardless of the approach, that type of input-to-output mapping is possible; you just have to consider it when modeling.
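
A rough sketch of the CoordConv idea (my own illustration, not the paper's code): append normalized coordinate channels to the binarized image before the convolutional layers, so absolute position becomes directly visible to the filters.

import torch

def add_coord_channels(img):              # img: (batch, 1, H, W) binarized image
    b, _, h, w = img.shape
    ys = torch.linspace(-1, 1, h).view(1, 1, h, 1).expand(b, 1, h, w)
    xs = torch.linspace(-1, 1, w).view(1, 1, 1, w).expand(b, 1, h, w)
    return torch.cat([img, ys, xs], dim=1)   # (batch, 3, H, W)

x = torch.randint(0, 2, (8, 1, 64, 64)).float()
print(add_coord_channels(x).shape)        # torch.Size([8, 3, 64, 64])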

",25496,,,,,1/29/2020 15:35,,,,0,,,,CC BY-SA 4.0 17747,1,,,1/29/2020 15:39,,1,23,"

Or would you simply do this as a time series of models?

Basically, I think you can think of the time series of weights as the hidden states and the dynamics driving the weight time series as the RNN weights. I'm not sure if the data gradients are avoiding look-ahead in this context, though.

I am basically thinking of a smoothing (data assimilation) formulation of the filtering problem. Usually smoothing has look-ahead bias, but with stop_gradients (and a large graph) it should be possible to do ""batch"" filtering.

",23001,,,,,1/29/2020 15:39,Is there a way to use RNN (in tensorflow) to do something like a batch Kalman with the weight dynamics specified in the loss?,,0,0,,,,CC BY-SA 4.0 17749,1,,,1/29/2020 20:33,,4,952,"

Can someone explain to me why it is possible to eliminate the rest of the middle branch in this image with alpha-beta pruning? I am confused because it seems the only information you know is that Helen would pick at least a 2 at the top (considering that we iterate from left to right in DFS), and Stavros would absolutely not pick anything above 7. This leaves 5 possible numbers that the rest of the branch could take on and that Helen could potentially end up picking, but can't, because we've eliminated those possibilities via pruning.

",33117,,2444,,1/29/2020 20:37,3/26/2020 0:03,Why is it possible to eliminate this branch with alpha-beta pruning?,,1,1,,,,CC BY-SA 4.0 17750,1,,,1/29/2020 22:33,,2,216,"

I'm super new to deep learning and computer vision, so this question may sound dumb.

In this link (https://github.com/GeorgeSeif/Semantic-Segmentation-Suite), there are pre-trained models (e.g. ResNet101) called front-end models, which are used for feature extraction. I found that these models are generally called backbone models/architectures, and the link says some of the main models (e.g. DeepLabV3 or PSPNet) rely on a pre-trained ResNet.

Also, transfer learning is to take a model trained on a large dataset and transfer its knowledge to a smaller dataset, right?

  1. Do the main models that rely on a pre-trained ResNet basically do transfer learning (from ResNet to the main model)?

  2. If I use a pre-trained network, like ResNet101, as the backbone architecture of the main model (like U-Net or SegNet) for image segmentation, is that considered transfer learning?

",33118,,2444,,1/30/2020 16:36,1/30/2020 16:36,What is the difference between using a backbone architecture and transfer learning?,,0,0,,,,CC BY-SA 4.0 17752,1,,,1/29/2020 23:32,,1,102,"

I'm trying to develop a stock predictor.

I'm using an LSTM, but I am unsure about the structure of the neural network. For example, I'm assuming that the neural network is many-to-one, since we have many inputs (i.e. Open, Close, etc.) and one output (stock price).

My misunderstanding is about how to construct the nodes. For example, what input goes into the ""cell"" (or node)? I.e. does a 60-timestep setup mean that 60 days of 'Open Price' are fed into the neural network at time t and then 60 days of 'Close' at t + 1, until we use all inputs to produce an output?

If someone could explain the process of how LSTM are used with stock predictions that would be appreciated.

",33121,,2444,,1/30/2020 16:41,1/30/2020 16:41,How should I design this LSTM network to perform stock prediction?,,1,0,,,,CC BY-SA 4.0 17753,2,,17752,1/30/2020 2:09,,1,,"

I can't say for all cases, but I can certainly give you an example of how an LSTM could be used to predict stock prices.

An LSTM is temporal, meaning you can feed in one input, get an output, and the network will remember this interaction (if it's important), which will affect the outcome of future predictions. As such, it is reasonable to feed in all the data for the day (Open, Close, etc.), obtain an output for the predicted stock price of the following day (or a prediction for all the inputs you feed in; the output can be as large or small as you want), then repeat for as long as you want, each time only feeding in the data for that particular day, as the network will remember previous days.

In your example, 60 timesteps would mean doing a forward pass of the LSTM 60 times, for 60 days. Each of these forward passes will produce an output, which you can compare to the next actual stock price for verification, until you reach the current day, where the prediction is for the future.
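
A minimal Keras sketch of this setup (the feature names, sizes and random data below are made up purely to show the shapes; this is not a working trading model):

import numpy as np
from tensorflow import keras

n_days, n_features = 60, 4                        # e.g. Open, High, Low, Close for 60 days
X = np.random.rand(500, n_days, n_features)       # 500 training windows
y = np.random.rand(500, 1)                        # next-day price for each window

model = keras.models.Sequential([
    keras.layers.LSTM(32, input_shape=(n_days, n_features)),   # many-to-one
    keras.layers.Dense(1)                                       # predicted price
])
model.compile(optimizer='adam', loss='mean_squared_error')
model.fit(X, y, epochs=5, verbose=0)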

",26726,,,,,1/30/2020 2:09,,,,4,,,,CC BY-SA 4.0 17754,2,,17721,1/30/2020 9:00,,2,,"

I'm an amateur with neural networks, but I will illustrate my understanding of how this problem comes to be.

First, let's see how a trivial neural network classifies 2D input into two classes:

But in the case of a complex neural network, the input space is much bigger and the sample data points are much more clustered, with big chunks of empty space between them:

The neural network then doesn't know how to classify the data in the empty space, so something like this is possible:

When using the traditional ways of measuring the quality of neural networks, both of these will be considered good, as they do classify the classes themselves correctly.

Then, what happens if we try to classify these data points?

Really, the neural network has no data it could fall back on, so it just outputs what seems to us like random nonsense.

",28367,,,,,1/30/2020 9:00,,,,0,,,,CC BY-SA 4.0 17755,1,,,1/30/2020 9:27,,2,103,"

We have hundreds of thousands of customer records, and we want to take advantage of our data to train a model that will recognize fake or unrealistic entries on our platform, where customers are asked to enter their name, phone number and zip code.

So, our attributes are name, phone number, zip code and IP address to train the model with. We have only data associated with real users. Can we train a model provided with only positive labels (as we do not have a negative dataset to train the model with)?

",33135,,2444,,1/31/2020 2:29,1/31/2020 14:36,Can we train the model to detect real users with only positive labels?,,2,14,,,,CC BY-SA 4.0 17756,1,,,1/30/2020 11:03,,1,36,"

In this paper, https://arxiv.org/abs/1910.07954, we have a convolutional character neural network that performs object detection by taking a character as the basic unit. First, we do character detection and recognition, and then we go for text detection.

Here (page 5, under the subheading Iterative character detection), it is written that a model trained on English and Chinese texts will generalize well with regard to text detection. But why are English and Chinese texts good for generalization in text detection? If you have any queries regarding the paper, you can ask me in the comment section. Thanks in advance!

",30907,,,,,1/30/2020 11:03,Text detection on English and Chinese language,,0,0,,,,CC BY-SA 4.0 17757,1,,,1/30/2020 11:38,,2,28,"

I wonder if researchers have tried to understand how LSTMs work by analyzing the dynamics of a simple LSTM (e.g. with 2 units)? For example, how the hidden state evolves depending on the properties of the weight matrices.

It seems like a very natural thing to try (especially because it is easy to draw hidden states with 2 units on a 2D plane). However, I haven't found any papers that would play with such a toy example.

Are there such papers at all?

Or is it impossible to gain any understanding from such a simple example because of its over-simplicity? (Which I doubt, because even logistic maps generate very complicated behavior.)

",33098,,2444,,12/12/2021 13:02,12/12/2021 13:02,Did people analyze dynamics of very simple LSTMs?,,0,1,,,,CC BY-SA 4.0 17758,1,,,1/30/2020 12:48,,1,50,"

This question should be quite generic but I faced the problem in the case of the TiF-GAN generator so I am going to use it as an example. (Link to paper)

If you check the penultimate page in the paper you can find the architecture design of the generator.

The generator has a dense layer and then a reshape layer converting the hidden layer feature map to a dimensionality of 8x4x512 (given that $s = 64$)

Then what follows is a transpose convolution operation with a kernel size of 12x3 with $512$ filters and a stride of $2$ in all dimensions. The output of this layer should be then 16x8x512.

After fiddling with some coding, I found out that the authors also used the setting padding=same in their TensorFlow code.

So, my question is: How and what do you pad when you perform such a transpose convolution to get those output dimensions?

Without any padding I would assume that you should get an output of 26x9x1534 assuming that each output dimension is equal to dim = kernel_dim + strides * (input_dim - 1)
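
For what it's worth, a quick way to make the question concrete is to query Keras directly (toy code of my own, not the TiF-GAN implementation); with padding='same' and the default output padding, the spatial output simply becomes input_dim * stride:

import tensorflow as tf

x = tf.zeros((1, 8, 4, 512))     # the reshaped feature map from the dense layer
layer = tf.keras.layers.Conv2DTranspose(512, (12, 3), strides=2, padding='same')
print(layer(x).shape)            # (1, 16, 8, 512)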

",13257,,,,,1/30/2020 12:48,"Transpose convolution in TiF-GAN: How does ""same"" padding works?",,0,0,,,,CC BY-SA 4.0 17760,2,,17721,1/30/2020 13:35,,0,,"

In your particular case, you could add an eleventh category to your training data: ""not a digit"".

Then train your model with a bunch of images of incorrectly segmented digits, in addition to the normal digit examples. This way the model will learn to tell apart real digits from incorrectly segmented ones.

However, even after doing that, there will be an infinite number of random-looking images that will be classified as digits. They're just far away from the examples of ""not a digit"" you provided.

",25912,,,,,1/30/2020 13:35,,,,0,,,,CC BY-SA 4.0 17761,1,,,1/30/2020 13:41,,2,23,"

Let's set up a hypothetical simplified scenario: each instance $i$ of my imaginary dataset $D=\{i_{1}, \ldots, i_{MAX}\}$ has a different number $k_{i}$ of $n$-dimensional vectors as input to my neural network. Each of them will be transformed with an $m \times n$ matrix $M$ (so, matrices with the same parameters) and acted on point-wise by some non-linearity $\sigma_{1}$.

Now there are 2 possibilities I want to consider separately:

  1. Case: I want to average all those outputs, thus depending on the total number of vectors $k_{i}$.
  2. Case: I want to choose 2 subsets (not necessarily mutually disjoint) and average the vectors from those subsets.

For the later layers (in both cases) I'd use some of the ""standard"" neural network architectures and loss functions. Note that I'll probably go with more complex connections than averaging; this is only for the purpose of this technical question.

My questions are: 1. Is there a ""simple"" way (meaning: a high-level library like TensorFlow) which allows me to create a custom layer with shared weights which is also size-agnostic (not with respect to the size of the vectors, but to the number of vectors)? 2. Is it possible to parallelise computation like this?

I have some knowledge of TF, but I still haven't used all the obscure low-level details and don't know all the capabilities of the $2.0$ version. Also, I'd know how to do this stuff in numpy with a handcrafted back-propagation implementation; I'd like to do it in a more high-level way which will handle backpropagation and parallelisation for me.

I'd like to avoid having a tensor of vectors which will be padded, but I'm not sure if that is possible. My real problem is more complex but has the same characteristics: I need to use the same weights on differently connected layers, and I would handle normalization of the input to the following layers to the same scale manually (i.e. with ""normal"" averaging or weighted attention-like averaging, ...). I would not always connect all vectors from the first layers to all from the second; it needs to be flexible for each instance of data.

So, in short: I want to reuse the same weight matrices but connect them differently for each instance of the dataset. I want to do it in a framework which already has automatic differentiation/backpropagation and parallelisation. Is that possible?
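
For concreteness, here is a minimal TF2 sketch of the averaging case I have in mind (all shapes, names and the placeholder loss are arbitrary, just to illustrate reusing one layer over a variable number of vectors):

import tensorflow as tf

shared = tf.keras.layers.Dense(8, activation='tanh')   # plays the role of M and sigma_1
shared.build((None, 4))                                 # n = 4, just for illustration

def instance_output(vectors):                 # vectors: (k_i, n); k_i differs per instance
    return tf.reduce_mean(shared(vectors), axis=0)      # case 1: plain averaging

with tf.GradientTape() as tape:
    out = tf.stack([instance_output(tf.random.normal((5, 4))),    # instance with k = 5
                    instance_output(tf.random.normal((3, 4)))])   # instance with k = 3
    loss = tf.reduce_sum(out ** 2)            # placeholder loss
grads = tape.gradient(loss, shared.trainable_variables)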

Does eager execution in tf help in solving problems like this? I guess nothing could be done with prebuilt computation graphs in 1.x versions.

",33144,,,,,1/30/2020 13:41,How to handle set-like size agnostic input format,,0,0,,,,CC BY-SA 4.0 17762,1,,,1/30/2020 14:50,,2,84,"

I modified the ResNet50 architecture to get a regression network. I just added BatchNorm1d and ReLU layers just before the fully connected layer. During training, the output of the BatchNorm1d layer is nearly equal to 3, and this gives good results for training. However, during inference, the output of the BatchNorm1d layer is about 30, which leads to very low accuracy on the test results. In other words, the BatchNorm1d layer gives very different normalized outputs during training and inference.

What is the reason for this situation, and how can I solve it? I am using PyTorch.

",33108,,2444,,1/30/2020 16:44,1/30/2020 16:44,Why does the BatchNormalization layer produce different outputs during training and inference?,,0,5,,,,CC BY-SA 4.0 17763,1,,,1/30/2020 15:08,,2,23,"

I am playing with a large dataset of hotel reviews, which contains both positive and negative reviews (the reviews are labeled). I want to use this dataset to perform textual style transfer - given a positive review, output a negative review which addresses the same thing. For example, if the positive review mentioned how spacious the rooms are, I want the output to be a review that complains about the small and claustrophobic rooms.

However, I don't have positive review-negative review pairs for training. I was thinking that maybe I could create those pairs myself, but I'm not sure what the best way to do that is. Simple heuristics like the Jaccard index didn't give the desired results.

",33147,,,,,1/30/2020 15:08,How to tell if two hotel reviews addressing the same thing,,0,2,,,,CC BY-SA 4.0 17764,1,17785,,1/30/2020 15:09,,6,7568,"

I read that, if we use the sigmoid or hyperbolic tangent activation functions in deep neural networks, we can have problems with the vanishing gradient, and this is visible from the shapes of the derivatives of these functions. ReLU solves this problem thanks to its derivative, even though there may be some dead units. ResNet uses ReLU as the activation function, but, from what I read online, ResNet solves the vanishing gradient thanks to its identity map, and I do not totally agree with that. So what's the purpose of the identity connections in ResNet? Are they used to solve the vanishing gradient? And does ReLU really solve the vanishing gradient in deep neural networks?

",32694,,2444,,1/30/2020 16:50,1/17/2021 14:07,Why do ResNets avoid the vanishing gradient problem?,,1,0,,,,CC BY-SA 4.0 17765,1,,,1/30/2020 15:11,,2,41,"

A friend of mine and I got into an argument over how to label images for multi-label classification.

Note: it is important to recognize groups of the same species, as well as the catfish species.

The labels are:

  • 'I': an individual fish of any type except catfish
  • 'R': A group of same species
  • 'K': Catfish

First conflict:

For an image containing a bank of fish, our conflicting opinions are:

  1. I and R
  2. R

Second conflict:

If in an image there's actually a bank of catfish, our conflicting opinions are:

  1. K and R
  2. K

Third conflict:

If in an image there's actually a group of same species of fish and other individual fish from different species, our conflicting opinions are:

  1. R
  2. I and R

Summary:

  • I think one very important difference of opinion here is whether, in multi-label classification, we should give two labels to one object in an image (say, group and fish), or whether an object should have only one label.

  • Should the overwhelming presence of one object overshadow the presence of another (a group of fish of the same species and another individual fish)?

What do you think?

",32622,,,,,1/30/2020 15:11,Labeling for multilabel image classification,,0,0,,,,CC BY-SA 4.0 17768,1,17798,,1/30/2020 16:57,,2,4137,"

I am looking for a small-size dataset on which I can implement object detection, object segmentation and object localization.

Can anyone suggest a dataset smaller than 5 GB? And is there anything I need to know before implementing these algorithms?

",32638,,2444,,2/2/2020 20:15,2/5/2020 15:36,"Small size datasets for object detection, segmentation and localization",,3,0,,9/12/2020 15:17,,CC BY-SA 4.0 17769,2,,17755,1/31/2020 0:24,,1,,"

The problem which you have is a classification problem. You assume a class ""good users"" and a distinct class ""bad users"". You want to train an AI to tell the two apart, but all your examples are ""good users"". Any reasonable AI will draw the logical conclusion from those examples: all users are good users. That's a 100% match for the training data.

",16378,,,,,1/31/2020 0:24,,,,1,,,,CC BY-SA 4.0 17770,1,,,1/31/2020 7:36,,1,71,"

I am trying to solve Kaggle's House Prices competition using a neural network. I've already solved it by ensembling several models (XGBoost, GradientBooster and Ridge) and I've got a great score, ranking me in the top 25%.

I imagined that adding a new model, like an ANN, to the ensembled models would increase the prediction accuracy, so I did the following:

import keras

model = keras.models.Sequential()

model.add(keras.layers.Dense(235, activation='relu', input_shape=(235,)))
model.add(keras.layers.Dense(235, activation='relu'))
model.add(keras.layers.Dense(235, activation='relu'))
model.add(keras.layers.Dense(235, activation='relu'))
model.add(keras.layers.Dense(235, activation='relu'))
model.add(keras.layers.Dense(1))

model.compile(optimizer='adam', loss='mean_squared_error')
model.summary()
model.fit(dataset_ann, y, epochs=100, callbacks=[keras.callbacks.EarlyStopping(patience=3)])
y_pred = model.predict(X_test_ann)

I chose 235 neurons for each layer, as the training set has 235 features.

For model ensembling:

y_p = (0.1*model.predict(X_test_ann)+0.2*gbr.predict(testset)+0.3*xgb.predict(testset)+0.1*regressor.predict(testset)+0.1*elastic.predict(testset)+0.2*ridge.predict(testset))

The shape of y_p is (1459, 1459) instead of (1459,), where all columns have the same values, so taking y_p[0] would be more than enough.

I submitted the result to Kaggle and went from the top 25% to the bottom 60%.

Is it because of the number of hidden layers and their inputs? Or because there is too little data to train on (1460 rows in the train set) and the neural network needs more than that? Or is it because of the number of neurons in each layer?

I tried with epoch = 30, 100, 1000 and got nearly the same bad ranking.

",26028,,,,,1/31/2020 13:27,Applying Artificial neural network into kaggle's house prices data set gave bad predicted values,,1,0,,,,CC BY-SA 4.0 17772,1,,,1/31/2020 8:14,,4,65,"

I am experimenting with OpenAI Gym and reinforcement learning. As far as I understood, the environment is waiting for the agent to make a decision, so it's a sequential operation like this:

decision = agent.decide(state)
state, reward, done = environment.act(decision)
agent.train(state, reward)

Doing it in this sequential way, the Markov property is fulfilled: the new state is a result of the old state and the action. However, a lot of games will not wait for the player to make a decision. The game will continue to run and perhaps the action comes too late.

Has it been observed, or is it even possible, that a neural network adjusts its weights so that the PC computes the result faster and thus makes the ""better"" decision? E.g. one AI beats another because it is faster.

Before posting an answer like ""there is always the same amount of calculations, so it's impossible"", please be aware that there is caching (1st-level cache versus RAM), branch prediction and maybe other stuff.

",31627,,2444,,8/23/2021 0:31,8/23/2021 0:31,Is a neural network able to optimize itself for speed?,,1,1,,,,CC BY-SA 4.0 17773,1,17814,,1/31/2020 9:19,,2,132,"

I have trained an XGBoost model to predict survival for the Kaggle Titanic ML competition.

As with all Kaggle competitions there is a train dataset with the target variable included and a test dataset without the target variable which is used by Kaggle to compute the final accuracy score that determines your leaderboard ranking.

My problem:

I have built a fairly simple ensemble classifier (based on XGBoost) and evaluated it via standard train-test splits of the train data. The accuracy I get from this validation is ~80%, which is good but not amazing by public leaderboard standards (excluding the 100% cheaters).

The results and all the KPIs I looked at of this standard model do not indicate severe overfitting, etc. to me.

However, when I submit my predictions for the test set, my public score is ~35%, which is way below even a random-chance model. It is so bad that I even improved my score by simply reversing all predictions from the model.

Why is my model so much worse on the test?

I know that Kaggle computes its scores a bit differently than I do locally; additionally, there are probably some differences between the datasets. Most who join the competition notice at least some difference between their local test scores and the public scores.

However, my difference is really drastic, and indeed reversing the predictions improves my score. This does not make sense to me, because reversing the predictions on my local validations leads to garbage predictions, so this is not a simple problem of generally reversed predictions.

So can you help me understand how those two issues happen at the same time:

  • Drastic difference between local accuracy and public score
  • Reversing actually leads to the better public score.

Here is my notebook for the code (please ignore the errors; they appear simply because the code only works locally, not on Kaggle kernels):

https://www.kaggle.com/fnguyen/titanicrising-test

",27665,,,,,2/5/2020 14:23,Why is my model accuracy high in train-test split but actually worse than chance in validation set?,,2,0,,,,CC BY-SA 4.0 17774,1,21600,,1/31/2020 10:45,,4,309,"

I'm working on a continuous state / continuous action controller. It shall control a certain roll angle of an aircraft by issuing the correct aileron commands (in $[-1, 1]$).

To this end, I use a neural network and the DDPG algorithm, which shows promising results after about 20 minutes of training.

I stripped down the state presented to the model to only the roll angle and the angular velocity, so that the neural network is not overwhelmed by state inputs.

So it's a 2 input / 1 output model to perform the control task.

In test runs, it looks mostly good, but sometimes the controller starts thrashing, i.e. it outputs flittering commands, like a very fast bang-bang control, which causes a rapid movement of the elevator.

Even though this behavior kind of maintains the desired target value, it is absolutely undesirable. Instead, the controller should keep the output smooth. So far, I have not been able to detect any particular disturbance that triggers this behavior; it comes out of the blue.

Does anybody have an idea or a hint (maybe a paper reference) on how to incorporate some element (maybe reward shaping during the training) to avoid such behavior? How to avoid rapid actuator movements in favor of smooth movements?

I tried to include the last action in the presented state and add a punishment component to my reward, but this did not really help. So obviously, I am doing something wrong.

",25972,,2444,,10/7/2020 16:55,10/7/2020 16:55,How to avoid rapid actuator movements in favor of smooth movements in a continuous space and action space problem?,,1,3,,,,CC BY-SA 4.0 17775,1,17787,,1/31/2020 11:49,,3,329,"

Why are the terms classification and prediction used as synonyms especially when it comes to deep learning? For example, a CNN predicts the handwritten digit.

To me, a prediction is telling the next step in a sequence, whereas classification is to put labels on (a finite set of) data.

",27777,,2444,,1/31/2020 13:16,2/1/2020 17:20,Why are the terms classification and prediction used as synonyms in the context of deep learning?,,2,3,,,,CC BY-SA 4.0 17776,2,,17770,1/31/2020 13:27,,2,,"

Is it because the number of hidden layers with its input? Or because there is few data to train (1460 rows of train set) and the neural network needs more than that? Or is it because of the number of neurons in each layer?

I think you are onto something here. You have over a thousand nodes, almost as many as the training samples you have. In my experience, the network will have no trouble overfitting, given that you train long enough. If you don't train long enough, on the other hand, your network will probably not learn anything.

I would try with a smaller network, so you ""force"" it to learn something, instead of memorizing.

",31714,,,,,1/31/2020 13:27,,,,1,,,,CC BY-SA 4.0 17777,2,,17775,1/31/2020 13:38,,0,,"

They aren't literally synonyms; books never interchange those terms, as they represent two different processes. What they are, though, is two similar processes.

Classification can be thought of as a ""process"" that uses specific functions for the generation of one or more discrete values, usually using a cross-entropy function.

Prediction, on the other hand, can be thought of as a ""process"" that uses, again, specific functions for the generation of continuous values, usually using a linear or multiple-dependency model with an MSE loss function.

TL;DR: Classification is for a discrete-valued dependent variable, prediction is for a continuous-valued dependent variable. Both are similar processes, with different functions used for learning and estimating the predicted quantity. You can think of classification as a specific form of prediction.

Hope it helped! Do let me know if I have made any mistakes.

",33092,,,,,1/31/2020 13:38,,,,3,,,,CC BY-SA 4.0 17778,2,,17755,1/31/2020 14:36,,1,,"

Pragmatically you could use the discriminatory from a GAN for outlier detection.

Ideally, you'd start collecting fakes now and train a normal model on both good and bad cases.

In the absence of that, you can train a GAN to create realistic-looking fakes from only the real cases and then take the discriminator from that GAN to flag real-life cases for manual checks.

For a real-life application, please always include these manual checks, which also help to collect cases for improving the model.

",27665,,,,,1/31/2020 14:36,,,,0,,,,CC BY-SA 4.0 17779,1,,,1/31/2020 17:05,,3,50,"

In machine learning, the problem space can be represented through concept space, instance space, version space and hypothesis space. These representations use conjunctive space, which is very restrictive, and, moreover, with the above-mentioned representations of problem spaces, it is not certain that the true concept lies within the conjunctive space.

So, let's say we have a bigger search space and want to overcome the restrictive nature of conjunctive space; how can we then represent our problem space? Secondly, in such a scenario, which algorithm should be used for our problem space to represent the learning problem?

",33185,,2444,,1/31/2020 17:12,1/31/2020 17:12,"In machine learning, how can we overcome the restrictive nature of conjunctive space?",,0,3,,,,CC BY-SA 4.0 17780,1,,,1/31/2020 17:20,,1,19,"

I am writing this myself and was thinking about what kind of metric can be applied to measure the ""dangerousness"" of a human being on a rail track. For example, detecting if a human is running on the rails?

Maybe predicting the human's movement towards the rails, the distance to the vehicle (ego perspective), or something else?

",32337,,,,,1/31/2020 17:20,How can a new metric applied for humans causing danger on railtracks?,,0,0,,,,CC BY-SA 4.0 17781,2,,17369,1/31/2020 17:50,,0,,"

I have finally abandoned the idea of doing it with an exact method and have moved to a heuristic. I have mixed multi-start, local search and certain random moves. Apparently, this is called Greedy Randomized Adaptive Search Procedures (GRASP).

Hypothesis: the best solution is reached by filling the 'operations' with the maximum number of 'actions' per 'operation'. Other combinations are more expensive.

  1. create a random solution
    [[1 2 3 4] [5 6 7 8] [9 10]]

  2. calculate the cost of each 'Operation'
    [[6] [6] [5]] == 17

  3. Study the possibility of minimizing the lowest-cost 'operation' by permuting some 'actions' with the other 'operations'.
    Actions 4 & 7 permute with 9 & 10
    [[1 2 3 9] [5 6 10 8] [4 7]]

  4. Calculate the new cost
    [[6] [7] [1]] == 14

  5. Repeat steps 2 to 4 until it is not possible to minimize the lowest-cost operation any further, then the second lowest, then the third, etc.

  6. You will soon find a local minimum. If no more improving moves are possible but the local minimum is not the global minimum, permute two random 'actions' between two random 'operations' and repeat steps 2 to 5.

With this method it is possible to very quickly find a solution close to the global minimum cost, with a complexity of O(n·log(n)), where n is the number of 'actions'.
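
A rough Python sketch of the swap-and-evaluate loop described above (my own illustration; cost() and the data structures are placeholders, not the real scheduling code):

import random

def local_search(operations, cost, max_iter=10000):
    best = sum(cost(op) for op in operations)
    for _ in range(max_iter):
        a, b = random.sample(range(len(operations)), 2)    # two random operations
        i = random.randrange(len(operations[a]))
        j = random.randrange(len(operations[b]))
        # swap one action between the two operations
        operations[a][i], operations[b][j] = operations[b][j], operations[a][i]
        new = sum(cost(op) for op in operations)
        if new < best:
            best = new                                     # keep the improving swap
        else:                                              # otherwise undo it
            operations[a][i], operations[b][j] = operations[b][j], operations[a][i]
    return operations, best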

I've tested it with a random sample of 60 'actions' and 26 'resources' grouped into 'operations' of 6 'actions'. It took around 5 minutes to get a really good solution, around 40% better than the initial one (*), and after 30 more minutes it only improved the solution by about 1%.

(*) The initial solution was not actually a random solution. Instead, I used an 'Ant Colony' algorithm with 15 ants to get a better initial solution and reduce the number of iterations, but that is another story.

",6207,,,,,1/31/2020 17:50,,,,0,,,,CC BY-SA 4.0 17783,1,17804,,1/31/2020 20:01,,1,872,"

I am a bit confused about the depth of the convolutional filters in a CNN.

At layer 1, there are usually about 40 3x3x3 filters. Each of these filters outputs a 2d array, so the total output of the first layer is 40 2d arrays.

Does the next convolutional filter have a depth of 40? So, would the filter dimensions be 3x3x40?
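
To make the question concrete, this is the kind of check I mean in PyTorch (the number of filters in the second layer is just an arbitrary example):

import torch.nn as nn

conv1 = nn.Conv2d(in_channels=3, out_channels=40, kernel_size=3)
conv2 = nn.Conv2d(in_channels=40, out_channels=64, kernel_size=3)
print(conv1.weight.shape)   # torch.Size([40, 3, 3, 3])
print(conv2.weight.shape)   # torch.Size([64, 40, 3, 3])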

",32390,,2444,,12/18/2021 12:21,12/18/2021 12:21,How is the depth of the filters of convolutional layers determined?,,1,0,,12/18/2021 15:20,,CC BY-SA 4.0 17784,1,,,2/1/2020 7:12,,3,315,"

How can I generate unique patterns, as they did for these Nutella jars? See, for example, the video Algorithm designs seven million different jars of Nutella.

",33194,,2444,,2/1/2020 16:05,2/2/2020 4:22,How can I generate unique random patterns (similar to the ones in Nutella jars)?,,1,1,,,,CC BY-SA 4.0 17785,2,,17764,2/1/2020 14:31,,7,,"

Before proceeding, it's important to note that ResNets, as pointed out here, were not introduced to specifically solve the VGP, but to improve learning in general. In fact, the authors of ResNet, in the original paper, noticed that neural networks without residual connections don't learn as well as ResNets, although they are using batch normalization, which, in theory, ensures that gradients should not vanish (section 4.1). So, in this answer, I'm just giving a potential explanation of why ResNets may also mitigate (or prevent to some extent) the VGP, but the cited research papers below also confirm that ResNets prevent the VGP. Given that I didn't fully read all the papers mentioned in this answer, the information in this answer may not be fully accurate.

The skip connections allow information to skip layers, so, in the forward pass, information from layer $l$ can directly be fed into layer $l+t$ (i.e. the activations of layer $l$ are added to the activations of layer $l+t$), for $t \geq 2$, and, during the forward pass, the gradients can also flow unchanged from layer $l+t$ to layer $l$.
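
For concreteness, here is a minimal residual block sketch (my own illustration, not the original ResNet code), showing the identity being added back unchanged:

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.conv2(self.relu(self.conv1(x)))
        return self.relu(out + x)   # the identity skip: x is added unchanged

x = torch.randn(1, 8, 16, 16)
print(ResidualBlock(8)(x).shape)    # torch.Size([1, 8, 16, 16])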

How exactly could this prevent the vanishing gradient problem (VGP)? The VGP occurs when the elements of the gradient (the partial derivatives with respect to the parameters of the NN) become exponentially small, so that the update of the parameters with the gradient becomes almost insignificant (i.e. if you add a very small number $0 < \epsilon \ll 1$ to another number $d$, $d+\epsilon$ is almost the same as $d$) and, consequently, the NN learns very slowly or not at all (considering also numerical errors). Given that these partial derivatives are computed with the chain rule, this can easily occur, because you keep on multiplying small (finite-precision) numbers (please, have a look at how the chain rule works, if you're not familiar with it). For example, $\frac{1}{5}\frac{1}{5} = \frac{1}{25}$ and then $\frac{1}{5}\frac{1}{25} = \frac{1}{125}$, and so on. The deeper the NN, the more likely the VGP can occur. This should be quite intuitive if you are familiar with the chain rule and the back-propagation algorithm (i.e. the chain rule). By allowing information to skip layers, layer $l+t$ receives information from both layer $l+t-1$ and layer $l$ (unchanged, i.e. you do not perform multiplications). For example, to compute the activation of layer $l+t-1$, you perform the usual linear combination followed by the application of the non-linear activation function (e.g. ReLU). In this linear combination, you perform multiplications between numbers that could already be quite small, so the results of these multiplications are even smaller numbers. If you use saturating activation functions (e.g. tanh), this problem can even be aggravated. If the activation of layer $l+t$ are even smaller than the activations of layer $l+t-1$, the addition of the information from layer $l$ will make these activations bigger, thus, to some extent, they will prevent these activations from becoming exponentially small. A similar thing can be said for the back-propagation of the gradient.

Therefore, skip connections can mitigate the VGP, and so they can be used to train deeper NNs.

These explanations are roughly consistent with the findings reported in the paper Residual Networks Behave Like Ensembles of Relatively Shallow Networks, which states

Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.

In the paper Norm-Preservation: Why Residual Networks Can Become Extremely Deep?, the authors also discuss another desirable effect of skip connections.

We show theoretically and empirically that each residual block in ResNets is increasingly norm-preserving, as the network becomes deeper

",2444,,2444,,1/17/2021 14:07,1/17/2021 14:07,,,,2,,,,CC BY-SA 4.0 17786,1,,,2/1/2020 15:39,,2,25,"

Consider a 2D snake game, where the snake has to eat food to become longer. It must avoid hitting walls and biting into its own tail.

Such a game could have a different amount of actions:

  • 3 actions: go straight, turn left, turn right (relative to crawling direction)
  • 4 actions: north, east, south, west (absolute direction on the 2D map)
  • 7 actions: a combination of option A and option B (leaves the preferred choice to the player)

While the game in principle is always the same, I would like to understand the impact of the number of actions on the training of a neural network. One obvious difference is the number of output nodes of the neural network.

In case A (3 actions), the neural network cannot perform an incorrect action. Any of the 3 choices is a valid move.

In case B (4 actions), the net IMHO needs to learn that going in the opposite direction does not have the desired effect and that the snake continues moving in the old direction.

In case C (7 actions), the net needs to learn both that 1 action is always illegal and that the 3 relative actions somehow map to the 3 absolute actions.

How can I reason about the learning curve in these situations? Does option B need 25% more training than option A to achieve the same results (same fitness)? Similarly, does option C need 125% more training time?

Is giving a negative reward for an impossible move considered cheating, because I would be coding the rules of the game into the reward logic?

",31627,,,,,2/1/2020 15:39,What effect does increasing the actions in RL have?,,0,0,,,,CC BY-SA 4.0 17787,2,,17775,2/1/2020 15:48,,2,,"

Many people confuse and misuse the two terms classification and prediction (or classify and predict). This is because, in many cases, classification techniques are being used for prediction purposes, which creates part of the confusion for others, who then use the term ‘prediction’ (or ‘predict’) inappropriately.

Your understanding of the definitions of classification and prediction is mostly correct and you are absolutely correct that there are many people using the terms synonymously, sometimes correctly but I believe mostly erroneously. There are many good articles elaborating on the two and I have added some links and excerpts at the end of this answer. What these articles don't cover is that many forecasting (i.e. prediction) researchers and practitioners will use conventional classifiers to predict the future state of a time series or data sequence. More advanced researchers and practitioners will use time-recurrent models, which learn temporal patterns. These are still called classifiers but the for purpose of prediction.

There are more papers written on this use of classifiers (both conventional and time-recurrent) for time series than on the use of regression models!

This adds to the confusion in the data science and machine learning community in the usage of the terms ‘classify’ and ‘predict'.

Galit Shmueli sums it up best in his paper, “To Explain or to Predict?”, where he states: “Conflation between explanation and prediction is common, yet the distinction must be understood for progressing scientific knowledge.”

There is also the opposite problem where people will confuse regression models with classification. See the first article below.

Classification vs. Prediction, by Professor Frank Harrell

Excerpt: By not thinking probabilistically, machine learning advocates frequently utilize classifiers instead of using risk prediction models. The situation has gotten acute: many machine learning experts actually label logistic regression as a classification method (it is not). It is important to think about what classification really implies. Classification is in effect a decision. Optimum decisions require making full use of available data, developing predictions, and applying a loss/utility/cost function to make a decision that, for example, minimizes expected loss or maximizes expected utility. Different end users have different utility functions. In risk assessment this leads to their having different risk thresholds for action. Classification assumes that every user has the same utility function and that the utility function implied by the classification system is that utility function.

To Explain or to Predict?, by Galit Shmueli

Abstract. Statistical modeling is a powerful tool for developing and testing theories by way of causal explanation, prediction, and description. In many disciplines there is near-exclusive use of statistical modeling for causal explanation and the assumption that models with high explanatory power are inherently of high predictive power. Conflation between explanation and prediction is common, yet the distinction must be understood for progressing scientific knowledge. While this distinction has been recognized in the philosophy of science, the statistical literature lacks a thorough discussion of the many differences that arise in the process of modeling for an explanatory versus a predictive goal. The purpose of this article is to clarify the distinction between explanatory and predictive modeling, to discuss its sources, and to reveal the practical implications of the distinction to each step in the modeling process.

What is the difference between classification and prediction?, from KDnuggets

If one does a decision tree analysis, what is the result? A classification? A prediction?

Gregory Piatetsky-Shapiro answers: The decision tree is a classification model, applied to existing data. If you apply it to new data, for which the class is unknown, you also get a prediction of the class.

The assumption is that the new data comes from the similar distribution as the data you used to build your decision tree. In many cases this is a correct assumption and that is why you can use the decision tree for building a predictive model.

When Classification and Prediction are not the same?

Gregory Piatetsky-Shapiro answers: It is a matter of definition. If you are trying to classify existing data, e.g. group patients based on their known medical data and treatment outcome, I would call it a classification. If you use a classification model to predict the treatment outcome for a new patient, it would be a prediction.

gabrielac adds In the book ""Data Mining Concepts and Techniques"", Han and Kamber's view is that predicting class labels is classification, and predicting values (e.g. using regression techniques) is prediction.

Other people prefer to use ""estimation"" for predicting continuous values.

",5763,,5763,,2/1/2020 17:20,2/1/2020 17:20,,,,0,,,,CC BY-SA 4.0 17788,1,17789,,2/1/2020 17:30,,3,1301,"

With respect to RL, are model-free and off-policy the same thing, just different terminology? If not, what are the differences? I've read that the policy can be thought of as 'the brain', or decision-making part, of a machine learning application, where it stores what it has learned and refers to it when a new action is required in a new state.

",27629,,2444,,2/1/2020 19:32,2/1/2020 19:36,Are model-free and off-policy algorithms the same?,,1,0,,,,CC BY-SA 4.0 17789,2,,17788,2/1/2020 18:41,,4,,"

In respect of RL, is model-free and off-policy the same thing, just different terminology?

No, they are entirely different terms, with the only thing they have in common being that they are both ways in which an RL agent can vary. An agent is generally either working off-policy or on-policy, and is generally either model-based or model-free. These things can otherwise appear in all four combinations.

If not, what are the differences?

Model-based vs model-free

A model-based learning agent uses knowledge of the environment dynamics in order to make predictions of expected outcomes. A model-free learning agent does not use such knowledge. The model here might be provided explicitly by the developer - that could be code for physics to predict a mechanical system, or it might be the rules of a board game that the agent is allowed to know and query to predict outcomes of actions before taking them. Models can also be learned statistically from experience, although that is harder to make effective.

On-policy vs off-policy

An on-policy agent learns statistically about how it is currently acting, and assuming a control problem, then uses that knowledge to change how it should act in future. An off-policy agent can learn statistically from other observed behaviours (including its own past behaviour, or random and exploratory behaviour) and use that knowledge to understand how a different target behaviour would perform.

Off-policy learning is a strict generalisation of on-policy learning and includes on-policy as a special case. However, off-policy learning is also often harder to perform since observations typically contain less relevant data.
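
As a small, generic illustration of the on-policy/off-policy distinction (my own sketch, not tied to any particular environment), compare the classic tabular SARSA and Q-learning updates:

import numpy as np

Q = np.zeros((5, 2))          # 5 states, 2 actions (made-up sizes)

def sarsa_update(Q, s, a, r, s2, a2, alpha=0.1, gamma=0.9):
    # on-policy: uses the action a2 that the agent will actually take next
    Q[s, a] += alpha * (r + gamma * Q[s2, a2] - Q[s, a])

def q_learning_update(Q, s, a, r, s2, alpha=0.1, gamma=0.9):
    # off-policy: evaluates the greedy target policy, whatever behaviour produced (s, a, r, s2)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])

sarsa_update(Q, s=0, a=1, r=1.0, s2=2, a2=0)
q_learning_update(Q, s=0, a=1, r=1.0, s2=2)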

I've read that the policy can be thought of as 'the brain', or decision making part, of machine learning application, where it stores its learnings and refers to it when a new action is required in the new state.

That's basically correct when considering how an agent learns how to behave in an environment.

You are assigning a bit too much to the word policy here. A policy is strictly only the mapping from a state to an action (or probability distribution over actions), and often written $\pi(a|s)$, i.e. the probability of taking action $a$ given the agent is in state $s$. The ""brain"" part might include how the agent learns that policy. That could include storing past experience or some summary of past experience in e.g. a neural network.

However, outside of machine learning context, a really simple function containing and if/then statement would also be a policy, if the input to the function was a state of the environment, and the output was an action or probabilities of taking a range of actions. Behaving completely randomly is also a policy, but outside of very specific environments (e.g. Rock/Paper/Scissors) it is usually not the optimal thing to do.

",1847,,2444,,2/1/2020 19:36,2/1/2020 19:36,,,,2,,,,CC BY-SA 4.0 17790,1,,,2/2/2020 2:11,,1,253,"

I am trying to use RealNVP with some data I have (the input size is a 1D vector of size 22). Here is the link to the RealNVP paper and here is a nice, short explanation of it (the paper is pretty long). My code is mainly based on this code from GitHub and below are the main pieces that I am using (with slight adjustments). The problem is that the loss is getting negative, which in the definition of my code means that the log-probability of my data is positive, which in turn means that the probabilities are bigger than 1. This is impossible mathematically, and I see no way how this can happen from a mathematical point of view. I also couldn't find a mistake in my code. Can someone help me with this? Is there a mistake there? Am I missing something in my understanding of normalizing flows? Thank you!

import torch
import torch.nn as nn
from torch.distributions import MultivariateNormal

class NormalizingFlowModel(nn.Module):

    def __init__(self, prior, flows):
        super().__init__()
        self.prior = prior
        self.flows = nn.ModuleList(flows)

    def forward(self, x):
        m, _ = x.shape
        log_det = torch.zeros(m).cuda()
        for flow in self.flows:
            x, ld = flow.forward(x)
            log_det += ld
        z, prior_logprob = x, self.prior.log_prob(x)
        return z, prior_logprob, log_det

    def inverse(self, z):
        m, _ = z.shape
        log_det = torch.zeros(m).cuda()
        for flow in self.flows[::-1]:
            z, ld = flow.inverse(z)
            log_det += ld
        x = z
        return x, log_det

    def sample(self, n_samples):
        z = self.prior.sample((n_samples,))
        x, _ = self.inverse(z)
        return x


class FCNN_for_NVP(nn.Module):
    """"""
    Simple fully connected neural network to be used for Real NVP
    """"""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.network = nn.Sequential(
            nn.Linear(in_dim, 32),
            nn.Tanh(),
            nn.Linear(32, 32),
            nn.Tanh(),
            nn.Linear(32, 64),
            nn.Tanh(),
            nn.Linear(64, 64),
            nn.Tanh(),
            nn.Linear(64, 32),
            nn.Tanh(),
            nn.Linear(32, 32),
            nn.Tanh(),
            nn.Linear(32, out_dim),
        )

    def forward(self, x):
        return self.network(x)


class RealNVP(nn.Module):
    """"""
    Non-volume preserving flow.

    [Dinh et. al. 2017]
    """"""
    def __init__(self, dim, base_network=FCNN_for_NVP):
        super().__init__()
        self.dim = dim
        self.t1 = base_network(dim // 2, dim // 2)
        self.s1 = base_network(dim // 2, dim // 2)
        self.t2 = base_network(dim // 2, dim // 2)
        self.s2 = base_network(dim // 2, dim // 2)

    def forward(self, x):
        lower, upper = x[:,:self.dim // 2], x[:,self.dim // 2:]      
        t1_transformed = self.t1(lower)
        s1_transformed = self.s1(lower)
        upper = t1_transformed + upper * torch.exp(s1_transformed)
        t2_transformed = self.t2(upper)
        s2_transformed = self.s2(upper)
        lower = t2_transformed + lower * torch.exp(s2_transformed)
        z = torch.cat([lower, upper], dim=1)
        log_det = torch.sum(s1_transformed, dim=1) + torch.sum(s2_transformed, dim=1)
        return z, log_det

    def inverse(self, z):
        lower, upper = z[:,:self.dim // 2], z[:,self.dim // 2:]
        t2_transformed = self.t2(upper)
        s2_transformed = self.s2(upper)
        lower = (lower - t2_transformed) * torch.exp(-s2_transformed)
        t1_transformed = self.t1(lower)
        s1_transformed = self.s1(lower)
        upper = (upper - t1_transformed) * torch.exp(-s1_transformed)
        x = torch.cat([lower, upper], dim=1)
        log_det = torch.sum(-s1_transformed, dim=1) + torch.sum(-s2_transformed, dim=1)
        return x, log_det

flow = RealNVP(dim=data.size(1))
flows = [flow for _ in range(1)]
prior = MultivariateNormal(torch.zeros(data.size(1)).cuda(), torch.eye(data.size(1)).cuda())
model = NormalizingFlowModel(prior, flows)
model = model.cuda()

optimizer = torch.optim.Adam(model.parameters())  # the optimizer definition is not shown above; Adam is used here as a placeholder

for i in range(10):
    for j, dtt in enumerate(my_dataloader_bkg_only):
        optimizer.zero_grad()
        x = dtt[0].float()
        z, prior_logprob, log_det = model(x)
        logprob = prior_logprob + log_det
        loss = -torch.mean(prior_logprob + log_det)
        loss.backward()
        optimizer.step()
    if i % 1 == 0:
        print(""Saved"")
        best_loss = logprob.mean().data.cpu().numpy()
        print(logprob.mean().data.cpu().numpy(), prior_logprob.mean().data.cpu().numpy(),
                  log_det.mean().data.cpu().numpy())
",22839,,,,,10/19/2022 12:02,RealNVP gives wrong probabilities,,2,3,,,,CC BY-SA 4.0 17791,1,,,2/2/2020 2:28,,2,83,"

Whenever I look for papers involving semi-supervised learning, I always find some that talk about graph semi-supervised learning (e.g. A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning).

What is the difference between graph semi-supervised learning and normal semi-supervised learning?

",31240,,2444,,4/28/2020 1:22,4/28/2020 1:23,What is the difference between graph semi-supervised learning and normal semi-supervised learning?,,1,0,,,,CC BY-SA 4.0 17792,2,,17784,2/2/2020 4:22,,1,,"

Some excerpts from Nutella 'Hired' an Algorithm to Design New Jars. And It Was a Sell-Out Success:

The ""algorithm"" is called HP Mosaic and is included free in HP SmartStream Designer for HP printers.

More about how the algorithm works here: https://www.linkedin.com/pulse/hp-mosaic-20-steven-chow

HP Mosaic takes the vector PDF file as input (also known as a Seed file), and generates a large number of variations on the file by transforming it — scaling, transposition, and rotation — randomly.
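
For illustration only, here is a minimal sketch (my own toy example in Python/NumPy, not HP's actual tool) of the same idea: generating many variants of a seed 2D shape by applying random scaling, rotation and translation.

import numpy as np

rng = np.random.default_rng(0)
seed_shape = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])  # a toy seed polygon

def random_variant(points):
    # Draw a random rotation angle, scale factor and translation,
    # and apply them to the seed geometry.
    angle = rng.uniform(0, 2 * np.pi)
    scale = rng.uniform(0.5, 1.5)
    shift = rng.uniform(-1, 1, size=2)
    rot = np.array([[np.cos(angle), -np.sin(angle)],
                    [np.sin(angle),  np.cos(angle)]])
    return scale * points @ rot.T + shift

variants = [random_variant(seed_shape) for _ in range(10000)]  # one unique design per jar

Each generated variant would play the role of one unique label design.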

",5763,,,,,2/2/2020 4:22,,,,0,,,,CC BY-SA 4.0 17795,1,,,2/2/2020 12:34,,3,239,"

I'm quite new to reinforcement learning and my project will consist of detecting lanes with RL.

I'm using Q-learning, and I'm having a hard time figuring out what my Q-table should look like, i.e. what could represent a state. My main idea is to feed the machine a frame that contains a road picture, to which an edge detection function is applied (thus getting lots of candidate lines that exist in the frame), and to train the machine on which lines are the correct lane lines. I already have a deterministic function that recognizes the lanes, and it will be the function that teaches the machine. I have also already organized some lane parameters, such as lane length, lane coordinates, lane color (white or yellow has a better probability of being a lane), lane diameter and lane incline.

Now, my only issue is how I should construct the Q-table: basically, what could represent a state, and which lanes or decisions I should reward.

",33220,,2444,,2/3/2020 11:28,10/30/2020 13:01,How can I perform lane detection with reinforcement learning?,,1,3,,,,CC BY-SA 4.0 17796,1,18420,,2/2/2020 13:18,,2,993,"

How would you go about training an RL Tic Tac Toe (well, any board game, really) application to learn how to play successfully (win the game), without a human having to play against the RL?

Obviously, it would take a lot longer to train the AI if I have to sit and play ""real"" games against it. Is there a way for me to automate the training?

I guess creating ""a human player"" to train the AI who just selects random positions on the board won't help the AI to learn properly, as it won't be up against something that's not using a strategy to try to beat it.

",27629,,2444,,2/3/2020 11:31,3/4/2020 19:12,How can I train a RL agent to play board games successfully without human play?,,2,0,,,,CC BY-SA 4.0 17798,2,,17768,2/2/2020 15:51,,5,,"

There are various datasets available, such as:

  1. Pascal VOC dataset: you can perform all of your tasks with this one.

  2. ADE20K Semantic Segmentation Dataset: you can perform only segmentation here.

  3. COCO dataset: this is a rich dataset, but its size is larger than 5 GB, so you can try downloading it using Google Colab into your Drive and then make zip files of the data of less than 5 GB each.

You can download all of these datasets easily using GluonCV; here is the link.

",15368,,15368,,2/3/2020 15:41,2/3/2020 15:41,,,,0,,,,CC BY-SA 4.0 17799,2,,17796,2/2/2020 16:09,,0,,"

One solution would be to simply play the AI against itself (which yields good results for the tic-tac-toe example; I've tried it, and a minimal sketch of such a self-play loop is given at the end of this answer), but a much more interesting approach is to train two networks at the same time and have them play against each other.

This kind of adversarial setup is closely related to the idea behind Generative Adversarial Networks (GANs), which are very promising and are able to produce realistic images and much more.

It's quite easy to implement both solutions so I'd recommend doing both and seeing for yourself since each problem is different.
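
To make the first (self-play) option concrete, here is a minimal sketch of the kind of loop I mean for tic-tac-toe: a toy tabular learner with Monte-Carlo-style updates, purely for illustration, not a polished implementation.

import random
from collections import defaultdict

# Toy self-play loop for tic-tac-toe: one shared Q-table, states are seen
# from the perspective of the player about to move (their marks are +1).
EPS, ALPHA, GAMMA = 0.1, 0.5, 0.9
Q = defaultdict(float)  # maps (state, action) -> estimated value

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for i, j, k in LINES:
        if board[i] != 0 and board[i] == board[j] == board[k]:
            return board[i]
    return 0

def legal_moves(board):
    return [i for i in range(9) if board[i] == 0]

def choose(state, actions):
    # epsilon-greedy action selection over the shared Q-table
    if random.random() < EPS:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

for episode in range(50000):
    board, player, history = [0] * 9, 1, []
    while True:
        state = tuple(cell * player for cell in board)  # current player's view
        action = choose(state, legal_moves(board))
        board[action] = player
        history.append((state, action))
        won = winner(board) != 0
        if won or not legal_moves(board):
            outcome = 1.0 if won else 0.0  # the player who just moved won, or it was a draw
            # Propagate the final outcome back through the game, flipping the
            # sign at each step because the two players alternate.
            for steps_back, (s, a) in enumerate(reversed(history)):
                sign = 1.0 if steps_back % 2 == 0 else -1.0
                target = sign * outcome * (GAMMA ** steps_back)
                Q[(s, a)] += ALPHA * (target - Q[(s, a)])
            break
        player = -player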

",33226,,,,,2/2/2020 16:09,,,,2,,,,CC BY-SA 4.0 17800,2,,17795,2/2/2020 16:19,,2,,"

I will agree with malioboro: maybe RL is overkill for such a task. Even so, with the recent trend of autonomous-driving research, papers dealing with lane changing almost certainly exist; you should check them out for more details.

As stated in Lane Change Decision-making through Deep Reinforcement Learning with Rule-based Constraints, ""When the states are discrete and finite, the Q-function can be easily formulated in a tabular form. But in many practical applications, for example, lane change decision-making task, the state space of them is very large or even continuous, using the Q-learning algorithm will lead to dimension disaster. Therefore, tabular Q-learning algorithm does not applicable to the learning problem of continuous state space and continuous action space""

So, I would recommend using image processing techniques or Deep Q-Learning. Tabular Q-learning is not capable of dealing with continuous problems like lane changing or lane tracking.

",33225,,,,,2/2/2020 16:19,,,,0,,,,CC BY-SA 4.0 17801,1,,,2/2/2020 16:59,,4,63,"

In many papers about artificial spiking neural networks (SNNs), their performance is not up to par with that of traditional ANNs. I have read how some people have converted ANNs to SNNs using various techniques.

There has been work done on using unsupervised learning in SNN to recognise MNIST digits through spike-timing-dependent plasticity (for example, the paper Unsupervised learning of digit recognition using spike-timing-dependent plasticity, by Diehl and Cook, 2015). This form of learning is not possible in traditional ANNs due to their synchronous nature.

I was wondering whether it would be a good idea to first train an SNN in an unsupervised manner to learn some of the structure of the data, and then convert it to a traditional ANN to take advantage of their superior performance with some more training. I can see this being useful for training a network on a sparsely labelled dataset.

I am quite a novice in this area, so I was looking for feedback on any immediate barriers as to why this would not work or if it is even worth doing.

",33227,,2444,,2/2/2020 20:05,2/2/2020 20:05,Is it a good idea to first train a spiking neural network and then convert it to a conventional neural network?,,0,0,,,,CC BY-SA 4.0 17802,1,17854,,2/2/2020 17:40,,0,232,"

I have to build a neural network, without any architecture limitations, which has to predict the next value of a time series.

The dataset is composed of 400,000 values, which are given in hex format. For example:

0xbfb22b14
0xbfb22b10
0xbfb22b0c
0xbfb22b18
0xbfb22b14

I think an LSTM is suitable for this problem, but I am worried about the length of the input. Would it be a good idea to use a CNN?

def structure(step,n_features):
    # define model
    model = Sequential()
    model.add(LSTM(50, activation='relu', return_sequences=True, input_shape=(step, n_features)))
    model.add(Dense(1))
    model.compile(optimizer='adam', loss='mse')
    return model

What about this one?

""model"": {
        ""loss"": ""mse"",
        ""optimizer"": ""adam"",
    ""save_dir"": ""saved_models"",
        ""layers"": [
            {
                ""type"": ""lstm"",
                ""neurons"": 999,
                ""input_timesteps"": 998,
                ""input_dim"": 1,
                ""return_seq"": true
            },
            {
                ""type"": ""dropout"",
                ""rate"": 0.05
            },
            {
                ""type"": ""lstm"",
                ""neurons"": 100,
                ""return_seq"": false
            },
            {
                ""type"": ""dropout"",
                ""rate"": 0.05
            },
            {
                ""type"": ""dense"",
                ""neurons"": 1,
                ""activation"": ""linear""
            }
        ]
}
",32076,,32076,,2/5/2020 13:48,2/5/2020 13:48,What's the best architecture for time series prediction with a long dataset?,,1,0,,,,CC BY-SA 4.0 17803,1,17852,,2/2/2020 18:22,,7,677,"

I am reading the book Understanding Machine Learning by Shalev-Shwartz and Ben-David and, based on the definitions of PAC learnability and the No Free Lunch theorem and my understanding of them, it seems like they contradict each other. I know this is not the case and I am wrong, but I just don't know what I am missing here.

So, a hypothesis class is (agnostic) PAC learnable if there exists a learner A and a function $m_{H}$ s.t. for every $\epsilon,\delta \in (0,1)$ and for every distribution $D$ over $X \times Y$, if $m \geq m_{H}$ the learner can return a hypothesis $h$, with a probability of at least $1 - \delta$ $$ L_{D}(h) \leq min_{h'\in H} L_{D}(h') + \epsilon $$

But, in layman's terms, the NFL theorem states that for prediction tasks, for every learner there exists a distribution on which the learner fails.

There needs to exist a learner that is successful (as defined above) for every distribution $D$ over $X \times Y$ for a hypothesis class to be PAC learnable, but, according to NFL, there exists a distribution for which the learner will fail. Aren't these theorems contradicting each other?

What am I missing or misinterpreting here?

",31757,,,user9947,3/26/2020 0:01,2/1/2022 7:52,Are PAC learnability and the No Free Lunch theorem contradictory?,,3,2,,,,CC BY-SA 4.0 17804,2,,17783,2/2/2020 20:53,,1,,"

Does the next convolutional filter have a depth of 40? So, would the filter dimensions be 3x3x40?

Yes. The depth of the next layer $l$ (which corresponds to the number of feature maps) will be 40. If you apply $8$ kernels with a $3\times 3$ window to $l$, then the number of features maps (or the depth) of layer $l+1$ will be $8$. Each of these $8$ kernels has an actual shape of $3 \times 3 \times 40$. Bear in mind that the details of the implementations may change across different libraries.

The following simple TensorFlow (version 2.1) and Keras program

import tensorflow as tf


def get_model(input_shape, num_classes=10):
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Input(shape=input_shape))
    model.add(tf.keras.layers.Conv2D(40, kernel_size=3))
    model.add(tf.keras.layers.Conv2D(8, kernel_size=3))
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(num_classes))

    model.summary()

    return model


if __name__ == '__main__':
    input_shape = (28, 28, 1)  # MNIST digits have usually this shape.
    get_model(input_shape)

outputs the following

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 26, 26, 40)        400       
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 24, 24, 8)         2888      
_________________________________________________________________
flatten (Flatten)            (None, 4608)              0         
_________________________________________________________________
dense (Dense)                (None, 10)                46090     
=================================================================
Total params: 49,378
Trainable params: 49,378
Non-trainable params: 0
_________________________________________________________________

where conv2d has the output shape (None, 26, 26, 40) because there are 40 filters (each of which, in this first layer, has a $3\times 3 \times 1$ shape, since the input has depth 1), while each of the 8 kernels of conv2d_1 has a $3\times 3 \times 40$ shape.

The documentation of the first argument (i.e. filters) of the Conv2D says

filters – Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).

and the documentation of the kernel_size parameter states

kernel_size – An integer or tuple/list of 2 integers, specifying the height and width of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.

It doesn't actually say anything about the depth of the kernels, but this is implied from the depth of the layers.

Note that the first layer has $(40*(3*3*1))+40 = 400$ parameters. Where do these numbers come from? Note also that the second Conv2D layer has $(8*(3*3*40))+8 = 2888$ parameters. Try to set the parameter use_bias of the first Conv2D layer to False and see the number of parameters again.

Finally, note that this reasoning applies to 2d convolutions. In the case of 3d convolutions, the depth of the kernels could be different than the depth of the input. Check this answer for more details about 3d convolutions.

",2444,,2444,,12/28/2020 15:53,12/28/2020 15:53,,,,0,,,,CC BY-SA 4.0 17807,2,,17151,2/3/2020 5:19,,0,,"

I have explored AI for edge devices. Here are my findings for the TFLite model format.

  1. TFLite is just a tool suite to convert a TensorFlow model into the TFLite format.
  2. TFLite optimizes the model for edge or embedded devices using quantization techniques (see the sketch at the end of this answer).

Quantization dramatically reduces both the memory requirement and computational cost of using neural networks.

Answer in brief:

  1. When we optimize the model, inference is definitely faster, but it impacts the accuracy. Quantization leads to some accuracy loss in smaller networks.
  2. TFLite increases inference performance, but we have to pay a cost in terms of accuracy; that's why we keep both the TFLite model for edge computation and the original TensorFlow model.
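
As a concrete illustration of the quantization step, here is a minimal sketch of post-training quantization with the TFLite converter ('saved_model_dir' is a placeholder path to an already-trained SavedModel):

import tensorflow as tf

# Convert a trained SavedModel into an optimized (quantized) TFLite model.
converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_dir')  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)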
",7681,,,,,2/3/2020 5:19,,,,0,,,,CC BY-SA 4.0 17808,1,17858,,2/3/2020 6:20,,2,227,"

I came across this graph in David Silver's youtube lecture and Sutton's book on reinforcement learning.

Can anyone help me understand the graph? From the graph, for 10,000 episodes, what I see is that, when we don't have a usable ace, we always lose the game unless the sum is 20 or 21. But, in case we have a usable ace, there are chances to win when our sum is below 20. I don't know how this is possible.

",31749,,1847,,2/5/2020 12:37,2/5/2020 12:41,"What does the figure ""Blackjack Value Function..."" from Sutton represent?",,1,2,,,,CC BY-SA 4.0 17809,1,,,2/3/2020 6:46,,0,309,"

I want to make a model that outputs the centre pixel of objects appearing in an image.

My current method involves using a CNN with L2 loss to output an image of equivalent size to the input where each pixel has a value of 1 if it is the center of an object and 0 otherwise. Each input image has roughly ~80 objects.

The problem with this is the CNN learns the easiest way to reduce the error, which is having the entire output be 0, because for 97% of cases that's correct. As such, error decreases but it learns nothing.

What is another potential method for training a network to do something similar? I also tried adding dropout, which made the output a lot more noisy and it seemed to learn ok, but eventually ended up in the same state as before with the entire output being 0, never really seeming to learn how to output the locations of objects.

",26726,,,,,11/23/2022 16:07,Possible model to use to find pixel locations of objects,,2,4,,,,CC BY-SA 4.0 17810,1,,,2/3/2020 8:59,,3,708,"

I was going through David Silver's lecture on reinforcement learning (lecture 4). At 51:22, he says that Monte Carlo (MC) methods have high variance and zero bias. I understand the zero-bias part: it is because it uses the true value of the value function for estimation. However, I don't understand the high-variance part. Can someone enlighten me?

",31749,,2444,,2/3/2020 11:17,2/4/2020 7:30,How does Monte Carlo have high variance?,,1,0,,,,CC BY-SA 4.0 17811,2,,6880,2/3/2020 9:14,,0,,"

AlexNet (2012), OverFeat (2013), VGG (2014) and ResNet (2016) are cited in many image recognition or segmentation applications. There is also GoogLeNet (2015). The later the publication, the denser the network.

The ResNet publication comments on how the network density affects accuracy depending on the image data set size. The article tends to give a motivated answer to the question

Is learning better networks as easy as stacking more layers?

You might also consider the training time, since you have your own image data set, depending on the kind of hardware you can use (see this benchmark, for instance). The denser the network, the more time it will take.

You also have to consider the size of the training data set with respect to the expected accuracy. If the set is too small, the network will probably overfit. In that case, you might consider a data augmentation strategy (one of the answers mentions autoencoding; I'm not sure, but this might help for this purpose).

All these publications refer to the ImageNet data base and the associated image classification/detection contest which has 1000 classes.

",30392,,,,,2/3/2020 9:14,,,,0,,,,CC BY-SA 4.0 17812,2,,17768,2/3/2020 12:42,,1,,"

Maybe try the COCO (common objects in context) dataset. It's often used for object detection, segmentation and localisation. They provide labels, and you can limit the size by downloading only a specific number of classes. http://cocodataset.org/#explore

It's also quite a common one, so you can expect good documentation, and online answers to your questions.

Hope that helps!

",31180,,,,,2/3/2020 12:42,,,,0,,,,CC BY-SA 4.0 17813,2,,17810,2/3/2020 13:18,,2,,"

When using terms like ""high"" for high variance, this is in comparison to other methods, mainly in comparison to TD learning, which bootstraps between single time steps.

It is worth spelling out what the variance applies to and where it comes from: Namely the Monte Carlo return $G_t$ distribution, which can be calculated as follows:

$$G_t = \sum_{k=0}^{T-t-1}\gamma^k R_{t+k+1}$$

I am using a common convention here by the way that capital letters such as $G_t$ stand for random distributions of variables and lower case letters such as $g_t$ stand for examples/samples from those distributions. It is not always strictly used, especially in pseudo-code, but is important to make clear for this answer.

So a value function, such as action value function $q_{\pi}(s,a)$ can be written:

$$q_{\pi}(s,a) = \mathbb{E}_{\pi} [ \sum_{k=0}^{T-t-1}\gamma^k R_{t+k+1} | S_t=s, A_t=a] = \mathbb{E}_{\pi} [ G_t | S_t=s, A_t=a]$$

i.e. it is fairly clear that the mean of Monte Carlo return $\mathbb{E}[G_t]$ from any specific starting point is the same as the value function from the same starting point, by definition (see Sutton & Barto or David Silver's definitions) so samples from it - taking specific measures $g_t$ - are unbiased estimates of it.

The variance $\text{Var}(G_t)$ might be written

$$\text{Var}(G_t) = \text{Var}(\sum_{k=0}^{T-t-1}\gamma^k R_{t+k+1}) \le \sum_{k=0}^{T-t-1}\gamma^{2k} \text{Var}(R_{t+k+1})$$

The second step is technically equal only in a ""worst case"" scenario, since to add variances directly we need the variables to be independently sampled, whilst $R_{t}$ and $R_{t+1}$ could be correlated, and often are. So, technically, we should account for the covariance between the distributions on different time steps here. The maths for that is fiddly, and I won't show it here.

What you can say intuitively about the inequality is that, unless $R_{t}$ and $R_{t+1}$ are perfectly correlated throughout each trajectory, the variance of $G_t$ will increase depending on the number of timesteps and possible rewards in the return, by some fraction between 0 and 1 of the total variance of all time steps. You can either add zero variance on a timestep, or some fraction of $\text{Var}(R_{t+n})$, depending on whether the relationship between [$R_t$ to $R_{t+n-1}$] and $R_{t+n}$ is fully deterministic ($+0$) or completely independent ($+\gamma^{2(n-1)}\text{Var}(R_{t+n})$)*

The amount of covariance involved relies on details of the policy, state transition and reward functions. In general, the policy during RL will include some stochastic behaviour -such as $\epsilon$-greedy that will add to the variance on each time step. The state transition and reward functions might add yet more. In any case:

$$\text{Var}(G_t) \ge \text{Var}(R_{t+1})$$

and in practice, when $t+1 \neq T$, the inequality can be large, although highly dependent on the details of the MDP. For off-policy Monte Carlo with basic importance sampling in fact the variance can be unbounded.
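
As a tiny numerical illustration of how this accumulates (my own toy example, not from the lecture: i.i.d. rewards and $\gamma = 1$ over a 10-step episode):

import numpy as np

# Each episode lasts T steps with independent N(0, 1) rewards and gamma = 1,
# so a single reward has variance ~1 while the full return has variance ~T.
rng = np.random.default_rng(0)
T, episodes = 10, 100_000
rewards = rng.normal(0.0, 1.0, size=(episodes, T))
returns = rewards.sum(axis=1)            # Monte Carlo return from the start state
print(rewards[:, 0].var())               # ~1.0  (single-step reward variance)
print(returns.var())                     # ~10.0 (Monte Carlo return variance)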

This variance might be considered high, compared to the variance of typical single-step TD returns:

$$G_{t:t+1} = R_{t+1} + \gamma q_{\pi}(S_t, A_t)$$

The variance of which is limited to that inherent in a single time step (one reward function, one state transition and one policy decision). The problem of course being that

$$\mathbb{E}[G_{t:t+1}] \neq \mathbb{E}[G_{t}]$$

... unless $q_{\pi}(s, a)$ is perfect. This is why we can say that TD returns are biased (whilst $q_{\pi}(s, a)$ is not converged, the expectation of the TD target does not equal the true value), but Monte Carlo returns have higher variance (when the number of timesteps to the end of the trajectory is more than one, the effects of random factors in the policy, state transitions and reward function accumulate).


* You could explicitly construct an environment with covariance between time steps that can reduce the overall variance of an MDP's return, e.g. the state includes the reward so far, and future rewards are manipulated so that the return fits a certain distribution. This would be a special case where the variance of the return from some starting points could be lower than the variance from the first step, but this is an unusual setup and likely to be impossible for an agent to learn optimal control.

",1847,,1847,,2/4/2020 7:30,2/4/2020 7:30,,,,5,,,,CC BY-SA 4.0 17814,2,,17773,2/3/2020 14:03,,2,,"

Looking at your code, one set of data transformations were applied to the train data and a different set of transformations were applied to the test data. Different data transformations could account for different evaluation metric performance.

It is best practice to put all data transformations in a function so they can be applied to all data in the same way.

Since you are using scikit-learn, sklearn.compose.ColumnTransformer is designed for this purpose. Example code for the Titanic dataset is here.
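
For example, a minimal sketch (with hypothetical Titanic-style column names, loosely following the linked example) could look like this:

from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.linear_model import LogisticRegression

# Hypothetical column names for illustration only.
numeric_cols = ['age', 'fare']
categorical_cols = ['sex', 'embarked']

preprocess = ColumnTransformer([
    ('num', StandardScaler(), numeric_cols),
    ('cat', OneHotEncoder(handle_unknown='ignore'), categorical_cols),
])

model = Pipeline([
    ('preprocess', preprocess),
    ('classifier', LogisticRegression()),
])

# model.fit(X_train, y_train) and model.predict(X_test) now apply the exact
# same transformations to the train and test data.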

",15403,,,,,2/3/2020 14:03,,,,2,,,,CC BY-SA 4.0 17815,1,,,2/3/2020 14:04,,2,109,"

Given a list of $N$ questions. If question $i$ is answered correctly (given probability $p_i$), we receive reward $R_i$; if not the quiz terminates. Find the optimal order of questions to maximize expected reward. (Hint: Optimal policy has an ""index form"".)

I am fairly new to Reinforcement Learning and Markov Decision Problems (MDP). I am aware that the goal of the problem is to maximize the expected reward but I am not sure how exactly to formulate this into an MDP.

This is the approach I thought of:

1) Assume only 2 questions. Then the state space is $S\in \{1,2\}$.

2) Compute the expected total reward $J = E(R)$ for both cases, when we start with question $1$ and question $2$ and then find the maximum of the two.

3) If we start with $1$, then $$J(S_0 = 1) = p_1(1-p_{2})R_1 + (R_1 + R_2)p_1p_2$$

4) Similarly, if we start with $2$, $$J(S_0 = 2) = p_2(1-p_{1})R_2 + (R_1 + R_2)p_1p_2$$.

To determine the maximum reward of the two, the required condition for $1$ to be the optimal starting question is $$R_1p_1 - R_2p_2 + p_1p_2(R_2 - R_1) \gt 0$$ If the above expression is negative, then we should start with $2$.

I would like to know if the approach is correct and how to proceed further. I am also not sure how to define the action space in this case. Can a dynamic programming approach be used here to find the optimal policy?

",33249,,33249,,2/3/2020 15:53,2/3/2020 15:53,Formulation of a Markov Decision Process Problem,,0,0,,,,CC BY-SA 4.0 17817,1,17886,,2/3/2020 14:49,,1,105,"

I have created an LSTM neural network which takes as input the following format in a .csv file:

sinewave
0.841470985
0.873736397
0.90255357
0.927808777
0.949402346
0.967249058
0.98127848
0.991435244

How can I write some code so that it can take hex addresses as input and convert them to int?

e.g. the following .xlsx file containing 400,000 samples:

0xbfb22b18
0xbfb22b14
0xbfb22b10
0xbfb22b0c
0xbfb22b18
0xbfb22b14
0xbfb22b10
0xbfb22b0c
0xbfb22b18
0xbfb22b14
0xbfb22b10
0xbfb22b0c
",32076,,32076,,2/6/2020 15:47,2/6/2020 15:49,Convert input dataset given in hex addresses to int,,1,0,,,,CC BY-SA 4.0 17820,1,17879,,2/3/2020 16:55,,3,526,"

Despite the problem being very simple, I was wondering why an LSTM network was not able to converge to a decent solution.

import numpy as np
import keras

X_train = np.random.rand(1000)
y_train = X_train
X_train = X_train.reshape((len(X_train), 1, 1))

model= keras.models.Sequential()
model.add(keras.layers.wrappers.Bidirectional(keras.layers.LSTM(1, dropout=0., recurrent_dropout=0.)))
model.add(keras.layers.Dense(1))

optimzer = keras.optimizers.SGD(lr=1e-1)

model.build(input_shape=(None, 1, 1))
model.compile(loss=keras.losses.mean_squared_error, optimizer=optimzer, metrics=['mae'])
history = model.fit(X_train, y_train, batch_size=16, epochs=100)

After 10 epochs, the algorithm seems to have reached its optimal solution (around 1e-4 RMSE) and is not able to improve the results further.

A simple Flatten + Dense network with similar parameters is, however, able to achieve 1e-13 RMSE.

I'm surprised the LSTM cell does not simply let the value through. Is there something I'm missing with my parameters? Are LSTMs only good for classification problems?

",33256,,2444,,7/5/2020 14:56,11/27/2021 20:00,Why does the error of my LSTM not decrease after 10 epochs?,,1,0,,,,CC BY-SA 4.0 17822,1,17825,,2/3/2020 19:57,,2,135,"

Consider a feedforward neural network. Suppose you have a layer of inputs, which is fed forward to a hidden layer, and that both the input and hidden layers are then fed forward to an output layer. Is there a name for this architecture, in which a layer feeds forward around the layer after it?

",33259,,2444,,2/3/2020 22:21,2/4/2020 20:49,What is the name of this neural network architecture with layers that are also connected to non-neighbouring layers?,,2,1,,,,CC BY-SA 4.0 17823,1,17837,,2/3/2020 20:08,,2,261,"

I recently had a look at automated planners and experimented a little bit with FastDownward. As I wanted to start a toy project, I created a PDDL model for the ordinary 3D Rubik's Cube (of course using a planner may not be the most efficient approach).

Although my model may not necessarily be ""totally correct"" yet, so far it consists of 24 predicates and 12 actions for the respective moves (each with 8 typed parameters: 4 ""edge subcubes"" and 4 ""corner subcubes""). For each of the movable subcubes, I have a domain object whose position is basically determined by the respective predicate; overall, at first glance, this seemed to me like a model of quite moderate size.

It was indeed not a very complex task to come up with this model and, although it does not consider the orientations of subcubes yet, I simply wanted to give it a try with an instance where only a single move (i.e. the application of one action) has to be conducted. I assumed that such a plan graph should level off quite soon, basically after the first layer where a goal state can be reached.

However, as I started the planner, it soon ran out of memory and started to page.

Previously, I had only read something on PDDL and the respective sections of Russell & Norvig. I took a closer look at the planner itself and found that it transforms the PDDL description into some intermediate representation.

I tried to only execute the transformation and, after cutting off a third of my Rubik's Cube, it at least terminated. I investigated the transformed model file and found that the planner/solver actually instantiates (flattens) the actions with their parameters.

Since the Rubik's Cube has a quite unrestricted domain and each action has apparently 8 parameters (4 corners, 4 edges), this inherently results in a huge number of flattened actions. Even more, although I added precondition constraints to ensure distinctness of the parameters (so the very same subcube cannot be both edgecube $e_1$ and $e_2$), the flattened version still contains these invalid actions.

I have the following questions:

  • Are state-of-the-art planners even suitable for such a problem, or are they designed for problems where flattening the actions is of great advantage (e.g. few parameters, a moderate number of objects per class, etc.)? IMHO, this would be a major limitation for their applicability, in contrast to e.g. CP solvers.

  • Can anyone recommend another planner that is more appropriate and that maybe does not perform the model transformation, which seems to be quite expensive for my PDDL spec? (From this list, https://ipc2018-classical.bitbucket.io, I have chosen FastDownward, since it seemed to be among the best...)

  • Does PDDL, or FastDownward in particular, allow one to specify that the parameters have to be distinct (I just found this: https://www.ida.liu.se/~TDDC17/info/labs/planning/writing.shtml, search for distinct, but it is rather vague)? This may already lead to a significant reduction in the number of flattened actions.

  • I'd also be happy for any other recommendations or remarks

",33261,,2193,,2/4/2020 12:40,6/28/2022 13:02,FastDownward PDDL Planner Limitations,,1,0,,2/7/2021 17:27,,CC BY-SA 4.0 17824,1,17827,,2/3/2020 21:59,,1,174,"

Is the following statement about neural networks overclaimed?

Neural networks are iterative methods that minimize a loss function defined on the output layer of neurons.

I wrote this statement in the introduction section of a conference paper. The reviewer got back to me saying that ""this statement is over claimed without proper citations"". How is this statement overclaimed? Is it not obvious that neural networks are iterative methods that try to minimize a cost function?

",25868,,25868,,2/3/2020 22:59,2/3/2020 23:51,Is the following statement about neural networks overclaimed?,,1,1,,,,CC BY-SA 4.0 17825,2,,17822,2/3/2020 22:06,,2,,"

This could be called a residual neural network (ResNet), which is a neural network with skip connections, that is, connections that skip layers.

Here's a screenshot of a figure from the paper Deep Residual Learning for Image Recognition (2015), an important paper that shows the usefulness of these architectures.

",2444,,,,,2/3/2020 22:06,,,,0,,,,CC BY-SA 4.0 17827,2,,17824,2/3/2020 23:44,,2,,"

I think I would never say that neural networks are iterative methods. I would say that iterative methods (e.g. gradient descent) are used to train neural networks (which can be thought of as linear and non-linear models, but mainly non-linear), which is quite different. Maybe you should or wanted to say that deep learning is an area of study where iterative methods are used to train neural networks.

It is possible that the reviewer is just telling you that this or similar statements have been used a lot and don't really provide any insight, or that they are misleading, or simply useless or trivial in the context of the paper or conference.

Without further clarifications by the reviewer and about your paper and conference, it is difficult to provide more explanations.

",2444,,2444,,2/3/2020 23:51,2/3/2020 23:51,,,,2,,,,CC BY-SA 4.0 17828,1,,,2/4/2020 4:07,,1,41,"

Suppose we are doing sentiment analysis for a restaurant. Customers can rate the restaurant by #1: how expensive the restaurant is, #2: how good the food is, and #3: how likely they are to come again. The ratings are dependent, i.e. the more expensive the restaurant is (higher #1), the less likely they are to come back (lower #3), but they will if the food is good (higher #2).

My question is: is there a good RNN structure (review as input, #1-#3 as output) that can capture and model the dependency among #1-#3?

",33266,,33266,,2/12/2020 20:05,10/30/2022 1:04,What is the appropriate RNN structure to do Sentiment Analysis with multiple dependent ratings?,,1,1,,,,CC BY-SA 4.0 17829,1,,,2/4/2020 8:00,,2,218,"

As in https://en.wikipedia.org/wiki/Calculus_of_variations

The calculus of variations is a field of mathematical analysis that uses variations, which are small changes in functions and functionals, to find maxima and minima of functionals

The gradient descent algorithm is also a method to find minima of a function. Is it a part of the calculus of variations?

",2844,,,,,2/4/2020 8:00,Is Gradient Descent algorithm a part of Calculus of Variations?,,0,6,,,,CC BY-SA 4.0 17830,1,17831,,2/4/2020 8:19,,1,838,"

Standard deviation and variance are statistical concepts, but the formula for variance is somehow related to the L1 and L2 norms.

Mathematically (L2 in machine learning sense), $$Variance = \dfrac{(X_1-Mean)^2+..+(X_n-Mean)^2}{N}$$ and, $$Standard\ Deviation= \sqrt(Variance)$$

Why shouldn't it be (L1 in machine learning sense): $$Variance = \dfrac{|X_1-Mean|+..+|X_n-Mean|}{N}$$ and, $$Standard\ Deviation= Variance$$

",2844,,,,,2/4/2020 17:33,Why is Standard Deviation based on L2 Variance and not L1 Variance,,1,1,,,,CC BY-SA 4.0 17831,2,,17830,2/4/2020 8:24,,4,,"

I've found the answer: the L2 version is the standard deviation, and the L1 version is the mean deviation. The standard deviation describes the variation better, and its values are always different on different sets of X, while the mean deviation sometimes gives the same values.

Footnote: Why square the differences? If we just add up the differences from the mean ... the negatives cancel the positives:

(4 + 4 − 4 − 4) / 4 = 0

So that won't work. How about we use absolute values?

(|4| + |4| + |−4| + |−4|) / 4 = (4 + 4 + 4 + 4) / 4 = 4

That looks good (and is the Mean Deviation), but what about this case:

(|7| + |1| + |−6| + |−2|) / 4 = (7 + 1 + 6 + 2) / 4 = 4

Oh no! It also gives a value of 4, even though the differences are more spread out.

So let us try squaring each difference (and taking the square root at the end):

√((4² + 4² + 4² + 4²) / 4) = √(64 / 4) = 4

√((7² + 1² + 6² + 2²) / 4) = √(90 / 4) = 4.74...
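
A quick NumPy check of the two example sets above:

import numpy as np

# The two sets of deviations from the mean used in the footnote above.
a = np.array([4, 4, -4, -4])
b = np.array([7, 1, -6, -2])
print(np.mean(np.abs(a)), np.mean(np.abs(b)))          # 4.0 4.0     -> mean deviation can't tell them apart
print(np.sqrt(np.mean(a**2)), np.sqrt(np.mean(b**2)))  # 4.0 4.74... -> standard deviation can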

Reference:
https://www.mathsisfun.com/data/standard-deviation.html

",2844,,2844,,2/4/2020 17:33,2/4/2020 17:33,,,,1,,,,CC BY-SA 4.0 17832,1,17834,,2/4/2020 8:29,,2,47,"

For my internship assignment, I have to implement a proof of concept for an application that is supposed to scan a picture with a carp on it and identify which carp it is. All of the carp that are going to be scanned are known and they all exist in the database, so no new carp are scanned.

Is this possible? I've been searching a lot about this topic and the only thing I found is customvision.ai, but to use it I need to have at least 15 pictures of the same carp per tag, while the client only has 1 picture per carp.

What are your recommendations, or do you think this is not possible?

",33268,,2193,,2/4/2020 12:30,2/4/2020 12:30,Recognize carp and give them a unique id,,1,0,,,,CC BY-SA 4.0 17833,1,17835,,2/4/2020 9:17,,2,83,"

Here's a diagram of a variational auto-encoder.

There are 2 nodes before the sample (encoding vector). One is the mean, one is the standard deviation. The mean one is confusing.

Is it the mean of values or is it the mean deviation?

$$\text{mean} = \dfrac{X_1+..+X_n}{N}$$

$$\text{mean deviation} = \dfrac{|X_1-\text{mean}|+..+|X_n-\text{mean}|}{N}$$

",2844,,2444,,4/8/2020 22:18,4/8/2020 22:18,What is the mean in the variational auto-encoder?,,1,1,,,,CC BY-SA 4.0 17834,2,,17832,2/4/2020 9:43,,1,,"

I'm not sure I understood your question entirely, so please correct me if I'm wrong. You have all the carp tagged, so, if I give you a picture of any of them, you know exactly which one is in the picture, right? If that is the case, then you're dealing with a classic classification problem. One simple way of solving such a problem would be to use a CNN on the input image to extract features and, at the end of the network, have N neurons, where each neuron matches one carp. Just apply a softmax over all the outputs to get a probability distribution, and select the highest value (or the highest value above a threshold). Regarding the number of labeled examples per class, having only one might be a problem. I would suggest you look into few-shot learning, which tries to solve this exact problem (training decent models with limited training data).

",20430,,,,,2/4/2020 9:43,,,,4,,,,CC BY-SA 4.0 17835,2,,17833,2/4/2020 9:56,,1,,"

Kind of neither, although leaning towards the first definition of the Mean as a simple average of values.

It's a distribution parameter of the Gaussian, so it's the expected average of samples as the number of samples approaches infinity.

The distinction is that you could draw three samples, -2, 0 and -1, from a Standard Normal: the mean of the samples would be -1, but the distribution mean is still 0.

From a network architecture PoV, it's a learnable transformation applied on top of samples from a Standard Normal, so you only need to learn the transformation and get sampling for 'free'.
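
In code, a minimal sketch of that transformation (the usual reparameterisation trick, assuming the encoder outputs a mean mu and a log-variance log_var) looks like this:

import torch

def sample_latent(mu, log_var):
    # eps is drawn from a Standard Normal; the learnable part is only the
    # shift (mu) and scale (exp(0.5 * log_var)) applied on top of it.
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps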

",33247,,,,,2/4/2020 9:56,,,,0,,,,CC BY-SA 4.0 17836,2,,13848,2/4/2020 13:18,,5,,"

You may be very interested to know that there was a bug in the v2 Lidar tracing, making the agent think there were phantom objects, and sometimes intersecting with its own legs:

https://github.com/openai/gym/pull/1789

Finding this bug makes me even more impressed anyone has solved BipedalWalkerHardcore-v2 - it seems the observations from lidar have been inconsistent and incorrect, returning the furthest hit result instead of closest.

...

Before fix - lidar traces through ground, and hits the side of a pit, giving the agent the impression of a ""phantom canyon"" in front of the pit, that only appears as it approaches the pit: https://i.imgur.com/XKPnRTR.png

...

After fix - lidar is stopped by terrain, even when another object is behind it: https://i.imgur.com/Gg8B5BD.png

...

After triple checking the docs - I've submitted a minor tweak (returning -1 instead of 1 for an object that should be ignored) - it now seems legs are correctly ignored, and the traces are accurate in all situations!

...

It seems to me that solutions to BipedalWalkerHardcore-v2 have not just learned to deal with the complex environment - but advanced a step ahead, and are able to deal with the complex environment and sensory hallucinations causing them to jump at the slightest hint of a cube, and keep running even when it looks like the ground is not visible below their feet, relying more on the touch sensor than the lidar, or perhaps recognising the difference in ""shape"" between a real pit and a ""fake pit"" (A real pit has a floor)

BipedalWalkerHardcore-v2 has been bumped to BipedalWalkerHardcore-v3 with these fixes as of Jan 31, 2020.

You might want to try retraining your agent now! (although it is still a difficult environment to solve)

To expand on why DDPG doesn't solve it, even though BipedalWalkerHardcore-v2 is solvable despite being buggy: the solution landscape of this problem is as full of pits as the environment itself. To learn to leap over a pit in the environment, for example, the agent must perform a complex sequence of actions that is difficult to discover by random chance. Each time it fails, it learns that being close to a pit is highly likely to result in a large penalty and, in an effort to maximise its rewards, a naive method like DDPG will often remain stationary, as the rewards for doing that are higher than trying and falling into the pit once more. In short, vanilla DDPG lacks enough exploration power to find the complex series of actions required before it converges on not going near the pit. Not to mention all of the other things it needs to learn to be successful.

Of the very few published examples that have solved it, one used an Evolution Strategy (a gradient-free method essentially trying millions of policies and evaluating them) and another used a custom A3C method that was tailored to solve this particular environment. Both had high computational and sample requirements. I speak from experience, as I have personally solved this environment with an RL exploration algorithm that generalises to other environments, solves it in 4 hours on a single CPU, and can be used with any off-policy RL algorithm, but unfortunately I'm unable to publish it because of IP with the company I work for.

TL;DR: The chances of vanilla DDPG solving it are infeasibly small.

",12765,,12765,,2/7/2020 1:12,2/7/2020 1:12,,,,3,,,,CC BY-SA 4.0 17837,2,,17823,2/4/2020 14:05,,3,,"

You have stumbled upon a common drawback of the vast majority of modern planning technology. The "flattening" you refer to is actually called "grounding" in the community. Indeed, grounding is the first step of almost every planner out there. Planners that don't do this grounding phase are typically referred to as "lifted planners", but their availability is far more limited (i.e., you'd likely need to reach out to an author of a paper to get access to their code).

That said, many of the planners, Fast Downward included, will try to be smart about the grounding phase. This includes simple things such as making sure that typing for the parameters (and the objects that instantiate them) is adhered to, but goes further with certain checks on reachability. The process isn't flawless, but it usually does a good job when the true number of ground actions is relatively small. If your model really does have an astronomical number of ground actions, then you're kind of out of luck here.

To your questions specifically:

  1. Yes, they are designed for problems where you can ground things. But the mark of suitability is the number of reachable ground actions. If this is small enough, then either (1) the planners should be able to handle it, or (2) it presents a research opportunity for grounding things better.
  2. This is the most recent work that comes to mind, as well as the papers that cite it:
  3. They aren't assumed to be distinct. There are situations when you wouldn't want them to be, and so it is up to the designer to add the preconditions that ensure they are distinct (note that this is one excellent way to equip the grounder with extra information about what it should generate).
  4. Generally, having actions with many parameters is a recipe for problems that have too many ground actions. Either manually remodeling things yourself, or applying a technique like operator splitting, is the approach most often taken.

Finally, if you'd like to have a more directed discussion, feel free to join us over on the slack channel. There may be more eyes on the questions you pose there.

",33275,,36869,,6/28/2022 13:02,6/28/2022 13:02,,,,0,,,,CC BY-SA 4.0 17839,1,17842,,2/4/2020 16:39,,4,306,"

I am an undergraduate student in applied mathematics with an interest in artificial intelligence. I am currently exploring topics where I could do research. Coming from a mathematical background I am interested in the question: Can we mathematically establish that a certain AI system has the ability to learn a task given some examples of how it should be done? I would like to know what research has been done on this topic and also what mathematical tools could be helpful in answering such questions.

",33278,,2444,,2/4/2020 19:48,1/16/2022 22:30,Mathematical foundations of the ability to learn,,2,0,,,,CC BY-SA 4.0 17840,1,,,2/4/2020 16:40,,1,34,"

I designed a fire detector using a deep-learning-based classification approach. In my training dataset, both fire and fire smoke are supposed to be detected, all under a single ""fire"" class (real fires are mostly detected; fire smoke is detected less accurately).

Now, after months, I need to differentiate them in my detection results. It would be difficult to retrain each class separately now. Another option that comes to mind is building a binary classifier after the main one, which gets the main detections as input and says which of the two classes each belongs to. However, I believe I may miss some fire smoke, because that class is detected less accurately.

Are there any other approaches? What are the pros/cons of the various approaches?

",9053,,,,,2/4/2020 16:52,Post-classification after inference,,0,1,,,,CC BY-SA 4.0 17842,2,,17839,2/4/2020 19:41,,4,,"

Computational learning theory (or just learning theory, abbreviated as CLT, COLT, or LT) is devoted to the mathematical and computational analysis of machine learning algorithms, so it is concerned with the learnability (i.e. generalization, bounds, efficiency, etc.) of certain tasks, given a learner (or a learning algorithm), a hypothesis space, data, etc.

CLT can be divided into two subfields, one of which is statistical learning theory (SLT).

The most famous and studied SLT frameworks might be PAC learning and the VC theory (which extends PAC learning to infinite-dimensional hypothesis spaces).

There are many good resources on CLT, some of which can be found in this answer.

Here's a related question on this site: What sort of mathematical problems are there in AI that people are working on?.

",2444,,2444,,1/16/2022 22:30,1/16/2022 22:30,,,,0,,,,CC BY-SA 4.0 17843,1,,,2/4/2020 20:05,,4,40,"

My understanding is that there is no singular ""The Turing Test Ruleset"" and competitions don't all do it the exact same way. Still, I'm wondering about some commonly accepted rules and their nuances. My Googling is not producing any specifics about this.

I think most people agree that the purpose of the humans who talk to the judges is to just act normal. In the so-called ""passed Turing test"" instances where the humans tried to fool judges into thinking they were AIs, I would say the tests should be thrown out and I've seen critics in the field agree with that.

But this question is more about how the judge should act.

Let's say we are doing a competition where the threshold is for 40% of judges to call an AI a human after 5 minutes of chat.

During the chat, is the judge supposed to try to trick the potential AI into revealing itself, or is the judge supposed to attempt to act as unbiased and natural as possible, as if it is accepted the potential AI is human, and judge based on a completely natural conversation?

For example, asking ""What is the value of PI out 10 digits?"" or ""What is 123456 times 654321?"" or ""If you saw a bunny and a dime stuck on the road with a truck about to hit them, which one would you pick up?"" would be trying to trick AIs into revealing themselves because you are relying on exploiting the fact that the AI might tell you the correct answer or the inhuman answer.

This is as opposed to simply carrying on a natural and normal conversation, with no biases or expectations. If you came upon someone on the street you would not spend 5 minutes trying to hurriedly ask ridiculous interrogation questions in an effort to prove the other person was an AI.

So is the point of a Turing test generally assumed to be an attempt to flush out AIs or an attempt to judge their natural human conversation without interrogation-like prejudice?

",33282,,,,,2/4/2020 21:58,Nuances of Turing test requirements,,1,1,,,,CC BY-SA 4.0 17844,1,,,2/4/2020 20:13,,1,39,"

For a personal project, I'm trying to download files from a specific set of websites using a web scraper. The scraper has to navigate multiple webpages to get to the files I want to download. I'd like to use AI to find each successive link in order to avoid the inflexibility of hardcoded DOM paths. In other words, I want an AI system that replicates the way that a human would click through links to get to a download page.

This seems like a task for a supervised neural network, where the input would be the HTML page and the output would be the href link to the next webpage in the search. But that's about the extent of my AI knowledge.

Which subtype of neural network would be most effective for this kind of problem?

Note, I have looked at this related question. My problem would probably fall under bullet two.

",33283,,,,,2/4/2020 20:13,Specific Neural Network Subtype for Automatic Web Scraping (Hyperlink Identification),,0,0,,,,CC BY-SA 4.0 17845,2,,17822,2/4/2020 20:49,,2,,"

Such a network could be either a Residual Network or a Highway Network, depending upon the underlying architecture of the skip layers. They are primarily used to tackle the problem of vanishing gradients in very deep networks by reusing activations from a previous layer and passing them to adjacent layers (two or three skips away).

In a Highway Network, a gating function $z$ controls how much information is to be transmitted from a previous layer in the network; an additional weight matrix is used to estimate the skip weights.

A Residual Network, in contrast, does not involve a gate controller: the shortcut connections are realized by adding the output of a previous layer to that of the connecting layer.
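
A minimal Keras-style sketch of such an additive shortcut (an illustration only, using Dense layers for simplicity):

import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, units):
    h = layers.Dense(units, activation='relu')(x)
    h = layers.Dense(units)(h)
    # No gate: the input is simply added to the transformed output.
    return layers.Activation('relu')(layers.Add()([x, h]))

inputs = tf.keras.Input(shape=(64,))
outputs = residual_block(inputs, 64)
model = tf.keras.Model(inputs, outputs)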

",33010,,,,,2/4/2020 20:49,,,,0,,,,CC BY-SA 4.0 17848,2,,17843,2/4/2020 21:42,,2,,"

I do not know about all competitions, but the Loebner prize is one of the more well known competitions based around the concept of the Turing Test, specifically for chatbots, and judges must ask the same questions of all bots. The questions are only revealed during the competition; it is not possible for bot programmers to prepare for them other than making a generally good chatbot. The judges can then rank bots on how they handle the same set of questions. E.g. from chatbots.org/ai_zone/viewthread/867 here are the 2012 "qualifying" questions that were released following the test:

My name is Bill. What is your name?

How many letters are there in the name Bill?

How many letters are there in my name?

Which is larger, an apple or a watermelon?

How much is 3 + 2?

How much is three plus two?

What is my name?

If John is taller than Mary, who is the shorter?

If it were 3:15 AM now, what time would it be in 60 minutes?

My friend John likes to fish for trout. What does John like to fish for?

What number comes after seventeen?

What is the name of my friend who fishes for trout?

What whould I use to put a nail into a wall?

What is the 3rd letter in the alphabet?

What time is it now?

According to the Wikipedia description, the style of the questions has changed over time. The above set, in my opinion, seems to be designed to test very simple grammar, common sense and memory targets.

",1847,,-1,,6/17/2020 9:57,2/4/2020 21:58,,,,1,,,,CC BY-SA 4.0 17849,1,17855,,2/4/2020 21:58,,4,293,"

I've been reading about Nesterov momentum from here and it seems like a nice improvement over regular momentum with no extra cost whatsoever.

However, is this really the case? Are there instances where regular momentum performs better than Nesterov momentum, or does Nesterov momentum perform at least as well as regular momentum all the time?

",32621,,2444,,2/4/2020 22:17,2/5/2020 6:48,Is there a reason to choose regular momentum over Nesterov momentum for neural networks?,,1,0,,,,CC BY-SA 4.0 17850,1,,,2/4/2020 22:20,,1,24,"

In acoustics, decibel levels were defined to solve the issue of showing values that are interpretable, understandable, and easy to communicate, in contrast to intensities or pressures in pascals.

$dB = 10*\log({\frac{p^2}{p_{ref}^2}})$
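
For reference, a minimal NumPy version of the conversion above (assuming the usual 20 µPa reference pressure):

import numpy as np

P_REF = 20e-6  # 20 uPa reference pressure

def pressure_to_db(p):
    return 10 * np.log10(p**2 / P_REF**2)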

This log scale helps human understanding of an acoustic signal because human hearing is capable of discerning a difference of about 1 dB ref 20 $\mu Pa$. However, if I have input data for a 2D CNN, would it make more sense to have my input data in decibels, where the scale is logarithmic, or as raw pressures?

Would one or the other benefit my model's learning?

",33189,,33189,,2/4/2020 23:06,2/4/2020 23:06,Acoustic Input Data: Decibel or Pascals,,0,0,,,,CC BY-SA 4.0 17851,1,,,2/4/2020 23:55,,0,525,"

Given an input image and an angle I want the output to be the image rotated at the given angle.

So I want to train a neural network to do this from scratch.

What sort of architecture do you think would work for this if I want it to be lossless?

I'm thinking of this architecture:

256x256 image

--> convolutions to 64x64 image with 4 channels

--> convolutions to 32x32 image with 16 channels and so on

until a 1 pixel image with 256x256 channels.

And then combine this with the input angle, and then a series of deconvolutions back up to 256x256.

Do you think this would work? Could this be trained as a general rotation machine? Or is there a better architecture?

I would also like to train the same architecture to do other transforms.

",4199,,,,,2/5/2020 3:14,Best architecture to learn image rotation?,,1,0,,,,CC BY-SA 4.0 17852,2,,17803,2/5/2020 3:01,,5,,"

There is no contradiction. First, agnostic PAC learnable doesn't mean that there is a good hypothesis in the hypothesis class; it just means that there is an algorithm that can probably approximately do as well as the best hypothesis in the hypothesis class.

Also, these NFL theorems have specific mathematical statements, and hypothesis classes for which they apply are often not the same as the hypothesis class for which PAC-learnability holds. For example, in Understanding Machine Learning by Shalev-Shwartz and Ben-David, a hypothesis class is agnostic PAC learnable if and only if has finite VC dimension (Theorem 6.7). Here, the algorithm is ERM. On the other hand, the application of the specific version of NFL that this book uses has Corollary 5.2, that the hypothesis class of all classifiers is not PAC learnable, and note that this hypothesis class has infinite VC dimension, so the Fundamental Theorem of PAC learning does not apply.

The main takeaway is that in order to learn, we need some sort of inductive bias (prior information). This can be seen in the form of measuring the complexity of the hypothesis class or using other tools in learning theory.

",33286,,,,,2/5/2020 3:01,,,,0,,,,CC BY-SA 4.0 17853,2,,17851,2/5/2020 3:14,,2,,"

This would likely suffer from the blurry image problem that autoencoders are known to suffer from. See also here. On the other hand, using GANs to sharpen your images doesn't seem particularly helpful, since you seem to be looking for a way to rotate general images, not ones of a specific domain. Moreover, there's almost certainly going to be some loss.

It seems unreasonable/overkill to use neural networks to do this transformation. I suggest treating the rotation as a linear transformation, mapping certain pixel locations to other pixel locations (in fact, you can make this transformation differentiable with respect to its inputs using some deep learning library's autodifferentiation tools, if that's what you're looking for). Since rotations are just linear maps that send pixels to other pixels, this will be much more computationally efficient.
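
For example, a minimal sketch of doing the rotation directly (no learning involved) with SciPy:

import numpy as np
from scipy.ndimage import rotate

image = np.random.rand(256, 256)                           # stand-in for a real image
rotated = rotate(image, angle=30, reshape=False, order=1)  # order=1: bilinear interpolation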

",33286,,,,,2/5/2020 3:14,,,,1,,,,CC BY-SA 4.0 17854,2,,17802,2/5/2020 3:28,,1,,"

Yes, LSTMs are ideal for this. For even stronger representational capacity, make your LSTM multi-layered (a minimal sketch is given below). Using 1-dimensional convolutions in a CNN is a common way to extract information from time series too, so there's no harm in trying. Typically, you'll test many models out and take the one that has the best validation performance.
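
A minimal sketch of what a stacked LSTM could look like for your setup (illustrative hyperparameters only):

from tensorflow import keras

def build_model(step, n_features=1):
    # Two stacked LSTM layers followed by a single regression output.
    model = keras.Sequential([
        keras.layers.LSTM(64, return_sequences=True, input_shape=(step, n_features)),
        keras.layers.LSTM(32),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer='adam', loss='mse')
    return model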

",33286,,,,,2/5/2020 3:28,,,,2,,,,CC BY-SA 4.0 17855,2,,17849,2/5/2020 6:48,,4,,"

The book Deep Learning by Goodfellow, Bengio, and Courville says (Sec 8.3.3, p 292 in my copy) states that

Unfortunately, in the stochastic gradient case, Nesterov momentum does not improve the rate of convergence.

I'm not sure why this is, but the theoretical advantage depends on a convex problem, and from this, it sounds like the practical advantage does too - or at least, that it isn't applicable to typical neural network landscapes.

Perhaps it can be implemented more efficiently, but it seems to me that you need to do parameter updates twice (in order to calculate the gradient at a place you aren't actually moving to), and thus Nesterov requires more computation and memory than plain old momentum.

",29720,,,,,2/5/2020 6:48,,,,3,,,,CC BY-SA 4.0 17856,1,,,2/5/2020 9:38,,1,117,"

I originally posted this on SO (original post), but it was suggested that I post it here.

I would like to use TensorFlow (version 2) for Gaussian process regression to fit some data, and I found the Google Colab example online here [1]. I have turned some of this notebook into a minimal example, which is below.

Sometimes the code fails with the following error when using MCMC to marginalize the hyperparameters, and I was wondering if anyone has seen this before or knows how to get around it:

tensorflow.python.framework.errors_impl.InvalidArgumentError:  Input matrix is not invertible.
     [[{{node mcmc_sample_chain/trace_scan/while/body/_168/smart_for_loop/while/body/_842/dual_averaging_step_size_adaptation___init__/_one_step/transformed_kernel_one_step/mh_one_step/hmc_kernel_one_step/leapfrog_integrate/while/body/_1244/leapfrog_integrate_one_step/maybe_call_fn_and_grads/value_and_gradients/value_and_gradient/gradients/leapfrog_integrate_one_step/maybe_call_fn_and_grads/value_and_gradients/value_and_gradient/PartitionedCall_grad/PartitionedCall/gradients/JointDistributionNamed/log_prob/JointDistributionNamed_log_prob_GaussianProcess/log_prob/JointDistributionNamed_log_prob_GaussianProcess/get_marginal_distribution/Cholesky_grad/MatrixTriangularSolve}}]] [Op:__inference_do_sampling_113645]

Function call stack:
do_sampling

[1] https://colab.research.google.com/github/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Gaussian_Process_Regression_In_TFP.ipynb#scrollTo=jw-_1yC50xaM

Note that some of the code below is a bit redundant in some sections, but it should be able to reproduce the error.

Thanks!

import time

import numpy as np
import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
tfb = tfp.bijectors
tfd = tfp.distributions
tfk = tfp.math.psd_kernels
tf.enable_v2_behavior()

import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
#%pylab inline
# Configure plot defaults
plt.rcParams['axes.facecolor'] = 'white'
plt.rcParams['grid.color'] = '#666666'
#%config InlineBackend.figure_format = 'png'

def sinusoid(x):
  return np.sin(3 * np.pi * x[..., 0])

def generate_1d_data(num_training_points, observation_noise_variance):
  """"""Generate noisy sinusoidal observations at a random set of points.

  Returns:
     observation_index_points, observations
  """"""
  index_points_ = np.random.uniform(-1., 1., (num_training_points, 1))
  index_points_ = index_points_.astype(np.float64)
  # y = f(x) + noise
  observations_ = (sinusoid(index_points_) +
                   np.random.normal(loc=0,
                                    scale=np.sqrt(observation_noise_variance),
                                    size=(num_training_points)))
  return index_points_, observations_

# Generate training data with a known noise level (we'll later try to recover
# this value from the data).
NUM_TRAINING_POINTS = 100
observation_index_points_, observations_ = generate_1d_data(
    num_training_points=NUM_TRAINING_POINTS,
    observation_noise_variance=.1)

def build_gp(amplitude, length_scale, observation_noise_variance):
  """"""Defines the conditional dist. of GP outputs, given kernel parameters.""""""

  # Create the covariance kernel, which will be shared between the prior (which we
  # use for maximum likelihood training) and the posterior (which we use for
  # posterior predictive sampling)
  kernel = tfk.ExponentiatedQuadratic(amplitude, length_scale)

  # Create the GP prior distribution, which we will use to train the model
  # parameters.
  return tfd.GaussianProcess(
      kernel=kernel,
      index_points=observation_index_points_,
      observation_noise_variance=observation_noise_variance)

gp_joint_model = tfd.JointDistributionNamed({
    'amplitude': tfd.LogNormal(loc=0., scale=np.float64(1.)),
    'length_scale': tfd.LogNormal(loc=0., scale=np.float64(1.)),
    'observation_noise_variance': tfd.LogNormal(loc=0., scale=np.float64(1.)),
    'observations': build_gp,
})

x = gp_joint_model.sample()
lp = gp_joint_model.log_prob(x)

print(""sampled {}"".format(x))
print(""log_prob of sample: {}"".format(lp))

# Create the trainable model parameters, which we'll subsequently optimize.
# Note that we constrain them to be strictly positive.

constrain_positive = tfb.Shift(np.finfo(np.float64).tiny)(tfb.Exp())

amplitude_var = tfp.util.TransformedVariable(
    initial_value=1.,
    bijector=constrain_positive,
    name='amplitude',
    dtype=np.float64)

length_scale_var = tfp.util.TransformedVariable(
    initial_value=1.,
    bijector=constrain_positive,
    name='length_scale',
    dtype=np.float64)

observation_noise_variance_var = tfp.util.TransformedVariable(
    initial_value=1.,
    bijector=constrain_positive,
    name='observation_noise_variance_var',
    dtype=np.float64)

trainable_variables = [v.trainable_variables[0] for v in 
                       [amplitude_var,
                       length_scale_var,
                       observation_noise_variance_var]]
# Use `tf.function` to trace the loss for more efficient evaluation.
@tf.function(autograph=False, experimental_compile=False)
def target_log_prob(amplitude, length_scale, observation_noise_variance):
  return gp_joint_model.log_prob({
      'amplitude': amplitude,
      'length_scale': length_scale,
      'observation_noise_variance': observation_noise_variance,
      'observations': observations_
  })

# Now we optimize the model parameters.
num_iters = 1000
optimizer = tf.optimizers.Adam(learning_rate=.01)

# Store the likelihood values during training, so we can plot the progress
lls_ = np.zeros(num_iters, np.float64)
for i in range(num_iters):
  with tf.GradientTape() as tape:
    loss = -target_log_prob(amplitude_var, length_scale_var,
                            observation_noise_variance_var)
  grads = tape.gradient(loss, trainable_variables)
  optimizer.apply_gradients(zip(grads, trainable_variables))
  lls_[i] = loss

print('Trained parameters:')
print('amplitude: {}'.format(amplitude_var._value().numpy()))
print('length_scale: {}'.format(length_scale_var._value().numpy()))
print('observation_noise_variance: {}'.format(observation_noise_variance_var._value().numpy()))


num_results = 100
num_burnin_steps = 50


sampler = tfp.mcmc.TransformedTransitionKernel(
    tfp.mcmc.HamiltonianMonteCarlo(
        target_log_prob_fn=target_log_prob,
        step_size=tf.cast(0.1, tf.float64),
        num_leapfrog_steps=8),
    bijector=[constrain_positive, constrain_positive, constrain_positive])

adaptive_sampler = tfp.mcmc.DualAveragingStepSizeAdaptation(
    inner_kernel=sampler,
    num_adaptation_steps=int(0.8 * num_burnin_steps),
    target_accept_prob=tf.cast(0.75, tf.float64))

initial_state = [tf.cast(x, tf.float64) for x in [1., 1., 1.]]

# Speed up sampling by tracing with `tf.function`.
@tf.function(autograph=False, experimental_compile=False)
def do_sampling():
    return tfp.mcmc.sample_chain(
      kernel=adaptive_sampler,
      current_state=initial_state,
      num_results=num_results,
      num_burnin_steps=num_burnin_steps,
      trace_fn=lambda current_state, kernel_results: kernel_results)

t0 = time.time()
samples, kernel_results = do_sampling()
t1 = time.time()
print(""Inference ran in {:.2f}s."".format(t1-t0))
",33293,,,,,2/5/2020 9:38,Error when using tensorflow HMC to marginalise GPR hyperparameters,,0,0,,,,CC BY-SA 4.0 17857,1,,,2/5/2020 11:10,,4,1832,"

When using convolutional networks on images with multiple channels, do we max pool after we sum the feature map from each channel, or do we max pool each feature map separately and then sum?

What's the intuition behind this, or is there a difference between the two?

",33296,,2444,,2/5/2020 12:27,2/5/2020 21:13,When is max pooling exactly applied in convolutional neural networks?,,2,0,0,,,CC BY-SA 4.0 17858,2,,17808,2/5/2020 11:54,,2,,"

The left hand graphs are showing you the estimated value function from using Monte Carlo evaluation, after 10,000 episodes. They give a sense of what your value table will look like before convergence.

In the case of the upper ""usable ace"" chart, the estimates are still showing a lot of inaccuracy due to variance in the data. This is for two main reasons:

  • The probability of getting a usable ace at the start is only a fraction of all hands (around 15%), so the number of samples used to build the chart is lower.

  • There are more variations in play when there is a usable ace, due to the extra flexibility allowed by it, so the end result also varies more, requiring more samples to converge.

In addition, if you are looking at the bottom edge of the chart, this represents the player starting with two aces. If you pick one of the high points (dealer shows 4), then that also reduces the probability of seeing that particular state. So you are looking at a sample size of typically 4-5, but maybe in this case just one or two samples, which the player then happened to go on to win, even though the odds made it unlikely. There is always some chance of winning, and the dealer showing 4 is a bad start for the dealer, who has a good chance of going bust provided the player does not.

If it hadn't happened this time for the ""two aces + dealer showing 4"" state, it may have happened for ""two aces + dealer showing 5"" state. That's due to the nature of random sampling - if you have hundreds of states to sample, then a few of them are going to behave like outliers purely by chance until you have taken enough samples.

In short, 10,000 randomly sampled games are nowhere near enough to reduce error bounds on the value estimates to reasonable numbers for special cases such as starting with two aces. However, you can see in the 10,000 samples charts the beginnings of convergence, especially elsewhere in the chart.

From the graph, for 10000 episodes what i see is that when we don't have a usable ace we always lose the game except if the sum is 20 or 21.

Actually you don't see that; the expected result is not quite -1.0, but a little higher. So that means there is still a chance to win. Under this player policy, the worst chance is no usable ace and a score of 19, because the policy will be to ""hit"" and need an Ace or a 2 card just to stay in the game. Even then, the value is not quite as low as -1.0, but more like -0.9.

",1847,,1847,,2/5/2020 12:41,2/5/2020 12:41,,,,0,,,,CC BY-SA 4.0 17859,1,,,2/5/2020 13:10,,2,47,"

I'm training an object detection model (SSD300) to detect and classify body poses in thermal images.

Even though I have more than 2k different poses, the background does not change much (I have only 5 different points of view).

I trained my model on these images (70% for the training and 30% for validation).

Now, I want to evaluate the model on an unbiased dataset.

Should I keep images of my dataset for this purpose, or should I use a real-life dataset?

(A good solution would be to have a real-life training set, but I don't have one.)

I tried both, but as expected, I have an mAP=0.9 when evaluated on similar pictures and mAP=0.5 when evaluated on completely different images.

Bonus question: is mAP a relevant metric when I want to show results to a client? (e.g. a client doesn't understand if I tell him ""my model has a mAP=0.7"")

Precision-Recall? (but I have to choose a pose classification threshold...)

",33295,,,,,2/28/2020 15:40,On which data evaluate an object detection model ? (similar or real life data ?),,1,0,,,,CC BY-SA 4.0 17860,2,,17773,2/5/2020 14:23,,0,,"

@BrianSpiering was generally correct in pointing out that you should always apply the same transformations to your train and test dataset.

This was the key to my solution, which was a bit more specific but might actually help others who encounter a similar problem.

Specifically, my mistake came about because of imputation! Some of the factors I used for my model were NA in both the train and test data sets. To complete the data, I simply imputed these missing values using the mean and mode respectively. However, since I did those transformations separately on both sets, the actual mean/mode value that was used differed heavily! By applying the imputation on the full data, I imputed the same values for all missing cases, which solved my error.
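
A minimal pandas sketch of that fix (the column names and toy values below are hypothetical placeholders, not my actual features): compute the imputation statistics once and fill both sets with the very same values.

import numpy as np
import pandas as pd

# Toy frames standing in for the real train/test splits.
train = pd.DataFrame({'Age': [22, np.nan, 35], 'Embarked': ['S', 'C', np.nan]})
test = pd.DataFrame({'Age': [np.nan, 40], 'Embarked': [np.nan, 'S']})

# Compute the statistics once, from a single source ...
age_mean = train['Age'].mean()
embarked_mode = train['Embarked'].mode()[0]

# ... and apply the same values to both sets.
for df in (train, test):
    df['Age'] = df['Age'].fillna(age_mean)
    df['Embarked'] = df['Embarked'].fillna(embarked_mode)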

My resulting accuracy in the public leaderboard is now at 74.2% which is fairly close to my local test score of 79.6%.

",27665,,,,,2/5/2020 14:23,,,,0,,,,CC BY-SA 4.0 17862,2,,17768,2/5/2020 15:36,,1,,"

NDDS is a UE4 plugin from NVIDIA to empower computer vision researchers to export high-quality synthetic images with metadata. NDDS supports images, segmentation, depth, object pose, bounding box, keypoints, and custom stencils. In addition to the exporter, the plugin includes different components for generating highly randomized images. This randomization includes lighting, objects, camera position, poses, textures, and distractors, as well as camera path following, and so forth. Together, these components allow researchers to easily create randomized scenes for training deep neural networks.

https://github.com/yehengchen/DOPE-ROS-D435/blob/master/Synthetic-Data-UE4-DOPE.md

You can create a 3D synthetic dataset there.

",33272,,,,,2/5/2020 15:36,,,,0,,,,CC BY-SA 4.0 17864,2,,17857,2/5/2020 15:39,,2,,"

The pooling operation is applied to the output of the convolution layer. More precisely, it is applied separately for each of the input channels (or slices). So, if the pooling layer receives an input volume of $H_i \times W_i \times D$, then it will produce an output volume $H_o \times W_o \times D$, so the depth of the output volume is equal to the depth of the input volume. There is no sum involved in the pooling operation. For example, in the case of max pooling, you will choose the maximum number of a certain 2D window of values. You do this for each of the input channels.
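
A quick numpy check of this (illustrative only, using an arbitrary $4 \times 4 \times 3$ input volume and $2 \times 2$ windows with stride 2):

import numpy as np

x = np.random.rand(4, 4, 3)                          # H x W x D input volume
pooled = x.reshape(2, 2, 2, 2, 3).max(axis=(1, 3))   # max over each 2x2 spatial window, per channel
print(pooled.shape)                                  # (2, 2, 3): the depth is unchanged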

In the article (which is part of some course notes) Convolutional Neural Networks: Architectures, Convolution / Pooling Layers, Andrej Karpathy says

Pooling layer downsamples the volume spatially, independently in each depth slice of the input volume.

The following diagram (a screenshot of one of the figures from the mentioned article) should provide some intuition behind the pooling operation.

",2444,,,,,2/5/2020 15:39,,,,0,,,,CC BY-SA 4.0 17865,1,,,2/5/2020 17:14,,1,81,"

Interpolation is a common way to make an image fit the right input shape for a neural network.

But is there any point in using interpolation to make it easier for the network to learn?

I assume interpolation adds no extra information to the input; It only uses existing information to increase the resolution and fill missing values.

However, sometimes I have observed that while I can not see anything with my human eye, using some kind of advanced interpolation technique such as b-spline interpolation makes it crystal clear that the object i am looking for is in the image, especially in the domain of infrared images.

So, is there any benefit for using interpolation rather than feeding a low dimensional image to a neural network?

",13257,,,,,2/8/2020 10:54,Interpolating image to increase resolution before feeding it to a neural network,,1,0,,,,CC BY-SA 4.0 17867,2,,17857,2/5/2020 19:55,,2,,"

The pooling operation in a CNN is applied independently to each channel (depth slice), and the resulting feature maps are disjoint. This is the very reason that, in most schematics depicting a certain CNN architecture, we obtain three output maps from an input image (corresponding to the convolutions and pooling operations performed on the RGB channels separately).

Each channel in an image captures a set of information that might not be demonstrated by the other channels, owing to their different color receptivity. Hence, pooling performed independently is intuitive, in the sense that a group of pixels in the Red channel may not provide the same features as the same set of pixels in the Blue or Green channel. Thus, the comparison of pixels is restricted to within a channel.

The following image provides a visualization of the RGB channels converted to greyscale. Note the brightness of the Red Color (comparing with the color-image) in the three different channels for a better intuition: [Image Source]

Finally, to feed the multi-dimensional outputs of the Convolutional Layers to a Fully Connected layer, the feature maps are stacked along the depth dimension and flattened to form a vector. This ensures that the Dense layer learns to classify images (using a Softmax activation in the final Dense layer) based on the non-linear combinations of these high-level features.

",33010,,33010,,2/5/2020 21:13,2/5/2020 21:13,,,,0,,,,CC BY-SA 4.0 17868,1,17915,,2/5/2020 20:39,,3,513,"

I'm training a deep network in Keras on some images for a binary classification (I have around 12K images). Once in a while, I collect some false positives and add them to my training sets and re-train for higher accuracy.

I split my training into 20/80 percent for training/validation sets.

Now, my question is: which resulting model should I use? Always the one with higher validation accuracy, or maybe the higher mean of training and validation accuracy? Which one of the two would you prefer?

Epoch #38: training acc: 0.924, validation acc: 0.944
Epoch #90: training acc: 0.952, validation acc: 0.932
",9053,,2444,,12/16/2021 15:22,12/16/2021 15:22,Should I choose the model with highest validation accuracy or the model with highest mean of training and validation accuracy?,,2,0,,,,CC BY-SA 4.0 17869,2,,17868,2/5/2020 21:31,,0,,"

The training accuracy tells you nothing about how good the model is on data other than the data it learned on; it could be better on this data simply because it memorized these examples.

On the other hand, the validation set is there to indicate how good the model is at generalizing what it learned to new data (hopefully, the validation dataset accurately represents the diversity of the data).

As you are looking for a model which is good on any dataset, you don't want to use training accuracy to choose your model, and so you should choose the first one.

",26961,,,,,2/5/2020 21:31,,,,0,,,,CC BY-SA 4.0 17870,1,,,2/6/2020 1:16,,5,1274,"

Is it possible to estimate the capacity of a neural network model? If so, what are the techniques involved?

",33314,,2444,,1/22/2021 15:58,1/22/2021 15:58,How to estimate the capacity of a neural network?,,2,0,,,,CC BY-SA 4.0 17871,2,,17521,2/6/2020 1:41,,2,,"

Apart from the multitude of traditional image segmentation techniques (Watershed, Clustering or Variational methods), newer segmentation schemes using Deep Learning are actively being used, which provide better results and are better suited for real-time applications, owing to the minimal computational overhead involved.

The following blog provides a detailed review of recent advancements in this field: Review of Deep Learning Algorithms for Image Semantic Segmentation

For the traditional methods, this Wikipedia article provides a nice summary: Image Segmentation

",33314,,,,,2/6/2020 1:41,,,,0,,,,CC BY-SA 4.0 17872,1,,,2/6/2020 4:13,,2,470,"

I'm new to Speech Synthesis & Deep Learning. Recently, I got a task as described below:

I have a problem in training a multi-speaker model which should be created with Tacotron2. I was told I can get some ideas from espnet, which is an end-to-end speech processing toolkit. Along the way, I found a good dataset called LibriTTS: http://www.openslr.org/60/. It is also used in espnet: https://github.com/espnet/espnet#tts-results


Here are my initial thoughts:

  1. Download the LibriTTS corpus / read the espnet code (../espnet/egs/libritts/tts1/run.sh), learning how to train on the LibriTTS corpus with the pytorch backend.

Difficulty: I cannot figure out how the author trained libritts.tacotron2.v1, as I didn't find anything about Tacotron2 in the shell scripts related to run.sh. Maybe he didn't make that code open-source.

  2. Read the Tacotron2 code and tune it into a multi-speaker network.

Difficulty: I found the code really complex. I just got lost reading it, without a clear understanding of how to tune this model, because Tacotron2 was designed for the LJSpeech dataset (only 1 speaker).

  3. Train the multi-speaker model with a small subset of the dataset (http://www.openslr.org/60/) to save time.

It contains data from about 110 speakers, which should be enough for my scenario.

In the end:

Could you please help me with these questions? I've been puzzled by this problem for a long time...

",33317,,,,,2/6/2020 4:13,How do I train a multiple-speaker model (speech synthesis) based on Tacotron 2 and espnet?,,0,0,0,,,CC BY-SA 4.0 17873,1,17874,,2/6/2020 6:53,,1,304,"

There's one VAE example here: https://towardsdatascience.com/teaching-a-variational-autoencoder-vae-to-draw-mnist-characters-978675c95776.

And the source code of encoder can be found at the following URL: https://gist.github.com/FelixMohr/29e1d5b1f3fd1b6374dfd3b68c2cdbac#file-vae-py.

The author is using $e$ (natural exponential) for calculating values of the embedding vector:

$$z = \mu + \epsilon \times e^{\sigma}$$

where $\mu$ is the mean, $\epsilon$ a small random number and $\sigma$ the standard deviation.

Or in code

z  = mn + tf.multiply(epsilon, tf.exp(sd))

It's not related to the code (practical programming), but why use the natural exponential instead of:

$$z = \mu + \epsilon \times \sigma$$

",2844,,2444,,2/20/2020 0:48,2/20/2020 0:48,Why is exp used in encoder of VAE instead of using the value of standard deviation alone?,,1,0,,,,CC BY-SA 4.0 17874,2,,17873,2/6/2020 7:37,,2,,"

In the source code, the author defines sd by

sd       = 0.5 * tf.layers.dense(x, units=n_latent)    

which means that $\operatorname{sd}\in \mathbb{R}^n$. In particular, the support over sd includes negative numbers, which is something we want to avoid. Since standard deviations are always nonnegative, we can exponentiate to get us in the correct domain. This is a case where the variable is inappropriately named. Here, sd is not the standard deviation itself but rather the logarithm of the standard deviation. This allows it to be predicted as the output of a layer in a neural network, so extracting the predicted value of the standard deviation would require exponentiation.
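
A minimal numpy sketch (not the linked code) of the resulting sampling step, assuming a 2-dimensional latent space:

import numpy as np

log_sd = np.array([-1.2, 0.3])          # unconstrained network output, i.e. log(sigma)
mn = np.array([0.5, -0.7])              # predicted means
epsilon = np.random.normal(size=2)      # standard normal noise
z = mn + epsilon * np.exp(log_sd)       # exponentiation recovers a strictly positive sigma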

",33286,,,,,2/6/2020 7:37,,,,1,,,,CC BY-SA 4.0 17875,2,,17870,2/6/2020 7:51,,0,,"

Most methods for measuring the complexity of neural networks are fairly crude. One common measure of complexity is the VC dimension, a discussion of which can be found here and here. For example, neural networks have a VC dimension that is too large to give a strong upper bound on the number of training samples needed for a model (the upper bound provided by VC analysis is much higher than what we have observed neural networks to be able to generalize from).

Another common measure of capacity is the number of parameters. We see in the paper ""Understanding deep learning requires rethinking generalization"", published at ICLR with over 1400 citations, that networks with more parameters than data often have the capacity to memorize the data. The paper provides compelling evidence that traditional approaches to generalization provided by statistical learning theory (VC dimension, Rademacher complexity) are unable to fully explain the apparent capacity of neural networks. In general, neural networks seem to have a large capacity, given the apparent good performance on certain tasks.

Beyond these ideas, the universal approximation theorem tells us that the set of neural networks can approximate any continuous function arbitrarily well, which strongly suggests that any neural network has a large capacity.

",33286,,33286,,2/11/2020 5:45,2/11/2020 5:45,,,,0,,,,CC BY-SA 4.0 17876,1,,,2/6/2020 9:55,,2,28,"

To explain what I mean I'll depict the two extremes and something in the middle.

1) Most pragmatic: If you need to just segment a few images for a design project, forget AI. Go into Adobe Photoshop and hand select the outline of the object you need to extract.

2) Middle ground: If you need to build a reasonably accurate app for human aided segmentation of images, use a pre-trained model on a well known architecture.

3) Least pragmatic: If you need to reach unprecedented levels of accuracy on a large volume of images, do heavily funded research on new and better methods of image segmentation.

So I'm most interested in painting out the spectrum for that middle ground. That is, how much of the wheel needs to be reinvented versus the complexity of the problem.

For example (and this is what led me here), I need to segment out dogs from several hundred photos that owners have taken. The dog is probably going to be among the main subjects of the photos. Do I need to reinvent the wheel (design an architecture)? Probably not. Do I even need to change the tyres (train my own model)? I'm guessing not. Do I need to code at all? I'm not sure.

While I'm happy to get answers about my use case, it would be awesome if someone could map out the spectrum on my unfinished rainbow.

",16871,,,,,2/6/2020 9:55,What are the current tools and techniques for image segmentation in order of pragmatism?,,0,0,,,,CC BY-SA 4.0 17877,1,,,2/6/2020 11:04,,1,15,"

I have a deep learning problem. I am working with the CMAPSS dataset, which contains data simulating the degradation of several aircraft engines. The aim is to predict, from data collected on a machine in full operation, the remaining useful life of that machine. My problem is the following: when the features (sensor data) have a specific trend (either up or down), my model (LSTMs) produces good predictions, but when the data have no trend, my deep learning model gets a very bad score. I must specify that I work with sequential data; my dataset contains several aircraft engines with data recorded by the sensors. My question is how to process the data that has no particular trend in deep learning.

You will find below some pictures of my dataset, where RUL is the remaining useful life of a machine, unit is the machine identity, and s1 to s3 are the sensor data.

",33323,,,,,2/6/2020 11:04,Improve prediction with LSTMs when data have no particular trend (complex),,0,0,,,,CC BY-SA 4.0 17879,2,,17820,2/6/2020 11:20,,0,,"

I think there are some problems with your approach.

Firstly, looking at the Keras documentation, LSTM expects an input of shape (batch_size, timesteps, input_dim). You're passing an input of shape (1000, 1, 1), which means that you have "sequences" of 1 timestep.

RNNs have been proposed to capture temporal dependencies, but it's impossible to capture such dependencies when the length of each series is 1 and the numbers are randomly generated. If you want to create a more realistic scenario, I would suggest you generate a sine wave, since it has a smooth periodic oscillation. Afterward, increase the number of timesteps from 1, and you can test on the following timesteps (to make predictions).
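
As a small sketch of that setup (the window length of 20 here is an arbitrary choice), a sine wave can be cut into overlapping windows so that each sample really contains several timesteps:

import numpy as np

series = np.sin(np.linspace(0, 20 * np.pi, 1000))
window = 20
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]                     # the value following each window
X = X[..., np.newaxis]                  # shape (samples, timesteps=20, input_dim=1)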

For the second part, if you think about a normal RNN (I will explain for a simple RNN, but you can imagine a similar flow for LSTM) and a Dense layer applied to 1 timestep, there are not so many differences. The dense layer will have $Y=f(XW + b)$, where $X$ is the input, $W$ is the weight matrix, $b$ is the bias and $f$ is the activation function, whereas the RNN will have $Y=f(XW_1 + W_2h_0 + b)$. Since this is the first timestep, $h_0$ is $0$, so we can reduce it to $Y=f(XW_1 +b)$, which is identical to the Dense layer. I suspect that the differences in results are caused by the activation functions: by default, the Dense layer has no activation function, and LSTM has tanh and sigmoid.

",20430,,2444,,7/5/2020 15:06,7/5/2020 15:06,,,,0,,,,CC BY-SA 4.0 17881,2,,17870,2/6/2020 14:08,,5,,"

VC dimension

A rigorous measure of the capacity of a neural network is the VC dimension, which is intuitively a number or bound that quantifies the difficulty of learning from data.

The sample complexity, which is the number of training instances that the model (or learner) must be exposed to in order to be reasonably certain of the accuracy of the predictions made given some data, is proportional to this number.

The paper VC Dimension of Neural Networks (1998) by Eduardo D. Sontag provides a good introduction to the VC dimension of neural networks (even though these concepts are quite abstract and you may need to read them several times to fully grasp them). The information in this answer is highly based on that paper.

Shattering and VC dimension

In section 2, Concepts and VC Dimension, he describes the basic concepts behind the VC dimension (not only for neural networks), such as the concept of shattering (i.e. what does it mean for a set of sets to shatter another set?), which is a well-known concept in computational learning theory and is used to define the VC dimension (see definition 2), so you definitely need to get familiar with this concept to understand the VC dimension and, therefore, the capacity of a neural network (calculated with the VC dimension).

VC dimension of functions and neural networks

He then provides an equivalent definition of the VC dimension but for functions (equation 6). Given that neural networks represent functions, then we can also define the VC dimension of a neural network. A specific combination of weights of neural networks represents a specific function, for which the VC dimension can be defined. To be more precise, a parametrized function (and a neural network) can be denoted as

$$ \beta : \mathbb{W} \times \mathbb{U} \rightarrow \mathbb{R} $$

where $\mathbb{W} = \mathbb{R}^p$ and $p$ is the number of weights (or parameters) of the neural network, $\mathbb{U}$ is the input space and $\mathbb{R}$ the output space. So, in this case, $\beta$ can also represent a neural network, with a certain parameter space $\mathbb{W}$, an input space $\mathbb{U}$ and an output space $\mathbb{R}$.

The vector $\mathbf{w} = (w_1, \dots, w_p) \in \mathbb{W}$ represents a specific combination of weights of the neural network, so it represents a specific function. The set of all functions for each choice of this weight vector can be denoted as

$$ \mathcal{F}_{\beta} = \{ \beta(\mathbf{w}, \cdot) \mid \mathbf{w} \in \mathbb{W} \} $$

The VC dimension (VCD) of $\beta$ can then be defined as

$$ \text{VCD}(\beta) := \text{VCD}(\mathcal{F}_{\beta}) $$

Therefore, the VC dimension is a measure of the capacity of a neural network with a certain architecture. Moreover, the VC dimension is equivalently defined for a certain set of functions associated with a neural network.

How to calculate the VC dimension?

To calculate the actual VC dimension of a neural network, it takes a little bit more creativity. Therefore, I will just report the VC dimension of some neural networks. For more details, you should fully read the cited paper (more than once) and other papers and books too (especially, the ones described in this answer, which provide an introduction to CLT concepts).

VC dimension of a perceptron

The VC dimension of a perceptron is $m + 1$, where $m$ is the number of inputs. Given that a perceptron represents a linear and affine function, the VC dimension of the perceptron is also equal to the number of parameters. However, note that, even though the VC dimension of the perceptron is linear in the number of parameters and inputs, it doesn't mean the perceptron can learn any function. In fact, perceptrons can only represent linear functions. See section 3.1 of VC Dimension of Neural Networks for more details.

VC dimension of a single hidden layer neural network

Let $n$ be the number of hidden units, then the VC dimension of a single hidden layer neural network is less than or equal to $n+1$. See section 3.2 of VC Dimension of Neural Networks for more details.

VC dimension of multi-layer neural networks with binary activations

The VC dimension of multi-layer neural networks (MLPs) with binary activations and $p$ weights (or parameters) is $\mathcal{O}(p \log p)$. See theorem 4 (and related sections) of the paper VC Dimension of Neural Networks for more details.

VC dimension of MLPs with real-valued activations

The VC dimension of MLPs with real-valued activations is no longer bounded by $\mathcal{O}(p \log p)$ and can be exponential in the number of parameters. See section 5.3 of VC Dimension of Neural Networks.

VC dimension of MLPs with linear activations

The VC dimension of MLPs with linear activations is $\mathcal{O}(p^2)$. See theorem 5 of the paper VC Dimension of Neural Networks.

Notes

The VC dimension is often expressed as a bound (e.g. with big-O notation), which may not be strict.

In any case, the VC dimension is useful because it provides some guarantees. For example, if you use the VC dimension to describe an upper bound on the number of samples required to learn a certain task, then you have a precise mathematical formula that guarantees that you will not need more samples than those expressed by the bound in order to achieve a small generalization error, but, in practice, you may need fewer samples than those expressed by the bound (because these bounds may not be strict or the VC dimension may also not be strict).

Further reading

There is a more recent paper (published in 2017 in MLR) that proves new and tighter upper and lower bounds on the VC dimension of deep neural networks with the ReLU activation function: Nearly-tight VC-dimension bounds for piecewise linear neural networks. So, you probably should read this paper first.

The paper On Characterizing the Capacity of Neural Networks using Algebraic Topology may also be useful and interesting. See also section 6, Algebraic Techniques, of the paper I have been citing: VC Dimension of Neural Networks.

The capacity of a neural network is clearly related to the number of functions it can represent, so it is strictly related to the universal approximation theorems for neural networks. See Where can I find the proof of the universal approximation theorem?.

",2444,,2444,,11/13/2020 21:31,11/13/2020 21:31,,,,1,,,,CC BY-SA 4.0 17883,1,,,2/6/2020 14:33,,3,490,"

I have a question regarding feature representation for graph convolutional neural networks.

In my case, all nodes have a different number of features, and, for now, I don't really understand how I should work with this constraint. I cannot just reduce the number of features or add meaningless features in order to make the number of features on each node the same, because it will add too much extra noise to the network.

Are there any ways to solve this problem? How should I construct the feature matrix?

I'd appreciate any help, and any links to papers that address this problem.

",33332,,,,,5/26/2020 22:25,How to represent and work with the feature matrix for graph convolutional network (GCN) if the number of features for each node is different?,,2,3,,,,CC BY-SA 4.0 17884,2,,17883,2/6/2020 15:12,,2,,"

The simplest way I could come up with is to pad with 0 each feature which is not present. You said that it's going to add too much noise to the network, but I don't see the problem (please correct me if I'm wrong). For example, suppose we have two nodes: the first one has only 2 features, with the 3rd one missing, and the second node has all features, X=[[1,2,0], [3,4,5]]. Now we can project the nodes to a hidden representation (pretty common). I'm going to use a weight matrix W=[[1], [1], [1]]. The output of XW will be [[3], [12]]. Now let's add a new feature to the second node, X=[[1,2,0,0], [3,4,5,6]], and apply the same transformation with W=[[1], [1], [1], [1]]; the output will be [[3], [18]]. You can see that the first node is not affected by the number of missing features.

Another way you could achieve this, if you don't want to use the projection, is to use a mask. Given the same example above, we could create a mask M=[[1,1,0], [1,1,1]] where each entry indicates whether a specific feature is present in a specific node. Now, usually a GCN layer is defined as H=f(AHW), where A is the adjacency matrix. We could change the propagation rule to H=f((AH*M)W), where * is the pointwise multiplication. This way, if a node is missing a feature, it cannot ""access"" information from others that have that feature.
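
For what it's worth, a quick numpy check of the zero-padding argument (it just reproduces the numbers from the example above):

import numpy as np

X = np.array([[1, 2, 0], [3, 4, 5]])        # node 1 is missing its 3rd feature
W = np.ones((3, 1))
print((X @ W).ravel())                      # [ 3. 12.]

X = np.array([[1, 2, 0, 0], [3, 4, 5, 6]])  # a 4th feature, still absent for node 1
W = np.ones((4, 1))
print((X @ W).ravel())                      # [ 3. 18.]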

",20430,,37206,,5/22/2020 7:59,5/22/2020 7:59,,,,1,,,,CC BY-SA 4.0 17886,2,,17817,2/6/2020 15:49,,0,,"

Add train_data first:

import pandas as pd

# Read the spreadsheet into a single 'train_data' column.
df = pd.read_excel('data.xlsx', index_col=None, names=['train_data'])
# Convert the hexadecimal strings to integers.
df['train_data'] = df['train_data'].apply(lambda x: int(x, 16))
# Write the result out as CSV.
df.to_csv('data.csv', index=False)
",32076,,,,,2/6/2020 15:49,,,,0,,,,CC BY-SA 4.0 17887,1,,,2/6/2020 18:43,,1,45,"

I'm aware there are some optimizers, such as Adam, that adjust the learning rate for each dimension during training. However, afaik, the maximum learning rate they can have is still determined by the user's input.

So, I wonder if there are optimizers that can increase/decrease their overall learning rate and other parameters (such as momentum or even weight decay) autonomously, depending on some metric, e.g., validation loss, a running average of gradients, etc.?

",32621,,,,,2/6/2020 18:43,"Are there optimizers that schedule their learning rate, momentum etc. autonomously?",,0,0,,,,CC BY-SA 4.0 17890,1,17896,,2/7/2020 5:31,,3,891,"

Let's assume we are in a $3 \times 3$ grid world with states numbered as $0,1, \dots, 8$. Suppose that the goal state is $8$, the reward of landing in the goal state is $10$, and the reward of just wandering around in the grid world is $0$. Is the state-value of state $8$ always $0$?

",31749,,2444,,11/1/2020 15:10,11/1/2020 15:11,"In reinforcement learning, is the value of terminal/goal state always zero?",,1,0,,,,CC BY-SA 4.0 17891,2,,17839,2/7/2020 5:31,,1,,"

@nbro has already provided a great answer, so I'll just supplement his answer with two specific results:

Minsky, in his 1969 book Perceptrons, provided a mathematical proof showing that certain types of neural networks (then called perceptrons) weren't able to compute a function called the XOR function, thus showing that the mind couldn't be implemented on strictly this structure. Minsky further argued that this result would generalize to all neural networks, but he failed to account for an architectural adaptation known as ""hidden layers"", which would allow neural networks to compute the XOR function. This result isn't very relevant in modern times, but the immediate impact of his proof led to several decades of people ignoring neural networks due to their perceived failings.

Another commonly cited result is the universal approximation theorem, which shows that a sufficiently wide single-layer neural network would be able to approximate (read as: arbitrarily closely) any continuous function, given an appropriate activation function (iirc the activation needed to be non-linear).

You can also consider the research of MIRI, which in a sense is more of a ""pure"" study of AI than the examples listed above. Their Program Equilibrium via Provability Logic result was pretty interesting; the gist of that paper is that programs can learn to cooperate in a very simple game if they read each other's source code.

",6779,,6779,,2/7/2020 18:45,2/7/2020 18:45,,,,0,,,,CC BY-SA 4.0 17892,1,17900,,2/7/2020 7:49,,5,809,"

Why is the equation $$\log p_{\theta}(x^1,...,x^N)=D_{KL}(q_{\theta}(z|x^i)||p_{\phi}(z|x^i))+\mathbb{L}(\phi,\theta;x^i)$$ true, where $x^i$ are data points and $z$ are latent variables?

I was reading the original variational autoencoder paper and I don't understand how the marginal is equal to the RHS equation. How does the marginal equal the KL divergence of $p$ with its approximate distribution plus the variational lower bound?

",30885,,2444,,11/7/2020 0:53,11/7/2020 0:53,Why is the evidence equal to the KL divergence plus the loss?,,1,0,,,,CC BY-SA 4.0 17893,1,,,2/7/2020 8:30,,1,29,"

I am really new to neural networks, so I was following along with a video series created by '3blue1brown' on YouTube. I created an implementation of the network he explained in C++. I am attempting to train the network to recognize handwritten characters, using the MNIST data set. What seems to be happening is, rather than actually learning how to recognize the characters, it is just learning how many of each digit there are in the data set and predicting based on that probability. When testing on a smaller dataset this is more noticeable; for example, I was testing on a set with 100 examples, and the digits that were more frequent would always have a slightly higher activation at the end, and the others were very close to 0. Here is my code if it helps:

#include <random>
#include <vector>
#include <iostream>
#include <fstream>
#include <cmath>

double weightMutationRate = 1;
double biasMutationRate = 1;

//Keeps track of the weights, biases, last activation and the derivatives
//of the weights and biases for a single node in a neural network.
struct Node
{
  std::vector<double> weights;
  std::vector<double> derivWeights;
  double activation;
  double derivActivation;
  double bias;
  double derivBias;
};

//Struct to hold the nodes in each layer of a network.
struct Layer
{
  std::vector<struct Node> nodes;
};

//Struct to hold the layers in a network.
struct Network
{
  std::vector<struct Layer> layers;
  double cost;
};

//Stores the inputs and outputs for a single training example.
struct Data
{
  std::vector<double> inputs;
  std::vector<double> answers;
};

//Stores all of the data to be used to train the neural network.
struct DataSet
{
  std::vector<struct Data> data;
};

//Generates a double by creating a uniform distribution between the two arguments.
double RandomDouble(double min, double max)
{
  std::random_device seed;
  std::mt19937 random(seed());
  std::uniform_real_distribution<> dist(min, max);
  return dist(random);
}

//Constructs a network with the node count in each layer defined with 'layers'.
//the first layer will not have any weights and biases and will simply have
//the activation of the input data.
struct Network CreateNetwork(std::vector<int> layers, double minWeight = -1, double maxWeight = 1, double minBias = -1, double maxBias = 1)
{
  //Network to construct.
  struct Network network;
  //Used to store the nodes in the previous layer.
  int prevLayerNodes;
  //Iterates through the layers vector and constructs a neural network with the values in
  //the vector determining how many nodes that are in each of the layers.
  bool isFirstLayer = true;
  for (int layerNodes : layers)
  {
    //Layer to construct.
    struct Layer layer;
    //Creating the nodes for the current layer.
    for (int i = 0; i < layerNodes; i++)
    {
      //Node to construct
      struct Node node;
      //Checks to see if the current layer is not the input layer, which does not have
      //any weights or biases.
      if (!isFirstLayer)
      {
        //Creating weights for the connections between this node
        //and the nodes in the previous layer.
        for (int i = 0; i < prevLayerNodes; i++)
        {
          //Getting a random double for the weight, between the bounds set in the arguments.
          double inputWeight = RandomDouble(minWeight, maxWeight);
          //Adding the inputWeight to the current node.
          node.weights.push_back(inputWeight);
          //Adding a 0 to the deriv weights for the weight just added.
          node.derivWeights.push_back(0.0);
        }
        //Getting a random double for the bias, between the bounds set in the arguments.
        double bias = RandomDouble(minBias, maxBias);
        //Adding the bias to the current node.
        node.bias = bias;
        //Adding the node to the layer.
      }
      layer.nodes.push_back(node);
    }
    //Updating the isFirstLayer variable if the current layer is the input layer.
    if (isFirstLayer)
    {
      isFirstLayer = false;
    }
    //Updating the prevLayerNodes variable for use in the next layer.
    prevLayerNodes = layerNodes;
    //Adding the layer to the network.
    network.layers.push_back(layer);
  }
  //Returning the constructed network.
  return network;
}

//Outputs the network passed to the networkPrint.txt file.
void PrintNetwork(struct Network network)
{
  std::cout << ""Printing network ..."" << std::endl;
  std::ofstream networkPrintFile;
  networkPrintFile.open(""networkPrint.txt"");
  //Iterates through each of the layers in teh network.
  for (int i = 0; i < network.layers.size(); i++)
  {
    std::cout << ""Layer : "" << i << std::endl;
    networkPrintFile << ""Layer "" << i << "":"" << std::endl;
    //Iterates through each of the nodes in the current layer.
    for (int j = 0; j < network.layers[i].nodes.size(); j++)
    {
      networkPrintFile << ""\t"" << ""Node "" << j << "":"" << std::endl;
      //Outputs the node's activation into networkPrintFile.
      double activation = network.layers[i].nodes[j].activation;
      networkPrintFile << ""\t\t"" << ""Activation"" << "": "" << activation << std::endl;
      //Outputs the node's derivActivation into networkPrintFile.
      double derivActivation = network.layers[i].nodes[j].derivActivation;
      networkPrintFile << ""\t\t"" << ""Deriv Activation"" << "": "" << derivActivation << std::endl;
      double bias = network.layers[i].nodes[j].bias;
      double derivBias = network.layers[i].nodes[j].derivBias;
      //Outputs the bias and derivative of the bias.
      networkPrintFile << ""\t\t"" << ""Bias"" << "": "" << bias << std::endl;
      networkPrintFile << ""\t\t"" << ""Deriv Bias"" << "": "" << derivBias << std::endl;
      //Iterates through all of the inputWeights in the current node.
      networkPrintFile << ""\t\t"" << ""Weights"" << "":"" << std::endl;
      for (int k = 0; k < network.layers[i].nodes[j].weights.size(); k++)
      {
        double inputWeight = network.layers[i].nodes[j].weights[k];
        double derivWeight =  network.layers[i].nodes[j].derivWeights[k];
        networkPrintFile << ""\t\t\t"" << ""Weight "" << k << "":"" << std::endl;
        networkPrintFile << ""\t\t\t\t"" << ""Value"" << "":"" << inputWeight << std::endl;
        networkPrintFile << ""\t\t\t\t"" << ""Derivative"" << "":"" << derivWeight << std::endl;
      }
    }
  }
  std::cout << ""Done"" << std::endl;
}

//Takes and input and peforms a mathematical sigmoid
//function on it and returns the value.
//             1
//  σ(x) = ---------
//          1 + e^x
double Sigmoid(double input)
{
  double expInput = std::exp(-input);
  double denom = expInput + 1;
  double value = 1 / denom;
  return value;
}

//Returns the activation of the node passed in give the previous layer.
double CalculateNode(struct Node &node, struct Layer &prevLayer)
{
  //Keeps a runing total of the weights and activations added up so far.
  double total = 0.0;
  int weightCount = node.weights.size();
  //Iterated through each of the weights, and thus each of the
  //nodes in the previous layer to find the weight * activation.
  for (int i = 0; i < weightCount; i++)
  {
    //Calculated the current weight and activation and
    //adds it to the 'total' variable.
    double weight = node.weights[i];
    double input = prevLayer.nodes[i].activation;
    double value = weight * input;
    total += value;
  }
  //Add the node's bias to the total.
  total += node.bias;
  //Normalises the node's activation value by passing it through
  //a sigmoid function, which bounds it between 0 and 1.
  double normTotal = Sigmoid(total);
  //Returns the caclulated value for this node.
  return normTotal;
}

//Adds the activation values to a layer passed in, given the previous layer.
void CaclulateLayer(struct Layer &layer, struct Layer &prevLayer)
{
  //Iterates through all of the nodes and calculated their activations.
  for (struct Node &node : layer.nodes)
  {
    double activation = CalculateNode(node, prevLayer);
    //Setting the activation to the node.
    node.activation = activation;
  }
}

//Takes in the first layer of the neural network and interates through the
//nodes and sets each input to each node in a loop.
void SetInputs(struct Layer &layer, std::vector<double> inputs)
{
  for (int i = 0; i < layer.nodes.size(); i++)
  {
    //Setting the node's activation to the corrosponding input.
    layer.nodes[i].activation = inputs[i];
  }
}

//Takes in a network and inputs and calculates the value of
//activation for every node for a single input vector.
void CalculateNetwork(struct Network &network, std::vector<double> inputs)
{
  //Setting the activations of the first layer to the inputs vector.
  SetInputs(network.layers[0], inputs);
  //Iterates through all of the layers, apart from the first layer, and
  //calculated the activations of the nodes in that layer.
  for (int i = 1; i < network.layers.size(); i++)
  {
    //Getting the layer to calculate to activations on and the
    //previous layer, which already has it's activations calculated.
    struct Layer currentLayer = network.layers[i];
    struct Layer prevLayer = network.layers[i - 1];
    //Calculating the nodes on the current layer.
    CaclulateLayer(currentLayer, prevLayer);
    //Setting the currentLayer back into the network struct with
    //all of the activations in it now calculated.
    network.layers[i] = currentLayer;
  }
}

//Caclulates the sum of the differences between the outputs and the correct
//values squared.
//
//  Cost = Σ((a-y)^2)
//
double CalculateCost(struct Network &network, std::vector<double> correctOutputs)
{
  //Keeps track of the current sum of the costs.
  double totalCost = 0.0;
  //The layer of the network that holds the calculated values, the
  //last layer in the network.
  struct Layer outputLayer = network.layers[network.layers.size() - 1];
  //Loops through all the node sin the output layer and compared them
  //to their corresponding correctOutput value, calculates the cost
  //and adds it to the running total, totalCoat.
  for (int i = 0; i < outputLayer.nodes.size(); i++)
  {
    struct Node node = outputLayer.nodes[i];
    double calculatedActivation = node.activation;
    double correctActivation = correctOutputs[i];
    double diff = calculatedActivation - correctActivation;
    double modDiff = diff * diff;
    //Adding the cost to the sum of the other costs.
    totalCost += modDiff;
  }
  //Returning the value of the calculated cost.
  return totalCost;
}

//Takes in the output layer of the network and calculates the derivatives of the
//cost function with respect to the activations in each node. this value is then
//stored on the Node struct.
void LastLayerDerivActivations(struct Layer &layer, std::vector<double> correctOutputs)
{
  //Iterating through all the nodes in the layer.
  for (int i = 0; i < layer.nodes.size(); i++)
  {
    //Getting the values of the node output and correct output.
    double activation = layer.nodes[i].activation;
    double correctOutput = correctOutputs[i];
    //Caclulating the partial derivative of the cost function with respect
    //to the current node's activation value.
    double activationDiff = activation - correctOutput;
    double derivActivation = 2 * activationDiff;
    //Setting the activation partial derivative to the layer passed in.
    layer.nodes[i].derivActivation = derivActivation;
  }
}

//Returns the derivative of the sigmoid function.
//   d
//  ---- σ(x) = σ(x)(1 - σ(x))
//   dx
double DerivSigma(double input)
{
  double sigma = Sigmoid(input);
  double value = sigma * (1 - sigma);
  return sigma;
}

//Takes in a node and the layer that the node takes inputs from and adds.
//to the derivWeight and derivBias of the node and adds to each of the
//deriv activations in the previous layer for them to be used in this
//function to calculate their derivatives.
void NodeDeriv(struct Node &node, struct Layer &prevLayer)
{
  //Starting the total at the bias.
  double total = node.bias;
  //Looping through all the weights and biases to find z(x).
  //  z(x) = a w + a w + ... + a w + b
  //          1 1   2 2         n n
  for (int i = 0; i < node.weights.size(); i++)
  {
    double weight = node.weights[i];
    double activation = prevLayer.nodes[i].activation;
    double value = weight * activation;
    //Adding to the running total for z(x).
    total += value;
  }
  //Finding the derivative of the cost function with respect to the
  //z(x) by multiplying the DerivSigma() by the node's derivActivation
  //using the chain rule.
  double derivAZ = DerivSigma(total);
  double derivCZ = derivAZ * node.derivActivation;
  //The derivative of the cost with respect to the bias is the same as
  //the derivative of the cost function with respect to z(x) since
  //d/db z(x) = 1
  node.derivBias += derivCZ;
  //Iterating through all of the nodes and weights to find the derivatives
  //of all of the weights for the node and the activations on the
  //previous layer.
  for (int i = 0; i < node.weights.size(); i++)
  {
    // dc/dw = dc/dz * activation
    double derivCW = derivCZ * prevLayer.nodes[i].activation;
    // dc/da = dc/dz * weight
    double derivCA = derivCZ * node.weights[i];
    //Adding the weights and activations to the node objects.
    node.derivWeights[i] += derivCW;
    prevLayer.nodes[i].derivActivation += derivCA;
  }
  //Resetting the activation derivative.
  node.derivActivation = 0;
}

//Takes in a layer and iterates through all the nodes in order to find
//the derivatives of thw weight and biases in the current layer and the
//activations in the previous layer.
void LayerDeriv(struct Layer &layer, struct Layer &prevLayer)
{
  for (int i = 0; i < layer.nodes.size(); i++)
  {
    NodeDeriv(layer.nodes[i], prevLayer);
  }
}

//Takes in a network and uses backpropogation to find the derivatives of
//all the nodes for a single training example.
void NetworkDeriv(struct Network &network, std::vector<double> expectedOutputs)
{
  //Calculating the derivatives of the activations in the last layer.
  LastLayerDerivActivations(network.layers[network.layers.size() - 1], expectedOutputs);
  //Looping through all the layers to find the derivatives of all of
  //the weights and activations in the network for this training example.
  for (int i = network.layers.size() - 1; i > 0; i--)
  {
    LayerDeriv(network.layers[i], network.layers[i - 1]);
  }
}

//Takes in an input string and char and will return a vector of the string split
//by the char. The char is lost in this conversion.
std::vector<std::string> SplitString(std::string stringToSplit, char delimiter)
{
  //Creating the output vector.
  std::vector<std::string> outputVector;
  //Initialising the lastDelimiter to -1, since the first string should be split as if
  //the char before it was the splitter.
  int lastDelimiterIndex = -1;
  for (int i = 0; i < stringToSplit.size(); i++)
  {
    //Getting the current char.
    char chr = stringToSplit[i];
    //If the current char is the delimiter, create a new substring in the vector.
    if (chr == delimiter)
    {
      //Creating the new substring at the delimiter and adding it to the end
      //of the output vector.
      std::string subString = stringToSplit.substr(lastDelimiterIndex + 1, i - lastDelimiterIndex - 1);
      outputVector.push_back(subString);
      //Setting the last delimiter variable to the current character.
      lastDelimiterIndex = i;
    }
  }
  //Adding the last section of the string to the output vector, since there is no
  //delimiter and will not be added in the for loop.
  std::string subString = stringToSplit.substr(lastDelimiterIndex + 1, stringToSplit.size() - lastDelimiterIndex - 1);
  outputVector.push_back(subString);
  //Returning the split string as a vector of strings.
  return outputVector;
}

//Takes in a vector of strings and converts it to a vector of doubles. normalise argument
//sets what value will be taken to be 1, and other numbers will be a fraction of that.
//Set normalise to 0 to disable normalisation.
std::vector<double> ConvertStringVectorToDoubleVector(std::vector<std::string> input, int normalise = 0)
{
  std::vector<double> convertedVector;
  //Iterating through all the strings int the input vector.
  for (std::string str : input)
  {
    //Converting the string into a double.
    double value = stod(str);
    //Checks to see if normalisation is enabled.
    if (normalise != 0)
    {
      //Normalising the double.
      value /= normalise;
    }
    //Adding the double to the output vector.
    convertedVector.push_back(value);
  }
  //Returning the converted vector.
  return convertedVector;
}

//Takes in a string of data and uses it to create a DataSet object to
//be used in the training of the neural network.
struct DataSet FormatData(std::string dataString)
{
  struct DataSet dataSet;
  //Splitting the input string into the seperate images.
  std::vector<std::string> imageSplit = SplitString(dataString, '|');
  //Looping through all of the images.
  for (int i = 0; i < imageSplit.size(); i++)
  {
    //Getting the current image string.
    std::string imageData = imageSplit[i];
    //Splitting the image between the inputs and expected outputs.
    std::vector<std::string> ioSplit = SplitString(imageData, '/');
    std::string inputs = ioSplit[0];
    std::string outputs = ioSplit[1];
    //converting the input and output strings into string arrays of the values.
    std::vector<std::string> inputVectorString = SplitString(inputs, ',');
    std::vector<std::string> outputVectorString = SplitString(outputs, ',');
    //Converting the string arrays into double arrays and normalising the input doubles.
    std::vector<double> inputVector = ConvertStringVectorToDoubleVector(inputVectorString, 255);
    std::vector<double> outputVector = ConvertStringVectorToDoubleVector(outputVectorString);
    //Creating a new Data object.
    struct Data data;
    data.inputs = inputVector;
    data.answers = outputVector;
    //Adding the object to the dataset.
    dataSet.data.push_back(data);
  }
  //Returning the completed dataset.
  return dataSet;
}

//Takes in a filename and extracts all of the ascii data from the
//file and calls the FormatData function to create a DataSet object.
struct DataSet CreateDataSetFromFile(std::string fileName)
{
  //Opening file.
  std::ifstream dataFile;
  dataFile.open(fileName);
  //Storing file data in a string.
  std::string data;
  dataFile >> data;
  //Creating DataSet object.
  struct DataSet dataSet = FormatData(data);
  //Returning completed DataSet object.
  return dataSet;
}

//Takes in a network and a Data object and runs the network and adds
//to the derivatives of the network for that one training example.
void NetworkIteration(struct Network &network, struct Data data)
{
  //Extracting the input and output data from the data object.
  std::vector<double> inputs = data.inputs;
  std::vector<double> outputs = data.answers;
  //Calculating the activations of the network for this data.
  CalculateNetwork(network, inputs);
  //Calculating the cost for this iteration and adding it to the total.
  double cost = CalculateCost(network, outputs);
  network.cost += cost;
  //Calculating the derivatives for the network weights and biases
  //for this training example.
  NetworkDeriv(network, outputs);
}

//Takes in a node and calculates the average of the derivatives over
//the dataset and then multiplies them by a fixed mutation rate and
//applies the derivatives to the node's values.
void GradientDecentNode(struct Node &node, int dataCount)
{
  //Iterating through all of the weights of the node.
  for (int i = 0; i < node.weights.size(); i++)
  {
    double weight = node.weights[i];
    double derivWeight = node.derivWeights[i];
    //Getting the average over all of the training data.
    derivWeight /= dataCount;
    //Applying a constant multiplier to alter the rate at which it mutates.
    derivWeight *= weightMutationRate;
    //Subtracting the derivative from the weight.
    node.weights[i] -= derivWeight;
    //Resetting the weight derivative.
    node.derivWeights[i] = 0;
  }
  double bias = node.bias;
  double derivBias = node.derivBias;
  //Applying a constant multiplier to alter the rate at which it mutates.
  derivBias *= biasMutationRate;
  //Subtracting the derivative from the bias.
  node.bias -= derivBias;
  //Resetting the bias derivative.
  node.derivBias = 0;
}

//Takes in a layer and iterates through all of the nodes in the layer
//and applies all of their derivatives to them.
void GradientDecentLayer(struct Layer &layer, int dataCount)
{
  for (struct Node &node : layer.nodes)
  {
    GradientDecentNode(node, dataCount);
  }
}

//Takes in a network and iterates through all of the layers and applies
//all of the derivatives to them.
void GradientDecentNetwork(struct Network &network, int dataCount)
{
  for (int i = 1; i < network.layers.size(); i++)
  {
    GradientDecentLayer(network.layers[i], dataCount);
  }
}

//Iterates through all of the training data in dataSet and calculates the derivatives
//of the weights and biases and then performs the gradient descent using the derivatives.
void TrainNetworkSingle(struct Network &network, struct DataSet dataSet)
{
  //Iterating through all of the training data.
  for (struct Data data : dataSet.data)
  {
    //Caclulating the network for a single training example.
    NetworkIteration(network, data);
  }
  //Performing the gradient descent step using the accumulated derivatives.
  GradientDecentNetwork(network, dataSet.data.size());
}

void TrainNetwork(struct Network &network, struct DataSet dataSet, int iterations)
{
  for (int i = 0; true/*i < iterations*/; i++)
  {
    TrainNetworkSingle(network, dataSet);
    std::cout << network.cost << std::endl;
    network.cost = 0;
  }
}

int main()
{
  struct Network network = CreateNetwork({784, 784, 16, 16, 10});
  struct DataSet dataSet = CreateDataSetFromFile(""data.txt"");
  TrainNetwork(network, dataSet, 100);
  PrintNetwork(network);
  return 0;
}
```
",33354,,,,,2/7/2020 8:30,Neural network seems to just figure out the probability of a specific result,,0,0,,,,CC BY-SA 4.0 17894,2,,17809,2/7/2020 9:08,,1,,"

From what I understand, you are building your own model for this specific use case. From my perspective, I would try not to reinvent the wheel and instead use an already proven and working model, such as the YOLO family (v1, v2 and v3).

YOLO does not tell you the center pixel of the object directly, but it tells you which cell of a grid (built on top of the image) contains the object's center and is therefore responsible for predicting that object. On the left image you can see how the grid is built on top of the input image; YOLO then computes a probability map, and the bounding boxes are predicted from the cell at each object's center. I have highlighted which cells would be responsible for predicting each object in the rightmost image. (Image from the YOLOv1 paper)

The grid can have different resolutions but if you make it equal to the image size, then you would have the center pixel of each object predicted. This is because YOLO predicts objects in each cell of the grid, so if the grid is equal to the image size, YOLO will predict objects in each pixel.

As an example, imagine you have an input image of $[H \times W] = [416 \times 416]$; then YOLO would compute a grid of $[S_1 \times S_2]=[52 \times 52]$ on top of it and predict each object from the cell of that grid containing the object's center. So, if you tune YOLO to compute a grid such that $[S_1 \times S_2] = [H \times W]$, then YOLO would output object predictions with respect to the image pixels; in other words, YOLO would predict bounding boxes centered on the pixel at the object's center.
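
To make the mapping concrete, here is a minimal Python sketch (not tied to any particular YOLO implementation; the image and grid sizes are just the example numbers above) of how a pixel coordinate maps to the grid cell responsible for it:

def pixel_to_cell(px, py, img_size=416, grid_size=13):
    # Each cell covers img_size / grid_size pixels (32 in this example).
    cell_size = img_size / grid_size
    return int(px // cell_size), int(py // cell_size)

# An object centered at pixel (200, 135) falls into cell (6, 4).
print(pixel_to_cell(200, 135))
# If the grid resolution equals the image resolution, the cell index
# is simply the pixel coordinate itself.
print(pixel_to_cell(200, 135, img_size=416, grid_size=416))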

This is how I would proceed for this use case, I hope it helps you or at least give you some clues about how to proceed further! Cheers! :)


NOTE: I chose the image size and grid size with numbers I usually see at work, specifically using YOLOv3. In YOLOv3, for an input image of $[H \times W] = [416 \times 416]$, 3 grids are built with different resolutions (for predicting big and small objects), with the following grid sizes: $[13 \times 13], [26 \times 26], [52 \times 52]$

",26882,,,,,2/7/2020 9:08,,,,2,,,,CC BY-SA 4.0 17896,2,,17890,2/7/2020 10:21,,3,,"

In reinforcement learning, is the value of terminal/goal state always zero?

Yes, for episodic problems the value of a terminal state is always zero, by definition.

The value of a state $v(s)$ is the expected sum (perhaps discounted) of rewards from all future time steps. There are no future time steps when in a terminal state, so this sum must be zero.

For the sake of consistent maths notation, you can consider a terminal state to be "absorbing", i.e. any transition out of it results in zero reward and returning to the same terminal state. Then you can use the definition of value function to show the same thing:

$$v_{\pi}(s) = \mathbb{E}_{\pi}[\sum_{k=0}^{\infty} \gamma^k R_{t+k+1} | S_{t} = s]$$

If $s = s_T$, the terminal state, then all the "future rewards" from $k=0$ onwards starting with reward $R_{t+1}$ must be zero. This is consistent with the reward $R_{t}$, i.e. the reward when transitioning to the terminal state, being any value.

You can show something similar using action value functions, if you accept a "null" action in the terminal state.

",1847,,2444,,11/1/2020 15:11,11/1/2020 15:11,,,,2,,,,CC BY-SA 4.0 17897,1,,,2/7/2020 10:52,,2,117,"

I'm a beginner in computer vision. I want to know which option among the following two can get better accuracy of image classification.

  1. SIFT features + SVM
  2. Bag-of-visual-words features + SVM

Here's a reference: https://www.mathworks.com/help/vision/ug/image-classification-with-bag-of-visual-words.html.

",33358,,2444,,9/25/2020 22:22,9/26/2020 0:22,Does the bag-of-visual-words method improve the classification accuracy?,,1,1,,,,CC BY-SA 4.0 17898,1,17994,,2/7/2020 11:39,,1,503,"

Why do we split the data into two parts, and then split those segments into training and testing data? Why do we end up with two sets each of training and test data?

",33356,,33010,,2/7/2020 20:54,2/12/2020 17:12,"While we split data in training and test data, why we have two pairs of each?",,3,0,,,,CC BY-SA 4.0 17900,2,,17892,2/7/2020 11:45,,7,,"

In variational inference, the original objective is to minimize the Kullback-Leibler divergence between the variational distribution, $q(z \mid x)$, and the posterior, $p(z \mid x) = \frac{p(x, z)}{\int_z p(x, z)}$, given that the posterior can be difficult to directly infer with the Bayes rule, due to the denominator term, which can contain an intractable integral.

Therefore, more formally, the optimization objective can be written as

\begin{align} q^*(z \mid x) = \operatorname{argmin}_{q(z \mid x)} D_{\text{KL}}(q(z \mid x) \| p(z \mid x))\tag{1} \label{1} \end{align}

However, solving this optimization problem can be as difficult as the original inference problem of computing the posterior $p(z \mid x)$ using the Bayes rule, given that it still involves the possibly intractable term $p(z \mid x)$.

If you use the definition of the KL divergence, you can derive the following equation

\begin{align} D_{\text{KL}}(q(z \mid x) \| p(z \mid x)) = \mathbb{E}_{q(z \mid x)} \left[ \log q(z \mid x) \right] - \mathbb{E}_{q(z \mid x)} \left[ \log p(z, x) \right] + \log p(x) \tag{2} \label{2} \end{align}

First, note that the expectations are with respect to the variational distribution, which means that, if you want to approximate these expectations with Monte Carlo estimates, you can do it with respect to the variational distribution, and, given that it is assumed that one can easily sample from the variational distribution (which can e.g. be a Gaussian), this is a nice feature.

Second, the KL divergence contains the term $p(x) = \int_z p(x, z)$, the denominator term in the Bayes rule to compute the posterior $p(z \mid x)$, which (as I said) can be intractable. $p(x)$ is often called the evidence.

The solution is then to optimize an objective that does not contain this annoying intractable term $p(x)$. The objective that is optimized is the so-called ELBO objective

\begin{align} \text{ELBO}(q) = \mathbb{E}_{q(z \mid x)} \left[ \log p(z, x) \right] - \mathbb{E}_{q(z \mid x)} \left[ \log q(z \mid x) \right]\tag{3} \label{3} \end{align}

The KL divergence \ref{2} and the ELBO objective \ref{3} are similar. In fact, ELBO is an abbreviation for Evidence Lower BOund, because the ELBO is a lower bound on the log evidence $\log p(x)$, i.e. it is a number that is smaller than or equal to $\log p(x)$ or, more formally, $\text{ELBO}(q) \leq \log p(x)$. Therefore, if we maximize $\text{ELBO}(q)$, we also push up this lower bound on the (log) evidence of the data (where $x$ is the data in your dataset).

So, the objective in variational inference is

\begin{align} q^*(z \mid x) &= \operatorname{argmax}_{q(z \mid x)} \operatorname{ELBO}({q}) \\ &= \operatorname{argmax}_{q(z \mid x)} \mathbb{E}_{q(z \mid x)} \left[ \log p(z, x) \right] - \mathbb{E}_{q(z \mid x)} \left[ \log q(z \mid x) \right] \tag{4} \label{4} \end{align}

First, note that \ref{4} only contains terms that depend on the variational distribution, so we got rid of intractable terms, which was our goal.

Second, note that, as opposed to \ref{1}, we are maximizing (or finding the parameters that maximize the objective).

The ELBO objective is actually the negative of \ref{2} plus the logarithm of the evidence term, $\log p(x)$ (and you can easily verify it), that is

\begin{align} \text{ELBO}(q) = -D_{\text{KL}}(q(z \mid x) \| p(z \mid x)) + \log p(x) \end{align}

which can also be rearranged as

\begin{align} \log p(x) = D_{\text{KL}}(q(z \mid x) \| p(z \mid x)) + \text{ELBO}(q) \tag{5}\label{5} \end{align}

which is your equation (where $\text{ELBO}(q)$ is your $\mathcal{L}$). Therefore, your equation is true by definition, i.e. we define the ELBO such that \ref{5} is true. However, note that we haven't defined the ELBO in the way we did only for the sake of it, but because it is a lower bound on the log evidence (and this follows from the fact that the KL divergence is never negative).
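
As a small numerical sanity check (a sketch with made-up numbers, not taken from any paper), consider the model $p(z) = \mathcal{N}(0, 1)$ and $p(x \mid z) = \mathcal{N}(z, 1)$, for which the evidence $p(x) = \mathcal{N}(x; 0, 2)$ (variance $2$) is known in closed form; a Monte Carlo estimate of the ELBO for an arbitrary Gaussian $q(z \mid x)$ should then never exceed $\log p(x)$:

import numpy as np
from scipy.stats import norm

x = 1.5                      # a single observed data point
mu_q, sigma_q = 0.7, 0.8     # arbitrary (made-up) variational parameters

# Sample z ~ q(z|x) and form the Monte Carlo estimate of
# ELBO = E_q[log p(x, z)] - E_q[log q(z|x)].
z = np.random.normal(mu_q, sigma_q, size=200_000)
log_p_xz = norm.logpdf(z, 0, 1) + norm.logpdf(x, z, 1)   # log p(z) + log p(x|z)
log_q = norm.logpdf(z, mu_q, sigma_q)
elbo = np.mean(log_p_xz - log_q)

# The true log evidence: p(x) = N(x; 0, variance 1 + 1 = 2).
log_px = norm.logpdf(x, 0, np.sqrt(2))
print(elbo, log_px)          # elbo is always <= log_px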

",2444,,2444,,2/9/2020 14:11,2/9/2020 14:11,,,,0,,,,CC BY-SA 4.0 17901,2,,17898,2/7/2020 11:58,,0,,"

Usually we split the data into 3 chunks, for example 70% for training, 10% for validation and 20% for testing. The first two chunks are used during the training process. The reason you need the validation dataset is to tune your hyper-parameters and see how well your model can generalize. Once you have a model that achieves a fairly good performance on the validation dataset, you measure its accuracy on the test dataset.

",20430,,,,,2/7/2020 11:58,,,,2,,,,CC BY-SA 4.0 17902,2,,17898,2/7/2020 12:02,,1,,"

For any Machine Learning model, the available data is usually split into three sets:

Training Set:

The part of data used to train the model and learn the parameters of the network.

The data that remains after allocation of the Training Dataset, is split into the Validation and Test sets.

Validation Set:

This sample of data is used to provide an unbiased evaluation of a model fit on the training dataset. This helps in tuning model hyperparameters to improve the model performance. Eg: Changing the number of clusters ($k$) in a K-Means algorithm, or the pooling layers in a CNN.

Test Set:

After training, this part of the dataset is used to test how well the model generalizes to unseen data and to estimate its performance.

Another possibility (going by your question), is the use of Cross-Validation. This is performed when the dataset is too small. In such a case, a random split is performed on the dataset resulting in k non-overlapping subsets. The test error is then estimated by taking the average test error across k trials. [Image Source]
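
As a minimal illustration of such a split (assuming scikit-learn is available; the 70/10/20 proportions and the dummy data are arbitrary):

import numpy as np
from sklearn.model_selection import train_test_split

X, y = np.arange(1000).reshape(-1, 1), np.arange(1000)  # dummy data

# First carve out the test set (20%), then split the remainder
# into training and validation sets.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.125, random_state=0)

# 0.125 of the remaining 80% is 10% of the original data,
# giving a 70/10/20 train/validation/test split.
print(len(X_train), len(X_val), len(X_test))  # 700 100 200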

",33010,,-1,,6/17/2020 9:57,2/7/2020 12:36,,,,2,,,,CC BY-SA 4.0 17903,2,,15524,2/7/2020 14:38,,3,,"

1) The math is the exact same, so from an optimization or mathematical perspective there is no difference

2) Here are my guesses to a possible answer.

  • Habit: People may just call one over the other out of habit
  • Generality: Across frameworks a 1d convolution op would work, while Dense or FC may need adjustments to work on the temporal axis
  • Parallel Workers: Convolution and Dense call different subroutines in the backend, and the one used by convolution may have better gains on sequential input for this purpose

Edit
Regarding benchmarking the two, your experiment was shallow. I didn't have time to wait for a full grid search, so I held 3 parameters constant and varied one. Here are the results (note the model was just a simple feed-forward ReLU residual model)

Note that in a couple of cases Dense does outperform Conv, but it isn't consistent, and there are scenarios where it is not true. This is only for the small grid that I chose, but you can extend this yourself to check. So it is not straightforward to say that one is simply better than the other.

",25496,,25496,,2/12/2020 21:30,2/12/2020 21:30,,,,4,,,,CC BY-SA 4.0 17904,1,17905,,2/7/2020 14:59,,2,2648,"

On page 84 of Russell & Norvig's book ""Artificial Intelligence: A Modern Approach Book"" (3rd edition), the pseudocode for uniform cost search is given. I provided a screenshot of it here for your convenience.

I am having trouble understanding the highlighted line if child.STATE is not in explored **or** frontier then

Shouldn't that be if child.STATE is not in explored **and** frontier then

The way it's written, it seems to suggest that if the child node has already been explored, but not currently in the frontier, then we add it to the frontier, but I don't think that's correct. If it's already been explored, then it means we already found the optimal path for this node previously and should not be processing it again. Is my understanding wrong?

",20358,,1671,,3/4/2020 23:56,3/4/2020 23:56,"Understanding the pseudocode of uniform-cost search from the book ""Artificial Intelligence: A Modern Approach""",,1,0,,,,CC BY-SA 4.0 17905,2,,17904,2/7/2020 15:15,,2,,"

I think this is a problem with missing brackets in pseudocode — clearly the state is only added to the frontier if it hasn't been explored already, so it would be:

if not [contains(frontier, state) OR contains(explored, state)] then

which is equivalent to your interpretation of

if not [contains(frontier, state)] AND not [contains(explored, state)] then

(according to De Morgan's Laws)

Being half-way between a programming language and natural language, this is a case where pseudocode is not quite precise enough.

",2193,,,,,2/7/2020 15:15,,,,0,,,,CC BY-SA 4.0 17906,1,,,2/7/2020 17:01,,1,43,"

My goal is to generate artificial sequences of real-valued data (e.g. time series) with GANs. Starting simple I tried to generate realistic sine-waves using a Wasserstein GAN. But even on this simple task it fails to generate any useful samples.

This is my model:

Generator

model = Sequential()
model.add(LSTM(20, input_shape=(50, 1)))
model.add(Dense(40, activation='linear'))
model.add(Reshape((40, 1)))

Critic

model = Sequential()
model.add(Conv1D(64, kernel_size=5, input_shape=(40, 1), strides=1))
model.add(MaxPooling1D(3, strides=2))
model.add(LeakyReLU(alpha=0.2))
model.add(Conv1D(64, kernel_size=5, strides=1))
model.add(MaxPooling1D(3, strides=2))
model.add(LeakyReLU(alpha=0.2))
model.add(Flatten())
model.add(Dense(1))

Is this model capable of learning such a task or should I use a different model architecture?

",33364,,2444,,2/13/2020 2:21,7/18/2021 14:07,Generation of realistic real-valued sequences using Wasserstein GAN fails,,0,0,,,,CC BY-SA 4.0 17907,1,17908,,2/7/2020 19:55,,3,109,"

Given a ridge and a lasso regularizer, which one should be chosen for better performance?

An intuitive graphical explanation (intersection of the elliptical contours of the loss function with the region of constraints) would be helpful.

",33314,,2444,,1/29/2021 23:10,1/29/2021 23:10,Which is a better form of regularization: lasso (L1) or ridge (L2)?,,1,2,,,,CC BY-SA 4.0 17908,2,,17907,2/7/2020 20:50,,3,,"

The following graph shows the constraint region (green), along with contours for Residual sum of squares (red ellipse). These are iso-lines signifying that points on an ellipse have the same RSS. Figure: Lasso (left) and Ridge (right) Constraints [Source: Elements of Statistical Learning]

As Ridge regression has a circular constraint ($\beta_1^2 + \beta_2^2 <= d$) with no edges, the intersection will not occur on an axis, signifying that the ridge regression parameters will usually be non-zero.

On the contrary, the Lasso constraint ($|\beta_1| + |\beta_2| <= d$) has corners at each of the axes, and so the ellipse will often intersect the constraint region at an axis. In 2D, such a scenario would result in one of the parameters to become zero whereas in higher dimensions, more of the parameter estimates may simultaneously reach zero.

This is a disadvantage of ridge regression, wherein the least important predictors never get eliminated, resulting in a final model that contains all predictor variables. For Lasso, the L1 penalty forces some parameters to be exactly equal to zero when $\lambda$ is large. This has a dimensionality reduction effect resulting in sparse models.

In cases where the number of predictors are small, L2 could be chosen over L1 as it constraints the coefficient norm retaining all predictor variables.
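
A quick way to see this effect in practice (a rough sketch, assuming scikit-learn and a synthetic dataset where only a few predictors matter) is to count how many coefficients each method drives to exactly zero:

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

# 100 predictors, but only 10 of them actually influence the target.
X, y = make_regression(n_samples=200, n_features=100, n_informative=10,
                       noise=5.0, random_state=0)

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=1.0).fit(X, y)

print('zero coefficients (ridge):', np.sum(ridge.coef_ == 0))
print('zero coefficients (lasso):', np.sum(lasso.coef_ == 0))
# Ridge typically keeps every coefficient non-zero (just small),
# while Lasso sets many of them exactly to zero.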

",33010,,33010,,7/24/2020 10:57,7/24/2020 10:57,,,,2,,,,CC BY-SA 4.0 17909,1,,,2/7/2020 21:24,,2,89,"

There is a popular strategy of using a neural network trained on one task to produce features for another related task by ""chopping off"" the top of the network and sewing the bottom onto some other modeling pipeline.

Word2Vec models employ this strategy, for example.

Is there an industry-popular term for this strategy? Are there any good resources that discuss its use in general terms?

",21298,,,,,2/8/2020 11:28,Strategy of using intermediate layers of a neural network as features?,,2,0,,,,CC BY-SA 4.0 17910,1,17914,,2/7/2020 21:32,,2,259,"

In $$\log p_{\theta}(x^1,...,x^N)=D_{KL}(q_{\theta}(z|x^i)||p_{\phi}(z|x^i))+\mathbb{L}(\phi,\theta;x^i),$$ why does $p(x^1,...,x^N)$ and $q(z|x^i)$ have the same parameter $\theta?$

Given that $p$ is just the probability of the observed data and $q$ is the approximation of the posterior, shouldn't they be different distributions and thus their parameters different?

",30885,,2444,,11/7/2020 0:57,6/4/2022 14:32,"In this VAE formula, why do $p$ and $q$ have the same parameters?",,1,0,,,,CC BY-SA 4.0 17911,2,,17909,2/7/2020 21:35,,1,,"

This is an example of Transfer Learning, wherein instead of starting the learning process from scratch, a pre-trained model is used and appended with custom layers to achieve additional functionality.

",33010,,33010,,2/7/2020 21:44,2/7/2020 21:44,,,,0,,,,CC BY-SA 4.0 17912,2,,17909,2/7/2020 21:37,,1,,"

This is a typical transfer learning technique; a lot of people refer to it as fine-tuning. I would recommend that you have a look at the PyTorch tutorial: it explains well how to use it.

",32493,,2444,,2/8/2020 11:28,2/8/2020 11:28,,,,0,,,,CC BY-SA 4.0 17913,1,,,2/7/2020 21:42,,1,19,"

I'm trying to create a noising model that accurately reflects how people would noise name data. I was thinking of randomly switching out characters and creating a probability over which character gets switched in based on keyboard closeness and how similar anatomically another character looks to it. For example, ""l"" has a higher prob of being switched in with ""|"" and ""k"" cause ""k"" is close by on the keyboard and ""|"" looks like ""l"", but that requires a lot of hard coding and reward for that seemed low because that's not the only 2 ways people can noise things. I also had the same idea above except use template matching of every character to every other character but itself and that would give it a similarity score then divide that by the sum over all chars to get the probs. Any other suggestions? My goal is the maximize closeness to actual human noising.

",30885,,,,,2/7/2020 21:42,Creating a noising model for NLP that models human noising,,0,0,,,,CC BY-SA 4.0 17914,2,,17910,2/7/2020 23:44,,2,,"

I will try to answer your questions directly (but I guess I won't be able to), otherwise, this can become quite confusing, given the inconsistencies that can be found across different sources.

In $logp_{\theta}(x^1,...,x^N)=D_{KL}(q_{\theta}(z|x^i)||p_{\phi}(z|x^i))+\mathbb{L}(\phi,\theta;x^i)$ why is $\theta$ and param for $p$ and $q$?

In a few words, your equation is wrong because it uses the letters $\phi$ and $\theta$ inconsistently.

If you look more carefully at the right-hand side of your equation, you will notice that $q_{\theta}$ has different parameters, i.e. $\theta$, than $p_{\phi}$, which has parameters $\phi$, so $p$ and $q$ have different parameters, and this should be the case, because they are represented by different neural networks in the case of the VAE. However, the left-hand side uses $\theta$ as the parameters of $p$ (while the right-hand side uses $\phi$ to index $p$), so this should already suggest that the equation is not correct (as you correctly thought).

In the case of the VAE, $\phi$ usually represents the parameters (or weights) of the encoder neural network (NN), while $\theta$ usually represents the parameters of the decoder NN (or vice-versa, but you should just be consistent, which is often not the case in your equation). In fact, in the VAE paper, in equation 3, the authors use $\phi$ to represent the parameters of the encoder $q$, while $\theta$ is used to denote the parameters of the decoder $p$.

So, if you follow the notation in the VAE paper, the ELBO can actually be written something like

\begin{align} \mathcal{L}(\phi,\theta; \mathbf{x}) &= \mathbb{E}_{\tilde{z} \sim q_{\phi}(\mathbf{z} \mid \mathbf{x})} \left[ \log p_{\theta} (\mathbf{x} \mid \mathbf{z}) \right] - \operatorname{KL} \left(q_{\phi}(\mathbf{z} \mid \mathbf{x}) \| p_{\theta}(\mathbf{z}) \right) \tag{1} \label{1} \end{align}

The ELBO loss $\mathcal{L}(\phi,\theta; \mathbf{x})$ has both parameters (of the encoder and decoder), which will be optimized jointly. Note that I have ignored the indices in the observations $\mathbf{x}$ (for simplicity), while, in the VAE paper, they are present. Furthermore, note that, both in \ref{1} and in the VAE paper, we use bold letters (because these objects are usually vectors), i.e. $\mathbf{x}$ and $\mathbf{z}$, rather than $x$ and $z$ (like in your equation).

Note also that, even though $p_{\theta}(\mathbf{z})$ is indexed by $\theta$, in reality, this may be an un-parametrized distribution (e.g. a Gaussian with mean $0$ and variance $1$), i.e. not a family of distributions. The use of the index $\theta$ in $p_{\theta}(\mathbf{z})$ comes from the (implicit) assumption that both $p_{\theta}(\mathbf{z})$ and $p_{\theta} (\mathbf{x} \mid \mathbf{z})$ come from the same family of distributions (e.g. a family of Gaussians). In fact, if you consider the family of all Gaussian distributions, then $p_{\theta}(\mathbf{z}) = \mathcal{N}(\mathbf{0}, \mathbf{I})$ also belongs to that family. But $\theta$ and $\phi$ are also used to denote the parameters (or weights) of the networks, so this becomes understandably confusing. (To understand equation 10 of the VAE paper, see this answer).

Why does $p(x^1,...,x^N)$ and $q(z|x^i)$ have the same parameter $\theta?$

This is wrong, in fact. If you look at equation 1 of the VAE paper, they use $\theta$ to denote the parameters of $p(\mathbf{x})$, i.e. $p_{\theta}(\mathbf{x})$, while the parameters of the encoder are $\phi$, i.e. $q_{\phi}(\mathbf{z} \mid \mathbf{x}$).

Cause $p$ is just the probability of the observed data and $q$ is the approximation of the posterior so shouldn't they be different distributions and their parameters different?

Yes.

",2444,,2444,,6/4/2022 14:32,6/4/2022 14:32,,,,0,,,,CC BY-SA 4.0 17915,2,,17868,2/8/2020 1:08,,3,,"

Neither of the above-mentioned methods is a reliable indicator of the performance of a model.

A simple way to train the model just enough so that it generalizes well on unknown datasets would be to monitor the validation loss. Training should be stopped once the validation loss progressively starts increasing over multiple epochs. Beyond this point, the model learns the statistical noise within the data and starts overfitting.

This technique of Early stopping could be implemented in Keras with the help of a callback function:

class EarlyStop(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if(logs.get('val_loss') < LOSS_THRESHOLD and logs.get('val_categorical_accuracy') > ACCURACY_THRESHOLD):
            self.model.stop_training = True

callbacks= EarlyStop()
model.fit(...,callbacks=[callbacks])

The Loss and Accuracy thresholds can be estimated after a trial run of the model by monitoring the validation/training error graph.
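
If you prefer the more common patience-based criterion (stop once the validation loss has not improved for a few epochs) over fixed thresholds, you could instead use the built-in Keras callback. A minimal sketch, assuming model and the training/validation arrays are defined as above and tensorflow is imported as tf:

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',          # watch the validation loss
    patience=5,                  # stop after 5 epochs without improvement
    restore_best_weights=True)   # roll back to the weights of the best epoch

model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=100,
          callbacks=[early_stop])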

",33010,,33010,,2/8/2020 1:14,2/8/2020 1:14,,,,3,,,,CC BY-SA 4.0 17916,1,17931,,2/8/2020 4:11,,1,34,"

I trained this network from this github.

The training went well, and returns nice results for new, unseen images.

On training, the loss changed (decreased), thus I must assume the weights changed as well.

On training, I saved a snapshot of the net every epoch.

When trying to run a validation set through each epoch's snapshot, I get the exact same results on every epoch.

How can this be possible? What's causing this?

",21645,,,,,2/9/2020 12:29,"Trained a regression network and getting EXACT same result on validation set, on every epoch",,1,0,,,,CC BY-SA 4.0 17917,1,,,2/8/2020 4:21,,1,401,"

I'm working on a problem where we are given a graph and asked to perform various search algorithms (BFS, DFS, UCS, A*, etc.) and the goal state is to visit all nodes in the graph. After all nodes are visited, we need to print out the ""solution path."" However, I am a bit confused on what ""path"" means in AI.

For simplicity, let's just consider a graph of 3 nodes: A, B and C with 2 undirected edges (A, B) (A, C). If we perform BFS on this graph starting at node A and traversing alphabetically, we'd visit A, then B, then C. So, in this case, is the solution path A -> B -> C, i.e. the order in which the nodes are visited? Or is the solution path A -> B -> A -> C? Basically saying that we go from A to B, but to go from B to C, we must go through A again.

",20358,,20358,,2/8/2020 14:02,11/5/2020 23:01,How to report the solution path of a search algorithm on a graph?,,1,0,,,,CC BY-SA 4.0 17918,1,,,2/8/2020 10:25,,3,70,"

How do newer weight initialization techniques (He, Xavier, etc) improve results over zero or random initialization of weights in a neural network? Is there any mathematical evidence behind this?

",33314,,2444,,11/14/2020 15:57,11/14/2020 15:57,How are newer weight initialization techniques better than zero or random initialization?,,1,0,,,,CC BY-SA 4.0 17919,2,,17918,2/8/2020 10:47,,3,,"

There are several ways to answer this question. First of all, there are several mathematical arguments on why using some kind of careful initialization is better; consider reading, for example, the paper by Glorot and Bengio that introduced the so-called Xavier initialization. Moreover, there are several numerical experiments showing the importance of initialization.

The motivation for Xavier initialization in Neural Networks is to initialize the weights of the network so that the neuron activation functions do not start out in saturated or dead regions. In other words, we want to initialize the weights with random values that are not ""too small"" and not ""too large"". Thus the purpose is to fix the variance of the input to each neuron to 1, because this reduces the chance of getting stuck in saturated areas; since the data is in general normalized, this is equivalent to fixing the variance of each weight to $1/n$, where $n$ is the number of input weights to the given neuron.
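
As a minimal NumPy sketch of this idea (the layer sizes below are arbitrary), drawing each weight with variance $1/n$; note that Glorot and Bengio's paper also proposes a symmetric variant with variance $2/(n_{in}+n_{out})$:

import numpy as np

def xavier_init(n_in, n_out, seed=0):
    # Each weight has variance 1 / n_in, so the pre-activation of a neuron
    # with n_in (normalized) inputs has variance close to 1.
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, np.sqrt(1.0 / n_in), size=(n_in, n_out))

W = xavier_init(784, 256)
x = np.random.randn(100, 784)   # a batch of normalized inputs
print(np.var(x @ W))            # roughly 1, neither vanishing nor exploding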

",32493,,2444,,2/8/2020 11:15,2/8/2020 11:15,,,,0,,,,CC BY-SA 4.0 17920,2,,17865,2/8/2020 10:54,,1,,"

One of the options used is neural interpolation, that is, using a pretrained model to upscale (zoom) our images. In some sense, training with this kind of images is like combining this pretrained model with our new data, and it can be seen as a transfer learning technique.

",33373,,,,,2/8/2020 10:54,,,,1,,,,CC BY-SA 4.0 17921,2,,17897,2/8/2020 11:05,,1,,"

Bag-of-visual-words (BOVW) was classically used in computer vision before the introduction of neural networks and of some more advanced classical techniques, such as VLAD or Fisher Vectors. In any case, it is a good technique to use, but it is not the state of the art today, and I wouldn't recommend you use it for a real-life project.

",33373,,2444,,9/25/2020 22:17,9/25/2020 22:17,,,,1,,,,CC BY-SA 4.0 17922,2,,16225,2/8/2020 11:22,,2,,"

Let's start by making clear that both AS and MMAS use only global pheromone update. Now, the MMAS has two main differences regarding AS:

  1. In AS, all ants that completed a solution are used for the update, while in MMAS only the best ant with a complete solution is used for update (as you had pointed out).

  2. In AS, the pheromone values are not explicitly bounded. In MMAS, the pheromones are enforced to lie within a pre-set interval $\tau_{min} \leq \tau_{ij} \leq \tau_{max}$ (which gives its name to the algorithm). To ensure this condition, the pheromone update is done by means of the formula

$$ \tau_{ij} \leftarrow \left[ (1-\rho) \cdot \tau_{ij} + \Delta \tau_{ij}^{best} \right]_{\tau_{min}}^{\tau_{max}}, $$

with the operator $[x]_a^b$, which clamps $x$ to the interval $[a, b]$ (here $a = \tau_{min}$ and $b = \tau_{max}$), defined as

$$ [x]_a^b= \begin{cases} b \qquad \mathrm{if} \quad x > b\\ a \qquad \mathrm{if} \quad x < a\\ x \qquad \mathrm{otherwise} \end{cases} $$
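
In code, the bounded update is just the usual evaporation-plus-deposit rule followed by a clamp; a minimal sketch (the parameter values are arbitrary):

def mmas_update(tau, delta_best, rho=0.1, tau_min=0.01, tau_max=5.0):
    # Evaporation plus deposit from the best ant, then clamp to [tau_min, tau_max].
    new_tau = (1 - rho) * tau + delta_best
    return min(max(new_tau, tau_min), tau_max)

print(mmas_update(4.9, 0.8))    # 5.21 exceeds tau_max, so it is clamped to 5.0
print(mmas_update(0.011, 0.0))  # evaporates to 0.0099, clamped up to tau_min = 0.01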

Reference: Ant Colony Optimization. Artificial Ants as a Computational Intelligence Technique.

",33372,,2444,,1/15/2021 11:39,1/15/2021 11:39,,,,0,,,,CC BY-SA 4.0 17923,1,17924,,2/8/2020 11:23,,6,1042,"

This question can seem a little bit too broad, but I am wondering what are the current state-of-the-art works on meta reinforcement learning. Can you provide me with the current state-of-the-art in this field?

",33374,,2444,,2/8/2020 11:39,2/23/2021 10:19,What are the state-of-the-art meta-reinforcement learning methods?,,2,0,,,,CC BY-SA 4.0 17924,2,,17923,2/8/2020 11:30,,3,,"

One of the most recent papers on meta-RL is meta-Q-learning. This paper introduces Meta-Q-Learning (MQL), a new off-policy algorithm for meta-reinforcement learning (meta-RL). MQL builds upon three simple ideas.

  • Q-learning is competitive with state-of-the-art meta-RL algorithms if given access to a context variable that is a representation of the past trajectory.

  • Using a multi-task objective to maximize the average reward across the training tasks is an effective method to meta-train RL policies.

  • past data from the meta-training replay buffer can be recycled to adapt the policy on a new task using off-policy updates

Experiments on standard continuous-control benchmarks suggest that MQL compares favorably with state-of-the-art meta-RL algorithms.

I think that other references to other work on meta-RL are present in the experiments part of the MQL paper.

",32493,,2444,,2/23/2021 10:19,2/23/2021 10:19,,,,0,,,,CC BY-SA 4.0 17925,1,,,2/8/2020 14:23,,5,795,"

Given the standard illustrative feed-forward neural net model, with the dots as neurons and the lines as neuron-to-neuron connection, what part is the (unfold) LSTM cell (see picture)? Is it a neuron (a dot) or a layer?

",27777,,2444,,2/9/2020 1:16,2/11/2020 11:00,Is the LSTM component a neuron or a layer?,,1,0,,,,CC BY-SA 4.0 17926,1,,,2/8/2020 16:21,,2,122,"

If I had the weights of a certain number of ""parents"" that I wanted to crossbreed, and I used whatever method to pick out the ""best parents"" (I used a roulette wheel option, if that's any relevant), would I be doing this correctly?

For example, suppose I have picked the following two parents.

\begin{align} P_1 &= [0.5, -0.02, 0.4, 0.1, -0.9] \\ P_2 &= [0.42, 0.55, 0.18, -0.3, 0.12] \end{align}

When I'm iterating through each index (or gene) of the parents, I am selecting a weight from one parent only. I called this rate the ""cross-rate"", which in my case is $0.2$ (i.e. with $20$% chance, I will switch to choosing the other parents' weight).

So, using our example above, this is what would happen:

\begin{align} P_1 &=[\mathbf{0.5}, \mathbf{-0.02}, 0.4, 0.1, \mathbf{-0.9}] \\ P_2 &= [0.42, 0.55, \mathbf{0.18}, \mathbf{-0.3}, 0.12] \end{align}

So the child would be

$$C = [0.5, -0.02, 0.18, -0.3, -0.9]$$

I would choose $0.5$ from $P_1$, but for every time I choose a weight from $P_1$, there's a 20% chance that I actually choose the corresponding gene from $P_2$. But, for the first weight, I end up not landing on that 20% chance. So I move onto the second weight, $-0.02$. This time, we hit the 20% chance, so now we swap over. Our next weight is now from $P_2$, which is $0.18$. And so on, until we hit another 20% chance.

We keep doing this until we hit the end of the indexes ($P_1$ and $P_2$ have the same number of indexes, of course).

Is this the correct way to form a child from 2 parents? Is this the correct ""crossbreeding"" method when it comes to genetic algorithms?

",33379,,2444,,2/9/2020 14:40,2/9/2020 14:40,How does crossover work in a genetic algorithm?,,1,0,,,,CC BY-SA 4.0 17927,2,,17926,2/8/2020 17:19,,3,,"

As far as I know, there isn't a ""specified correct way"". The whole idea is that you want the population to converge and to increase the sampling rate in the more optimal-looking regions of the search space. What works best all depends upon your fitness landscape.

You could also crossover by doing something like

import random
crossover_point = random.randrange(len(parent_a))
child = parent_a[:crossover_point] + parent_b[crossover_point:]

Or you could have multiple crossover points, or more exotic types of crossover. As far as I know, the impact of the different crossover algorithms on your results shouldn't be that different.

Fitness function and how you select the fittest has a way bigger impact. You have understood the crossover step correctly, looking at your example.
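
For completeness, here is a minimal sketch along the lines of the per-gene scheme described in the question (the parent values and the 20% switch rate are just the example numbers; whether the switch applies to the current or to the next gene is a detail that should not matter much):

import random

def crossover(parent_a, parent_b, switch_rate=0.2):
    child = []
    source, other = parent_a, parent_b
    for i in range(len(parent_a)):
        child.append(source[i])
        # With probability switch_rate, the next gene is taken from the other parent.
        if random.random() < switch_rate:
            source, other = other, source
    return child

p1 = [0.5, -0.02, 0.4, 0.1, -0.9]
p2 = [0.42, 0.55, 0.18, -0.3, 0.12]
print(crossover(p1, p2))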

",30100,,2444,,2/9/2020 14:31,2/9/2020 14:31,,,,2,,,,CC BY-SA 4.0 17928,2,,8594,2/8/2020 17:39,,0,,"

Yes it's error prone, just like us humans. But just like in chess it is just a whole lot better at dealing with it. AI does not contain all the possible knowledge in the universe out of no where without interaction with the world and such it should need to make assumptions and test those assumptions making it prone to error.

If you're asking about current techniques within reinforcement learning, image detection,etc. Those techniques are error prone, just likes us humans. During training of those algorithms you have a trade off between how good it will generalize on new data and how correct it can answer your questions. (it will ""memorize"" the questions and not actually learn in the latter, called overfitting)

",30100,,,,,2/8/2020 17:39,,,,0,,,,CC BY-SA 4.0 17929,2,,17925,2/8/2020 17:56,,6,,"

The diagram you show works at least partially for describing both individual neurons and layers of those neurons.

However, the ""incoming"" data lines on the left represent all inputs under consideration, typically a vector of all inputs to the cell. That includes all data from current time steps (from input layer or earlier LSTM or time-distributed layers) - the line coming up into the LSTM cell - and the full output and cell state vectors from the whole LSTM layer on previous time step - the horizontal lines inside the cell starting on the left. Technically the top left input line could be read as either be a single neuron cell state value, or the full vector of cell state from the previous time step, depending on whether you were viewing the diagram as describing a single neuron in the layer, or the whole layer respectively.

If you visualise this cell connecting to itself over time steps, then the data lines both in and out must be whole layer vectors. The diagram is then best thought of as representing a whole LSTM layer, which is composed of various sub-layers which get combined, such as the forget gate layer (the leftmost yellow box).

Each yellow box in the diagram can be implemented very similar to a single layer of a simple feed forward NN, with its own weights and biases. So the forget gate can be implemented as

$$\mathbf{f}_t = \sigma(\mathbf{W}_f [\mathbf{x}_t , \mathbf{y}_{t-1}] + \mathbf{b}_f)$$

where $\mathbf{W}_f$ is the weight matrix of the forget gate, $\mathbf{b}_f$ is the bias vector, $\mathbf{x}_t$ is layer input on current time step, $\mathbf{y}_{t-1}$ is layer output from previous time step, and the comma $,$ is showing concatenation of those two (column) vectors into one large column vector.

If the input $\mathbf{x}$ is an $n$-dimensional vector, and output $\mathbf{y}$ is an $m$-dimensional vector, then:

  • $\mathbf{c}$, the cell state (top line), and all interim layer outputs are $m$-dimensional vectors
  • $\mathbf{W}_f$, the forget gate weights, and the three other internal weights matrices are $m \times (n+m)$ matrices
  • $\mathbf{b}_f$, the forget gate bias, and the three other internal bias vectors are $m$-dimensional vectors

The other gates are near identical (using their own weights and biases, plus the third cell candidate value uses tanh instead of logistic sigmoid), and have their own weight matrices etc. The whole thing is constructed a lot like four separate feed-forward layers, each receiving identical input, that are then combined using element-wise operations to perform the functions of the LSTM cell. There is nothing stopping you implementing each yellow box as a more sophisticated and deeper NN in its own right, but that seems relatively rare in practice, and it is more common to stack LSTM layers to achieve that kind of depth.
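
A minimal NumPy sketch of the forget gate computation above (the dimensions are arbitrary; the other three gates would be computed the same way with their own weights and biases):

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

n, m = 8, 4                          # input and output sizes
rng = np.random.default_rng(0)

W_f = rng.normal(size=(m, n + m))    # forget gate weights, m x (n + m)
b_f = np.zeros(m)                    # forget gate bias, m-dimensional

x_t = rng.normal(size=n)             # current input
y_prev = rng.normal(size=m)          # previous time step's layer output
c_prev = rng.normal(size=m)          # previous time step's cell state

f_t = sigmoid(W_f @ np.concatenate([x_t, y_prev]) + b_f)
c_partial = f_t * c_prev             # element-wise 'forgetting' of the old cell state
print(f_t.shape, c_partial.shape)    # both (4,)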

",1847,,1847,,2/11/2020 11:00,2/11/2020 11:00,,,,4,,,,CC BY-SA 4.0 17930,1,,,2/9/2020 10:01,,2,264,"

Are we able to use models like GPT-2 to smooth out/correct text? For instance if I have two paragraphs that need some text to make the transition easier to read, could this text be generated? And, could it find inconsistencies between the paragraphs and fix them?

As an example, imagine we're reordering some text so that we can apply the pyramid principle. What I'd like to do is reorder the sentences/paragraphs and still have a coherent story. The following three sentences, for instance, start with a statement and then have some facts to support it. What's missing is the story that joins them together; right now they're three independent sentences.

The strawberry is the best fruit based on its flavor profile, its coloring and texture and the nutritional profile.

Strawberries are very rich in antioxidants and plant compounds, which may have benefits for heart health and blood sugar control.

Strawberries have a long history and have been enjoyed since the Roman times.

Feel free to point me at things to read, I have not been able to find anything like this in my searches.

",33392,,-1,,6/17/2020 9:57,2/9/2020 14:45,Can we use GPT-2 to smooth out / correct text?,,0,3,,,,CC BY-SA 4.0 17931,2,,17916,2/9/2020 12:29,,0,,"

The answer is I was too tired to see I was using the same model for validation every epoch, instead of that epoch's model.

Write good code, and this won't happen to you.

",21645,,,,,2/9/2020 12:29,,,,0,,,,CC BY-SA 4.0 17932,1,,,2/9/2020 13:05,,5,111,"

I am working towards using RL to create an AI for a two-player, hidden-information, turn-based board game. I have just finished David Silver's RL course and Denny Britz's coding exercises, and so am relatively familiar with MC control, SARSA, Q-learning, etc. However, the course was focused on single-player, perfect-information games, and I haven't managed to find any examples similar to the type of game I have, and would like advice on how to proceed.

I am still unsure how self-play works, and how it relates to MCTS. For example, I don't know if this involves using the latest agent to play both sides, or playing an agent against older versions, or training multiple opposing agents simultaneously. Are there good examples (or repositories) for learning self-play and MCTS for two-player games?

",33397,,2444,,2/9/2020 14:28,2/9/2020 14:28,"How exactly does self-play work, and how does it relate to MCTS?",,0,0,,,,CC BY-SA 4.0 17935,1,,,2/9/2020 17:42,,4,214,"

I'm working on research in this sector where my supervisor wants to do canonicalization of name data using VAEs, but I don't think it's possible to do, and I don't know explicitly how to show it mathematically. I just know empirically that VAEs don't do well on discrete distributions of latents and observed variables (because, in order to do names, you need your latent to be the character at each index, and it can be any ASCII char, which can only be represented as a distribution). So the setup I'm using is a VAE with 3 autoencoders for the latents, one each for first, middle and last name, and all of them sample each character of their respective names from the Gumbel-Softmax distribution (a form of categorical that is differentiable, whose parameter is a categorical distribution). From what I've seen in the original paper, on the simple problem of MNIST digit image generation, the inference and generative network both did worse as the latent dimension increased, and as you can imagine the latent dimension of my problem is quite large. That's the only real argument I have for why this can't work. The other would have been that it's on a discrete distribution, but I solved that by using a Gumbel-Softmax dist instead.

This current setup isn't working at all, the name generations are total gibberish and it plateaus really early. Are there any mathematical intuitions or reasons that VAEs won't work on a problem like this?

As a note I've also tried semi-supervised VAEs and it didn't do much better. I even tried it for seq2seq of just first names given a first and it super failed as well and I'm talking like not even close to generation of names or the original input.

",30885,,,,,2/12/2020 0:08,Why can't VAE do sequence to sequence name generation?,,0,10,,,,CC BY-SA 4.0 17936,2,,17917,2/9/2020 20:04,,1,,"

Defining Path

You are right to be confused if your professors did not clarify (see warning at end). The term ""path"" can mean a few things:

""Concrete"" Path: Recall, a graph is a collection of vertices and edges. A path on the graph is then:

$$v_1\xrightarrow{e_1} v_2\xrightarrow{e_2} \cdots \xrightarrow{e_{n-1}} v_n$$

Where $v_i$ are vertices on the graph and the arrows denote the direction traversed over the undirected edges $e_j$. Observe, this definition explicitly defines some ""chain"" of vertices connected by specific edges.

""Abstract"" Path: Say there is some task that involves a serious of repeated actions but does not depend on specific objects. We can define this as a chain of composed functions:

$$f_n\circ\cdots \circ f_1$$

The idea, is that this definition is little more abstract. Though, there are many other ways one could define ""path.""

""Solution Path""

A Subtlety

So, in this case, is the solution path A -> B -> C, i.e. the order in which the nodes are visited? Or is the solution path A -> B -> A -> C? Basically saying that we go from A to B, but to go from B to C, we must go through A again.

BFS does not ""go through A again."" Considering you are using a queue based implementation, once A has been visited there will be no need to ""visit"" it again. That is, we have added all nodes we need to the queue and marked A as being seen. This leads into:

How to report the solution path of a search algorithm on a graph?

Since, these algorithms do not revisit nodes, what is most likely being asked of you is for you to report the order of the visits - this is what my professors wanted in my AI & Algorithms courses.
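
A minimal sketch of a queue-based BFS that records exactly this visit order, using the tiny 3-node graph from the question:

from collections import deque

def bfs_visit_order(graph, start):
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)                     # record the order in which nodes are visited
        for neighbour in sorted(graph[node]):  # alphabetical tie-breaking
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order

graph = {'A': ['B', 'C'], 'B': ['A'], 'C': ['A']}
print(bfs_visit_order(graph, 'A'))   # ['A', 'B', 'C']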

A Warning: I can only answer what my professors wanted. Please clarify this with your instructors.

",28343,,28343,,2/9/2020 20:12,2/9/2020 20:12,,,,0,,,,CC BY-SA 4.0 17937,1,17938,,2/9/2020 22:16,,3,2048,"

Is there any need to use a non-linear activation function (ReLU, LeakyReLU, Sigmoid, etc.) if the result of the convolution layer is passed through the sliding window max function, like max-pooling, which is non-linear itself? What about the average pooling?

",31988,,2444,,1/1/2022 10:05,1/1/2022 10:05,Is a non-linear activation function needed if we perform max-pooling after the convolution layer?,,1,0,,,,CC BY-SA 4.0 17938,2,,17937,2/9/2020 23:03,,5,,"

Let's first recapitulate why the function that calculates the maximum between two or more numbers, $z=\operatorname{max}(x_1, x_2)$, is not a linear function.

A linear function is defined as $y=f(x) = ax + b$, so $y$ linearly increases with $x$. Visually, $f$ corresponds to a straight line (or hyperplane, in the case of 2 or more input variables).

If $z$ does not correspond to such a straight line (or hyperplane), then it cannot be a linear function (by definition).

Let $x_1 = 1$ and let $x_2 \in [0, 2]$. Then $z=\operatorname{max}(x_1, x_2) = x_1$ for all $x_2 \in [0, 1]$. In other words, for the sub-range $x_2 \in [0, 1]$, the maximum between $x_1$ and $x_2$ is a constant function (a horizontal line at $x_1=1$). However, for the sub-range $x_2 \in [1, 2]$, $z$ correspond to $x_2$, that is, $z$ linearly increases with $x_2$. Given that max is not a linear function in a special case, it can't also be a linear function in general.

Here's a plot (computed with Wolfram Alpha) of the maximum between two numbers (so it is clearly a function of two variables, hence the plot is 3D).

Note that, in this plot, both variables, $x$ and $y$, can linearly increase, as opposed to having one of the variables fixed (which I used only to give you a simple and hopefully intuitive example that the maximum is not a linear function).

In the case of convolution networks, although max-pooling is a non-linear operation, it is primarily used to reduce the dimensionality of the input, so that to reduce overfitting and computation. In any case, max-pooling doesn't non-linearly transform the input element-wise.

The average function is a linear function because it linearly increases with the inputs. Here's a plot of the average between two numbers, which is clearly a hyperplane.

In the case of convolution networks, the average pooling is also used to reduce the dimensionality.

To answer your question more directly, the non-linearity is usually applied element-wise, but neither max-pooling nor average pooling can do that (even if you downsample with a $1 \times 1$ window, i.e. you do not downsample at all).

Nevertheless, you don't necessarily need a non-linear activation function after the convolution operation (if you use max-pooling), but the performance will be worse than if you use a non-linear activation, as reported in the paper Systematic evaluation of CNN advances on the ImageNet (figure 2).
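
A tiny numerical check of the non-linearity: a linear map $f$ would satisfy $f(a + b) = f(a) + f(b)$, which the maximum violates but the average satisfies:

import numpy as np

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

print(np.max(a) + np.max(b))    # 2.0
print(np.max(a + b))            # 1.0, so max(a + b) != max(a) + max(b)

# The average, on the other hand, is additive (and scales linearly):
print(np.mean(a) + np.mean(b))  # 1.0
print(np.mean(a + b))           # 1.0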

",2444,,2444,,2/9/2020 23:44,2/9/2020 23:44,,,,8,,,,CC BY-SA 4.0 17939,1,,,2/10/2020 0:19,,4,118,"

Are all RL algorithms based on the MDP? If not, could you give examples of some which aren't? I've looked elsewhere, but I haven't seen it explicitly said.

",27629,,2444,,2/10/2020 3:01,2/10/2020 3:01,Are there reinforcement learning algorithms not based on Markov decision processes?,,0,2,,,,CC BY-SA 4.0 17940,1,,,2/10/2020 5:07,,1,20,"

I'm trying to implement a neural network that is able to generate an image indicating territory occupation given a board state for GO (a strategy board game). Input images are 19x19x1 grayscale images, with white pixels indicating white pieces, black pixels indicating black pieces, and gray pixels indicating unoccupied areas. Output images are 19x19x1 grayscale images with white pixels indicating white territory, black pixels indicating black territory, and gray areas indicating unassigned territories. A sample input and desired output image is as follows:

The images are quite small, so just to give an overview of trends I noticed:

  • Pixels surrounded by pixels of opposite colors are 'captured' pieces and therefore part of opponent territory
  • Two 'eyes' or closed groups of pieces comprising at least two open intersections are invincible or confirmed territory

While I'm not looking for exact specifications of network layers etc., I was hoping I could be given some direction as to what type of network to use, and what it should comprise. Looking at MATLAB documentation, I've found info about semantic segmentation, and autoencoder networks but neither of these seem particularly helpful. I know the question is a little broad, but I just need some direction more than anything. This kind of image recognition problem is a first for me.

",33413,,,,,2/10/2020 5:07,Image-to-Image Regression for GO territory classification,,0,0,,,,CC BY-SA 4.0 17942,1,,,2/10/2020 7:37,,1,208,"

We are attempting to build an AI that manages to play the card game Wizard. So far we have a working network (based on YOLO object detection) that is able to detect which cards are played. When asked, it returns the color and rank of the cards on the table.

But now, when starting to build an agent for the actual training, I just can't figure out how to represent the states for this game.

In each round, each player gets the number of cards corresponding to the round (one card in round one, two in round two, and so on). Based on that, the players estimate how many tricks they will win in this round. At the end of the round, the players calculate their points with respect to their estimation.

So the agent has to estimate its future tricks and has to play depending on that strategy. How do I encode that into a form that a neural network can work with?

",30431,,,,,1/18/2023 2:08,How to represent a state in a card game environment? (Wizard),,1,0,,,,CC BY-SA 4.0 17946,1,,,2/10/2020 11:19,,2,20,"

Is there any existing solution for sign language to speech conversion on mobile devices? Can anyone suggest a workflow and tools so that I can implement such a solution for mobiles?

",33425,,,user9947,2/10/2020 13:07,2/10/2020 13:07,Sign Language to Speech conversion,,0,0,,,,CC BY-SA 4.0 17948,1,,,2/10/2020 16:49,,0,41,"

I'm working on data cleaning and I'm stuck. I have a data set with 3 columns: id, age, and weight.

Supposing I have an entry:

id:1 | age:3 (years)  | weight: 150 (kg)

How can we detect that the information is wrong, assuming I have a thousand lines?

And how can I correct it (using Python)?

Is there any function in Python that I can use or should I use machine learning techniques?

",32560,,28343,,2/11/2020 17:34,2/11/2020 17:34,How to automatically detect and correct false information in columnar data?,,1,0,,,,CC BY-SA 4.0 17949,1,,,2/10/2020 17:41,,1,53,"

In the GraphRNN paper, the authors only implement the algorithm up to a graph size of 2k nodes. Would this still work on much larger graphs (on the order of $10^7$)? Or would the computation just become too substantial?

",12983,,2444,,2/13/2020 2:14,2/13/2020 2:14,Can GraphRNN be used with very large graphs?,,0,0,,,,CC BY-SA 4.0 17950,1,17964,,2/10/2020 18:06,,1,67,"

I'm trying to understand the role of data augmentation and how it can affect the performance/accuracy of a deep model. My target application is fire detection (on video frames), with almost 15K positive and negative samples, and I was using the following data augmentation techniques. Does using ALL of the following always increase the performance? Or do we have to choose them smartly, given our target application?

rotation_range=20, width_shift_range=0.2, height_shift_range=0.2,zoom_range=0.2, horizontal_flip=True

When I think a bit more, fire is always straight up, so I think rotation or shift might in fact worsen the results, given that it makes the image sides stretch like this, which is irrelevant to fires in video frames. Same with rotation. So I think maybe I should only keep zoom_range=0.2, horizontal_flip=True and remove the first three. Because I see some false positives when we have a scene transition effect in videos.

Is my argument correct? Should I keep them or remove them?

",9053,,,,,2/11/2020 14:53,Choosing Data Augmentation smartly for different application,,1,0,,,,CC BY-SA 4.0 17951,2,,17948,2/10/2020 18:22,,2,,"

While I can see that there are some heuristics that can tell you whether an entry is 'weird', I don't see any way that you can correct this. Where would you get a correct value from?

I would perhaps start with a statistical analysis, looking at the distribution of values to get an idea of the state the data is in. From this you can then already see some values that wouldn't make sense (this depends very much on what data it is: census data will include toddlers, but credit card applications wouldn't).

Your best bet is to encode some constraints, such as a minimum/maximum range for each value, and perhaps some correspondences, such as a maximum weight for a certain age range. This you can do with simple conditional statements in Python.

You can then flag up those entries which don't look quite right. What you do with them depends on your application. You can filter them out, or manually change the values to something that is consistent with possible values.
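
A minimal sketch of such rule-based flagging with pandas (the thresholds and the crude age-to-weight rule below are made up; adjust them to whatever is plausible for your data):

import pandas as pd

df = pd.DataFrame({'id': [1, 2, 3],
                   'age': [3, 25, 130],
                   'weight': [150, 70, 65]})

# Hand-written plausibility rules (illustrative only).
max_weight_by_age = df['age'].clip(upper=18) * 10 + 20   # crude upper bound on weight

suspicious = ((df['age'] < 0) | (df['age'] > 120) |
              (df['weight'] <= 0) | (df['weight'] > max_weight_by_age))

print(df[suspicious])   # rows to review manually or filter out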

Machine Learning techniques would not be able to do anything better than that.

",2193,,,,,2/10/2020 18:22,,,,1,,,,CC BY-SA 4.0 17952,1,17985,,2/10/2020 18:26,,0,34,"

Given a pre-trained CNN model, I extract feature vector of 3450 reference images FV_R as follows:

FV_R = [       [-8.2, -52.2, 9.07, -1.1, -0.08, -9.1, ........, -4.11], 
               [7.8, -3.8, 6.4, -4.27, -2.2, -5.0, ............., 3.6], 
               [-1.2, -0.8, 49.3, 1.73, -1.74, -7.1, ..........., 2.41],
               [-1.2, -.8, 49.3, 0.6, -1.24, -1.04, .........., -2.06],
               .
               .
               .
               [-1.2, -.8, 49.3, 12.77. -2.2, -5.0, .........., -51.1]
       ]

and FV_Q for 1200 query images :

FV_Q = [       [-0.13, 2.6, -3.7, -0.5, -1.02, -0.6, ........, -0.11], 
               [0.3, -3.8, 6.4, -1.6, -2.2, -5.0, ............., 0.97], 
               [-6.4, -0.08, 8.0, 7.3, -8.07, -5.6, ..........., 0.01],
               [-6.09, -.8, 0.5, -8.9, -0.74, -0.08, .........., -8.9],
               .
               .
               .
               [-1.2, -.8, 49.3, 12.77. -2.2, -5.0, .........., -51.1]
       ]

The size info:

>>> FV_R.shape
(3450, 64896)

Query images:

>>> FV_Q.shape
(1200, 64896)

I would like to binarize the CNN feature vectors (descriptors) and calculate Hamming Distance. I am already aware of this answer to probably use np.count_nonzero(a!=b)(if a.shape == b.shape) but does anyone know a method to binarize a feature vector with different size?

Cheers,

",31312,,,,,2/12/2020 13:52,Binarize ConvNet Feature vector,,2,1,,6/2/2020 22:16,,CC BY-SA 4.0 17953,1,17962,,2/10/2020 21:01,,0,137,"

I would like to build an LSTM to predict the correct words order given a sentence. My dataset is composed of sentences, where each sentence has a variable number of words (each word is embedded). The dataset then is an array of matrices, where each matrix is an array of embedded words.

Now, I'm looking to implement it with Keras but I'm not sure how to fit the necessary parameters wanted by the LSTM layer in Keras, like timesteps and batch_size.

Reading on the web, I notice that timesteps is the length of the sequence, so in my case I believe that corresponds to the length of the sentence. But I want to train my LSTM with one sentence at a time, so would the batch_size be 1?

",33440,,26726,,2/11/2020 11:46,2/11/2020 11:46,LSTM implementation in KERAS,,1,1,,5/2/2020 22:14,,CC BY-SA 4.0 17954,2,,17952,2/10/2020 21:07,,0,,"

If you mean to binarize the vector such that all positive values become 1 and the rest become 0, then you can do this:

import numpy as np

bin_arr = np.zeros_like(FV_R)
bin_arr[FV_R > 0] = 1.

As an example,

In [7]: arr
Out[7]:
array([[ 0.15, -0.52],
       [ 1.  , -0.43]])

In [8]: bin_arr = np.zeros_like(arr)

In [9]: bin_arr[arr > 0] = 1

In [10]: bin_arr
Out[10]:
array([[1., 0.],
       [1., 0.]])
",19077,,,,,2/10/2020 21:07,,,,0,,,,CC BY-SA 4.0 17955,1,,,2/10/2020 22:43,,1,69,"

I am referring specifically to the disc defined by Kuznetsov and Mohri in https://arxiv.org/pdf/1803.05814.pdf

This is a kind of worst case path dependent generalization error. But what is the intuitive way of seeing why a worst case is needed? I am probably missing something or reading something incorrectly.

",23001,,2444,,2/10/2020 23:36,2/11/2020 12:18,Why does the discrepancy measure involve a supremum over the hypothesis space?,,1,0,,,,CC BY-SA 4.0 17956,1,17957,,2/10/2020 23:34,,1,84,"

Are there names for neural networks with a well-defined layer or neuron characteristics?

For example, a matrix that has the same number of rows and columns is called a square matrix.

Is there an equivalent for classifying different neural network structures. Specifically, I am interested if there is a name for a neural network with x number of layers, but each layer has the same number of neurons?

",32400,,2444,,2/11/2020 12:20,2/11/2020 12:20,Are there names for neural networks with a well-defined layer or neuron characteristics?,,1,0,,,,CC BY-SA 4.0 17957,2,,17956,2/11/2020 0:35,,2,,"

Neural networks (NNs) are usually classified into feed-forward (i.e. NNs with feedforward connections), recurrent (i.e. NNs with recurrent connections) and convolutional (i.e. NNs that perform a convolution or cross-correlation operation). The term multi-layer NN may also be used to refer to feed-forward neural networks or, in general, neural networks with multiple (hidden) layers. There are also the perceptrons, which do not have hidden layers (i.e. the inputs are directly connected to the outputs).

You may still divide neural networks into classifiers (i.e. the outputs and labels are discrete) and regressors (the outputs and labels are numerical). Furthermore, you may classify them into generative models (e.g. the VAE) or discriminative models.

By analogy with linear algebra concepts, each layer of a feedforward neural network (FFNN) can be seen as a linear operation followed by an element-wise application of a linear or non-linear activation function.

$$ \mathbf{o}^{l} = \sigma \left(\mathbf{a}^{l} \mathbf{W}^{l} + \mathbf{b}^{l}\right) $$

where $\sigma$ is the activation function, $\mathbf{a}^{l}$ the inputs to the layer $l$, $\mathbf{W}^{l}$ the parameters of the layer $l$ (similar to the parameters or coefficients in linear regression) and $\mathbf{b}^{l}$ the bias vector (a scalar bias for each neuron) of layer $l$. $\mathbf{o}^{l}$ will then be $\mathbf{a}^{l+1}$ (i.e. the input to the next layer).
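
As a rough illustration of this layer operation in plain NumPy (the dimensions are arbitrary examples, not tied to any particular library):

import numpy as np

def dense_layer(a, W, b, sigma=np.tanh):
    # a: inputs of shape (batch, in_dim), W: (in_dim, out_dim), b: (out_dim,)
    return sigma(a @ W + b)

a = np.random.randn(4, 3)   # 4 examples, 3 input features
W = np.random.randn(3, 5)   # a layer with 5 neurons
b = np.zeros(5)
o = dense_layer(a, W, b)    # shape (4, 5); becomes the input to the next layer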

A recurrent neural network (RNN) performs a slightly more complex operation.

$$ \mathbf{o}^{l}_t = \sigma \left(\mathbf{a}^{l}_t \mathbf{W}^{l} + \mathbf{o}^{l}_{t-1} \mathbf{R}^{l} + \mathbf{b}^{l}\right) $$

where, in addition to the matrix $\mathbf{W}^{l}$ associated with the feedforward connections, it also uses another matrix $\mathbf{R}^{l}$ associated with the recurrent connections (i.e. cyclic or loopy connections of the neurons), which is multiplied by the output of the same layer but at the previous time step. $\mathbf{o}^{l}_t$ may actually just be the state of the recurrent layer, which is then used to compute the actual output of the layer, but, for simplicity, you can ignore this. Furthermore, note that there are more complex recurrent architectures, but this is the basic idea.

A convolutional neural network (CNN) performs the convolution (or cross-correlation) operation. If you are familiar with signal processing, e.g. kernels, convolution, etc., then you can view a CNN as performing a convolution (or cross-correlation) operation. It's actually possible to view the convolution as a matrix multiplication operation, but the details can easily become cumbersome and tedious to explain in an answer. A CNN may also perform other types of operations (such as downsampling) and it can also be composed of a feedforward part (usually, the last layers of a CNN are feedforward layers), but a CNN is a CNN because it performs the convolution (or cross-correlation).

In all cases, the matrices do not necessarily take any particular form (e.g. they are not necessarily square matrices). The dimensionality of the matrices depends on the number of layers and connections in the network, which can vary depending on many factors (e.g. the need for recurrent connections because they are useful for sequence modeling). These matrices (along with the biases) are the learnable parameters of the networks, but the learned values of these matrices highly depend on your data, the way you initialize them before learning, the architecture of the network, the learning algorithm, etc.

In the case of the FFNN, if the previous layer $l-1$ has the same number of neurons as the current layer $l$, then $\mathbf{W}^{l}$ is a square matrix. AFAIK, there is no name for a neural network with the same number of neurons for each layer. It could be called a rectangular neural network (but this is a name I've just come up with).

To conclude, there are many different neural networks and taxonomies for neural networks, so it is impossible to list or discuss them all in an answer, but, nowadays, the subdivision into feedforward, recurrent and convolution is the most common and general. See e.g. the paper A Taxonomy for Neural Memory Networks if you are interested in a more detailed taxonomy for memory-based neural networks (e.g. recurrent neural networks).

",2444,,2444,,2/11/2020 12:12,2/11/2020 12:12,,,,0,,,,CC BY-SA 4.0 17958,2,,17955,2/11/2020 1:19,,1,,"

The formula $G=\mathbb{E}\left[ f(Z_{T+1}) \mid \mathbf{Z}_1^T\right] - \sum_{t=1}^Tq_t \mathbb{E}\left[ f(Z_t) \mid \mathbf{Z}_1^{t-1} \right]$ actually represents a set of values, one for each possible $f \in \mathcal{F}$. Therefore, $\text{disc}(\mathbf{q}) = \operatorname{sup}_{f \in \mathcal{F}} \left( \mathbb{E}\left[ f(Z_{T+1}) \mid \mathbf{Z}_1^T\right] - \sum_{t=1}^Tq_t \mathbb{E}\left[ f(Z_t) \mid \mathbf{Z}_1^{t-1} \right] \right)$ is the element, not necessarily in that set, that is greater than or equal to all elements of the set, but smaller than or equal to any other element that is greater than or equal to all elements of the set. In other words, $\text{disc}(\mathbf{q})$ is an upper bound on the discrepancy, and it is the smallest possible upper bound. See also this answer for more details about the supremum and the relationship between the supremum and upper bounds.

So, unless I am wrong, this is not necessarily a worst-case analysis (but I am not even sure how one would define the worst-case analysis in this context), but we are looking for the smallest upper bound. In the context of algorithms, the worst-case analysis is with respect to the input (i.e. the worst-case input), but you can have upper or lower bounds for worst and best-case scenarios. See, for example, this answer that illustrates this. Why do we want an upper bound? Because we can be certain that we won't have a generalization error worse than it. You can't do this with e.g. a lower bound.

",2444,,2444,,2/11/2020 12:18,2/11/2020 12:18,,,,8,,,,CC BY-SA 4.0 17959,1,,,2/11/2020 1:53,,1,37,"

I am looking for specific references describing guidance principles around the interplay between IP (intellectual property) and Artificial Intelligence algorithms. For example, Company A has a large dataset and Company B has advanced algorithmic capabilities (assume near-AI). How might Company A protect itself in a joint partnership with company B? Apologies if AI SE is not the place for this - any suggestions for the right site?

",33442,,,,,2/11/2020 1:53,Intellectual property in the age of Industry 4.0,,0,1,,,,CC BY-SA 4.0 17962,2,,17953,2/11/2020 10:42,,0,,"

As you read, in Keras the input dimensions for an LSTM layer are (batch_size, timesteps, input_dim), where:

  • batch_size: the number of samples (sentences in your case) over which the loss is computed before running a gradient-descent step, i.e. the number of sentences to train on before the loss is computed and the model is optimized.
  • timesteps: the length of the sequence, in your case the length of the sentence.
  • input_dim: the number of features of your data at each time step (the dimensionality of your embedded words).

The nice thing about Keras is that you can train with a specific batch size, say batch_size=16, which helps the model's convergence (because the loss is averaged over the predictions for the 16 sentences) and boosts speed (since the weights are updated only once every 16 sentences).

But then you can infer (or do the predictions) with batch_size=1, meaning, one sentence at a time.

So, if you specifically want to train with one sentence at a time, use batch_size=1. But if you want to take advantage of mini-batches, then use batch_size=16 or higher for training and batch_size=1 for inference.


BONUS: Keras allows the batch size of a model to change dynamically. A dynamic dimension in Keras has the value None, which is why, when you call model.summary(), you can see (batch_size, timesteps, input_dim) = (None, 40, 100). This None is what allows a different batch_size in training and inference; a minimal sketch is given below.
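
For example (assuming TensorFlow 2 / tf.keras; the 40 timesteps, 100 features and layer sizes are placeholder values matching the (None, 40, 100) example above, and X_train, y_train, one_sentence are hypothetical arrays):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

model = Sequential([
    LSTM(64, input_shape=(40, 100)),   # the batch dimension is left as None
    Dense(100, activation='softmax')
])
model.summary()   # the batch dimension shows up as None in every output shape

# Train with mini-batches, then predict one sentence at a time:
# model.fit(X_train, y_train, batch_size=16, epochs=10)
# model.predict(one_sentence)   # one_sentence has shape (1, 40, 100)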

",26882,,,,,2/11/2020 10:42,,,,3,,,,CC BY-SA 4.0 17963,1,,,2/11/2020 14:48,,4,437,"

There is this video on pythonprogramming.net that trains a network on the MNIST handwriting dataset. At ~9:15, the author explains that the data should be normalized.

The normalization is done with

x_train = tf.keras.utils.normalize(x_train, axis=1)
x_test = tf.keras.utils.normalize(x_test, axis=1)

The explanation is that values in a range of 0 ... 1 make it easier for a network to learn. That might make sense, if we consider sigmoid functions, which would otherwise map almost all values to 1.

I could also understand that we want black to be pure black, so we want to adjust any offset in black values. Also, we want white to be pure white and potentially stretch the data to reach the upper limit.

However, I think the kind of normalization applied in this case is incorrect. The image before was:

After the normalization it is

As we can see, some pixels which were black before have become grey now. Columns with few black pixels before result in black pixels. Columns with many black pixels before result in lighter grey pixels.

This can be confirmed by applying the normalization on a different axis:

Now, rows with few black pixels before result in black pixels. Rows with many black pixels result in lighter grey pixels.

Is normalization used the right way in this tutorial? If so, why? If not, would my normalization be correct?

What I expected was a per pixel mapping from e.g. [3 ... 253] (RGB values) to [0.0 ... 1.0]. In Python code, I think this should do:

import numpy as np
import imageio
image = imageio.imread(""sample.png"")
image = (image - np.min(image))/np.ptp(image)
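
For reference, a quick way to compare the two behaviours (I'm assuming TensorFlow 2 here; as far as I understand, tf.keras.utils.normalize performs an L2 normalization along the given axis):

import numpy as np
import tensorflow as tf

x = np.random.randint(0, 256, size=(1, 28, 28)).astype('float32')

per_column = tf.keras.utils.normalize(x, axis=1)   # each column is scaled to unit L2 norm
per_pixel = (x - x.min()) / np.ptp(x)              # values mapped to [0.0 ... 1.0]

print(per_column.max(), per_pixel.max())           # per_pixel.max() is exactly 1.0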
",31627,,,,,10/24/2022 5:03,Does this tutorial use normalization the right way?,,2,2,,,,CC BY-SA 4.0 17964,2,,17950,2/11/2020 14:53,,1,,"

In fact, choosing smartly the values of the image augmentation can help the performance of your system. Where I work we developed an object detector for cars. We had the following image augmentation parameters:

  • Aspect ratio distortion: it changed the cars' dimensions
  • Additive noise: it blurred the image
  • Colorspace change: it changed the cars' colors
  • Saturation change: brightness
  • Rotation: it rotated the image axes
  • Flipping (horizontally): mirror effect
  • Zooming: zoom in/out of the image

At first we just combined all the effects (within a certain range of change per effect) randomly in each augmented image. But then we did a study for our use case and we found that our model performed better with no aspect ratio deformation and no change in color space. After giving it a bit of thought, it really made sense for our use case, because it is a detector for a very specific type of car.

However, although the improvements accomplished by optimizing the parameters of image augmentation boosted the performance of the model, the improvements were not huge, so in the end it was not really worth the effort. To give some numbers, it improved our custom test metrics by about $5\%$, while we were expecting something around $10\%$; from that moment on we kept the tuned image augmentation parameters but stopped thinking too much about them.

So, in conclusion, does it help? Yes. How much? Well, that depends on the use case. For us, it turned out it did not improve things as much as we wanted.

Hope this helps :)

",26882,,,,,2/11/2020 14:53,,,,2,,,,CC BY-SA 4.0 17965,1,17967,,2/11/2020 16:20,,4,343,"

I am trying to understand the different reward functions modelled in a reinforcement learning problem. I want to be able to know how the temporal credit assignment problem, (where the reward is observed only after many sequences of actions, and hence no immediate rewards observed) can be mitigated.

From reading the DQN paper, I am not able to work out how the immediate reward is modelled in the target $Q_{target}(s,a; \theta) = r_s + \gamma \max_{a'}Q(s',a'; \theta)$. What is $r_s$ in the case where the score has not changed? In other words, what immediate reward is being modelled for temporal credit assignment problems in Atari games?

If $r_s$ is indeed 0 until the score changes, would it affect the accuracy of the DQN? It seems like the update equation would not be accurate if you do not even know the immediate reward of taking that action.

What are some of the current methods used to solve the temporal credit assignment problem ?

Also, I can't seem to find many papers that address the temporal credit assignment problem

",32780,,32780,,2/11/2020 17:02,2/11/2020 20:52,Immediate reward received in Atari game using DQN,,1,1,,,,CC BY-SA 4.0 17966,1,17970,,2/11/2020 19:35,,1,116,"

I often read about the so-called end-to-end AI (or analytics) projects, but I couldn't find a definition of it. What is an end-to-end AI project? Can someone explain what is meant/expected when someone asks you ""Have you already implemented an end-to-end AI project""?

",33465,,2444,,2/13/2020 2:04,2/13/2020 2:04,What is an end-to-end AI project?,,1,2,,,,CC BY-SA 4.0 17967,2,,17965,2/11/2020 20:20,,2,,"

To answer your questions in order:

What is $r_s$ in the case where the score has not changed?

It is $0$.

What immediate reward is being modelled for temporal credit assignment problems in Atari games?

Rewards can be re-modelled to aid speed of learning. This is called ""reward shaping"", and is typically done by domain experts who can adjust numbers to reward known good intermediate states and actions.

For DQN Atari, this was not done. Instead, the researchers performed a reward normalisation/scaling so that games which use a moderate scoring system in single digits could be handled by the same neural network approximator as games that hand out thousands of points at a go.
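
In the DQN paper this took the form of clipping the change in game score; roughly like the following (raw_reward here is just a placeholder for the score change at one time step):

import numpy as np

# positive rewards are capped at +1, negative rewards at -1, zero is left unchanged
clipped_reward = np.clip(raw_reward, -1.0, 1.0)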

Using sparse rewards is standard practice in reinforcement learning, and the credit assignment problem is solved to some degree by all reinforcement learning methods. Essentially, the value functions work as a prediction mechanism, in theory, whatever the reward sparsity, so if they are correct, they can be used to drive the policy whether the next reward is 1, 50 or 1000 time steps away. The value-backup updates in everything from Value Iteration, through Monte Carlo Control, SARSA and Q-Learning, to Actor-Critic all back up values to states/actions seen in earlier time steps. This value backup is a basic mechanism that addresses credit assignment in principle. The credit assignment problem is then a matter of degree and difficulty in different environments, such that sometimes it is readily solved, and other times it is a major hurdle.

In the case of video games, especially older arcade games, it is often not a very hard part of the problem. The games are designed to reward human players by incrementing scores frequently, with very many sub-goals already within the game. In fact this is one of the attractions of video games as toy environments for developing new algorithms.

For example, the classic Space Invaders does not simply score +1 for surviving a wave of enemies, but adds points for every player missile that hits. Although the score does not increment on every frame, the reward sparsity is relatively low for games like this, and simple single-step Q-learning with experience replay can solve the credit assignment problem readily for that environment (experience replay does help a little with credit assignment). This is what was demonstrated in the original DQN Atari paper; there were no extra allowances made for reward sparsity.

If $r_s$ is indeed 0 until the score changes, would it affect the accuracy of the DQN?

Not directly, the DQN predicts future expected rewards and can in principle account for delay and sparsity in its estimates. However, if this becomes very sparse you get two problems:

  • Discovering the positive rewards within the environment may take a long time, and may require more advanced techniques, such as reward shaping to encourage searching behaviour (a small negative reward per time step) or ""curiosity""

  • Credit assignment becomes much harder, as the number of possible combinations that could have contributed to success can grow exponentially with the temporal distance between rewards. Resolving which ones are important, especially early in a trajectory leading to a reward, can take many samples.

What are some of the current methods used to solve the temporal credit assignment problem ?

As noted above, this is core problem in RL, so there are many approaches in the literature. Some standard approaches are:

  • Background planning as used by DynaQ, or experience replay. This re-evaluates states and actions seen before whilst using latest estimates, and can backup values to where important decisions are made within a trajectory. Prioritised experience replay helps even more by focusing on updates that make the most difference to the current estimates.

  • TD($\lambda$) with eligibility traces. Eligibility traces are intuitively a credit-assignment mechanism, they track state features that were active recently and multiply value updates based on the trace vector. Again, this causes state or state action values to backup to earlier parts of trajectories faster.

  • Reward shaping as discussed above. In some research settings this might be seen as ""cheating"" - for instance, when facing a standardised environment test for developing algorithms, adding in domain knowledge to help the agent just demonstrates that the core algorithm is weaker than claimed. However, when the challenge presented is to solve an environment with the best agent possible, it is fine to use any knowledge (or of course not to use RL at all). A minimal sketch of potential-based shaping is given after this list.
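
The sketch mentioned above: potential-based shaping (Ng et al., 1999) is the usual way to shape rewards without changing the optimal policy. The potential function here is something you choose for your domain; it is not prescribed by RL itself:

def shaped_reward(reward, state, next_state, gamma, potential):
    # potential(s) is a domain-specific estimate of how promising state s is;
    # shaping of the form gamma * potential(s') - potential(s) preserves the optimal policy
    return reward + gamma * potential(next_state) - potential(state)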

",1847,,1847,,2/11/2020 20:52,2/11/2020 20:52,,,,3,,,,CC BY-SA 4.0 17968,1,,,2/11/2020 20:44,,1,75,"

I am developing a CNN model to recognize 24 hand-signs of American Sign Language. I have 2500 Images/hand-sign. The data split is:
Training = 1250 Images/hand-sign
Validation = 625 Images/hand-sign
Testing = 625 Images/hand-sign

How should I proceed with training the model?
1. Should I develop a model starting from fewer hand-signs (like 5) and then increase them gradually?
2. Should I start models from scratch or use transfer learning (VGG16 or other)?
Applying data augmentation, I did some tests with VGG16 and added a dense classifier at the end and received these accuracies:
Train: 0.87610877
Validation: 0.8867307
Test: 0.96533334

Accuracy and Loss Graph

Test parameters:
NUM_CLASSES = 5
EPOCHS = 50
STEPS_PER_EPOCH = 125
VALIDATION_STEPS = 75
TEST_STEPS = 75
Framework = Keras, Tensorflow
OPTIMIZER = adam

Model:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
    MaxPooling2D(pool_size=(2,2)),

    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2,2)),

    Conv2D(128, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2,2)),

    Conv2D(256, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2,2)),

    Conv2D(512, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2,2)),

    Flatten(),
    Dense(512, activation='relu'),

    Dense(NUM_CLASSES, activation='softmax')
])

If I try images with slightly different background and predict the classes (predict_classes()), I do not get accurate results. Any suggestions on how to make the model robust?

",33467,,,,,10/29/2022 18:05,Hand-Signs Recognition using Deep Learning Convolutional Neural Networks,,1,1,,,,CC BY-SA 4.0 17969,1,,,2/11/2020 21:56,,1,261,"

I'm suffering from a significant brain fart while trying to get my head around how batch size affects overall model size, e.g. for CNNs. Does it serve as an additional dimension for all the weight tensors?

Considering:

  • VGG16 model
  • batch_size of 16
  • image size of 224x224x3
  • conv_1 being the initial 1x1 convolution with stride 1 and 3:64 channels mapping

The input will be a tensor of shape [16, 224, 224, 3]. Will the output of the convolution layer be [16, 224, 224, 64], and will all the weights therefore have an additional 'batch size' dimension, thus imposing a linear increase of model size with respect to the batch size?

",32431,,2444,,2/13/2020 2:11,2/13/2020 2:11,How does batch size affect model size?,,0,1,,,,CC BY-SA 4.0 17970,2,,17966,2/11/2020 22:19,,1,,"

It's a pretty grey area.

I think they are usually talking about the full spectrum of the effort in taking an ""AI project"" all the way from the drawing board to production. This could easily be anything from a Stock Market Price Prediction model, to deploying a customer service chatbot/Virtual Assistant. I find that people who ask such questions only care about the fact that ""AI"" is being used to solve a problem and actually is working in the real world.

The effort on the model side would be the standard data science/machine learning process of obtaining data, feature engineering, training and refining your model, and eventually deploying it somewhere where it works. It could be in a web browser, in an app, behind an API endpoint, etc.

For the customer service chatbot, it could easily be largely a pure programming project utilising a cloud provider's resources for the conversational capabilities, but as long as it falls under the bracket of ""AI"" and has been completed and deployed to production/an environment where it works, it's end-to-end.

Disclaimer: Depending on the technical knowledge of who is asking, they might care that you have done some real machine learning work in the development process, and so the example of the bot wouldn't fly.

Hope this helps somehow.

",31023,,,,,2/11/2020 22:19,,,,0,,,,CC BY-SA 4.0 17971,1,17981,,2/11/2020 22:36,,3,3698,"

I am reading through Artificial Intelligence: Modern Approach and it states that the space complexity of the GBFS (tree version) is $\mathcal{O}(b^m)$.

While reading, at some points I found GBFS similar to DFS: it expands a node's children and then follows the best one according to the heuristic function, without expanding the rest as BFS would. Perceiving this as similar to what depth-first search does, I understand that the worst-case time complexity is $\mathcal{O}(b^m)$. But I don't understand the space complexity.

Shouldn't it be the same as DFS, $\mathcal{O}(bm)$, since it will only be expanding $b \cdot m$ nodes along one path during the search?

",31006,,2444,,2/11/2020 23:22,11/14/2020 16:50,Why is the space-complexity of greedy best-first search is $\mathcal{O}(b^m)$?,,2,0,,,,CC BY-SA 4.0 17975,1,26955,,2/12/2020 1:00,,7,1871,"

I trained a ResNet20 on Cifar10 and obtained the following learning curves.

From the figures, I see at epoch 52, my validation loss is 0.323 (the lowest), and my validation accuracy is 89.7%.

On the other hand, at the end of the training (epoch 120), my validation loss is 0.413 and my validation accuracy is 91.3% (the highest).

Say I'd like to deploy this model on some real-world application. Should I prefer the snapshotted model at epoch 52, the one with lowest validation loss, or the model obtained at the end of training, the one with highest validation accuracy?

",32621,,,user9947,2/12/2020 1:45,3/23/2021 5:56,Should I prefer the model with the lowest validation loss or the highest validation accuracy to deploy?,,2,0,,,,CC BY-SA 4.0 17977,1,17991,,2/12/2020 1:53,,5,381,"

I've been using PyTorch to do research for a while and it seems to be quite easy to implement new things with. Also, it is easy to learn and I didn't have any problem with following other researchers code so far.

However, I wonder whether TensorFlow has any advantage over PyTorch. The only advantage I know is, it's slightly faster than PyTorch.

In general, does TensorFlow have any concrete advantages over PyTorch apart from performance, in particular for research purposes?

",32621,,2444,,2/13/2020 2:08,2/13/2020 2:08,Is there a reason to use TensorFlow over PyTorch for research purposes?,,1,2,,,,CC BY-SA 4.0 17978,1,,,2/12/2020 5:34,,0,31,"

I am trying to understand some of the different approaches used to overcome sparse rewards in a reinforcement learning setting for a research project. In particular, I have looked at curiosity-driven learning, where an agent learns an intrinsic reward function based on the uncertainty of the next state that it will end up in when it takes action $a$ in state $s$. The greater the uncertainty of the next state, the higher the reward. This incentivizes agents to be more exploratory, and it is used particularly in games where a huge number of steps is needed before the agent reaches the terminal state, at which point it is finally rewarded.

The curiosity driven approach as demonstrated in this paper: https://pathak22.github.io/noreward-rl/ is able to learn faster than if a 0 rewards were used for each state, action.

To my knowledge, using different reward functions will affect the optimal policy obtained. Would curiosity-driven learning therefore lead to a different policy compared to when a 0 reward is used? Assume that, with 0 immediate rewards, the agent is still able to derive a policy that reaches the goal state. Which of these 2 policies will be closer to optimal?

",32780,,,,,2/12/2020 5:34,Curiosity Driven Learning affect optimal policy,,0,2,,,,CC BY-SA 4.0 17979,1,,,2/12/2020 9:18,,1,77,"

I want to build an application which takes live audio from a source (mic), filters out the noise (unwanted sounds like chatter or traffic noise) and feeds the result into an application for further processing.

I want to apply a machine learning framework (TensorFlow, Keras) and deep learning neural networks (e.g. RNNs) to filter the noise from the audio, in a real-time environment. My inference device will be an Nvidia Jetson device. Please guide me on where I can find the related documentation and how to proceed with the project.

If a solution is available on any website, please share the link.

",33477,,33372,,2/13/2020 8:52,2/13/2020 8:52,Noise Cancellation on live audio stream,,0,1,,,,CC BY-SA 4.0 17980,1,,,2/12/2020 10:07,,2,89,"

I have been trying to implement this paper and I am very much intrigued. I am working on a medical image problem where I have to segment very small specimens on Whole Slide Images (gigapixel resolution). Therefore my dataset is highly unbalanced and I am having a high false positives rate.

I did my research and found that paper that describes the implementation of Tversky Loss and Focal Tversky Loss. It also describes some modifications to the network architecture which I am postponing for now.

I implemented the loss (PyTorch) and ran some experiments with several alpha/beta combinations. The results are easy to understand: a higher alpha results in higher precision, and a lower beta increases the recall and pushes the precision down. Basically, what this loss is doing is only balancing my recall and precision. That is good, as I can solve my false-positives issue, but since this is a medical problem, good recall is mandatory. In the paper, the results show an improvement in both precision and recall, and I cannot understand how that is possible or why I cannot replicate it. I am just weighting false positives and penalizing them, which does not seem enough to improve the model overall.

Regards

",21237,,,,,2/12/2020 10:07,Tversky Loss paper implementation: Recall/Precision do not improve as stated,,0,0,,,,CC BY-SA 4.0 17981,2,,17971,2/12/2020 12:51,,1,,"

After spending some time on the problem, I concluded that it is due to the fact that we need to store the heuristic function evaluations for all generated nodes during the traversal. So, one might claim that the space complexity is that of storing all of these nodes, which is simply $\mathcal{O}(b^m)$. I hope this is correct.

Also, the variant with a space complexity of $\mathcal{O}(bm)$ is called recursive best-first search, which is the one most similar to the DFS-like implementation I described in the question.

",31006,,31006,,2/12/2020 13:58,2/12/2020 13:58,,,,1,,,,CC BY-SA 4.0 17982,2,,12059,2/12/2020 13:14,,3,,"

OpenAI have a post on that: https://openai.com/blog/openai-five/

They use a myriad of rollout workers that collect data for 60 seconds and push that data to a GPU cluster where gradients are computed for batches of 4096 observations which are then averaged.

PPO is actually designed to allow this kind of parallelisation as it uses trajectory segments with a fixed size of $T$ to collect data, e.g. 60 seconds for OpenAI Five, where $T$ is supposed to be ""much less than the episode length"" (p.5 of PPO paper).

",22161,,,,,2/12/2020 13:14,,,,1,,,,CC BY-SA 4.0 17984,2,,17968,2/12/2020 13:45,,0,,"

I feel your problem might not be with the model itself but with the dataset. If you only have $2500$ images for $24$ labels (hand-signs) that gets you roughly $104$ images per label. This is very little for the models I train (~$80K$ images in the smallest of cases). In my view you got a really decent accuracy at validation and test time for the size of your dataset.

But answering your questions:

  • Starting from a few labels and extending is usually helpful when your model is too deep and suffers from convergence problems. Your model is simple enough not to suffer those problems, so I would go with learning all $24$ labels at once.
  • Transfer learning can help a lot at reducing training times. For example, if you start from a VGG classifier that detects hands there is a good chance that the weights of your convolutional layers are already almost configured for your use case.

Generally speaking, the easiest way to increase the accuracy of your model is to use one of these 2 methods:

  • Increase the dataset: I am not sure if it is possible in your case, maybe you can use image augmentation (rotation, zooming in/out, changes in the color space...).
  • Increase the depth of your model (provided that you have a big dataset).

If you have already fulfilled those items, then you can go and make changes to the model architecture or loss function. Looking at your model, off the top of my head I would try adding Batch Normalization after the convolutional layers; a minimal sketch of what I mean is below.
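
Something along these lines (assuming tf.keras; the input shape, filter counts and dense sizes here are placeholder values, not your exact configuration):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, BatchNormalization, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    BatchNormalization(),   # normalizes the conv activations over each mini-batch
    MaxPooling2D(pool_size=(2, 2)),

    Conv2D(64, (3, 3), activation='relu'),
    BatchNormalization(),
    MaxPooling2D(pool_size=(2, 2)),

    Flatten(),
    Dense(128, activation='relu'),
    Dense(5, activation='softmax')
])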

Hope this helps a bit :)

",26882,,26882,,2/12/2020 13:51,2/12/2020 13:51,,,,2,,,,CC BY-SA 4.0 17985,2,,17952,2/12/2020 13:52,,0,,"

I fixed the problem as follows:

import numpy as np

def binarize(FV):
    # map positive activations to 1 and non-positive activations to 0
    return np.where(FV > 0, 1, 0).astype(int)

def Hamming_Distance(ref, query):
    # binarize both sets, then count, for every query/reference pair,
    # the number of positions where the binary codes differ
    b_rf, b_qu = binarize(ref), binarize(query)
    H = np.count_nonzero(b_qu[:, None, :] != b_rf, axis=2)
    return H
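
For example, to get the indices of the top-5 most similar reference images for each query (using the FV_R and FV_Q arrays described in the question):

H = Hamming_Distance(FV_R, FV_Q)      # shape: (number of queries, number of references)
top5 = np.argsort(H, axis=1)[:, :5]   # smallest Hamming distance = most similar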
",31312,,,,,2/12/2020 13:52,,,,0,,,,CC BY-SA 4.0 17990,2,,17975,2/12/2020 16:42,,1,,"

In highly imbalanced classification problems, the highest accuracy can often be achieved simply by assigning the majority class to all observations. This is why learning algorithms do not maximize classification accuracy but minimize a loss function. Fundamentally, loss functions capture how much you ""lose"" when there is a difference between the statistic you want to estimate and the estimate itself. The appropriate loss function is not given by nature, but is provided by you.

With this in mind, it is not quite accurate to say that the highest accuracy is reached after 120 epochs: it is merely the maximum accuracy achieved by the algorithm so far. Unless you run the algorithm for longer, there is no way to say if this is even a local maximum. For example, assigning every observation to the majority class may well achieve a higher accuracy than that achieved at 120 epochs. The only significance of the 120 epochs is therefore that that is how long you ran the algorithm for.

Given these considerations, it makes far more sense to stop at around 50 epochs, when your loss function is minimized.

",33479,,,,,2/12/2020 16:42,,,,4,,,,CC BY-SA 4.0 17991,2,,17977,2/12/2020 16:44,,4,,"

In the past, I have used TensorFlow (1 and 2), Keras and PyTorch, so I will give an answer based on my experience. Currently, I use TF 2 and Keras (the version shipped with TF 2).

In my (but not only) opinion, TF 1 is really ugly and painful, given that it involves sessions, placeholders and, in general, you need to define the computational graph before executing anything (even the simplest programs). With TF 2, you don't need sessions and placeholders anymore, and this is a very big improvement because you don't need to think that you first need to define a computational graph and then feed it with the input. Essentially, in TF 2, you can write TF code and you can almost be sure that your TF 2 code will look like any other Python code.

TF and Keras have some bugs, but I think that also PyTorch must have some bugs (even though I don't remember having encountered them).

TF comes with TensorBoard. You can also easily use TensorBoard from Keras. About one year ago, I had written a blog post about visualization tools for PyTorch. At the time, the tools didn't seem as developed as TensorBoard, but there were already a few options.

Debugging in TF 2 isn't still perfect, but I would say that it has improved with respect to TF 1 (mainly because there is less boilerplate code). To debug in TF 2.0, I mainly use tf.print (which behaves similarly to Python's print function), but there may be more advanced tools (e.g. I think there's a TF debugging plugin for PyCharm), even though I've never used them. TensorBoard could also be used as a debugging tool (for example, to understand how your models are learning, etc).

The simplicity of PyTorch is comparable to that of Keras. TF now comes with an integrated version of Keras, which is optimized for TF, so TF code can look as simple as Keras code.

In general, my impression is that, if you need to, TF provides more flexibility, but it's been a while since I used PyTorch. Furthermore, with TF 2.0, the code looks simpler, more consistent and the framework is better organized. On the other hand, PyTorch may still be easier to learn (because it doesn't have the past of TF, i.e. there are still many TF code that uses sessions, etc.).

Both TF and PyTorch are supported by big companies (respectively, Google and Facebook), but TF is more mature and there are functionalities/libraries that exist for TF that still do not exist for PyTorch (AFAIK). I doubt that there is a functionality for PyTorch that doesn't also exist for TF or Keras.

",2444,,2444,,2/12/2020 16:50,2/12/2020 16:50,,,,0,,,,CC BY-SA 4.0 17992,1,,,2/12/2020 16:54,,2,265,"

I'm looking for an implementation that allows me to generate text based on a pre-trained model (e.g. GPT-2).

An example would be gpt-2-keyword-generation (click here for demo). As the author notes, there is

[...] no explicit mathematical/theoetical basis behind the keywords aside from the typical debiasing of the text [...]

Hence my question: Are there more sophisticated ways of keyword-based text generation or at least any other alternatives?

Thank you

",33476,,-1,,6/17/2020 9:57,2/12/2020 16:54,Pretrained Models for Keyword-Based Text Generation,,0,0,,,,CC BY-SA 4.0 17993,1,,,2/12/2020 16:56,,2,159,"

I am a newbie to machine learning. I have an LSTM model that predicts the next output n+1

time 1, params 1, output 1
time 2, params 2, output 2
time 3, params 3, output 3
...
time n, params n, output n

time n+1 --> predicts output n+1

Here the times are all in minutes, so I can predict the next output in the series, which is going to be the next minute. My question is: what if I want to predict the next 5 minutes? One solution was to throw out all the data except in steps of 5 minutes, so the next step would automatically be 5 minutes away. This is clearly a waste of all the data that I have gathered. Can you please recommend what I can do about prediction on different time scales?

",33488,,33488,,2/12/2020 18:31,7/7/2021 1:05,LSTM model on different time scales,,1,0,,,,CC BY-SA 4.0 17994,2,,17898,2/12/2020 17:12,,1,,"

Are you talking about (X_train, y_train) and (X_test, y_test)? If yes, then X represents the data (features) and y represents the labels of that data. That's why you get a pair when you divide the dataset into training and test data.

",33489,,,,,2/12/2020 17:12,,,,0,,,,CC BY-SA 4.0 17995,2,,17828,2/12/2020 20:15,,0,,"

This is inspired by (1) machine translation, where source phrases are first encoded into a feature vector and then decoded by another RNN, and (2) automatic captioning, where a picture is also encoded into a feature vector before being decoded by the RNN that follows. A proper structure for the problem above may be the inverse of these, where the review is encoded by an RNN into a vector, and a DNN follows to decode the vector into #1 - #3.

",33266,,,,,2/12/2020 20:15,,,,0,,,,CC BY-SA 4.0 17996,2,,17993,2/12/2020 20:18,,1,,"

What you could do is just try to bypass the rest of the network after the LSTM if it isn't the 5th minute. Depending on your framework, this can be easy or a painstaking task compared to the alternative. The alternative is just running the network and throwing away any output that isn't for the next 5th minute. While the latter may seem inefficient, it's rather easy to implement and takes just a bit more execution time. If execution time isn't a problem, it's the easiest to get started with; if it doesn't work for your task, you can always change it.

",30100,,,,,2/12/2020 20:18,,,,1,,,,CC BY-SA 4.0 17997,2,,15524,2/12/2020 20:30,,3,,"

I'm going to post another guess to this question - it won't be a complete answer, but hopefully it'll provide some direction towards finding a more legitimate answer.

The feed-forward networks as suggested by Vaswani are very reminiscent of sparse autoencoders, where the hidden dimension is much greater than the input/output dimensions.

If you aren't familiar with sparse autoencoders, this is a little counterintuitive - WTF would you have a larger hidden dimension?

The intuition borrows from infinitely wide neural networks. If you have an infinitely wide neural network, you basically have a Gaussian process and can sample any function you'd like. So the wider the network you have, the more approximation power you have. In the case of inputs, this is a matter of learning a dictionary. If you have only discrete inputs, this hidden layer will be capped at $O(2^N)$ width, where $N$ is the maximum number of bits it takes to represent the input (which would boil down to approximating a lookup table).

Of course, these aren't trivial to implement in practice. These layers are bound to be bloated with identifiability issues. Common approaches include $L_1$ regularization. I'm guessing that the convolutional layers + dropout are just another attempt to deal with these sorts of identifiability issues. Furthermore, the FFN is an attempt to learn an arbitrary mapping for individual words (you can think of mapping words to synonyms for instance).

These are all guesses though - more intuition is welcome.

",33309,,,,,2/12/2020 20:30,,,,1,,,,CC BY-SA 4.0 17999,1,,,2/13/2020 5:10,,-1,255,"

In section 3 of the paper The Limits of Correctness (1985) Brian Cantwell Smith writes

When you design and build a computer system, you first formulate a model of the problem you want it to solve, and then construct the computer program in its terms.

He then writes

computers have a special dependence on these models: you write an explicit description of the model down inside the computer, in the form of a set of rules or what are called representations - essentially linguistic formulae encoding, in the terms of the model, the facts and data thought to be relevant to the system's behavior. It is with respect to these representations that computer systems work. In fact, that's really what computers are (and how they differ from other machines): they run by manipulating representations, and representations are always formulated in terms of models. This can all be summarized in a slogan: no computation without representation.

And then he says

Models have to ignore things exactly because they view the world at a level of abstraction

He then writes in section 7

The systems that land airplanes are hybrids - combinations of computers and people - exactly because the unforeseeable happens, and because what happens is in part the result of human action, requiring human interpretation

As quoted above, computers depend on models, which are abstractions (i.e. they ignore a lot of details), which are written inside the computer. Therefore, the true world cannot really be encoded into an algorithm, but only an abstraction and thus simplification of the world can.

So, will AI always depend on models and thus approximations? Can it get rid of or overcome this limitation?

",21644,,2444,,3/10/2020 23:29,2/7/2021 22:20,Will AI always depend on models and thus approximations?,,2,3,,,,CC BY-SA 4.0 18000,1,,,2/13/2020 6:04,,4,182,"

I am working on a research project about the different reward functions being used in the RL domain. I have read up on Inverse Reinforcement Learning (IRL) and Reward Shaping (RS). I would like to clarify some doubts that I have with the 2 concepts.

In the case of IRL, the goal is to find a reward function based on the policy that experts take. I have read that recovering the reward function that experts were trying to optimize, and then finding an optimal policy from those expert demonstrations has a possibility of resulting in a better policy (e.g. apprenticeship learning). Why does it lead to a better policy?

",32780,,2444,,8/3/2020 23:03,8/3/2020 23:03,Can recovering a reward function using IRL lead to better policies compared to reward shaping?,,1,5,,,,CC BY-SA 4.0 18001,1,,,2/13/2020 6:09,,1,31,"

I have been working on industrial data that is fed live, and I want to explore a few models that might suit this best.

The data are KPI data from the manufacturing industry.

",33356,,,,,2/13/2020 6:09,What models will you suggest to use in Industrial Anomaly Detection and Predictive analysis on live streamed data?,,0,0,,,,CC BY-SA 4.0 18002,1,,,2/13/2020 6:44,,2,120,"

I have a many to one LSTM model for multiclass classification. For reference, this is the architecture of the model

    model.add(LSTM(147, input_shape=(1000, 147)))
    model.add(Dense(5, activation='softmax'))
    model.compile(loss='categorical_crossentropy',
                  optimizer='rmsprop',
                  metrics=['accuracy'])

The model is trained on 5 types of sequences and is able to effectively classify each sequence I feed into it with high accuracy. Now my new objective is to combine these sequences together to form a new sequence.
For example,
I denote the elements from class '1' with the sequence:
[1,1,1,1,1,1,1]
So when I input the above sequence into the LSTM model for prediction, it classifies the sequence as class '1' with an accuracy of 0.99

And I denote the elements from class '2' with the sequence:
[2,2,2]
Likewise, for the above sequence, the LSTM model will classify the sequence as class '2' with an accuracy of 0.99

Now I combine these sequences together and feed the result into the model:
New sequence: [1,1,1,1,1,1,1,2,2,2]
However, the model does not seem to be sensitive to the presence of the class '2' sequence and still classifies the sequence as class '1' with an accuracy of 0.99.

How do I make the model more ""sensitive"", meaning that I would expect the LSTM model to still maybe predict class '1' but with a drop in accuracy? Or is the LSTM incapable of detecting the inclusion of class '2' sequences?

Thanks.

",33499,,,,,2/13/2020 6:44,How do I make my LSTM model more sensitive to changes in the sequence?,,0,0,,,,CC BY-SA 4.0 18004,1,,,2/13/2020 7:46,,1,55,"

So I want to create an RL-agent for a two-player board game. I want to use a simple DQN for the first player (my RL-agent). Then, what kind of algorithm should I use for the second player (my RL-agent's enemy)?

I have three options in my mind:

  • a random agent that acts randomly
  • a rule-based agent that acts by some defined rules
  • another RL-agent

I have tried the first and second options. When I use a random agent as the enemy, the first player gets a high score and wins easily. But I think it's not really smart, as the enemy is just a random agent. When I use the second option, the first player has difficulty training itself, as it can't win any games.

What should I choose, and why?

",16565,,16565,,2/17/2020 0:58,2/17/2020 0:58,What kind of enemy to train a good RL-agents,,0,3,,,,CC BY-SA 4.0 18006,1,18007,,2/13/2020 13:21,,3,488,"

In the paper Neural Programmer-Interpreters, the authors use the teacher forcing technique, but what exactly is it?

",2444,,2444,,6/16/2021 8:58,6/16/2021 8:58,What is teacher forcing?,,1,0,,,,CC BY-SA 4.0 18007,2,,18006,2/13/2020 13:21,,2,,"

Consider the task of sequence prediction, so you want to predict the next element of a sequence $e_t$ given the previous elements of this sequence $e_{t-1}, e_{t-2}, \dots, e_{1} = e_{t-1:1}$. Teacher forcing is about forcing the predictions to be based on correct histories (i.e. the correct sequence of past elements) rather than predicted history (which may not be correct). To be more concrete, let $\hat{e}_{i}$ denote the $i$th predicted element of the sequence and let $e_{i}$ be the corresponding ground-truth. Then, if you use teacher forcing, to predict $e_{t}$, rather than using $\hat{e}_{t-1:1}$, you would use $e_{t-1:1}$.

Recall that supervised learning can also be thought of as learning with a teacher. Hence the expression ""teacher forcing"", i.e. you force the predictions to be based on correct histories (the teacher's labels).

Of course, intuitively, teacher forcing should help to stabilize training, given that the predictions are not based on noisy or wrong histories.
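
To make the mechanism concrete, here is a minimal framework-agnostic sketch; model_step is a placeholder for whatever one-step prediction your model performs:

def predict_sequence(model_step, ground_truth, teacher_forcing=True):
    # model_step(history) returns the predicted next element given the past elements
    history, predictions = [], []
    for t in range(len(ground_truth)):
        pred = model_step(history)
        predictions.append(pred)
        # with teacher forcing, the *correct* element is fed back as history;
        # without it, the model's own (possibly wrong) prediction is fed back
        history.append(ground_truth[t] if teacher_forcing else pred)
    return predictions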

See also the blog post What is Teacher Forcing for Recurrent Neural Networks? by Jason Brownlee.

",2444,,2444,,2/13/2020 13:28,2/13/2020 13:28,,,,0,,,,CC BY-SA 4.0 18008,1,,,2/13/2020 13:48,,4,122,"

In my work, I am given the source code of a module. From this module I generate an AST, whose size depends on the size of the module (e.g. more source code -> bigger AST). I want to train a neural network model which will learn the general structure of a module and be able to rate (on a scale of 0 to 1) how ""good"" a module is structure-wise (whether requires are at the beginning, followed by local functions, variables and finally returns). Now, I have learnt that convolutional NNs are quite convenient for this, but the problem I can't seem to solve is that they require a fixed-size input, which I can't produce. If I add zero-padding, then the outcome will be skewed and the accuracy will suffer. Is there a clear solution to this problem?

",33509,,33509,,2/13/2020 14:19,3/12/2021 23:04,How to solve the problem of variable-sized AST as input for a (convolutional) neural network model?,,1,0,,,,CC BY-SA 4.0 18009,1,,,2/13/2020 13:59,,1,86,"

Currently, I'm working on an educational project (an implementation of the AlphaZero approach for different types of board games). My biggest concern at the moment is how to encode the board before inputting it into the neural network.

For example, how can this be done for Kalah game?

Kalah board has two straight rows of six pits, and two large score-houses. Each pit contains 6 seeds (so there are 72 seeds)

",33025,,,,,2/13/2020 13:59,How to encode board before input into the neural net?,,0,1,,,,CC BY-SA 4.0 18010,1,,,2/13/2020 15:13,,1,507,"

I am not asking what the concatenate layer does in general, in terms of the mathematical operation. Rather, at the feature level, what significance does it provide? Does it help remove false negatives, or does it prevent over-fitting? Please give references to papers on this topic.

",28417,,,,,2/13/2020 15:13,What is the use of concatenate layer in CNN?,,0,2,,,,CC BY-SA 4.0 18013,1,,,2/13/2020 18:53,,2,80,"

Is there a machine learning framework/library for any of the esoteric languages, such as the ones listed here ?

",32621,,,,,2/13/2020 18:53,Machine learning frameworks for esoteric languages,,0,4,,2/3/2021 16:45,,CC BY-SA 4.0 18014,1,,,2/13/2020 19:12,,1,22,"

If someone wants to build a mobile app for body measurement prediction, what are the necessary things to start with? I need a detailed explanation of this.

",33520,,,,,2/13/2020 19:12,Details on body measurements prediction,,0,0,,,,CC BY-SA 4.0 18015,1,18016,,2/14/2020 1:45,,1,117,"

Firstly, some context. I have been reading and watching videos on the subject for around 3 years, but I am still very much a beginner in machine learning and artificial intelligence. That said, I might not know what I'm even talking about here. So bear with me.

If I understand correctly, each node (neuron) in a neural network is represented by some floating-point number between 0 and 1, and the nodes are arranged in layers and have corresponding weights. Right? A color, on the other hand, has RGB values, CMYK values, and HSV values that are all interrelated with each other.

My question is would there be any benefit to having each node represented by a color instead of a single floating point number?

My thinking is that each neuron could select any of the values (r, g, b, c, m, y, k, h, s, or v) contained within the color in some meaningful way, while the Alpha value could possibly represent the weight associated with that neuron.

Thoughts? Would it not work like that? Could you use it to have multiple congruent networks running on 3 different channels? Again, would there be any benefit to doing this than just using a single number? Or would it over-complicate (or even break) the network? Would it be useless?

Although I've also dabbled in Unity3D (which is how I got the idea in the first place), I'm too much of a beginner to know how to even begin an attempt at testing this myself.

",33523,,,,,2/14/2020 5:01,"In a neural network, can colors be used for neurons in place of floating points and would there be any benefit in doing so?",,1,6,,,,CC BY-SA 4.0 18016,2,,18015,2/14/2020 5:01,,2,,"

To answer your question, I'll just simplify it and assume you are representing the activation of a neuron (the value it produces) by an RGB value. So a tuple with 3 values ranging from 0 to 1.

It's important to remember that every single machine learning model could be computed by a human with pen and paper, using just numbers. So, keeping that in mind, this neural network would essentially just be a combination of multiplying tuples of size 3 in a certain order (ignoring activation functions).

I will also assume that each RGB value has its own unique set of incoming weights, like the following:

This essentially amounts to just having 3 unique neurons. There's no difference:

All I did there was get rid of the circle. The calculations are no different. If you changed the calculation, say values are shared between the 3 RGB nodes, then you now just have 3 duplicate nodes. In all cases, you can just represent the equivalent calculation as a standard neural network.

However, in the same vein, in a convolutional neural network, if you create conv layers with an output of 3 channels, you can visualise their outputs as a colour image (assuming you are using an activation function that outputs numbers between 0 and 1), which can be cool to look at.

",26726,,,,,2/14/2020 5:01,,,,1,,,,CC BY-SA 4.0 18017,1,18025,,2/14/2020 7:16,,7,589,"

In reinforcement learning, the state-action value function seems to be used more than the state value function. Why is it so?

",31749,,2444,,2/14/2020 12:22,2/14/2020 13:14,Why is the state-action value function used more than the state value function?,,1,3,,,,CC BY-SA 4.0 18018,1,,,2/14/2020 7:21,,1,40,"

As the title states, I have a set of images, and I want to process an input image and select the image from the set that ""looks"" the most like the input image.

I know I've seen something similar where the code could guess whose face was in a picture; I guess I want something like that, but for general images.

Sorry if this is a stupid question, but any suggestions or points at resources would be greatly appreciated.

",33529,,,,,2/14/2020 12:44,"I need to select the image from a predefined dataset that are the closest to the input, is this possible or do I even need to use ML/AI?",,1,0,,,,CC BY-SA 4.0 18019,1,18027,,2/14/2020 8:23,,3,1273,"

The KL divergence is defined as

$$D_{KL}=\sum_i p(x_i)log\left(\frac{p(x_i)}{q(x_i)}\right)$$

Why does $D_{KL}$ not satisfy the triangle inequality?

Also, can't you make it satisfy the triangle inequality by taking the absolute value of the information at every point?

",30885,,2444,,6/4/2022 18:26,6/4/2022 18:26,Why does the KL divergence not satisfy the triangle inequality?,,1,0,,,,CC BY-SA 4.0 18020,1,18023,,2/14/2020 9:03,,2,407,"

I started learning about Q table from this blog post Introduction to reinforcement learning and OpenAI Gym, by Justin Francis, which has a line as below -

After so many episodes, the algorithm will converge and determine the optimal action for every state using the Q table, ensuring the highest possible reward. We now consider the environment problem solved.

The Q table was updated by Q-learning formula Q[state,action] += alpha * (reward + np.max(Q[state2]) - Q[state,action])

I ran 100000 episodes of which I got the following -

Episode 99250 Total Reward: 9
Episode 99300 Total Reward: 7
Episode 99350 Total Reward: 6
Episode 99400 Total Reward: 14
Episode 99450 Total Reward: 10
Episode 99500 Total Reward: 10
Episode 99550 Total Reward: 9
Episode 99600 Total Reward: 14
Episode 99650 Total Reward: 5
Episode 99700 Total Reward: 7
Episode 99750 Total Reward: 3
Episode 99800 Total Reward: 5

I don't know what the highest reward is. It does not look like it has converged. Yet, the following graph

shows a trend in convergence but it was plotted for a larger scale.

What sequence of actions should be taken when the game is reset() but the "learned" Q table is available? How do we determine that sequence, and what is the reward in that case?

",21983,,2444,,12/13/2021 9:59,10/28/2022 12:04,How do we know that the algorithm has converged and ensures the highest possible reward?,,2,0,,,,CC BY-SA 4.0 18023,2,,18020,2/14/2020 11:01,,2,,"

Your Q-learning update expression looks correct. The Total Reward will not be the same at the end of each episode because the starting position of the taxi is different in each episode, so the number of steps necessary to reach the final destination will be different in each episode. The graph that you posted shows that the algorithm converged after a small number of episodes, so 100000 episodes might be too much. Since the environment is simple, try manually calculating the optimal policy for some specific starting position and then see if the algorithm does the same sequence of actions.
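
To make that concrete, here is a minimal sketch of how you could roll out the learned Q table greedily after env.reset() (assuming the Taxi environment and the Q array learned in the blog post, and the classic Gym API where reset() returns the state and step() returns a 4-tuple):

import gym
import numpy as np

env = gym.make("Taxi-v3")   # or whichever Taxi version the blog post uses

# Q is assumed to be the table learned beforehand, shape (n_states, n_actions)
state = env.reset()
done = False
total_reward = 0
while not done:
    action = np.argmax(Q[state])                    # greedy action from the learned Q table
    state, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)                                 # the return of the greedy rollout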

",20339,,,,,2/14/2020 11:01,,,,0,,,,CC BY-SA 4.0 18024,2,,18018,2/14/2020 12:44,,2,,"

You will only need to use ML or AI if, for example, (a) the dataset is very big, or (b) it is difficult to extract the meaning or value of an image with simple rules (e.g. a group of ants, photos of stars in the night sky), and so on.

Below is the way I think it can be done with deep learning (a CNN):

  1. Train the model: arrange the images in the dataset into clusters (the number of clusters can be decided based on the criticality of the application; the more critical the application, the higher the number of clusters should be).

  2. Predict on the input: then predict the nearest image in the dataset to the provided image.

I recently listened to a podcast in which a problem is approached similarly. The podcast is about YouTube predictions, and the result is to provide the closest video from the YouTube videos (here, the dataset). Here is the podcast: https://open.spotify.com/episode/6PcMtVXR58i7iu8kLH40Wd?si=feyu0LjESoiE479xHkaqSA

",32856,,,,,2/14/2020 12:44,,,,0,,,,CC BY-SA 4.0 18025,2,,18017,2/14/2020 13:14,,7,,"

We are ultimately interested in getting an optimal policy, that is, the optimal sequence of actions to reach the final goal. State values on their own don't provide that: they tell you the expected return from a specific state onward, but they don't tell you which action to take. In order to derive the optimal action in a specific state, you would have to simulate all possible actions one step ahead and then pick the action that leads you to the state with the highest state value. That is often inconvenient or impossible. State-action values connect the expected return with actions, not states, so you don't need to simulate all actions one step ahead and see where you end up; you only need to pick the action that has the highest value, because you know that is the best action to take in that state.
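
To illustrate the difference, here is a rough sketch (the transition_model and reward_fn helpers are hypothetical, only meant to show that acting greedily from V needs a model, while acting greedily from Q does not):

import numpy as np

# With only a state-value function V you need a model to act greedily:
# simulate every action one step ahead and evaluate the successor states.
def greedy_action_from_V(state, actions, transition_model, reward_fn, V, gamma=0.99):
    def lookahead(a):
        # transition_model(state, a) is assumed to yield (next_state, probability) pairs
        return sum(p * (reward_fn(state, a, s2) + gamma * V[s2])
                   for s2, p in transition_model(state, a))
    return max(actions, key=lookahead)

# With a state-action value function Q, no model is needed:
def greedy_action_from_Q(state, Q):
    return np.argmax(Q[state])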

",20339,,,,,2/14/2020 13:14,,,,2,,,,CC BY-SA 4.0 18026,1,,,2/14/2020 14:01,,0,227,"

I have two lists of feature vectors calculated from pre-trained CNN for image retrieval task:

Query: FV_Q and Reference FV_R.

>>> FV_R.shape
(3450, 128)

>>> FV_Q.shape
(3450, 128)

I am a little confused about the difference between the concepts of exhaustive nearest neighbor search and k-nearest neighbor search.

In Python, I use from sklearn.neighbors import KDTree to extract the top k = 5 most similar images from the reference database, given the query image.
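
Roughly, this is what I am doing (a sketch; I assume the brute-force variant below is what would be called an exhaustive search):

from sklearn.neighbors import KDTree, NearestNeighbors

# k-nearest neighbour search with a KD-tree index
tree = KDTree(FV_R)
dist, ind = tree.query(FV_Q, k=5)   # indices of the 5 closest reference vectors per query

# exhaustive (brute-force) search: compare every query against every reference vector
nn = NearestNeighbors(n_neighbors=5, algorithm='brute').fit(FV_R)
dist_bf, ind_bf = nn.kneighbors(FV_Q)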

Can somebody explain if there might be any similarities/differences between these two concepts?

Am I making a mistake somewhere in my feature vector comparison?

",31312,,2444,,6/2/2020 22:12,10/21/2022 6:04,What is the difference between exhaustive nearest neighbor search and k-nearest neighbour search?,,1,0,,,,CC BY-SA 4.0 18027,2,,18019,2/14/2020 14:07,,3,,"

To prove that the KL divergence does not satisfy the triangle inequality, you just need a counterexample.

Definitions

KL divergence

Let's first recapitulate the definition of KL divergence for discrete probability distributions $p$ and $q$ (for simplicity).

$$ D_{\text{KL}}(p\parallel q) = \sum_{x\in {\mathcal {X}}} p(x)\log \left( \frac {p(x)}{q(x)} \right) $$

Triangle inequality

Let's also recap the definition of the triangle inequality for distance $d$, which can be expressed as

$$ d(p,q) \leq d(p, r) + d(r, q), $$ where $p, q$ and $r$ are probability distributions.

Proof

For simplicity, let the sample space be $\mathcal{X} = \{0, 1 \}$. Consider now the following three discrete probability distributions.

$$ p(x)= \begin{cases} \frac{1}{2}& \text{if } x = 0\\ \frac{1}{2}& \text{if } x = 1 \end{cases} $$

$$ r(x)= \begin{cases} \frac{1}{4}& \text{if } x = 0\\ \frac{3}{4}& \text{if } x = 1 \end{cases} $$

$$ q(x)= \begin{cases} \frac{1}{10}& \text{if } x = 0\\ \frac{9}{10}& \text{if } x = 1 \end{cases} $$

Let's now apply the definition of the triangle inequality and the KL divergence for these discrete probability distributions. So, we want to show the following inequality does not hold.

$$ D_{\text{KL}}(p\parallel q) \leq D_{\text{KL}}(p\parallel r) + D_{\text{KL}}(r\parallel q) $$

which can be expanded to

\begin{align} \sum_{x\in \{0, 1 \}} p(x)\log \left( \frac {p(x)}{q(x)} \right) &\leq \sum_{x\in \{0, 1 \} } p(x)\log \left( \frac {p(x)}{r(x)} \right) + \sum_{x\in \{0, 1 \}} r(x)\log \left( \frac {r(x)}{q(x)} \right) \end{align}

which roughly corresponds to

\begin{align} 0.51 &\leq 0.14 + 0.09 \\ 0.51 &\leq 0.24 \end{align} which is clearly false.

Given that the triangle inequality fails in at least one case, it does not hold in general, so the KL divergence does not satisfy the triangle inequality.

$$\tag*{$\blacksquare$}$$

For reproducibility, I have used the following Python (3.7) program to compute the KL divergences.

from math import log

one = (1 / 2 * log((1 / 2) / (1 / 10))) + (1 / 2 * log((1 / 2) / (9 / 10)))    # D_KL(p || q)
two = (1 / 2 * log((1 / 2) / (1 / 4))) + (1 / 2 * log((1 / 2) / (3 / 4)))      # D_KL(p || r)
three = (1 / 4 * log((1 / 4) / (1 / 10))) + (3 / 4 * log((3 / 4) / (9 / 10)))  # D_KL(r || q)

print(one)
print(two)
print(three)
print(two + three)
print(one <= two + three)
",2444,,2444,,2/14/2020 23:49,2/14/2020 23:49,,,,2,,,,CC BY-SA 4.0 18028,1,,,2/14/2020 15:33,,1,25,"

I'm currently working on my uni project, but I have no idea where to start for a user-tailored recommendation system on the web. Where can I find a good guide on it, preferably for languages like PHP and JavaScript?

",33537,,,,,3/17/2020 18:02,Where can I find good tutorials on user tailored recommendation system for web?,,0,1,,,,CC BY-SA 4.0 18030,1,,,2/14/2020 19:47,,2,141,"

I'm trying to really understand how multi-layer perceptrons work. I want to prove mathematically that MLPs can classify handwritten digits. The only thing I really have is that each perceptron can operate exactly like a logical operand, which obviously can classify things. With backpropagation and linear classification, it's obvious that, if a certain pattern exists, it'll activate the correct gates in order to classify correctly, but that is not a mathematical proof.

",30885,,2444,,2/14/2020 23:52,4/10/2021 14:04,Is there a mathematical theory behind why MLP can classify handwritten digits?,,1,1,,,,CC BY-SA 4.0 18032,1,,,2/15/2020 2:24,,1,27,"

I am currently working on implementing a lip reading system in Python using machine learning and image processing. Currently, two initial implementations have provided promising results, albeit not perfect: the LipNet model and the Google Cloud AutoML Vision API. Before I dive head on into these two different systems, I was wondering if anyone on here had been exposed to any other alternatives for lip reading or any other facial detection issue, and their experiences with these models.

Any information will be greatly appreciated.

",33547,,,,,2/15/2020 2:24,What are some of the best methods in detecting facial movement using state-of-the-art machine learning models?,,0,0,,,,CC BY-SA 4.0 18033,2,,18030,2/15/2020 3:37,,-1,,"

The approximation theorem says you can approximate anything. But this is kind of meaningless, insofar as you can also run KNN and get an arbitrarily close approximation of your training data.

I don't think it is possible to prove that CNNs correctly extract features. Or if it is, something involving VC theory is probably the best you can do.

",32390,,,,,2/15/2020 3:37,,,,2,,,,CC BY-SA 4.0 18035,2,,16973,2/15/2020 9:54,,1,,"

Temporal depth is a third parameter of time-series data. For example, if you have a video clip of 25 frames and, when training a model, you feed in the first five frames with respect to time, then your temporal depth will be 5.

",31910,,,,,2/15/2020 9:54,,,,0,,,,CC BY-SA 4.0 18036,1,,,2/15/2020 14:12,,3,77,"

I have been reading literature on reinforcement learning in healthcare. I am slightly confused between the policy evaluation for both SARSA and Q-learning.

To my knowledge, I believe that SARSA is used for policy evaluation, to find the Q values of following an already existing policy. This is usually the clinician's policy.

Q-learning, on the other hand, seeks to find another policy, different from the clinician's, such that the policy learned always maximises the Q-values at each state. This leads to a better treatment policy.

Suppose the Q values are learned under both policies. If the Q values for Q-learning are higher than those for SARSA, can we say that the policy learned from Q-learning is better than the clinician's?

EDIT

From my readings, I have found out that computing the state-value function is usually how policies are compared. I believe that new data has to be generated to apply the policy learned from Q-learning and to compute the state-value function for following this learnt policy.

Why can't the Q values learnt from SARSA and Q-learning be used for the comparison instead? Also, for model-free approaches (e.g. a continuous state space), how is policy evaluation usually carried out?

",32780,,32780,,2/15/2020 14:32,2/15/2020 14:32,Evaluation a policy learned using Q - learning,,0,0,,,,CC BY-SA 4.0 18038,1,,,2/15/2020 18:47,,1,28,"

Can I apply experience replay to a naive actor-critic directly? Should it work?

I have tried that, but unfortunately it didn't work.

",33555,,33555,,2/16/2020 1:46,2/16/2020 1:46,Can I apply experience on naive actor critic directly? Should it work?,,0,3,,,,CC BY-SA 4.0 18039,1,,,2/15/2020 18:52,,1,215,"

I have been trying to implement the ACER algorithm for continuous action spaces in reinforcement learning. The paper for the algorithm can be found here:

I have implemented parts of the algorithm, but I have encountered some roadblocks that I have not been able to figure out.

The following is the pseudo-code provided in the paper:

Here is what I have implemented so far:

states = tf.convert_to_tensor(trajectory.state)
actions = tf.squeeze(tf.convert_to_tensor(trajectory.action), axis=-1)
rewards = tf.convert_to_tensor(trajectory.reward)
dones = tf.convert_to_tensor(trajectory.done)

explore_means, state_values, action_values = actor_critic(states, actions)

average_means, *_ = brain.average_actor_critic(states)

k = len(trajectory.state)
d = env.action_space.shape[0]

# Policies
explore_policies = k*[None]
behavior_policies = k*[None]
average_policies = k*[None]

# Tracking
explore_actions = np.zeros([k, d])
importance_weights = np.zeros([k, 1])
explore_importance_weights = np.zeros([k, 1])
truncation_parameters = np.zeros([k, 1])

for i in range(k):

    behavior_policy = tfd.MultivariateNormalDiag(
        loc=trajectory.statistics[i],
        scale_diag=tf.ones(d)*POLICY_STD
    ) 

    explore_policy = tfd.MultivariateNormalDiag(
        loc=explore_means[i],
        scale_diag=tf.ones(d)*POLICY_STD
    ) 

    average_policy = tfd.MultivariateNormalDiag(
        loc=average_means[i],
        scale_diag=tf.ones(d)*POLICY_STD
    )

    explore_action = explore_policy.sample()

    importance_weight = explore_policy.prob(actions[i]) / behavior_policy.prob(actions[i])
    explore_importance_weight = explore_policy.prob(explore_action) / behavior_policy.prob(explore_action)

    truncation_parameter = min(1, (importance_weight)**d)


    behavior_policies[i] = behavior_policy
    explore_policies[i] = explore_policy
    average_policies[i] = average_policy
    explore_actions[i] = explore_action
    importance_weights[i] = importance_weight
    explore_importance_weights[i] = explore_importance_weight
    truncation_parameters[i] = truncation_parameter


explore_actions = tf.convert_to_tensor(explore_actions, dtype=tf.float32)
importance_weights = tf.convert_to_tensor(importance_weights, dtype=tf.float32)
explore_importance_weights = tf.convert_to_tensor(explore_importance_weights, dtype=tf.float32)
truncation_parameters = tf.convert_to_tensor(truncation_parameters, dtype=tf.float32)


q_ret = state_values[-1] if not dones[-1] else tf.zeros(1)
q_opc = tf.identity(q_ret)

for i in reversed(range(k - 1)):

    q_ret = rewards[i] + GAMMA*q_ret
    q_opc = rewards[i] + GAMMA*q_opc


    # Compute quantities needed for trust region updating
    c = TRUNCATION_PARAMETER

    with tf.GradientTape(persistent=True) as tape:

        tape.watch(explore_policies[-2].loc)

        log_prob = explore_policies[-2].log_prob(actions[-2])
        explore_log_prob = explore_policies[-2].log_prob(explore_actions[-2])

        kl_div = tfp.distributions.kl_divergence(average_policies[-2], explore_policies[-2])


    lp_grad = tape.gradient(log_prob, explore_policies[-2].loc)    
    elp_grad = tape.gradient(explore_log_prob, explore_policies[-2].loc) 
    kld_grad = tape.gradient(kl_div, explore_policies[-2].loc) 


    term1 = min(c, importance_weights[-2])*lp_grad*(q_opc - state_values[-2])
    term2 = tf.nn.relu(1 - (c / explore_importance_weights[-2]))*(action_values[-2] - state_values[-2])*elp_grad

    g = term1 + term2

So the goal here was to implement it exactly the way they have it in the paper and then afterwards optimize it for doing batches of trajectories. For now, however, it is sufficient for the purposes of learning.

My confusion comes from the use of differentials in this algorithm. I don't know what the specific type is for them, such as whether they are using it for the loss value to optimize on or if they are storing the gradients that will be used for updating. Another issue I am having is that it is not clear what they mean by this line:

I don't understand why they are using a partial derivative here if there is clearly more than one parameter in the neural network. Maybe they mean the gradient, but I am not sure.

So what would be helpful is if anybody has some guidance as to what they are getting at in this portion of the paper or if anybody has some advice as to what steps need to be taken in TensorFlow 2.0 to implement this algorithm.

Any help would be greatly appreciated! Thanks!!

",33556,,,,,2/15/2020 18:52,Implementing Actor-Critic with Experience Replay for Continuous Action Spaces,,0,0,,,,CC BY-SA 4.0 18043,1,,,2/16/2020 1:27,,1,26,"

Suppose we have a top-down picture of an object (let's say it is a shoe) from an overhead camera. Also suppose we have a database of various objects from a close-up camera. If we feed the top-down picture of the shoe into a CBIR model, then this picture would obviously have very low similarity with the images in the database. But would the image with the highest similarity score in the database still be a shoe, even though the absolute score is small?

",33559,,,,,2/16/2020 1:27,Similarity of Images (CBIR) for two different cameras,,0,0,,,,CC BY-SA 4.0 18047,1,,,2/16/2020 15:56,,1,37,"

I have used Deezer Spleeter, but it produces echoes alongside the stems, so I wonder if there is already an AI that removes these echo noises.

",33571,,,,,2/16/2020 15:56,Is there an AI that can complete Deezer Spleeter work?,,0,0,,,,CC BY-SA 4.0 18049,1,18051,,2/16/2020 17:40,,1,42,"

What is the purpose of the arrow $\leftarrow$ in the formula below?

$$V(S_t) \leftarrow V(S_t) + \alpha \left[ G_t - V(S_t) \right]$$

I presume it's not the same as 'equals'.

",27629,,2444,,2/16/2020 20:09,2/16/2020 20:09,What is the purpose of the arrow $\leftarrow$ in this formula?,,1,0,,,,CC BY-SA 4.0 18050,2,,6503,2/16/2020 17:40,,0,,"

If you want to decrease the accuracy given the same optimizer/epochs/batch, you could add more layers and increase the number of parameters; it should then take a bit longer to converge. Hence, for the same number of epochs, you would get lower accuracy. You could also initialise your parameters in a nonsensical way.

",33573,,,,,2/16/2020 17:40,,,,0,,,,CC BY-SA 4.0 18051,2,,18049,2/16/2020 17:44,,2,,"

I assume this is an iterative update. It means the new $V(S_t)$ is the previous value plus some adjustment. The arrow is like an assignment.

In code, you would do

vst = vst + alpha * (gt - vst)

So vst will be overwritten.

",33573,,2444,,2/16/2020 20:07,2/16/2020 20:07,,,,0,,,,CC BY-SA 4.0 18052,2,,17155,2/16/2020 17:54,,0,,"

You could use np.random.choice to shuffle the arrays. You could use a distance metric to find new arrays that are mutants of the current good set.

",33573,,,,,2/16/2020 17:54,,,,0,,,,CC BY-SA 4.0 18054,2,,12465,2/16/2020 18:04,,0,,"

You can also fit trigonometric functions like sin and cos, e.g. check this doc.

",33573,,,,,2/16/2020 18:04,,,,0,,,,CC BY-SA 4.0 18055,1,18056,,2/16/2020 19:58,,2,71,"

I am making an MNIST classifier. I am using categorical cross-entropy as my loss function. I want to make it so that if the correct label is 3, then it will penalize the model less heavily if it classifies a 4 than a 7 because 4 is closer numerically to 3 than 7 is. How do I do this?

",29708,,2444,,2/17/2020 17:59,2/17/2020 17:59,How should I penalize the model proportionally to the error?,,1,0,,,,CC BY-SA 4.0 18056,2,,18055,2/16/2020 21:00,,1,,"

I want to make it so that if the correct label is 3, then it will penalize the model less heavily if it classifies a 4 than a 7 because 4 is closer numerically to 3 than 7 is. How do I do this?

Really you should not, because the symbols used (Arabic numerals) do not have direct relation to quantity in the same way e.g. tally counts or dots do. They are good candidates for classification, and despite the conventional mapping to quantity when you read them, the symbols themselves are poor candidates for regression, because for instance the symbols $3$ and $4$ do not differ in a way that captures quantity in any intuitive manner.

However, if you are keen to do this, it is relatively simple to construct a suitable loss function in most auto-differentiating frameworks. You will need to read up on how to do so. For instance, here is a Stack Overflow answer explaining where to start with writing custom loss function in Keras.

In order for your loss function to work, it will need to be differentiable and smoothly changing as predictions get better. That rules out using any form of argmax for the current prediction. If you want to stick with softmax for the final layer, then I suggest using a mean squared error against the expected prediction, e.g. if $d_i$ is the numerical digit for example $i$ and $y_{i,j}$ is the ground truth expressed as a one-hot vector, where $i$ is the example and $j$ is the digit class, then $\hat{y}_{i,j}$ is the probability predicted by your model. You could use $\hat{d}_i = \sum_{j=0}^9 j\hat{y}_{i,j}$ for the expected value and MSE loss of $\mathcal{L}(d_i,\hat{d}_i) = \frac{1}{2}(\hat{d}_i - d_i)^2$

You can also use a weighted sum of the MSE loss and cross entropy loss as your final loss, with the balance between the two losses being a new hyperparameter of your model.
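
For instance, a minimal Keras sketch of such a combined loss could look like the following (the weighting factor and the assumption of a 10-way softmax output are choices you would have to tune yourself):

import tensorflow as tf

def ordinal_aware_loss(lam=0.1):
    digits = tf.range(10, dtype=tf.float32)
    def loss(y_true, y_pred):
        # standard cross-entropy term on the one-hot labels
        ce = tf.keras.losses.categorical_crossentropy(y_true, y_pred)
        # expected digit under the predicted distribution vs. the true digit
        d_hat = tf.reduce_sum(y_pred * digits, axis=-1)
        d_true = tf.reduce_sum(y_true * digits, axis=-1)
        mse = 0.5 * tf.square(d_hat - d_true)
        return ce + lam * mse
    return loss

# usage (assuming an existing model with a 10-way softmax output):
# model.compile(optimizer='adam', loss=ordinal_aware_loss(lam=0.1), metrics=['accuracy'])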

Note this solution makes $0$ close to $1$ but far away from $9$. If you want the digits to be considered close on a cycle (e.g. $8$ is closer to $1$ than it is to $4$) the you will need something more creative.

Whilst I don't think this will help you discover any improvements to MNIST classification, combining two or more loss functions to achieve a more complex goal can be really useful sometimes, so it is a skill worth practicing.

",1847,,1847,,2/16/2020 21:13,2/16/2020 21:13,,,,0,,,,CC BY-SA 4.0 18057,2,,18008,2/16/2020 21:20,,1,,"

You discovered already one solution for your problem: Zero-Padding.

There are two other common possibilities:

  1. Using Recurrent NNs
    This is often used in text processing, where you feed each word one after another into your model.
  2. Using Recursive NNs (I won't recommend this for your use case)
    This method is also frequently used in word processing, but is more often applied in the semantic analysis of text. You reduce the text to the essentials until it has reached the desired length. However, information is lost in this process.
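
For the zero-padding you already discovered, a minimal Keras sketch (assuming integer-encoded sequences of varying length) could look like this:

from tensorflow.keras.preprocessing.sequence import pad_sequences

sequences = [[3, 7, 1], [5, 2], [9, 4, 6, 8]]          # inputs of different lengths
padded = pad_sequences(sequences, maxlen=4, padding='post', value=0)
# padded now has shape (3, 4); shorter inputs are filled with zeros at the end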
",23012,,,,,2/16/2020 21:20,,,,1,,,,CC BY-SA 4.0 18058,1,18064,,2/16/2020 21:33,,4,775,"

Is there an ideal ratio in reinforcement learning between the positive and negative rewards?

Suppose I have the scenario of moving a robot across the river. There are two options: walk across the bridge or walk across the river. If it walks across the river, the robot breaks, so the idea is to reinforce the robot to walk across the bridge. What would be the best reward values? Does this ratio vary between cases?

Option 1:

Bridge: +10
River: -10

Option 2:

Bridge: +10
River: -1

Option 3:

Bridge: +1
River: -10
",33579,,,,,2/17/2020 4:37,Is there a good ratio between the positive and negative rewards in reinforcement learning?,,2,0,,,,CC BY-SA 4.0 18062,2,,18058,2/17/2020 3:47,,3,,"

It usually does not matter, but I'm sure there are situations where it could matter. In theory, if a reward for good behavior is higher than the rewards for bad behavior, then the neural network will be trained such that the higher rewards are preferred, even if those higher rewards are negative. For example, if a bad reward is -100, then a relatively good reward could be -50, and the network will then be more likely to choose the action with -50 reward over the action with -100 reward.

",27169,,,,,2/17/2020 3:47,,,,0,,,,CC BY-SA 4.0 18063,1,,,2/17/2020 4:04,,0,103,"

I've been following the Berkeley CS188 assignments (I'm not taking the course). Currently, they don't show the solution in Gradescope unless I get it correct.

My reasoning was

$V^*(a) = 10$ fixed, because the optimal action is to terminate and receive the reward 10.

$V^*(b) = 10 \times 0.2 = 2$, using the Bellman optimality equation $V^*(s) = R(s) + \gamma \max_{a} \sum_{s'} P(s'|s,a) V^*(s')$, where the optimal action from b is to move left.

Similarly, I get $V^*(c) = 10 \times (0.2)^2 = 0.4$

For the state $d$, it is optimal to move to the right and exit at $e$ to receive 1, therefore $V^*(d) = 1 \times 0.2 = 0.2$.

And $V^*(e) = 1$ fixed.

However, the autograder says it's incorrect and doesn't show an explanation. Can anyone explain what the right approach or answer is?

",33583,,,,,2/17/2020 5:25,Unable to understand V* at infinite time horizon using Bellman equation for solving MDP,,1,0,,,,CC BY-SA 4.0 18064,2,,18058,2/17/2020 4:37,,3,,"

There are no hard and fast rules for it. Your reward should be such that it motivates the agent to attain the goal in the most effective way. In a grid world, if you want your agent to reach the goal state quickly but award +2 for each move and +5 for reaching the goal, then your agent might simply wander around and never reach the goal. However, if you set a reward of -1 for each move and +1 or +10 (or even 0) for reaching the goal, then your agent will learn to reach the goal state faster.

",31749,,,,,2/17/2020 4:37,,,,0,,,,CC BY-SA 4.0 18065,2,,18063,2/17/2020 5:25,,1,,"

Never mind. I found that the above answer is indeed correct, but Gradescope has a bug (it requires the format to be .2 instead of 0.2).

",33583,,,,,2/17/2020 5:25,,,,0,,,,CC BY-SA 4.0 18066,1,18140,,2/17/2020 6:37,,5,657,"

Is there either an empirical or theoretical reason that actor-critic algorithms with eligibility traces have not been more fully explored? I was hoping to find a paper or implementation or both for continuous tasks (not episodic) in continuous state-action spaces.

This has been the only related question on SE-AI that I have been able to find: Why are lambda returns so rarely used in policy gradients?

Although I appreciated the dialog and found it useful, I was wondering if there was any further detail or reasoning that might help explain the void.

",32929,,2444,,6/4/2020 17:23,6/4/2020 17:23,Why not more TD(𝜆) in actor-critic algorithms?,,1,0,,,,CC BY-SA 4.0 18067,1,,,2/17/2020 7:10,,1,35,"

I'm still a bit new to deep learning. What I'm still struggling with is: what is the best practice for re-training a good model over time?

I've trained a deep model for my binary classification problem (fire vs non-fire) in Keras. I have 4K fire images and 8K non-fire images (they are video frames). I train with a 0.2/0.8 validation/training split. Now I test it on some videos, and I found some false positives. I add those to my negative (non-fire) set, load the best previous model, and retrain for 100 epochs. Among those 100 models, I take the one with the lowest val_loss value. But when I test it on the same video, while those false positives are gone, new ones are introduced! This never ends, and I don't know if I'm missing something or am doing something wrong.

How should I know which of the resulting models is the best? What is the best practice in training/re-training a good model? How should I evaluate my models?

Here is my simple model architecture if it helps:

from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Conv2D, MaxPooling2D, BatchNormalization, Dropout, Flatten, Dense

def create_model():
  model = Sequential()
  model.add(Conv2D(32, kernel_size = (3, 3), activation='relu', input_shape=(300, 300, 3)))
  model.add(MaxPooling2D(pool_size=(2,2)))
  model.add(BatchNormalization())
  model.add(Conv2D(64, kernel_size=(3,3), activation='relu'))
  model.add(MaxPooling2D(pool_size=(2,2)))
  model.add(BatchNormalization())
  model.add(Conv2D(128, kernel_size=(3,3), activation='relu'))
  model.add(MaxPooling2D(pool_size=(2,2)))
  model.add(BatchNormalization())
  model.add(Conv2D(128, kernel_size=(3,3), activation='relu'))
  model.add(MaxPooling2D(pool_size=(2,2)))
  model.add(BatchNormalization())
  model.add(Conv2D(64, kernel_size=(3,3), activation='relu'))
  model.add(MaxPooling2D(pool_size=(2,2)))
  model.add(BatchNormalization())
  model.add(Dropout(0.2))
  model.add(Flatten())
  model.add(Dense(256, activation='relu'))
  model.add(Dropout(0.2))
  model.add(Dense(64, activation='relu'))
  model.add(Dense(2, activation = 'softmax'))

  return model

#....
if retrain_from_prior_model == False:
    model = create_model()
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
else:
    model = load_model(""checkpoints/model.h5"")
",9053,,,,,2/17/2020 7:10,Steps to train and re-train a good model,,0,1,,,,CC BY-SA 4.0 18068,1,21790,,2/17/2020 9:37,,1,93,"

I have, say, a (balanced) data-set with 2k images for binary classification. What I have done is that

  • randomly divided the data-set into 5 folds;
  • copy-pasted all 5-fold data-set to have 5 exact copies of data-set (folder_1 to folder_5, all absolutely same data-set)
  • first fold in folder_1 is saved as test folder and remaining (fold_2, fold_3, fold_4, fold_5) are combined as one train folder
  • second fold in folder_2 is saved as test folder and remaining (namely, fold_1, fold_3, fold_4, fold_5) are combined as one train folder
  • third fold in folder_3 is saved as test folder and remaining (namely, fold_1, fold_2, fold_4, fold_5) are combined as one train folder.
  • similar process has been done on folder_4 and foder_5.

I hope, by now, you got the idea of how I distributed the data-set.
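
As a sketch of the split I mean (the file names are placeholders; I actually did it by copying folders, but this is the logic):

from sklearn.model_selection import KFold

image_paths = [f"img_{i}.png" for i in range(2000)]   # placeholder names for the 2k images

kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(kf.split(image_paths), start=1):
    train_files = [image_paths[i] for i in train_idx]   # 4 folds -> "train" folder of folder_<fold>
    test_files = [image_paths[i] for i in test_idx]     # 1 fold  -> "test" folder of folder_<fold>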

The reason I did so is as follows:

I have augmented the training data (the train folder) in each of the folders and used the respective test folders to evaluate (ROC-AUC score). I now have 5 ROC-AUC scores, evaluated on the respective test folders, and I take the average of those 5 scores.

(Assuming the above cross-validation process is done right) If I were to perform some manual hyperparameter optimization (e.g. over the optimizer, learning rate, batch size, dropout, activation), perform the above cross-validation with data augmentation, and find the best so-called "mean ROC-AUC", does it mean I successfully conducted hyperparameter optimization?

FYI: I have no problem at all with computing power and/or time to loop through the hyperparameters for this type of cross-validation with data augmentation.

",31870,,2444,,6/10/2020 13:26,6/10/2020 21:50,How to fairly conduct a model performance with 5-fold cross validation after augmentation?,,1,0,,,,CC BY-SA 4.0 18069,1,18153,,2/17/2020 10:10,,1,90,"

Generally, CNNs are used to extract feature representations of an image. I'm right now dealing with the class of CNNs that produce saliency maps, which are generally in the format of a mask. I'm trying to generate a feature representation of that specific mask. What could be the best way to approach this problem?

",25676,,2444,,2/17/2020 12:40,2/20/2020 23:50,How do I generate a feature representation of a saliency map (or mask)?,,1,0,,,,CC BY-SA 4.0 18072,2,,10133,2/17/2020 13:44,,4,,"

Sentences (for those tasks such as NLI which take two sentences as input) are differentiated in two ways in BERT:

  • First, a [SEP] token is put between them
  • Second, a learned embedding $E_A$ is concatenated to every token of the first sentence, and another learned vector $E_B$ to every token of the second one

That is, there are just two possible "segment embeddings": $E_A$ and $E_B$.

Positional embeddings are learned vectors for every possible position between 0 and 512-1. Transformers don't have a sequential nature like recurrent neural networks do, so some information about the order of the input is needed; if you disregard this, your output will be permutation-invariant.
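
A minimal sketch of how the three embeddings are combined (the sizes are BERT-base-like assumptions, not the actual implementation):

import tensorflow as tf

vocab_size, max_len, hidden = 30522, 512, 768

token_emb = tf.keras.layers.Embedding(vocab_size, hidden)   # one vector per word piece
segment_emb = tf.keras.layers.Embedding(2, hidden)          # only E_A and E_B
position_emb = tf.keras.layers.Embedding(max_len, hidden)   # one learned vector per position

token_ids = tf.constant([[101, 2054, 2003, 102, 2023, 102]])     # toy input ids
segment_ids = tf.constant([[0, 0, 0, 0, 1, 1]])                  # sentence A vs. sentence B
position_ids = tf.range(tf.shape(token_ids)[1])[tf.newaxis, :]   # 0, 1, 2, ...

# the final input representation is the element-wise sum of the three embeddings
embeddings = token_emb(token_ids) + segment_emb(segment_ids) + position_emb(position_ids)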

",33595,,33595,,7/10/2021 5:29,7/10/2021 5:29,,,,0,,,,CC BY-SA 4.0 18073,1,,,2/17/2020 13:53,,0,86,"

I have a CNN architecture for CIFAR-10 dataset which is as follows:

Convolutions: 64, 64, pool

Fully Connected Layers: 256, 256, 10

Batch size: 60

Optimizer: Adam(2e-4)

Loss: Categorical Cross-Entropy
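
For reference, this is roughly how I build it in Keras (a sketch; the 3x3 kernels and 2x2 pooling are details I did not list above):

import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(64, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(256, activation='relu'),
    layers.Dense(256, activation='relu'),
    layers.Dense(10, activation='softmax'),
])
model.compile(optimizer=tf.keras.optimizers.Adam(2e-4),
              loss='categorical_crossentropy', metrics=['accuracy'])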

When I train this model, the training and testing accuracy, along with the loss, show very jittery behavior and do not converge properly.

Is the defined architecture correct? Should I have a max-pooling layer after every convolution layer?

",31215,,2444,,3/27/2020 13:29,8/24/2020 15:03,Is this neural network architecture appropriate for CIFAR-10?,,1,1,,10/31/2020 10:41,,CC BY-SA 4.0 18074,1,18076,,2/17/2020 16:26,,4,422,"

Can anyone point me in the direction of a nice graph that depicts the ""family tree"", or hierarchy, of RL algorithms (or models)? For example, it splits the learning into TD and Monte Carlo methods, under which is listed all of the algorithms with their respective umbrella terms. Beneath each algorithm is shown modifications to those algorithms, etc. I'm having difficulty picturing where everything lies within the RL landscape.

",27629,,2444,,2/17/2020 18:02,2/17/2020 22:27,Is there a family tree for reinforcement learning algorithms?,,1,0,,,,CC BY-SA 4.0 18075,1,,,2/17/2020 17:20,,1,58,"

I've been working with this neural style paper https://arxiv.org/pdf/1508.06576v2.pdf to try and transfer the style from this image to photos of pets. In case you're not familiar with the technique, I'll leave a brief explanation of it at the end.

After taking the time to understand some of the concepts in the paper I've come to my own conclusion that the method won't work. Here's why I think this:

  • The style is not localised/fine-grained enough. If I take a small piece of this image, I might just get a solid color, or two colors separated by a zig-zag boundary. So the first few convolutional layers won't find any characteristics of interest.

  • The style depends on long-distance correlations (I made that term up). If you follow some of these zig-zag interfaces with your eyes, you'll see that they traverse up to 1/3 of the characteristic size of the image. So bunches of pixels that take up a lot of space are correlated via the style. And from my intuition of CNNs, you can distill lots of pixels into coarser information (like "this is a dog") but you can't really go backwards. I don't think the CNN encodes the required logic to trace out a zig-zag over a long distance.

  • The color palette is restricted. I'm not sure why, but my intuition tells me the neural style transfer technique won't be able to produce a restricted color palette like in the style example. I'm guessing, again, that the CNN doesn't encode any logic around the number of colors used.

So my question is around whether or not the technique could work, and if not, what's a better technique for this problem.

(optional read) Summary of deep style transfer

  1. Take a pre-trained model like VGG19.
  2. Feed in a photo. For each conv layer you can reconstruct a content representation of what that layer encodes about the photo. You do this by treating the feature maps of that layer as a desired output, and doing gradient descent on a white noise image as the input, where the loss function is the RMS between the original photo and the generated image.
  3. Feed in a painting. For each conv layer you can reconstruct a style representation of what that layer encodes about the painting in a similar way as step 2. This time though, your loss function is the RMS between the gram matrix of all the features maps produced using a white noise input, and the gram matrix of all the feature maps produced using the painting as input. And here, you sum loss over all features maps of prior layers as well, not just the layer you are considering now.
  4. Jointly minimise the loss functions described in 2 and 3 (so you're minimising content loss and style loss together) in order to produce an image with the content of the photo and the style of the painting.

EDIT

I have tried this. Here is an example of my results (left is input, right is output). It's kind of cool to see some color-map reduction happening, and what looks like accentuation of texture, but I'm definitely not getting these illustrated zig-zag boundaries that the human mind so readily perceives as fur.

",16871,,16871,,2/17/2020 17:30,2/17/2020 17:30,Can neural style transfer work on the image style in this question or is there a better technique?,,0,1,,,,CC BY-SA 4.0 18076,2,,18074,2/17/2020 20:59,,5,,"

I highly recommend looking at Reinforcement Learning: An Introduction by Richard Sutton and Andrew Barto.

In it they write:

Reinforcement learning, like many topics whose names end with “ing,” such as machine learning and mountaineering, is simultaneously a problem, a class of solution methods that work well on the problem, and the field that studies this problem and its solution methods. It is convenient to use a single name for all three things, but at the same time essential to keep the three conceptually separate. In particular, the distinction between problems and solution methods is very important in reinforcement learning; failing to make this distinction is the source of many confusions.

On the solution side of things, rather than a family tree of reinforcement learning, there is a spectrum of different approaches to the problem.

Many of the points within the spectrum above are TD($\lambda$) methods. When the state space is large, one might use a neural network to help generalize across similar states, such as DQN for instance.

It should be noted that there are many applications of reinforcement learning algorithms that do not involve training an agent to perform a task. Instead one might want to evaluate an existing agent or use a general value function to predict an event using an agent's policy.

Nevertheless, many popular problems consist of training an agent to perform a task. These algorithms can be categorized into a "family tree" based either on what values they are updating or on what problem they are tackling. (Source: https://github.com/NervanaSystems/coach)

  • On the left are Value Based methods which update the value function of a policy and find policies that optimize said value function.

  • To the right of those are policy gradient methods which optimize for the policies directly without explicitly finding a value function.

  • To the far right is Imitation Learning whereby one wants to copy a policy given demonstrations of an existing but unacceptable policy.

  • And in the middle is Direct Future Prediction which is its own weird beast that I found out while finding that picture. Here is the arxiv paper that describes DFP.

",4398,,4398,,2/17/2020 22:27,2/17/2020 22:27,,,,0,,,,CC BY-SA 4.0 18077,1,,,2/17/2020 21:32,,0,60,"

Assume the existence of a Markov Decision Process consisting of:

  • State space $S$
  • Action space $A$
  • Transition model $T: S \times A \times S \to [0,1]$
  • Reward function $R: S \times A \times S \to \mathbb{R}$
  • Distribution of initial state $p_0: S \to [0,1]$

and a policy $\pi: S \to A$.

The $V$ and $Q$-functions take expectations of the sum of future rewards.

Let's start off with $r_0:= R(x_0,\pi(x_0),x_1)$, where $\pi$ is the current policy, while $x_0 \sim p_0$ and $x_1 \sim T(x_0,\pi(x_0),-)$ are random variables. Setting $\mu_i:= T(x_i,\pi(x_i),-)$ and $\rho_i:=R(x_i,\pi(x_i),-)$, I obtain

$$E[r_0]= \int_{\mathbb{R}} r d\mu_0^{\rho_0} = \int_S R(x_0,\pi(x_0),-)d\mu_0,$$

where $\mu_i^{\rho_i}:= \mu_i\circ \rho_i^{-1}$ is the pushforward of $\mu_i$ under the random variable $\rho_i$. But the above quantity still depends on $x_0$, as both $\mu_0$ and $\rho_0$ depend on $x_0$. Intuitively, I would guess that one has to integrate over every occurring random variable to obtain the overall expectation, i.e. $$E[r_0]= \int_S\int_S R(x_0,\pi(x_0),x_1)d\mu_0(x_1)dp_0(x_0).$$ Is that correct?

Now, the $V$ and $Q$-functions take the expectation over the sum $R_{\tau} = \sum^T_{t=\tau}\gamma^{t-\tau}r_t$, where the instant of termination $T$ itself is a random variable, and, besides that, the agent does not know its distribution, as it is not even included in the MDP model.

How can I take the expectation over a sum where the number of summands is random?

We cannot just calculate $\sum^{E[T]}(\dots)$, because $E[T]$ might not even be an integer.

",27047,,2444,,9/20/2021 13:43,9/20/2021 13:43,How can the V and Q functions take the expectation over a sum where the number of summands is random?,,0,3,,,,CC BY-SA 4.0 18080,1,18092,,2/18/2020 0:00,,4,280,"

Trying to get my head around model-free and model-based algorithms in RL. In my research, I've seen the search trees created via the minimax algorithm. I presume these trees can only be created with a model-based agent that knows the full environment/rules of the game (if it's a game)? If not, could you explain to me why?

",27629,,2444,,2/18/2020 10:46,2/18/2020 11:25,Is the minimax algorithm model-based?,,1,0,,,,CC BY-SA 4.0 18081,1,,,2/18/2020 0:50,,3,131,"

I'm working on Adversarial Machine Learning and have read multiple papers on this topic, some of which are mentioned as follows:

However, I am not able to find any literature on data poisoning for SVMs using manifold regularization. Does anyone have knowledge about that?

",31240,,31240,,3/4/2020 0:43,11/29/2022 16:06,How do I poison an SVM with manifold regularization?,,1,1,,,,CC BY-SA 4.0 18082,1,,,2/18/2020 2:01,,1,90,"

Factset blackline reports essentially can compare two 10-Q SEC filings and show you the difference between the two documents. It highlights added items in green and removed items in red + strikethrough (essentially, it's a document difference, but longer-term I would like to run algorithms on the differences).

I don't care about changing colors, but what I would like to do is to produce similar extracts that summarize additions and deletions.

Which AI/ML algorithm could do the same?

",33606,,2444,,7/22/2020 11:33,12/9/2022 19:08,How to produce documents like factset blackline?,,1,0,,,,CC BY-SA 4.0 18083,1,18086,,2/18/2020 6:49,,1,32,"

Is it

  1. number of units in a layer
  2. number of layers
  3. overall complexity of the network (both 1 and 2)
",32856,,2193,,2/18/2020 10:25,2/18/2020 10:25,"In neural networks, what does the term depth generally mean?",,1,1,,,,CC BY-SA 4.0 18084,1,,,2/18/2020 7:13,,1,343,"

It kind of makes sense intuitively but I'm not sure about a formal proof. I'll start with briefly listing definitions from Intro to Multiagent systems, Wooldridge, 2002 and then give you my reasoning attempts thus far.

$E$ is a finite set of discrete, instantaneous states, $E=(e, e',...)$. $Ac$ is a repertoire of possible actions (also finite) available to an agent, which transform the environment, $Ac=(\alpha, \alpha', ...)$. A run is a sequence of interleaved environment states and actions, $r=(e_0, \alpha_0, e_1, \alpha_1,..., \alpha_{u-1}, e_u)$, set of all such possible finite sequences (over $E$ and $Ac$) is $R$, $R^E$ is a subset of $R$ containing the runs that end with an env. state.

Purely reactive agent is modeled as: $Ag_{pure}: E\mapsto Ac$, a standard agent is modeled as $Ag_{std}: R^E\mapsto Ac$.

So, if $R^E$ is a sequence of agent's actions and environment states, then it just makes sense that $E\subset R^E$. Hence, $Ag_{std}$ can map to every action to which $Ag_{pure}$ can. And behavioral equivalence with respect to environment $Env$ is defined as $R(Env, Ag_{1}) = R(Env, Ag_{2})$; where $Env=\langle E,e_{0},t \rangle$, $e_{0}$ - initial environment state, $t$ - transformation function (definition irrelevant for now).

Finally, if $Ag_{pure}: E\mapsto Ac$ and $Ag_{std}: R^E\mapsto Ac$, and $E\subset R^E$, we can say that $R(Env,Ag_{pure}) = R(Env, Ag_{std})$ (might be too bold of an assumption). Hence, every purely reactive agent has behaviorally equivalent standard agent. The opposite might not be true, since $E\subset R^E$ means that all elements of $E$ belong to $R^E$, while not all elements $R^E$ belong to $E$.

It's a textbook problem, but I couldn't find an answer key to check my solution. If anyone has formally (and perhaps mathematically) proven this before, can you post your feedback, thoughts, or proofs in the comments? For instance, the set of mathematical steps to infer $E\subset R^E$ from the definitions $E=(e_{0}, e_{1},..., e_{u})$ and $R^E$ is "all agent runs that end with an environment state" (no formal equation found) is not clear to me.

",33611,,33611,,2/19/2020 6:17,10/12/2022 17:01,Formal proof that every purely reactive agent has behaviorally equivalent standard agent,,2,4,,,,CC BY-SA 4.0 18085,1,18089,,2/18/2020 8:42,,4,403,"

I have been coming across visualizations showing that the neural nets tend to perform better as compared to the traditional machine learning algorithms (Linear regression, Log regression, etc.)

Assuming that we have sufficient data to train deep/neural nets, can we ignore the traditional machine learning topics and concentrate more on the neural network architectures?

Given the huge amount of data, are there any instances where traditional algorithms outperform neural nets?

",22456,,2444,,5/29/2020 23:54,6/9/2021 8:33,Is traditional machine learning obsolete given that neural networks typically outperform them?,,1,0,,,,CC BY-SA 4.0 18086,2,,18083,2/18/2020 8:57,,0,,"

Very good question; I have heard it referred to in all 3 ways you described. I am not sure if there is a purely objective answer, so I will answer with what I understand by "depth" generally:

  • If speaking of model architecture: the number of layers of the model.
  • If speaking of CNN layers: the number of filters (not to be confused with the size of the kernel).
  • If speaking of RNN layers: the number of time steps (temporal "depth").

On the other hand, I have never heard about depth with regard to FC layers. As I said, the term "depth" is quite subjective, depending on the speaker. It should not be this way, and maybe theoretically it is not, but I speak from what I have heard in my experience.

",26882,,,,,2/18/2020 8:57,,,,0,,,,CC BY-SA 4.0 18087,1,,,2/18/2020 9:10,,1,50,"

Let's say I want to classify a dataset of handwritten digits (CNNs on their own can get 99.7% on the MNIST dataset but let's pretend they can only get 90% for the sake of this question).

Now, I already have some classical computer vision techniques which might be able to give me a clue. For instance, I can count the intersection points of the pen stroke

  • 1,2,3,5,7 will usually have no intersection points
  • 6,9 will usually have one intersection point each
  • 4,8 will usually have two intersection points each (usually a 4-way crossover yields two intersection points which are close together)

So if I generate some meta-data telling me how many intersection points each sample has, how can I feed that into the CNN training so that it can take advantage of that knowledge?

My best guess is to slot it into the last fully connected layer just before classification.
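
Concretely, something like the following Keras sketch is what I have in mind (the layer sizes are just placeholders):

import tensorflow as tf
from tensorflow.keras import layers

image_in = layers.Input(shape=(28, 28, 1))
x = layers.Conv2D(32, (3, 3), activation='relu')(image_in)
x = layers.MaxPooling2D((2, 2))(x)
x = layers.Flatten()(x)

# the hand-crafted hint: number of intersection points, fed in as a single scalar
hint_in = layers.Input(shape=(1,))

merged = layers.Concatenate()([x, hint_in])      # slot the hint in just before classification
merged = layers.Dense(64, activation='relu')(merged)
out = layers.Dense(10, activation='softmax')(merged)

model = tf.keras.Model(inputs=[image_in, hint_in], outputs=out)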

",16871,,,,,2/18/2020 10:31,Can I provide a CNN with hints?,,0,5,,,,CC BY-SA 4.0 18088,1,,,2/18/2020 10:19,,1,36,"

What would happen if an agent were programmed with two utility functions, one of which is the opposite of the other? Would one prevail, or would they cancel each other out? Are there studies in this direction?

",30353,,2444,,2/18/2020 13:18,2/18/2020 13:18,What happens if an agent has two contrasting utility functions?,,0,0,,3/6/2020 20:34,,CC BY-SA 4.0 18089,2,,18085,2/18/2020 10:22,,7,,"

"Assuming that we have sufficient data..." — that's quite a big assumption. Also, traditional methods are well understood, while neural networks (and especially deep learning) is still something of a black box: you train it, and then you get a mapping from input to output. But you don't really know how that mapping is achieved.

It's not only about performance, it's also about efficiency (speed, use of power, etc) and transparency (being able to explain why something happens).

So there are several reasons why we don't put all our eggs into the NN basket:

  • it is a lot easier to see what's happening and diagnose errors with 'traditional' methods which are well-understood. This is an important point in real-life applications

  • in many cases we do not have the required amount of training data available that is necessary for deep learning approaches to work

  • training a DL system is much more time- (and energy) consuming than other algorithms

I'd much rather have a nuclear power plant operated by a traditional algorithm that makes a few mistakes, but nothing drastic (and being aware that it makes these kinds of mistakes allows you to guard against them), than have a total black box doing it where I have no idea why decisions are reached and what happens in edge cases not covered by the training data.

It's fine for toy projects where the stakes are low, but in real-world applications there are often different constraints that DL systems cannot satisfy.

UPDATE: from my own professional experience — working on a conversational AI system for a major bank. Anything they do has to go through layers upon layers of compliance regulation and vetting. Now I'd challenge anyone to explain to a corporate lawyer that your NN will never give unsound advice, and sign in blood on the dotted line that you know exactly under which conditions which advice is given. This is much easier to do with an old-fashioned rule-based system.

",2193,,2193,,6/9/2021 8:33,6/9/2021 8:33,,,,0,,,,CC BY-SA 4.0 18091,1,,,2/18/2020 10:58,,1,36,"

During training of my models I often encounter the following situation about training (green) and validation (gray) loss:

Initially, the validation loss is significantly lower than the training loss. How is this possible? Does this tell me anything important about my data or model?

One explanation might be that training and validation data are not properly split, i.e. the validation set might primarily contain data that the model can easily represent. But then why do the curves cross after epoch 30? If this is because of overfitting, then I would expect the validation loss to increase, but so far both losses are (slowly) decreasing.

There is a related question at Data Science SE, but it doesn't give a clear answer.

",33186,,,,,2/18/2020 10:58,Crossing of training and validation loss,,0,0,,,,CC BY-SA 4.0 18092,2,,18080,2/18/2020 11:25,,2,,"

Minimax is a planning algorithm, and all planning algorithms need access to a model of the environment in order to look ahead or simulate possible future states and results.

Technically this does not need to be 100% accurate or complete. It could even be a learned model. However, in the case of applying minimax to classic two player games, such as chess or Connect 4, then usually the game rules are used to create perfect predictions.

This difference between planning and learning is not quite the same as model-free vs model-based RL, but the ideas do overlap considerably. You can for instance consider the experience replay approach used in DQN as a form of ""background planning"" where the model used is the memory of previous events, whilst the core Q-learning algorithm used inside DQN is normally considered model-free.

",1847,,,,,2/18/2020 11:25,,,,0,,,,CC BY-SA 4.0 18094,1,18096,,2/18/2020 11:36,,1,191,"

What is the difference between TensorFlow's callbacks and early stopping?

",32856,,2444,,2/18/2020 13:15,2/18/2020 13:57,What is the difference between TensorFlow's callbacks and early stopping?,,1,0,,,,CC BY-SA 4.0 18095,1,,,2/18/2020 11:40,,1,16,"

I would like to ask you for advice. I deal with beekeeping, but I am also a bit of a programmer and an electronics specialist. And this is where my 3 interests come together, actually 4, because recently deep learning has joined this group. I would like to analyze (I already run some tests) bee behavior using deep learning mechanisms based on images (photos or videos) with bees. These images, of course, can be very different; this applies primarily to the area that can be shown. It can literally be 2x3cm, like a macro picture, or an area of 40x30cm with 3000 bees on it. Trying to create a network analyzing such different areas is a nightmare. And because I am interested in aspects of individual bees, it seems logical to divide the image into parts that contain single bees in their entirety, reduce their resolution to a minimum that still ensures recognition of the necessary details, and only then analyze each part of the large image separately. This approach seemed even more right to me when I started to read the details of, e.g., YOLOv3. Thanks to this, I would avoid a large amount of additional computation.

And here comes the first stage to be implemented. I need to determine the image resolution, more precisely the number of pixels that, on average, fall on one bee length. If it is too small, I do nothing but report that the resolution is too low for analysis. If it is suitable (at least 50 pixels for the length of a bee), I algorithmically cut the input image into smaller ones so that they contain individual insects, and only then do I subject them to further analysis.

Fortunately, the sizes of bees and bee cells are precisely known, to within 0.1mm. There may be hundreds of such objects (a bee or a hexagonal bee cell), and I can decide how many of them there will be at the stage of preparing training data. I have thousands of photos and videos from which I can easily create training data for the network in the form of pairs: an image of an object and the bee size expressed in pixels in this image.

Time for the question: at the output I want to get a number saying that one bee in the image is X pixels long, e.g. 43.02 or 212.11. Has any of you dealt with a similar case of determining resolution for known objects using a neural network (probably a CNN with an output unit using a ReLU activation), and can you share your experience, e.g. a network structure that would be suitable for this purpose?

",33622,,,,,2/18/2020 11:40,Specifying resolution for objects with known dimensions using CNN,,0,0,,,,CC BY-SA 4.0 18096,2,,18094,2/18/2020 12:46,,3,,"

Early stopping and callbacks are two different concepts:

  • Early stopping is a machine learning concept about when to stop training your model to avoid overfitting: you monitor a target value (e.g. validation loss) and stop learning after it hits a minimum. If the monitored value keeps increasing for a couple of epochs, you can restore the weights from the minimum.

  • Callbacks are a general software engineering concept and a technical implementation detail on how to trigger certain functions.

A callback function (e.g. early stopping) is a function that is passed as an argument to another function (e.g. fitting an ML model), and called in certain situations (e.g. after finishing an epoch). The callback function will perform its business (e.g. monitor the validation loss and decide whether to trigger the end of the training) and return control to the calling function (fitting the model).

In TensorFlow, early stopping is implemented using a callback function, but this is not the only way to do it. Also, there are further features in TensorFlow that are implemented using callbacks.
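
For instance, a minimal sketch (the monitored metric and patience are arbitrary choices):

import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',          # the target value being watched
    patience=5,                  # how many epochs to wait after the last improvement
    restore_best_weights=True)   # roll back to the weights at the minimum

# the callback is passed to the fitting function, which calls it after every epoch:
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=100, callbacks=[early_stop])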

",33186,,2444,,2/18/2020 13:57,2/18/2020 13:57,,,,0,,,,CC BY-SA 4.0 18097,1,,,2/18/2020 12:59,,3,76,"

I did an out-of-domain detection task (as a binary classification problem) and tried LR, Naive Bayes, and BERT, but the deep neural network didn't perform better than LR and NB. For LR, I just used BOW, and it beat the 12-layer BERT.

In a lecture, Andrew Ng suggests "Build First System Quickly, Then Iterate", but it turns out that sometimes we don't need to iterate the model into a deep neural network, and most of the time traditional shallow models are good/competitive enough and much simpler to train.

As this tweet (and its replies) indicate, together with various papers [1, 2, 3, 4 etc], traditional SVM, LR, and Naive Bayes can beat RNN and some complicated neural networks.

Then my two questions are:

  1. When should we switch to complicated neural networks like RNNs, CNNs, transformers, etc.? How can we see that from the dataset or from the results (by doing error analysis) of the simple models?

  2. The aforementioned results may be caused by the test set being too simple; then (how) is it possible for us to design a test set that can fail the traditional models?

",5351,,2193,,2/18/2020 13:08,7/20/2020 16:00,When is it time to switch to deep neural networks from simple networks in text classification problems?,,1,0,,,,CC BY-SA 4.0 18098,1,,,2/18/2020 13:10,,1,83,"

I am developing my own mobile app related to digital map. One of the functions is searching POIs (points of interest) in the map according to relevance between user query and POI name.

Besides the POIs whose names contain exact words in the query, the app also needs to return those whose names are semantically related. For example, searching 'flower' should return POI names that contain 'flower' as well as those that contain 'florist'. Likewise, searching 'animal' should return 'animal' as well as 'veterinary'.

That said, I need to extend words in the query semantically. For example, 'flower' has to be extended to ['flower', 'florist']. I have tried to use word embeddings: using the words corresponding to the most similar vectors as extensions. Due to the fact that I don't have user review data right now and most of the POI names are very short, I used the trained word2vec model published by Google. But the results turn out not to be what I expect: the most similar words to 'flower' given by word2vec are words like 'roses' and 'orchid', and 'florist' is not even in the top-100 most-similar list. Likewise, 'animal' gives 'dog', 'pets', 'cats', etc. Not very useful for my use case.

I think simply using word embedding similarity may not be enough. I may need to build some advanced model based on word embedding. Do you have any suggestions?

",33623,,,,,2/18/2020 13:30,Using word embedding to extend words for searching POI names,,1,0,,,,CC BY-SA 4.0 18099,2,,18098,2/18/2020 13:30,,1,,"

I think word embeddings are overkill in this particular case.

My suggestion would be to go for a simple dictionary based approach: compose sets of semantically related words, and then use those to expand your query terms. This might take a bit longer to set up, but has several advantages:

  1. simplicity: you can't make many mistakes with this

  2. transparency: you know exactly why a certain term matches, and another one doesn't

  3. accuracy: you have tight control over the whole process; if some term is wrong, you remove it from the set. You cannot do that with embeddings

  4. resources: a dictionary-based approach is far simpler and needs less storage

'Old tech' doesn't sound as sexy as the latest deep learning stuff, but unless you want to have this as a toy project to learn about how to do things with embeddings I would say the latter are the wrong tool for the job. At least you can be sure that it works, and if it doesn't you can easily fix it.
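
A minimal sketch of what I mean (the actual word sets would of course have to be curated for your domain):

# hand-curated sets of semantically related terms
SYNONYM_SETS = {
    'flower': {'flower', 'florist', 'floral'},
    'animal': {'animal', 'veterinary', 'vet', 'pet'},
}

def expand_query(query):
    terms = set()
    for word in query.lower().split():
        # fall back to the word itself if it has no curated set
        terms |= SYNONYM_SETS.get(word, {word})
    return terms

print(expand_query('flower shop'))   # {'flower', 'florist', 'floral', 'shop'}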

",2193,,,,,2/18/2020 13:30,,,,0,,,,CC BY-SA 4.0 18100,1,,,2/18/2020 14:01,,1,59,"

I've modeled an AlexNet neural network, with 50 epochs and a batch size of 64. I used a stochastic gradient descent optimizer with a learning rate of 0.01. I attached the train and validation loss and accuracy plots.

How can I reduce the fluctuation of the first epochs?

",33626,,2444,,2/19/2020 1:11,2/19/2020 1:11,How to reduce fluctuation of a neural network?,,0,0,,,,CC BY-SA 4.0 18104,1,,,2/18/2020 15:23,,3,200,"

I'd like to develop an MCTS-like (Monte Carlo Tree Search) algorithm for program induction, i.e. learning programs from examples.

My initial plan is for nodes to represent programs and for the search to expand nodes by revising programs.

Many of these expansions revise a single program: randomly resample a subtree of the program, replace a constant with a variable, etc. It looks straightforward to use these with MCTS.

Some expansions, however, generate a program from scratch (e.g. sample a new program). Others use two or more programs to generate a single output program (e.g. crossover in Genetic Programming).

These latter types of moves seem nonstandard for vanilla MCTS.

One idea I've had is to switch from nodes representing programs to nodes representing tuples of programs. The root node would represent the empty tuple $()$, to which expansions could be applied only if they can generate a program from scratch. The first such expansion would produce some program $p$, so the root would now have child $(p)$. The second expansion would produce $p'$, so the root would now also have child $(p')$ as well as the pair $(p, p')$. Even assuming some reasonable restrictions (e.g. moves can use at most 2 programs, pairs cannot have identical elements, element order doesn't matter), the branching factor will grow combinatorially.

What techniques from the MCTS literature (or other literatures) might reduce the impact of this combinatorial explosion?

",33629,,2444,,12/19/2020 13:39,1/8/2023 19:02,How can I reduce combinatorial explosion in an MCTS-like algorithm for program induction?,,1,0,,,,CC BY-SA 4.0 18105,1,,,2/18/2020 15:26,,1,139,"

I've been trying to use this DeepLabv3+ implementation with my dataset (~1000 annotated images of the same box, out of the same video sequence): https://github.com/srihari-humbarwadi/person_segmentation_tf2.0

But I get border artifacts like this:

Any ideas what could be causing it? Note that if I use bigger batches and train for more epochs, the borders tend to get thinner but never disappear. They also appear randomly around the image.

Any clues what could be causing them and how to solve it?

",33632,,2444,,2/19/2020 1:00,2/19/2020 1:00,Weird border artifacts when training a CNN,,0,0,,,,CC BY-SA 4.0 18111,1,21791,,2/19/2020 2:26,,4,397,"

How does policy evaluation work for continuous state space model-free approaches?

Theoretically, a model-based approach for the discrete state and action space can be computed via dynamic programming and solving the Bellman equation.

Let's say you use a DQN to find another policy: how does model-free policy evaluation work then? I am thinking of Monte Carlo simulation, but that would require many, many episodes.

",32780,,2444,,4/16/2020 19:21,6/11/2020 2:18,How does policy evaluation work for continuous state space model-free approaches?,,1,3,0,,,CC BY-SA 4.0 18112,2,,17227,2/19/2020 3:21,,1,,"

Some of the domains in the International Probabilistic Planning Competition, such as the Wildlife Preserve benchmark, fit the constraints you have given quite well. Note that the problems are modeled with a high-level declarative language, RDDL. This means that you can define problems as big or as small as your heart desires with relative ease, since you can parametrize the state description in terms of functions describing properties of an arbitrary number of objects.

There's also a quite useful project that allows you to instantiate OpenAI gym environments from the declarative description of the environment, states and actions.

",33641,,,,,2/19/2020 3:21,,,,2,,,,CC BY-SA 4.0 18113,2,,18104,2/19/2020 3:37,,0,,"

I think you may get some inspiration from the work on deterministic environments by Javier Segovia et al. See their paper Computing programs for generalized planning using a classical planner (2019).

If you don't have access to Elsevier's papers, I recommend that you check out the first author's profile on dblp.org. From there you will find links to open access versions of the conference paper that led up to the publication above.

",33641,,2444,,12/19/2020 13:43,12/19/2020 13:43,,,,2,,,,CC BY-SA 4.0 18115,1,30281,,2/19/2020 7:40,,5,909,"

I'm attempting to design an action space in OpenAI's gym and hitting the following roadblock. I've looked at this post which is closely related but subtly different.

The environment I'm writing needs to allow an agent to make between $1$ and $n$ sub-actions in each step, leaving it up to the agent to decide how many sub-actions it wants to take. So, something like (sub-action-category, sub-action-id, action), where the agent can specify between $1$ and $n$ such tuples.

It doesn't seem possible to define a Box space without specifying bounds on the shape, which is what I need here. I'm trying to avoid defining an action space where each sub-action is explicitly enumerated by the environment, like an (action) tuple with n entries for each sub-action.

Are there any other spaces I could use to dynamically scale the space?

",32763,,2444,,5/20/2020 11:08,8/20/2021 15:51,How to define an action space when an agent can take multiple sub-actions in a step?,,1,1,,,,CC BY-SA 4.0 18116,2,,18084,2/19/2020 10:02,,0,,"

I recommend that you look into the literature about simulation and bisimulation in Automata Theory and its applications to model checking (where you quite regularly want to make proofs of ""behavioural equivalence""). One article that discusses this in the context of a model-checking technique known as ""Abstraction and Abstraction Refinement"" is

Abstraction and Abstraction Refinement
Dennis Dams and Orna Grumberg
In Springer's Handbook of Model Checking, 2018, Chapter 13, pages 385-420

A good (I use it regularly) book that covers behavioural equivalence for a wide variety of automata is

Verification and Control of Hybrid Systems: A Symbolic Approach
Paulo Tabuada
Springer, 2009
",33641,,,,,2/19/2020 10:02,,,,1,,,,CC BY-SA 4.0 18117,1,,,2/19/2020 10:21,,1,167,"

How do I calculate the error during the training phase for deep reinforcement learning models?

Deep reinforcement learning is not supervised learning, as far as I know. So how can the model know whether it predicts right or wrong? In the literature, I find that the ""actual"" Q-value is calculated, but that sounds as if it would make the whole idea behind deep RL obsolete. How could I even calculate/know the real Q-value if there is no world model to begin with?

",27777,,2444,,2/19/2020 17:45,2/19/2020 17:45,How to estimate the error during training in deep reinforcement learning,,1,0,,,,CC BY-SA 4.0 18118,1,18131,,2/19/2020 10:49,,3,240,"

I'm doing reinforcement learning, and I have a visual observation that I will use to build an input state for my agent. In DeepMind's Atari paper, they grey-scale the input image before feeding it into the CNN to reduce the size of the input space, which makes sense to me.

In my environment, I have, for each pixel, 5 possible channels, which are represented in black, white, blue, red, and green. This also makes intuitive sense to me since it's like a bit-encoding.

Any thoughts on what could be better? Grey-scaling into two shades of grey plus black and white would also maintain the information, but it feels somehow less direct, since my environment's visual space is categorical, so a categorical encoding seems to make more sense.

",31180,,2444,,2/3/2021 17:34,2/3/2021 17:37,Should I grey-scale the coloured frames/channels to build the approximation of the state?,,1,2,,,,CC BY-SA 4.0 18119,1,,,2/19/2020 11:32,,1,28,"

I am thinking of applying apprenticeship learning on retrospective data. From looking at this paper by Ng https://ai.stanford.edu/~ang/papers/icml04-apprentice.pdf which talks about apprenticeship learning, it seems to me that at the 5th step of the algorithm,

  1. Compute (or estimate) $\mu^{(i)} = \mu(\pi^{(i)})$, where $\mu^{(i)} = E[\sum_{t=0}^{\infty}\gamma^{t}\phi(s_{t}) \mid \pi^{(i)}]$, and $\phi(s_{t})$ is the reward feature vector at state $s_t$.

From my understanding, a sequence of states $s_0, s_1, s_2, \ldots$ (a trajectory) would have to be generated at this step by following the policy $\pi^{(i)}$. Hence, would applying this algorithm to retrospective data not work?

",32780,,2444,,2/19/2020 18:06,2/19/2020 18:06,Does apprenticeship learning require prospective data?,,0,1,,,,CC BY-SA 4.0 18121,2,,12647,2/19/2020 12:34,,3,,"

I thought the answer might be no.

In this 2020 ICLR paper, The Curious Case of Neural Text Degeneration, researchers found that beam-search text is less surprising than human natural language, and they proposed a nucleus sampling method, which generates more human-like text.
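
For context, nucleus (top-$p$) sampling draws the next token from the smallest set of tokens whose cumulative probability exceeds a threshold $p$. A minimal numpy sketch of the idea (my own illustration, not the authors' implementation):

import numpy as np

def nucleus_sample(probs, p=0.9):
    # Sample a token index from the smallest set of tokens whose
    # cumulative probability exceeds p (top-p / nucleus sampling).
    order = np.argsort(probs)[::-1]                 # tokens sorted by probability
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1     # size of the nucleus
    nucleus = order[:cutoff]
    nucleus_probs = probs[nucleus] / probs[nucleus].sum()
    return np.random.choice(nucleus, p=nucleus_probs)

# Toy next-token distribution over 5 tokens.
probs = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
print(nucleus_sample(probs, p=0.9))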

",5351,,5351,,2/19/2020 12:43,2/19/2020 12:43,,,,0,,,,CC BY-SA 4.0 18122,1,,,2/19/2020 12:45,,1,93,"

I am trying to design a model based on LSTM cells to do time-series prediction. The output value is an integer in [0, 13]. I have noticed that one-hot encoding it and using cross-entropy loss gives better results than MSE loss.

Here is my problem: no matter how deep I make the network or how many fully connected layers I add, I always obtain pretty much the same behaviour. Changing the optimizer also doesn't really help.

  1. The loss function quickly decreases then stagnates with a very high variance and never goes down again.
  2. The prediction seems to be offset around the value 9; I really do not understand why, since I have one-hot encoded the input and the output.

Here is an example of the results of a typical training phase, with the total loss:

Do you have any tips/ideas as to how I could improve this, or what could have gone wrong? I am a bit of a beginner in ML, so I might have missed something. I can also include the code (in PyTorch) if necessary.

",33658,,,,,11/7/2022 1:08,"time-series prediction : loss going down, then stagnates with very high variance",,1,2,,,,CC BY-SA 4.0 18123,2,,18117,2/19/2020 13:44,,1,,"

Yes, reinforcement learning is very different from supervised learning. The policy (what you call a model) does not know if it is predicting right or wrong, or if it is taking the correct action or not. In RL, there is no concept of ""the right action"": everything is evaluated through the reward function.

Also, there is no way to compute the ground-truth Q-values; if you had them, there would be no need to do RL.

In RL, you should not think like in supervised learning: there are no error metrics, and everything is evaluated by how much accumulated reward the agent receives over an episode.

",31632,,,,,2/19/2020 13:44,,,,5,,,,CC BY-SA 4.0 18126,1,,,2/19/2020 16:10,,2,50,"

Suppose I want to classify a dataset like the MNIST handwritten dataset, but it has added distractions. For example, here we have a 6 but with extra strokes around it that don't add value.

I suppose a good model would predict a 6, but maybe with less than 100% certainty (or maybe with 100% certainty - I don't know that it matters for the purpose of this question).

Is there any way to get information about which pixels most strongly influenced the decision of the CNN, and which pixels were not so important? So to represent that visually, green means that those pixels were important:

Or conversely, is it possible to highlight pixels which did not contribute to the outcome (or which cast doubt on the outcome thereby reducing the certainty from 100%)

",16871,,,,,2/24/2020 18:41,Can a NN be configured to indicate which points of the input influenced its prediction and how?,,2,0,,,,CC BY-SA 4.0 18130,1,,,2/19/2020 18:14,,1,134,"

I'm trying to create a sequential neural network in order to translate a ""human"" sentence into a ""machine"" sentence understandable by an algorithm. Since it didn't work, I've tried to create a NN that understands whether the input is a unit or not.

Even this NN doesn't work, and I don't understand why. I tried different optimizers/losses/metrics, with an RNN and with an LSTM.

So there is one array of units and one array with lambda words. I send to the NN:

input -> word in OneHotEncoding where each char is a vector

output -> a vector with a 1 at the relative position of the unit in the array, e.g. [0,0,0,1,0]. If it's not a unit, the vector is composed of 0s.

I'm actually using LSTM layers and sigmoid activation because I will need it for my ""big"" NN.

model = tf.keras.Sequential()
model.add(tf.keras.Input(shape=(13,vocab_size),batch_size=80))
model.add(tf.keras.layers.LSTM(32, return_sequences=True)) 
model.add(tf.keras.layers.LSTM(6, return_sequences=False))
model.add(tf.keras.layers.Dense(6, activation=""sigmoid""))
model.reset_states()
model.summary()
model.compile(optimizer= 'adam', loss='categorical_crossentropy', metrics=['categorical_accuracy'])
model.fit(encoded_word, my_targets, batch_size=80, epochs=100, validation_data=(encoded_word, my_targets))



Model: ""sequential_38""
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_57 (LSTM)               (80, 13, 32)              6400      
_________________________________________________________________
lstm_58 (LSTM)               (80, 6)                   936       
_________________________________________________________________
dense_32 (Dense)             (80, 6)                   42        
=================================================================
Total params: 7,378
Trainable params: 7,378
Non-trainable params: 0


Epoch 1/100
10000/10000 [==============================] - 6s 619us/sample - loss: 0.7441 - categorical_accuracy: 0.2898 - val_loss: 0.6181 - val_categorical_accuracy: 0.3388
Epoch 2/100
10000/10000 [==============================] - 2s 233us/sample - loss: 0.5768 - categorical_accuracy: 0.3388 - val_loss: 0.5382 - val_categorical_accuracy: 0.3388
Epoch 3/100
10000/10000 [==============================] - 2s 229us/sample - loss: 0.5039 - categorical_accuracy: 0.3979 - val_loss: 0.4640 - val_categorical_accuracy: 0.5084
Epoch 4/100
10000/10000 [==============================] - 2s 229us/sample - loss: 0.4207 - categorical_accuracy: 0.4759 - val_loss: 0.3709 - val_categorical_accuracy: 0.5041

My NN is converging towards 0.5 all the time.

Thank you in advance for your answers!

",33665,,,,,2/19/2020 18:14,Simple sequential model with LSTM which doesn't converge,,0,0,,,,CC BY-SA 4.0 18131,2,,18118,2/19/2020 18:28,,4,,"

We can't say for sure which approach would work best in the general case. If you have domain knowledge, you can make a better guess. You'll basically want to answer the question: which information is important for learning an optimal policy?

In my environment, I have, for each pixel, 5 possible channels, which are represented in black, white, blue, red, and green. This also makes intuitive sense to me since it's like a bit-encoding.

Generally, if you have an environment like this, I would (without any other information) guess that each of the 5 colors have some meaning that may be relevant for your agent. That's just what I would guess though. In theory, it might be possible that white means one thing (e.g. "empty"), and every other colour means the same other thing (e.g. "not empty"). If you had domain knowledge like that, and knew that it is only important whether or not any given pixel is white, you could of course binarise your input.

But in general, if the colours might be important, I'd recommend including them. If you really only have a few distinct colours like that though, I would not recommend encoding them in some format like RGB where values can range from 0 to 1 or 0 to 255. I would recommend having 4 (or 5?) binary channels in your input:

  1. Binary channel containing 1s for pixels that are black, and 0s for all other pixels.
  2. Binary channel containing 1s for pixels that are white, and 0s for all other pixels.
  3. Binary channel containing 1s for pixels that are blue, and 0s for all other pixels.
  4. ...
  5. etc.

The reason for this is that deep neural networks often tend to have an easier time learning with binary inputs, and here you can completely binarise your inputs without requiring an excessively high number of channels. If you had hundreds or thousands of different possible colours, this would probably no longer be a good idea.
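
As a concrete illustration, assuming the raw observation arrives as a 2-D array of colour indices 0-4 (an assumption about how your environment encodes colours), the channel encoding could look like this:

import numpy as np

NUM_COLOURS = 5   # black, white, blue, red, green

def to_binary_channels(obs):
    # obs: 2-D array of colour indices in [0, NUM_COLOURS).
    # Returns an (H, W, NUM_COLOURS) array with one binary channel per colour.
    return np.eye(NUM_COLOURS, dtype=np.float32)[obs]

obs = np.array([[0, 1], [2, 4]])       # a tiny 2x2 observation
print(to_binary_channels(obs).shape)   # (2, 2, 5)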

",1641,,2444,,2/3/2021 17:37,2/3/2021 17:37,,,,0,,,,CC BY-SA 4.0 18132,1,18247,,2/19/2020 19:12,,4,2523,"

Can vanishing gradients be detected by the change in distribution (or lack thereof) of my convolution's kernel weights throughout the training epochs? And if so how?

For example, if only 25% of my kernel's weights ever change throughout the epochs, does that imply an issue with vanishing gradients?

Here are my histograms and distributions. Is it possible to tell whether my model suffers from vanishing gradients from these images? (Some middle hidden layers are omitted for brevity.)

",31299,,2444,,12/13/2020 13:18,12/13/2020 13:23,How to detect vanishing gradients?,,1,0,,,,CC BY-SA 4.0 18133,2,,12059,2/19/2020 19:37,,1,,"

The paper Dota 2 with Large Scale Deep Reinforcement Learning goes into greater detail than the initial blog posts.

They call their distributed training framework Rapid, which is also used in some of their robotics work, such as the paper Learning Dexterous In-Hand Manipulation, where they discuss a smaller scale deployment of Rapid (as compared to Dota2/OpenAI V) in section 4.3.

",14513,,14513,,2/20/2020 20:06,2/20/2020 20:06,,,,0,,,,CC BY-SA 4.0 18134,1,,,2/19/2020 19:38,,2,169,"

I am new to ANNs. I am trying out several 'simple' algorithms to see what ANNs can (or cannot) be used for and how. I played around with Conv2D once and had it recognize images successfully. Now I am looking into trend-line analyses. I have succeeded in training a network that solved linear equations. Now I am trying to see if it can be trained to solve for $y$ in the formula $y = b + x^2$.

No matter what parameters I change, or the number of dense layers, I get high values for loss and validation loss, and the predictions are incorrect.

Is it possible to solve this equation, and with what network? If it is not possible, why not? I am not looking to solve a practical problem, but rather to build up understanding and intuition about ANNs.

See the code I tried with below

#region Imports
from __future__ import absolute_import, division, print_function, unicode_literals
import math 
import numpy as np 
import tensorflow as tf
from tensorflow.keras import models, optimizers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Activation, Dropout, Flatten, Dense, Lambda
import tensorflow.keras.backend as K
#endregion

#region Constants
learningRate = 0.01
epochs: int = 1000
batch_size = None
trainingValidationFactor = 0.75
nrOfSamples = 100
activation = None
#endregion

#region Function definitions
def CreateNetwork(inputDimension):
  model = Sequential()
  model.add(Dense(2, input_dim=2, activation=activation))
  model.add(Dense(64, use_bias=True, activation=activation))
  model.add(Dense(32, use_bias=True, activation=activation))
  model.add(Dense(1))
  adam = optimizers.Adam(learning_rate=learningRate)
  # sgd = optimizers.SGD(lr=learningRate, decay=1e-6, momentum=0.9, nesterov=True)
  # adamax = optimizers.Adamax(learning_rate=learningRate)
  model.compile(loss='mse', optimizer=adam)
  return model

def SplitDataForValidation(factor, data, labels):
  upperBoundary = int(len(data) * factor)

  trainingData = data[:upperBoundary]
  trainingLabels = labels[:upperBoundary]

  validationData = data[upperBoundary:]
  validationLabels = labels[upperBoundary:]
  return ((trainingData, trainingLabels), (validationData, validationLabels))

def Train(network, training, validation):
  trainingData, trainingLabels = training
  history = network.fit(
    trainingData
    ,trainingLabels
    ,validation_data=validation
    ,epochs=epochs 
    ,batch_size=batch_size
  )

  return history

def subtractMean(data):
  mean = np.mean(data)
  data -= mean
  return mean

def rescale(data):
  max = np.amax(data)
  factor = 1 / max
  data *= factor
  return factor

def Normalize(data, labels):
  dataScaleFactor = rescale(data)
  dataMean = subtractMean(data)

  labels *= dataScaleFactor
  labelsMean = np.mean(labels)
  labels -= labelsMean

def Randomize(data, labels):
  rng_state = np.random.get_state()
  np.random.shuffle(data)
  np.random.set_state(rng_state)
  np.random.shuffle(labels)

def CreateTestData(nrOfSamples):
  data = np.zeros(shape=(nrOfSamples,2))
  labels = np.zeros(nrOfSamples)

  for i in range(nrOfSamples):
    for j in range(2):
      randomInt = np.random.randint(1, 5)
      data[i, j] = (randomInt * i) + 10
    labels[i] = data[i, 0] + math.pow(data[i, 1], 2)
  
  Randomize(data, labels)
  return (data, labels)
#endregion

allData, allLabels = CreateTestData(nrOfSamples)
Normalize(allData, allLabels)
training, validation = SplitDataForValidation(trainingValidationFactor, allData, allLabels)

inputDimension = np.size(allData, 1)
network = CreateNetwork(inputDimension)

history = Train(network, training, validation)

prediction = network.predict([
  [2, 2], # Should be 2 + 2 * 2 = 6
  [4, 7], # Should be 4 + 7 * 7 = 53
  [23, 56], # Should be 23 + 56 * 56 = 3159
  [128,256] # Should be 128 + 256 * 256 = 65664
])
print(str(prediction))
",32968,,2444,,12/18/2021 9:58,12/18/2021 9:58,Which neural network can approximate the function $y = x^2 + b$?,,1,0,,,,CC BY-SA 4.0 18135,1,,,2/19/2020 20:44,,1,44,"

I have trained a model using an autoencoder on the MovieLens dataset. Below is how I trained the model.

r = model.fit_generator(
  generator(A, mask),
  validation_data=test_generator(A_copy, mask_copy, A_test_copy, mask_test_copy),
  epochs=epochs,
  steps_per_epoch=A.shape[0] // batch_size + 1,
  validation_steps=A_test.shape[0] // batch_size + 1,
)

It is giving good results, but now I am confused about how I should get the top 5 recommendations for a given user input.

I just want to print the result on the console. Can anyone help me, please?

",33670,,26652,,2/20/2020 9:52,2/20/2020 9:58,How to get top 5 movies recommendations from Auto-Encoder,,1,0,,,,CC BY-SA 4.0 18137,1,20486,,2/20/2020 1:48,,2,151,"

I am a newbie to Machine Learning and AI. As per my understanding, with the use of reinforcement learning (a reward/punishment environment), we can train a neural network to play a game. I would like to know whether it is possible to use this trained model for deciding the difficulty of the next game level dynamically, in real time, according to a player's skill level. As an example, please consider a neural network trained using reinforcement learning to play a mobile game (chess, a puzzle, etc.). The game does not consist of a previously designed static set of game levels. After the training, can this model be used to detect a particular player's playing style (score, elapsed time) to dynamically decide the difficulty of the next game level and provide customized game levels for each player in real time?

Thank you very much and any help will be greatly appreciated.

",33674,,,,,4/21/2020 10:34,Can we use a neural network that is trained using Reinforcement Learning for dynamic game level difficulty designing in realtime?,,1,2,,,,CC BY-SA 4.0 18138,2,,18126,2/20/2020 2:06,,1,,"

Yes, there definitely is, and research into this has actually resulted in some really cool behaviour.

One of the simplest ways is to simply backpropagate the gradient all the way back to the input. Areas of the input that affected the final decision will receive larger gradients. Interestingly, this also sort of works as a rudimentary form of semantic segmentation.
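
As a rough sketch of this first idea in TensorFlow 2 (model and image are assumptions: a trained Keras classifier and a single preprocessed input; this is not taken from any particular paper):

import tensorflow as tf

def input_saliency(model, image):
    # image: tensor of shape (1, H, W, C); model: a trained Keras classifier.
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)                     # track gradients w.r.t. the input
        scores = model(image)
        top_score = tf.reduce_max(scores, axis=-1)
    grads = tape.gradient(top_score, image)
    # Pixels with a large absolute gradient influenced the prediction the most.
    return tf.reduce_max(tf.abs(grads), axis=-1)[0]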

Other ways are to change segments of the input and to see how that affects the output (like adding 0.1 to a pixel).

You can also determine what each filter in a convolutional layer is looking at using similar techniques.

It's a super interesting field of machine learning, and I highly recommend taking a look at this lecture, which is completely free and one of the most interesting ones I've personally seen. It will explain all this much better than I have done.

",26726,,,,,2/20/2020 2:06,,,,0,,,,CC BY-SA 4.0 18139,2,,18134,2/20/2020 3:12,,2,,"

$f(x) = x^2 + b$ is a polynomial (more precisely, a parabola) so it is continuous, thus, a neural network (with at least one hidden layer) should be able to approximate that function (given the universal approximation theorem).

After a very quick look at your code, I noticed you aren't using an activation function for your dense layers (i.e. your activation function is None). Try to use e.g. ReLU.
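
For instance, one possible way to rewrite the CreateNetwork function with non-linear activations (the layer sizes here are arbitrary choices, just a sketch of the suggested fix):

from tensorflow.keras import optimizers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(64, input_dim=2, activation='relu'))   # non-linear hidden layer
model.add(Dense(32, activation='relu'))                # non-linear hidden layer
model.add(Dense(1))                                    # linear output for regression
model.compile(loss='mse', optimizer=optimizers.Adam(learning_rate=0.01))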

",2444,,,,,2/20/2020 3:12,,,,1,,,,CC BY-SA 4.0 18140,2,,18066,2/20/2020 3:16,,2,,"

Theoretically, nothing precludes the use of $\lambda$-returns in actor-critic methods. The $\lambda$-return is an unbiased estimator of the Monte Carlo (MC) return, which means they are essentially interchangeable. In fact, as discussed in High-Dimensional Continuous Control Using Generalized Advantage Estimation, using the $\lambda$-return instead of the MC return can actually help reduce the variance of gradient updates.

The above is similar to my answer in the other question you linked, so let me try to answer your question more specifically. Even though we can use $\lambda$-returns, why are they not too common in practice? I suspect there might be a few reasons:

  1. Empirically, faster credit assignment might be more desirable than lower variance. Sometimes the learning speed of your algorithm is constrained simply by how quickly you can learn about the consequences of certain actions. In this case, it is faster to use the MC return, even if it theoretically has higher variance than the $\lambda$-return.

  2. When proposing a new algorithm, adding $\lambda$-returns to it might give it an ""unfair"" advantage if the other baseline methods do not use them (the reviewers would not like this), so researchers tend to favor simpler 1-step or MC returns for the sake of consistency. I would guess this is why you don't typically see $\lambda$-returns in papers that propose new actor-critic methods. In some sense, it is generally assumed that you could always add $\lambda$-returns to them later and probably get better performance.$^*$

  3. A decent number of deep RL researchers don't know what $\lambda$-returns are. This is especially true if they come from a pure deep learning background; they may have never read Reinforcement Learning: An Introduction which is where most people are introduced to TD($\lambda$) and $\lambda$-returns.

$^*$Exceptions to this are papers like mine (Reconciling $\lambda$-Returns with Experience Replay) where the contribution is the use of $\lambda$-returns in methods that previously could not use them. But actor-critic methods are pretty straightforward to combine with $\lambda$-returns.

",32070,,,,,2/20/2020 3:16,,,,4,,,,CC BY-SA 4.0 18143,1,18148,,2/20/2020 8:34,,1,185,"

I am currently studying the textbook Deep Learning by Goodfellow, Bengio, and Courville. Chapter 5.1 Learning Algorithms says the following:

Classification with missing inputs: Classification becomes more challenging if the computer program is not guaranteed that every measurement in its input vector will always be provided. To solve the classification task, the learning algorithm only has to define a single function mapping from a vector input to a categorical output. When some of the inputs may be missing, rather than providing a single classification function, the learning algorithm must learn a set of functions. Each function corresponds to classifying $\mathbf{x}$ with a different subset of its inputs missing. This kind of situation arises frequently in medical diagnosis, because many kinds of medical tests are expensive or invasive. One way to efficiently define such a large set of functions is to learn a probability distribution over all the relevant variables, then solve the classification task by marginalizing out the missing variables. With $n$ input variables, we can now obtain all $2^n$ different classification functions needed for each possible set of missing inputs, but the computer program needs to learn only a single function describing the joint probability distribution. See Goodfellow et al. (2013b) for an example of a deep probabilistic model applied to such a task in this way. Many of the other tasks described in this section can also be generalized to work with missing inputs; classification with missing inputs is just one example of what machine learning can do.

I was wondering if people would please help me better understand this explanation. Why is it that, when some of the inputs are missing, rather than providing a single classification function, the learning algorithm must learn a set of functions? And what is meant by ""each function corresponds to classifying $\mathbf{x}$ with a different subset of its inputs missing.""?

I would greatly appreciate it if people would please take the time to clarify this.

",16521,,2444,,2/20/2020 13:15,2/4/2021 10:57,Why does the machine learning algorithm need to learn a set of functions in the case of missing data?,,2,0,,,,CC BY-SA 4.0 18144,1,,,2/20/2020 8:36,,1,49,"

I was trying to normalize my input images for feeding to my convolutional neural network and wanted to standardize my input data.

I referred to this post, which says that featurewise_center and featurewise_std_normalization scale the images to the range [-1, 1].

Wouldn't training the model with this data lead to inaccuracies since the testing data would not be normalized in a similar way and would only range between [0, 1]? How can I standardize my data while keeping its range between [0, 1]?

",33681,,2444,,12/30/2021 10:32,12/30/2021 10:32,Wouldn't training the model with this data lead to inaccuracies since the testing data would not be normalized in a similar way?,,0,0,,,,CC BY-SA 4.0 18145,1,,,2/20/2020 9:08,,0,37,"

Let's say I have a data set of length N. A small proportion N2 of it is labeled. Can I remove some labels and then 'reverse' this action with a trained neural network? I could then use the same process to fill the other (N - N2) rows with labels.

",33573,,33573,,2/24/2020 14:11,2/24/2020 14:11,Semi-supervised: Can I predict the label of purposely unlabelled observations?,,0,5,,,,CC BY-SA 4.0 18146,2,,18135,2/20/2020 9:58,,1,,"

That is not what an auto-encoder is doing. An auto-encoder gives you a compressed representation of the input. It is trained by mapping the input data to itself, with the compressed form in between.

To predict recommendations, you need to train your model on existing user recommendations (e.g. ratings).
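
Once you do have a model that outputs a predicted score for every item for a given user, getting the top 5 is just a ranking step. A minimal sketch with made-up numbers (predicted_ratings and already_rated are hypothetical names, not part of your code):

import numpy as np

# Hypothetical output for one user: a predicted score for every movie,
# plus a mask marking the movies the user has already rated.
predicted_ratings = np.array([3.1, 4.7, 2.0, 4.9, 4.2, 1.5, 4.4])
already_rated = np.array([1, 0, 0, 0, 0, 0, 1], dtype=bool)

scores = np.where(already_rated, -np.inf, predicted_ratings)  # exclude seen movies
top5_movie_ids = np.argsort(scores)[::-1][:5]
print(top5_movie_ids)   # indices of the 5 best unseen movies, e.g. [3 1 4 2 5]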

",33186,,,,,2/20/2020 9:58,,,,2,,,,CC BY-SA 4.0 18147,1,,,2/20/2020 11:09,,0,40,"

I wonder about the legitimacy of using the terms ""POS tagging"", ""Chunking"", ""Disambiguation"" and ""Categorization"" to describe an activity that doesn't include writing code and database queries, or interacting with the NLP algorithm and database directly.

More specifically, let's suppose I use the following tools:

  1. an ""Annotator"" for analyzing the input text (e.g. sentences copypasted from online newspapers) and choose and save proper values as regards to ""POS"" of tokens and words, ""Words""(entities and collocations) and ""Chunk"". Tokens are already detected by default. I have to decide which words are entities and/or collocations or not and their typology, though. May the performed tasks be called ""POS tagging"", ""Chunking"" and ""support to categorization""?

  2. A knowledge base, for searching and choosing the proper synsets of the lemmas and assigning them to the words analyzed in the previous Annotation tool. May such a task be called ""Disambiguation""?

  3. A graphical user interface which shows how the NLP analyzes by default the input texts as regards to Lemmas, POS, Chunks, Senses, entities, domains, main concepts, dependency tree, in order to make analyses consistent with it.

If I want to define these activities in a few words, ""machine learning annotation"" may be the most accurate term.

But what if I want to be more specific? I don't know whether or not the terms ""POS tagging"", ""Chunking"", ""Disambiguation"" and ""Support to categorization"" are appropriate, since they generally come up within ""programming contexts"", as far as I know. In other terms, do they involve writing algorithms and programming, or can they refer to the ""less technical"" activities described above?

",22959,,2193,,2/20/2020 13:23,2/20/2020 13:23,"Are POS tagging, Chunking, Disambiguation, etc. subtasks of annotation?",,1,0,,,,CC BY-SA 4.0 18148,2,,18143,2/20/2020 12:41,,2,,"

Intuitively, this is similar to the case when you are making predictions but you don't have all the necessary information to make the most accurate prediction or maybe there isn't a single accurate prediction, so you have a set of possible predictions (rather than a single prediction).

For example, if you hadn't seen the last Liverpool game (in the Champions League) against Atlético Madrid, you would have probably said that Liverpool was the most likely team to win the CL this year (2020) too. However, after having seen their last game, you noticed that they are not unbeatable and they are not perfect, so, although they have shown you (during this and the previous season) that they are a very good team, they may also not be the best until the end of the season. So, at this point, you may have a set of two possible hypotheses: Liverpool will win the CL, or Liverpool will not win the CL.

In general, if you had a dataset that is representative of your whole population, then the dataset alone should be sufficient to make accurate predictions (i.e. it contains all the information sufficient to make accurate predictions). If that's not the case (which is often true), then you will have to account for all possible values of the missing data or you will have to make assumptions (or introduce an inductive bias).

The authors also mention the concept of marginalization, which is used in probability theory to calculate marginal probabilities, e.g. $p(X=x)$ (or for short $p(x)$), when there's another random variable $Y$, by accounting for all possible values of $Y$. In other words, you're interested only in $p(x)$ and you may have the joint probability distribution $p(x, y)$, then marginalization allows you to compute $p(x)$ using e.g. $p(x, y)$ and all possible values that the random variable $Y$ can take.
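
As a small numerical illustration of marginalization (the numbers are made up), if you have the joint distribution $p(x, y)$ as a table, you obtain the marginal $p(x)$ by summing over all values of $y$:

import numpy as np

# Made-up joint distribution p(x, y): rows index x, columns index y.
p_xy = np.array([[0.10, 0.20, 0.05],
                 [0.30, 0.25, 0.10]])

p_x = p_xy.sum(axis=1)   # marginalize out y
print(p_x)               # [0.35 0.65]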

In any case, I think their description is a little bit vague and using the concept of marginalization to convey the idea behind the ""multiple hypotheses"" isn't the most appropriate approach, IMHO. If you are interested in these concepts in the context of neural networks, I suggest you read something about Bayesian machine learning or Bayesian neural networks.

",2444,,2444,,2/20/2020 13:25,2/20/2020 13:25,,,,1,,,,CC BY-SA 4.0 18149,2,,18147,2/20/2020 13:19,,1,,"

The procedures you mention don't need to involve writing code. There are now many ready-made tools available which implement various algorithms.

POS-Tagging, Chunking, and Semantic Tagging/Disambiguation are knowledge-based procedures which can all be seen as classification/clustering tasks: POS-Tagging and Disambiguation are classifications, in that they propose a label (a POS- or semantic tag) that is assigned to a token. Chunking is a clustering algorithm, as it finds coherent groups in sequences of tokens. You could also view it as segmentation, as it splits a sentence into parts.

In principle they don't have anything to do with machine learning; the algorithm you are using (ML or not) is an implementation detail. Early Taggers were mostly rule-based, but with increasing availability of annotated data it became more feasible to use ML algorithms. This, however, has got nothing to do with how you classify the procedures.

In general I would call the activity you describe as 'annotation', as you enrich the source text by adding descriptive categories to tokens and token sequences.

So, no, they do not have to involve programming or implementing algorithms, and you don't need to be technical to perform them. Though some knowledge of linguistic concepts would help.

",2193,,,,,2/20/2020 13:19,,,,2,,,,CC BY-SA 4.0 18150,1,,,2/20/2020 13:30,,1,33,"

I started to dig into the topic of graph generation and I have a question: which of the generative methods (autoregressive models, variational autoencoders, GANs, any other?) are better for generating graphs while preserving both node and edge labels? (In other words, I want to generate graphs with node and edge labels, similar to what I have in my dataset.)

I checked over some papers, but I didn't find exact clues about how to choose the method for graph generation. If anybody is more familiar with the topic, I'll appreciate any link to papers or any advice/recommendations.

",33332,,2444,,2/20/2020 14:18,2/20/2020 14:18,"Which generative methods are better for generating graphs, while preserving node and edge labels?",,0,0,,,,CC BY-SA 4.0 18151,1,18177,,2/20/2020 15:21,,1,246,"

I’m trying to figure out how to write an optimal convolutional neural network with respect to maximizing and minimizing filters in a convolution 2D layer. This is my thinking and I’m not sure if it's correct.

If we have a dataset of 32x32 images, we could start with a Conv2D layer, a filter of 3x3 and a stride of 1x1. Therefore, the maximum number of times this filter would fit along each axis of a 32 x 32 image would be 30, i.e. newImageX * newImageY positions in total, where

newImageX = (imageX – filterX + 1)  
newImageY = (imageY – filterY + 1)

Am I right in thinking that, because there are only newImageX * newImageY patterns in the 32 x 32 image, the maximum number of filters should be newImageX * newImageY, and any more would be redundant?

So, the following code is the maximum possible filters given that we have 3x3 filter, 1x1 stride and 32x32 images?

Conv2D((30*30), kernel=(3,3), stride=(1,1), input_shape=(32,32,1))

Is there any reason to go above 30*30 filters, and are there any reasons to go below this number, assuming that kernel, stride and input_shape remain the same?

If you knew you were looking for one specific filter, would you have to use the maximum amount of filters to ensure that the one you were looking for was present, or can you include it another way?

",11795,,2444,,2/22/2020 2:30,2/22/2020 2:46,How do I optimize the number of filters in a convolution layer?,,1,0,,,,CC BY-SA 4.0 18152,1,,,2/20/2020 15:27,,2,42,"

I'm a newbie in neural networks. I'm trying to fit my neural network that has 3 different outputs:

  1. semantic segmentation,
  2. box mask and
  3. box coordinates.

When my model is training, the loss of semantic segmentation and box coordinates are decreasing, but the loss of the box mask is increasing too much.

My neural network is a CNN and it's based on Chargrid from here. The architecture is this:

For semantic segmentation outputs, it's expected to have 15 classes, for box mask it's expected to have 2 classes (foreground vs background) and for box coordinates it's expected to have 5 classes (1 for each corner of bounding box + 1 for None).

Loss details

Step 1

Here's the loss/accuracy for each of the three outputs at the end of the first step.

  1. Semantic segmentation (Output 1) - 0.0181/0.958
  2. Box mask (Output 2) - 13.88/0.946
  3. Box coordinates (Output 3) - 0.2174/0.0000000867

Last step

Here's the loss/accuracy at the last step.

  1. Semantic segmentation (Output 1) - 0.0157/0.963
  2. Box mask (Output 2) - 73.02/0.935
  3. Box coordinates (Output 3) - 0.06/0.82

Is that normal? How can I interpret these results?

I will leave the output of the model fit below.

",33545,,2444,,2/20/2020 23:25,2/20/2020 23:25,Why is the loss of one of the outputs of a model with multiple outputs increasing while the others are decreasing?,,0,4,,,,CC BY-SA 4.0 18153,2,,18069,2/20/2020 18:00,,1,,"

I found a method to do it in the paper Cross-Modality Personalization for Retrieval (2020, accessed: 20-Feb-2020).

Representation. For images, we extract Inception-v4 CNN features [36]. We then mask the image convolution feature with the BubbleView saliency map, by resizing the saliency map to the convolution feature size and multiplying them together. Finally, average pooling is performed to obtain a 1536-dimensional feature vector. We represent textual descriptions as their average 200-dimensional Glove embedding [31]. For personality, we use a 10-dimensional feature vector containing the scores for the personality questions in [32]. Below, we describe how we learn projections of these representations that place them in the same feature space.

",25676,,2444,,2/20/2020 23:50,2/20/2020 23:50,,,,0,,,,CC BY-SA 4.0 18154,2,,18122,2/20/2020 19:48,,0,,"

I found the issue; I should have done more unit testing. Upon computing the batch loss before backpropagation, one of the dimensions of the ""prediction"" tensor was not corresponding to the ""truth"" tensor. The shapes match, but the content is not what it was supposed to be. This is due to how the NLL loss is implemented in PyTorch, which I was not aware of...

",33658,,,,,2/20/2020 19:48,,,,0,,,,CC BY-SA 4.0 18155,1,,,2/20/2020 21:07,,2,384,"

I am trying to test DQN on FrozenWorld environment in gym using TensorFlow 2.x. The update rule is (off policy) $$Q(s,a) \leftarrow Q(s,a)+\alpha (r+\gamma~ max_{a'}Q(s',a')-Q(s,a))$$

I am using an epsilon-greedy policy. In this environment, we get a reward only if we succeed. So I explored 100% of the time (pure exploration) until I had 50 successes. Then I saved the data of failures and successes in different bins. Then I sampled (with replacement) from these bins and used the samples to train the Q-network. However, no matter how long I train, the agent doesn't seem to learn.

The code is available in Colab. I have been working on this for a couple of days.

PS: I modified the code for SARSA and Expected SARSA; nothing works.

",31445,,2444,,2/20/2020 23:11,11/7/2022 5:06,Why isn't my implementation of DQN using TensorFlow on the FrozenWorld environment working?,,1,7,,,,CC BY-SA 4.0 18157,1,,,2/21/2020 0:18,,3,402,"

Does a fully convolutional network share the same translation invariance properties we get from networks that use max-pooling?

If not, why do they perform as well as networks which use max-pooling?

",32390,,2444,,1/1/2022 10:08,1/1/2022 10:08,Does a fully convolutional network share the same translation invariance properties we get from networks that use max-pooling?,,3,2,,,,CC BY-SA 4.0 18158,2,,18155,2/21/2020 0:28,,0,,"

I see at least 3 issues with your DQN code that need to be fixed:

  1. You should not have separate replay memories for successes/failures. Put all of your experiences in one replay memory and sample from it uniformly.

  2. Your replay memory is extremely small with only 2,000 samples. You need to make it significantly larger; try at least 100,000 up to 1,000,000 samples.

  3. Your batch_target is incorrect. You need to train on returns and not just rewards. In your train function, compute the 1-step return $r + \gamma \cdot \max_{a'} Q(s',a')$, remembering to set $\max_{a'} Q(s',a') = 0$ if $s'$ is terminal, and then pass it to model.fit() as your prediction target (see the sketch below).
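
A minimal sketch of the third point (the function and variable names are illustrative, not taken from your code; batch is assumed to be a uniform sample from a single replay memory):

import numpy as np

def one_step_targets(model, batch, gamma=0.99):
    # batch: arrays (states, actions, rewards, next_states, dones)
    # sampled uniformly from the single replay memory.
    states, actions, rewards, next_states, dones = batch
    targets = model.predict(states)         # start from the current Q(s, .)
    next_q = model.predict(next_states)     # Q(s', .) used for bootstrapping
    for i in range(len(states)):
        bootstrap = 0.0 if dones[i] else gamma * np.max(next_q[i])
        targets[i, actions[i]] = rewards[i] + bootstrap   # 1-step return
    return targets

# inside the training step:
# model.fit(states, one_step_targets(model, batch), verbose=0)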

",32070,,,,,2/21/2020 0:28,,,,7,,,,CC BY-SA 4.0 18159,2,,18157,2/21/2020 0:40,,0,,"

All convolutional networks (with or without max-pooling) are translation-invariant (AKA spatially invariant) because their filters slide over every position in the image. This means that if a pattern that ""matches"" a filter is present anywhere in the image, then at least one neuron should activate.

Max-pooling, on the other hand, has nothing to do with spatial invariance. It's simply a regularization technique to help reduce the number of parameters later in the network by downsizing activation layers within the network. This can help combat overfitting, although it's not strictly necessary. Alternatively, neural networks can achieve the same effect by using a convolutional layer with a stride of 2 instead of 1.
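
For example, both of the following layers downsample a $32 \times 32$ feature map by a factor of 2 (just a quick shape check in Keras, not a claim that they learn the same representation):

from tensorflow.keras import Input, Model, layers

inp = Input(shape=(32, 32, 16))
pooled = layers.MaxPooling2D(pool_size=2)(inp)                               # -> (16, 16, 16)
strided = layers.Conv2D(16, kernel_size=3, strides=2, padding='same')(inp)  # -> (16, 16, 16)
print(Model(inp, [pooled, strided]).output_shape)
# [(None, 16, 16, 16), (None, 16, 16, 16)]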

",32070,,,,,2/21/2020 0:40,,,,6,,,,CC BY-SA 4.0 18161,1,,,2/21/2020 1:25,,1,50,"

I'm working on a problem with a given dataset, where each training example is a binary matrix $X_i$ with dimension $(N_i, D_i)$ (think of a training example as a feature matrix); each entry is either 1 or 0.

Also, each training example $X_i$ has a corresponding label $Y_i$, which is a correlation matrix of dimension $(D_i, D_i)$.

My goal is to construct a model that takes the input $X_i$ and outputs $\hat{Y}_i$ that matches $Y_i$.

The major challenge here is that each training example $X_i$ can have a different dimensionality $(N_i, D_i)$: a different number of data points and a different number of variables.

I'm wondering if there is any neural network architecture that can handle a case like this?

",33704,,33704,,2/21/2020 1:55,2/21/2020 1:55,Can neural network be trained to solve this problem?,,0,5,,,,CC BY-SA 4.0 18162,1,18165,,2/21/2020 10:04,,3,169,"

Some sources consider the true negatives (TN) when computing the accuracy, while some don't.

Source 1: https://medium.com/greyatom/performance-metrics-for-classification-problems-in-machine-learning-part-i-b085d432082b

Source 2:https://www.researchgate.net/profile/Mohammad_Sorower/publication/266888594_A_Literature_Survey_on_Algorithms_for_Multi-label_Learning/links/58d1864392851cf4f8f4b72a/A-Literature-Survey-on-Algorithms-for-Multi-label-Learning.pdf

The first source includes the true negatives in the computation, i.e. it uses $\frac{\mathrm{TP} + \mathrm{TN}}{\mathrm{TP} + \mathrm{TN} + \mathrm{FP} + \mathrm{FN}}$, while the second one does not, i.e. it uses $\frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN} + \mathrm{FP}}$.

Which one of these must be considered for my multi-label model?

",25676,,2444,,2/21/2020 13:38,2/21/2020 18:08,Why is there more than one way of calculating the accuracy?,,1,1,,,,CC BY-SA 4.0 18163,1,,,2/21/2020 10:46,,5,117,"

The Sustainable Development Goals of the United Nations describe a normative framework which states what future development until 2030 should strive for. On a more abstract level a basic definition describes sustainable development as

development that meets the needs of the present without compromising the ability of future generations to meet their own needs.

Alone through the consumption of energy, AI technologies already have a (negative) impact on sustainability questions.

What AI applications already exist, are researched or are at least thinkable from which sustainability would benefit?

",33714,,2444,,6/11/2022 11:00,6/11/2022 11:00,What AI applications exist to solve sustainability issues?,,1,0,,,,CC BY-SA 4.0 18165,2,,18162,2/21/2020 13:13,,4,,"

In machine learning, the accuracy is usually defined as the number of correct predictions divided by the total number of predictions. The correct predictions are the true positives ($\mathrm {TP}$) and true negatives ($\mathrm {TN}$), so the usual formula to calculate the accuracy is the following one (your first one).

\begin{align} \text{Accuracy}=\frac {\mathrm {TP} + \mathrm {TN}}{\mathrm {TP} + \mathrm {TN} + \mathrm {FP} + \mathrm {FN}} \label{0}\tag{0} \end{align}

The following formula corresponds to the threat score ($\mathrm{TS}$) or critical success index ($\mathrm {CSI}$).

\begin{align} {\displaystyle \mathrm {TS} ={\frac {\mathrm {TP} }{\mathrm {TP} +\mathrm {FN} +\mathrm {FP} }}} \label{1} \tag{1} \end{align}

In section 7 of the paper A Literature Survey on Algorithms for Multi-label Learning, it is written

In traditional classification such as multi-class problems, accuracy is the most common evaluation criteria. Additionally, there exists a set of standard evaluation metrics that includes precision, recall, F-measure, and ROC area defined for single-label multi-class classification problems. However, in multi-label classification, predictions for an instance is a set of labels and, therefore, the prediction can be fully correct, partially correct (with different levels of correctness) or fully incorrect. None of these existing evaluation metrics capture such notion in their original form.

Then they say

Depending on the target problem, evaluation measures for multi-label data can be grouped into at least three groups: evaluating partitions, evaluating ranking and using label hierarchy. The first one evaluates the quality of the classification into classes, the second one evaluates if the classes are ranked in order relevance and the third one evaluates how effectively the learning system is able to take into account an existing hierarchical structure of the labels.

In section 7.1, the authors state

To capture the notion of partially correct, one strategy is to evaluate the average difference between the predicted labels and the actual labels for each test example, and then average over all examples in the test set. This approach is called example-based evaluations.

To capture the notion of partially incorrect in multi-label classification problems, they then use the following definition of accuracy (proposed in section 5.2 of the paper Discriminative Methods for Multi-labeled Classification)

\begin{align} \frac{1}{n} \sum_{i=1}^{n} \frac{|Y_i \cap Z_i|}{|Y_i \cup Z_i|} \label{2} \tag{2} \end{align}

where $Y_i$ is the true set of labels and $Z_i$ the predicted set of labels for the single instance (or observation) $i$, so $\frac{|Y_i \cap Z_i|}{|Y_i \cup Z_i|}$ is the accuracy for the instance $i$. If $|Y_i|=1$ and $|Z_i|=1$, then we have a single-label classification problem.

The metric \ref{2} does not correspond to the metric \ref{1}. In \ref{2}, the accuracy is calculated for each example (or instance) and then we average all these accuracies, because, for each example, you may have that one predicted label is correct, two predicted labels are correct, and so on (also depending on the number of labels you need to predict for each of the examples). In \ref{1}, there is no notion of multiple labels.
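
To make the metric \ref{2} concrete, here is a small worked example with made-up label sets:

def multilabel_accuracy(true_sets, pred_sets):
    # Average of |Y_i intersection Z_i| / |Y_i union Z_i| over all instances.
    per_instance = [len(y & z) / len(y | z) for y, z in zip(true_sets, pred_sets)]
    return sum(per_instance) / len(per_instance)

Y = [{'cat', 'dog'}, {'car'}]       # true label sets Y_i
Z = [{'cat'}, {'car', 'bus'}]       # predicted label sets Z_i
print(multilabel_accuracy(Y, Z))    # (1/2 + 1/2) / 2 = 0.5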

",2444,,2444,,2/21/2020 18:08,2/21/2020 18:08,,,,6,,,,CC BY-SA 4.0 18168,2,,18097,2/21/2020 14:37,,1,,"

Source: https://blog.easysol.net/building-ai-applications/

When the data is too big, complex and nonlinear, it's time to try deep learning. It's always good to try adding some layers to see if it can reduce the bias without leading to high variance.

Deep learning models can be tweaked (hyperparameters) and regularized (parameters), and it is worth the work.

",5351,,,,,2/21/2020 14:37,,,,0,,,,CC BY-SA 4.0 18169,1,,,2/21/2020 15:07,,2,637,"

I'm trying to solve a binary classification problem with AlexNet. I split the original dataset into training and validation datasets using a 70/30 ratio. I have trained my neural network with a dataset of 11200 images, and I obtained a training accuracy of 99% and a validation accuracy of 96%. At the end of the training, I saved my model's weights to a file.

After training, I loaded the saved weights into the same neural network. I chose 738 images out of the 11200 training images, tried to predict the class of each of them with my model, compared the predictions with the true labels, and calculated the accuracy percentage again: it was 74%.

What is the problem here? I guess its accuracy should be about 96% again.

Here's the code that I'm using.

prelist=[]
for i in range(len(x)):
    prediction = model.predict_classes(x[i])
    prelist.append(prediction)
count = 0
for i in range(len(x)):
    if(y[i] == prelist[i]):
        count = count + 1
test_precision = (count/len(x))*100
print (test_precision)

When I use predict_classes on the 11200 images that I used to train the neural network, compare the results with the true labels, and calculate the accuracy again, it is 91%.

",33626,,2444,,1/17/2021 19:24,10/9/2022 22:05,Why am I getting a difference between training accuracy and accuracy calculated with Keras' predict_classes on a subset of the training data?,,1,0,,,,CC BY-SA 4.0 18171,1,18172,,2/21/2020 16:23,,3,112,"

I am learning about policy gradient methods from the Deep RL Bootcamp by Peter Abbeel and I am a bit stumbled by the math presented. In the lecture, he derives the gradient logarithm likelihood of a trajectory to be

$$\nabla log P(\tau^{i};\theta) = \Sigma_{t=0}\nabla_{\theta}log\pi(a_{t}|s_t, \theta).$$

Is $\pi(a_{t} | s_{t}, \theta)$ a distribution or a function? After all, a derivative can only be taken with respect to a function. My understanding is that $\pi(a_{t} | s_{t}, \theta)$ is usually represented as a distribution over actions given states, since the input of a neural network for policy gradients would be $s_t$ and the output would be $\pi(a_t | s_t)$, using the model weights $\theta$.

",32780,,2444,,2/21/2020 18:27,2/21/2020 18:29,"In the policy gradient equation, is $\pi(a_{t} | s_{t}, \theta)$ a distribution or a function?",,1,0,,,,CC BY-SA 4.0 18172,2,,18171,2/21/2020 18:15,,4,,"

First, the derivative is usually taken with respect to a variable (input) of the function. Hence the notation $\frac{df}{dx}$ for some function $f(x)$.

If you look at your equation more carefully

$$\nabla log P(\tau^{i};\theta) = \Sigma_{t=0}\nabla_{\theta}log\pi(a_{t}|s_t, \theta).$$

You will see that the gradient is taken with respect to $\theta$, which are the parameters (i.e. a vector) e.g. of your neural network, that is, $\nabla_{\theta}$.

In this case, it doesn't really matter whether $\pi$ represents a distribution or not (for some specific value of $\theta$), but you're right that $\pi$ often represents a probability distribution over the possible actions (given a specific state). In any case, $\pi$ is a function of the parameters $\theta$ (i.e. in the case of a distribution, $\pi_{\theta}$ is a family of distributions for all possible values of $\theta$), i.e. if you change $\theta$ the outputs of $\pi$ also change, so you can take the derivative of it with respect to $\theta$.
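
As a tiny illustration (a sketch with arbitrary sizes, not tied to any particular algorithm), this is how you could compute $\nabla_{\theta}\log\pi(a_{t}|s_t, \theta)$ for a softmax policy in TensorFlow 2:

import tensorflow as tf

n_state_features, n_actions = 4, 2
policy_net = tf.keras.Sequential(
    [tf.keras.layers.Dense(n_actions, input_shape=(n_state_features,))]  # outputs logits
)

state = tf.random.normal((1, n_state_features))   # a single state s_t
action = 1                                        # a single action a_t

with tf.GradientTape() as tape:
    logits = policy_net(state)
    log_probs = tf.nn.log_softmax(logits)
    log_pi = log_probs[0, action]                 # log pi(a_t | s_t, theta)

grads = tape.gradient(log_pi, policy_net.trainable_variables)  # gradient w.r.t. theta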

",2444,,2444,,2/21/2020 18:29,2/21/2020 18:29,,,,0,,,,CC BY-SA 4.0 18173,2,,18163,2/21/2020 19:04,,4,,"

The paper The role of artificial intelligence in achieving the Sustainable Development Goals (2020, published in Nature) should contain the information you're looking for.

In the introduction, the authors write

Here we present and discuss implications of how AI can either enable or inhibit the delivery of all 17 goals and 169 targets recognized in the 2030 Agenda for Sustainable Development. Relationships were characterized by the methods reported at the end of this study, which can be summarized as a consensus-based expert elicitation process, informed by previous studies aimed at mapping SDGs interlinkages. A summary of the results is given in Fig. 1 and the Supplementary Data 1 provides a complete list of all the SDGs and targets, together with the detailed results from this work.

For example, as stated in the supplementary data, goal 1 is

End poverty in all its forms everywhere

and the first target (1.1.) of goal 1 is

By 2030, eradicate extreme poverty for all people everywhere, currently measured as people living on less than $1.25 a day 

Then the authors suggest that, according to their studies, AI may act as an inhibitor or enabler (i.e. it may be used to fight poverty) for this target.

We identified in the literature studies suggesting that AI may be an inhibitor for this target, due to the potential increase in inequalities which would hinder the achievement of this goal (1). Other references however identify AI as an enabler for this goal, in the context of using satellite data analysis to track areas of poverty and to foster international collaboration (2).

Therefore, techniques for satellite data analysis are one of the AI techniques that may be used to tackle suistainability issues.

",2444,,2444,,2/21/2020 19:17,2/21/2020 19:17,,,,2,,,,CC BY-SA 4.0 18175,2,,18082,2/22/2020 1:11,,0,,"

Financial information companies spend a very large amount of effort to do this kind of thing properly, and their models are generally proprietary, so the real answer is not something you'll be able to get on a public site.

Of course, you can achieve a very simple approximation with a literal diff on the texts, and a more sophisticated one by using out-of-the-box NLP or computer vision techniques, but you're probably not going to get close to the performance of factset or other competitors using those tools.

",16909,,,,,2/22/2020 1:11,,,,0,,,,CC BY-SA 4.0 18177,2,,18151,2/22/2020 2:35,,3,,"

Am I right in thinking that because there are only newImageX * newImageY patterns in the 32 x 32 image, that the maximum amount of filters should be newImageX * newImageY, and any more would be redundant?

Your assumption is wrong. If you have a $32 \times 32$ image (so consider only grayscale images), then you have $256^{32 \times 32}$ possible patterns (i.e. a huge number of possible patterns!), given that, for each pixel, you can have $256$ values, and there are $32\times 32$ pixels. Of course, for a human, many of these patterns may be very similar to each other (or even indistinguishable), but the way you measure similarity, in this case, is another hard problem. Hence also the difficulty in choosing the number of filters.
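To get a feel for how large that number is (a quick back-of-the-envelope computation):

# Number of distinct 32x32 grayscale images with 256 intensity levels per pixel.
num_patterns = 256 ** (32 * 32)
print(len(str(num_patterns)))  # 2467: the count has about 2467 decimal digits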

",2444,,2444,,2/22/2020 2:46,2/22/2020 2:46,,,,1,,,,CC BY-SA 4.0 18178,1,,,2/22/2020 4:12,,2,719,"

I am currently studying constraint satisfaction problems and have come across two heuristics for variable selection. The minimum remaining values(MRV) heuristic and the degree heuristic.

The MRV heuristic tells you to choose the variable that has the fewest remaining legal values, while the degree heuristic tells you to choose the variable that is involved in the largest number of constraints on the remaining unassigned variables.
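To make the two selection rules concrete, here is a minimal sketch (assuming a toy CSP representation where legal_values[v] is the current domain of variable v and degree[v] counts its constraints with still-unassigned variables; all names are illustrative):

def select_by_mrv(unassigned, legal_values):
    # MRV: pick the variable with the fewest remaining legal values.
    return min(unassigned, key=lambda v: len(legal_values[v]))

def select_by_degree(unassigned, degree):
    # Degree heuristic: pick the variable constraining the most unassigned variables.
    return max(unassigned, key=lambda v: degree[v])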

Can these two heuristics for variable selection be used in conjunction with each other? Some books say that the degree heuristic can be used to select the first variable. Which heuristic is better to follow after that: MRV or the degree heuristic?

The picture shows the case where degree heuristic is used. If MRV were to be used, then at the 3rd step, the leftmost side of the map would be coloured blue.

",32780,,2444,,2/22/2020 12:20,2/22/2020 12:20,Can the degree and minimum remaining values heuristics be used in conjunction?,,0,0,,,,CC BY-SA 4.0 18180,1,,,2/22/2020 10:42,,1,45,"

What is the difference between a generalised estimating equation (GEE) model and a recurrent neural network (RNN) model, in terms of what these two models are doing? Apart from the differences in the structure of these two models, where the GEE is an extension of the generalised linear model (GLM) and the RNN has a neural network structure, it seems to me that these two models are doing the same thing. Is that correct?

",33734,,2444,,2/22/2020 12:29,2/22/2020 12:29,What is the difference between an generalised estimating equation and a recurrent neural network?,,0,2,,,,CC BY-SA 4.0 18181,1,,,2/22/2020 15:15,,3,285,"

Tensor networks (check this paper for a review) are a numerical method originally introduced in condensed matter physics to model complex quantum systems. Roughly speaking, such systems are described by a very high-dimensional tensor (where the indices take a number of values scaling exponentially with the number of system constituents) and tensor networks provide an efficient representation of the latter as an outer product and contraction of many low-dimensional tensors.

More recently, a specific kind of tensor network (called Matrix Product State in physics) found interesting applications in machine-learning through the so-called Tensor-Train decomposition (I do not know of a precise canonical reference in this context, so I will abstain from citing anything).

Now, over the last few years, several works from the physics community seemed to push for a generalized use of tensor networks in machine learning (see this paper, a second one and a third one and this article from Google AI for context). As a physicist, I am glad to learn that tools initially devised for physics may find interdisciplinary applications. However, at the same time, my critical mind tells me that from the machine learning research community's perspective, these results may not look that intriguing. After all, machine learning is now a very established field, and it probably takes more than a suggestion for a new machine learning model and a basic benchmarking on a trivial dataset (such as MNIST), which is essentially what the papers do, in my humble opinion, to attract any attention in the area. Besides, as far as I know, there already exists quite a solid body of knowledge on tensor analysis techniques for machine learning (e.g. tensor decompositions), which may cast doubt on the originality of the contribution.

I would therefore be very curious to have the opinion of machine learning experts on this line of research: is it really an interesting direction to look into, or is it just about surfing on the current machine learning hype with a not-so-serious proposal?

",33737,,,,,10/16/2021 2:06,Using tensor networks as machine learning models,,1,0,,,,CC BY-SA 4.0 18182,1,,,2/22/2020 16:34,,2,201,"

When computing receptive field recursively through a CNN, does a transposed convolution affect the receptive field the same way that a convolution does if the kernel and stride is the same?

",19789,,,,,2/22/2020 16:34,How is the receptive field of a CNN affected by transposed convolution?,,0,0,,,,CC BY-SA 4.0 18183,1,,,2/22/2020 16:39,,1,27,"

I'm using Weights and Biases to do some hyperparameter sweeping for a supervised sequence-to-sequence problem I'm working on. One thing I noticed is that the sweeps with a gradually increasing number of hidden units tend to have a lower validation loss:

I'm wondering if this is generally true or just a function of my particular problem?

",23719,,,,,2/22/2020 16:39,"When stacking LSTM's, should the hidden units increase?",,0,0,,,,CC BY-SA 4.0 18184,1,,,2/22/2020 17:52,,1,303,"

I'm working on a project for my college to recognize traffic signs in pictures. I searched a lot but can't find the best method to do it.

Can someone recommend a paper, article, or even a GitHub link that describes the best way to achieve this? It would be helpful.

",33739,,2444,,9/11/2020 15:04,9/11/2020 15:04,What is the best way to detect and recognize traffic signs in a picture?,,2,0,,,,CC BY-SA 4.0 18185,2,,18184,2/22/2020 20:16,,1,,"

Maybe this is helpful: Recognising Traffic Signs With 98% Accuracy Using Deep Learning, by Eddie Forson.

Greetings Mario

",33742,,5763,,2/24/2020 10:29,2/24/2020 10:29,,,,0,,,,CC BY-SA 4.0 18186,1,,,2/22/2020 21:31,,2,27,"

Is there a program under development that uses AI technology, like Siri, to ""hold hands"", so to speak, with a language learner and coach them on accent and colloquial expressions, or to let them guide the language-learning process using an archive of language knowledge? Also, could this sort of program be used to learn things in a language one already knows, or in a new language, say for the purposes of travel, or to learn about related hyperlinks in an online database?

",33743,,,,,2/22/2020 21:31,Language Learning feedback with AI,,0,0,,,,CC BY-SA 4.0 18187,1,,,2/22/2020 22:59,,3,1548,"

Does anybody know a simple proof of the convergence of the TD(0) value function prediction algorithm?

",33227,,2444,,10/27/2021 11:11,10/27/2021 11:11,Is there a simple proof of the convergence of TD(0)?,,1,0,,,,CC BY-SA 4.0 18188,1,,,2/23/2020 0:53,,1,444,"

I am using the PyTorch version of PPO and I have image input that I need to process with convolutional neural networks. Are there any examples of how to set up the network? I know that Stable Baselines supports this to some extent, but I had better performance with Spinning Up, so I would prefer to keep using it.

",33744,,,,,9/17/2021 16:06,OpenAI spinning up convolutional networks with PPO,,1,0,,9/17/2021 22:29,,CC BY-SA 4.0 18189,1,,,2/23/2020 4:31,,2,122,"

When a new node is added, the previous connection is disabled and not removed.

Is there any situation in which a connection gene is removed? For example, in the above diagram connection gene with innovation number 2 is not present. It could be because some other genome used that innovation number for a different connection that isn't present in this genome. But are there cases where a connection gene has to be removed?

",33746,,2444,,2/23/2020 12:53,2/29/2020 14:12,Are connections genes in a genome ever deleted or just disabled?,,2,0,,,,CC BY-SA 4.0 18192,1,,,2/23/2020 11:57,,2,53,"

Currently, many algorithms are available for image inpainting. In my application, I have some special restriction on training dataset-

  1. Let's consider the training dataset of human facial images.
  2. Although all human faces have the same general structure, they may have subtle differences depending on racial characteristics.
  3. Consider that in the training dataset, we have ten facial images from each race.

Now, in my learning algorithm, can we come up with a two-step method, where, in the first step, we learn the general facial structure more accurately using all the training data, and, in the second step, we learn the subtle features of each race from only the ten images associated with that race? It might restore a distorted image more accurately.

Suppose we have a distorted facial image of a person from race 'A', where the nasal area of the image is lost. With the first step, we can learn the nasal structure more accurately by using all of the training data, and in the second step, using only the ten images associated with race 'A', we can fine-tune the generated data. As we have only ten images for race 'A', if we used only that small subset to learn the whole model, we would probably not be able to capture the general structure of the face in the first place.

P.S. I am not from a Computer Science/ML background, so my problem description is probably a little bit vague. It would be great if someone provided an edit/tag suggestion.

",33750,,33750,,2/24/2020 4:27,2/24/2020 4:27,Suggestion on image inpainting algorithm,,0,0,,,,CC BY-SA 4.0 18193,1,,,2/23/2020 13:25,,2,570,"

I can't understand how perturbing the action generated by the actor network in DDPG, by adding a noise term, helps with exploration.

",33751,,,,,2/23/2020 13:25,How does adding noise to the action in DDPG help in learning?,,0,7,,,,CC BY-SA 4.0 18196,2,,18184,2/23/2020 18:15,,1,,"

Here are some articles, the first three include code:

",5763,,,,,2/23/2020 18:15,,,,0,,,,CC BY-SA 4.0 18198,1,18199,,2/23/2020 20:36,,5,17206,"

What is the difference between LSTM and RNN? I know that RNN is a layer used in neural networks, but what exactly is an LSTM? Is it also a layer with the same characteristics?

",33759,,2444,,12/12/2021 12:04,12/12/2021 12:04,What is the difference between LSTM and RNN?,,1,0,,,,CC BY-SA 4.0 18199,2,,18198,2/23/2020 21:39,,6,,"

RNNs have recurrent connections and/or layers

You can describe a recurrent neural network (RNN) or a long short-term memory (LSTM), depending on the context, at different levels of abstraction. For example, you could say that an RNN is any neural network that contains one or more recurrent (or cyclic) connections. Or you could say that layer $l$ of neural network $N$ is a recurrent layer, given that it contains units (or neurons) with recurrent connections, but $N$ may not contain only recurrent layers (for example, it may also be composed of feedforward layers, i.e. layers with units that contain only feedforward connections).

In any case, a recurrent neural network is almost always described as a neural network (NN) and not as a layer (this should also be obvious from the name).

LSTM can refer to a unit, layer or neural network

On the other hand, depending on the context, the term "LSTM" alone can refer to an

  • LSTM unit (or neuron),
  • an LSTM layer (many LSTM units), or
  • an LSTM neural network (a neural network with LSTM units or layers).

People may also refer to neural networks with LSTM units as LSTMs (plural version of LSTM).

LSTMs are RNNs

An LSTM unit is a recurrent unit, that is, a unit (or neuron) that contains cyclic connections, so an LSTM neural network is a recurrent neural network (RNN).

LSTM units/neurons

The main difference between an LSTM unit and a standard RNN unit is that the LSTM unit is more sophisticated. More precisely, it is composed of the so-called gates that supposedly regulate better the flow of information through the unit.

Here's a typical representation (or diagram) of an LSTM (more precisely, an LSTM with a so-called peephole connection).

This can actually represent either an LSTM unit (in which case the variables are scalars) or an LSTM layer (in which case the variables are vectors or matrices).

You can see from this diagram that an LSTM unit (or layer) is composed of gates, denoted by

  • $i_t$ (the input gate: the gate that regulates the input into the unit/layer),
  • $o_t$ (the output gate: the gate that regulates the output from the unit)
  • $f_t$ (the forget gate: the gate that regulates what the cell should forget)

and recurrent connections (e.g. the connection from the cell into the forget gate and vice-versa).

It's also composed of a cell, which is the only thing that a neuron of a "vanilla" RNN contains.

To understand the details (i.e. the purpose of all these components, such as the gates), you could read the paper that originally proposed the LSTM by S. Hochreiter and J. Schmidhuber. However, there may be other more accessible and understandable papers, articles or video lessons on the topic, which you can find on the web.
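To make the role of the gates more concrete, here is a minimal NumPy sketch of a single LSTM step (the standard formulation without the peephole connections shown in the diagram; all weight names and sizes are illustrative):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])   # input gate
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])   # forget gate
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])   # output gate
    g_t = np.tanh(W['g'] @ x_t + U['g'] @ h_prev + b['g'])   # cell candidate
    c_t = f_t * c_prev + i_t * g_t                           # new cell state
    h_t = o_t * np.tanh(c_t)                                 # new hidden state
    return h_t, c_t

n_in, n_hid = 3, 4
rng = np.random.default_rng(0)
W = {k: rng.normal(size=(n_hid, n_in)) for k in 'ifog'}
U = {k: rng.normal(size=(n_hid, n_hid)) for k in 'ifog'}
b = {k: np.zeros(n_hid) for k in 'ifog'}
h, c = lstm_step(rng.normal(size=n_in), np.zeros(n_hid), np.zeros(n_hid), W, U, b)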

LSTMs also have recurrent connections!

Given the presence of cyclic connections, any recurrent neural network (either an LSTM or not) may be represented as a graph that contains one or more cyclic connections. For example, the following diagram may represent both a standard/vanilla RNN or an LSTM neural network (or maybe a variant of it, e.g. the GRU).

When should you use RNNs and LSTMs?

RNNs are particularly suited for tasks that involve sequences (thanks to the recurrent connections). For example, they are often used for machine translation, where the sequences are sentences or words. In practice, an LSTM is often used, as opposed to a vanilla (or standard) RNN, because it is usually more effective at learning long-range dependencies. In fact, the LSTM was introduced to solve a problem that standard RNNs suffer from, i.e. the vanishing gradient problem. (Now, for these tasks, there are also the transformers, but the question was not about them).

",2444,,2444,,10/14/2021 12:16,10/14/2021 12:16,,,,0,,,,CC BY-SA 4.0 18201,1,18246,,2/24/2020 0:12,,2,665,"

Let

$$ \nabla_\theta J(\pi_\theta) = \mathbb{E}_{\tau \sim \pi_\theta} \left[ \sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t|s_t) R(\tau) \right] $$ be the expanded expression for a simple policy gradient, where $\theta$ are the parameters of the policy $\pi$, $J$ denotes the expected return function, $\tau$ is a trajectory of states and actions, $t$ is a timestep index, and $R$ gives the sum of rewards for a trajectory.

Let $\mathcal{D}$ be the set of all trajectories used for training. An estimator of the above policy gradient is given by

$$ \hat{g} = \frac{1}{|\mathcal{D}|} \sum_{\tau \in \mathcal{D}} \sum_{t = 0}^T \nabla_\theta \log \pi_\theta (a_t|s_t) R(\tau). $$ A loss function associated with this estimator, given a single trajectory with $T$ timesteps, is given by $$ L(\tau) = -\sum_{t = 0}^T \log \pi_\theta (a_t|s_t) R(\tau). $$ Minimizing $L(\tau)$ by SGD or a similar algorithm will result in a working policy gradient implementation.
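For concreteness, this is roughly how I compute that loss for a single trajectory in code (a PyTorch-style sketch, where log_probs and the trajectory return are assumed to be computed elsewhere):

import torch

def policy_gradient_loss(log_probs, trajectory_return):
    # log_probs: tensor of log pi_theta(a_t | s_t) for t = 0..T (requires grad).
    # trajectory_return: scalar R(tau), the sum of rewards along the trajectory.
    return -(log_probs * trajectory_return).sum()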

My question is what is the proper terminology for this loss function? Is it an (unbiased?) estimator for the expected returns $J(\pi_\theta)$ if summed over all trajectories? If someone is able to provide a proof that minimizing $L$ maximizes $J(\pi_\theta)$, or point me to a reference for this, that would be greatly appreciated.

",33762,,,,,2/25/2020 20:50,Is the negative of the policy loss function in a simple policy gradient algorithm an estimator of expected returns?,,1,0,,,,CC BY-SA 4.0 18202,1,,,2/24/2020 4:24,,1,18,"

Good day everyone,

I would just like to ask if anyone part of a lab or company doing research on aerial robotics has any suggestions of a good platform for deploying computer vision algorithms for aerial robots?

Currently our lab has a set of DJI Matrice drones, but they are too heavy for our liking. We really wanted to use the Skydio R2 drone for our future research projects but found out later that the SDK does not allow access for implementing our own deep learning or reinforcement learning networks (it only allows the user to code their own preset movements in Python). We also took a look at the Parrot AR drones but found that they were discontinued and do not have the computing power that the Skydio has, although, as an alternative, they do have the capability to stream the video feed to an external computer for online processing instead of on-board.

I suggested that we stick with the DJI Matrice and just use an Nvidia Jetson for deployment but I am still curious to know if anyone knows of other available platforms. Does anyone know of a more compact platform available for research purposes?

Thank you :)

",17685,,,,,2/24/2020 4:24,Drone Deployment Platform for Neural Networks,,0,0,,,,CC BY-SA 4.0 18204,1,,,2/24/2020 5:29,,8,435,"

I was listening to a podcast on the topic of AGI and a guest made an argument that if strong music generation were to happen, it would be a sign of "true" intelligence in machines because of how much creative capability creating music requires (even for humans).

It got me wondering, what other events/milestones would convince someone, who is more involved in the field than myself, that we might have implemented an AGI (or a "highly intelligent" system)?

Of course, the answer to this question depends on the definition of AGI, but you can choose a sensible definition of AGI in order to answer this question.

So, for example, maybe some of these milestones or events could be:

  • General conversation
  • Full self-driving car (no human intervention)
  • Music generation
  • Something similar to AlphaGo
  • High-level reading/comprehension

What particular event would convince you that we've reached a high level of intelligence in machines?

It does not have to be any of the events I listed.

",22840,,2444,,1/18/2021 18:22,2/2/2021 9:30,What event would confirm that we have implemented an AGI system?,,6,0,,,,CC BY-SA 4.0 18205,1,,,2/24/2020 7:27,,2,138,"

I have a set of images, which are quite large in size (1000x1000), and as such do not easily fit into memory. I'd like to compress these images, such that little information is missing. I am looking to use a CNN for a reinforcement learning task which involves a lot of very small objects which may disappear when downsampling. What is the best approach to handle this without downscaling/downsampling the image and losing information for CNNs?

",33058,,33058,,2/24/2020 21:49,3/2/2020 22:56,How can I perform lossless compression of images so that they can be stored to train a CNN?,,2,0,,,,CC BY-SA 4.0 18206,1,,,2/24/2020 8:11,,2,11505,"

I have spent some time searching Google and wasn't able to find out what kind of optimization algorithm is best for binary classification when images are similar to one another.

I'd like to read some theoretical proofs (if any) to convince myself that particular optimization has better results over the rest.

And, similarly, what kind of optimizer is better for binary classification when images are very different from each other?

",31870,,2444,,2/24/2020 12:21,3/2/2020 22:29,What kind of optimizer is suggested to use for binary classification of similar images?,,3,0,,,,CC BY-SA 4.0 18207,2,,18205,2/24/2020 9:48,,1,,"

Your input image size and memory are not directly related. When using CNNs, there are multiple hyperparameters that affect the video memory (if you are using a GPU) or physical memory (if you are using a CPU). All the frameworks these days use simplified data-loaders; for instance, in TensorFlow or PyTorch, you are required to write a data-loader that takes in the hyperparameters mentioned below and fits the data into VRAM/RAM, and this is strictly dependent upon your batch size: the memory occupied on the VRAM is directly related to the batch size.

Whatever your image size may be, while writing the data-loader you have to specify the transformation parameters; during the training phase, the data-loader will automatically load the required images into memory according to the batch size you have specified. As for image compression, it is an irrelevant parameter, at least for most generic use cases; the most relevant transformation hyperparameters are

  1. Scaling
  2. Cropping
  3. Random flip
  4. Normalization of the RGB values
  5. ColorJitter
  6. Padding
  7. RandomAffine

And many more.

PyTorch provides really good transforms for its data-loading pipeline; please do check https://pytorch.org/docs/stable/torchvision/transforms.html.

For Tensorflow, have a look at https://keras.io/preprocessing/image/.
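For instance, a typical torchvision pre-processing pipeline combining several of the transformations listed above could look like this (the exact sizes and normalization statistics are just placeholders):

from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(256),                  # scaling
    transforms.RandomCrop(224),              # cropping
    transforms.RandomHorizontalFlip(),       # random flip
    transforms.ColorJitter(brightness=0.2),  # color jitter
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # normalization of RGB values
                         std=[0.229, 0.224, 0.225]),
])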

",25676,,2444,,2/24/2020 13:15,2/24/2020 13:15,,,,2,,,,CC BY-SA 4.0 18208,1,,,2/24/2020 11:33,,2,568,"

I am developing a NEAT Flappy Bird game, and it doesn't work: the system stays stupid for 300 generations. I chose tanh() for the activation, just because it's included in JS.

I can't find a good discussion on the internet about activation functions in the context of neuroevolution; most of what I see is about derivatives and other gradient-descent issues, which I suspect are irrelevant to forward-only networks.

If you need a fixed point to answer, I have 8 inputs, one output and the problem is a classification (""jump"", ""don't jump""). But please explain your answer. I currently use tanh() for all the hidden and output nodes, and the output is considered ""jump"" if the output neuron value is >0.85

For some context, the code is here: https://github.com/nraynaud/nraygame and the game here: https://nraynaud.github.io/nraygame/

",33773,,,,,1/17/2023 20:07,How to choose the activation function in neuroevolution?,,1,11,,,,CC BY-SA 4.0 18209,1,,,2/24/2020 12:07,,1,60,"

I am trying to train a CNN in Keras to learn a general representation of a Lua module, e.g. requires at the beginning, local variables, local functions, the interface (returns), and in between some runnable code (labeled ""other""). For each module (source code), I generate an AST, which I then encode in a JSON file. The file contains the order of the node in the AST, the text it represents, and the type of node it is (require, variable, function, interface, other). It can contain other metrics, but so far I have settled on these three, where only the order and type of node will be converted into a vector to serve as input to the CNN.

Now, I don't have any labels for these modules (I want to treat one module as one input to the CNN), so I have concluded that I need to use unsupervised learning. In Keras, this should translate to using autoencoders, where I use the encoder part to learn weights for the representation and then connect a fully-connected layer and generate an output.

Before I specify the output, I want to specify the input more closely. In my mind, it should be a 3D vector, let's say (x, y, z): x represents the number of nodes of an AST that are taken into consideration, y represents the local neighborhood of said node (for now I have settled on 5 nodes), and z should represent the node itself, i.e. the order and type of node. With that, I would want the output of the network to be in (almost) the same dimension: I want x outputs, one for every node that was taken as input, and a number (ideally between 0 and 1) to specify how ""correct"" the node under consideration is with respect to the learned representation.

My question, as a beginner to neural networks, is: how feasible is this, and are there any points which are simply impossible to do or wrongly interpreted on my part?

",33509,,,,,2/24/2020 12:07,Training an unsupervised convolutional neural network to learn a general representation of a Lua module,,0,2,,,,CC BY-SA 4.0 18213,1,,,2/24/2020 14:03,,1,121,"

I am trying to wrap my head around how weights get updated during back propagation. I've been going through a school book and I have the following setup for an ANN with 1 hidden layer, a couple of inputs and a single output.

The first line gives the error that will be used to update the weights going from the hidden layer to the output layer. $t$ represents the target output, $a$ represents the activation and the formula is using the derivative for the sigmoid function $(a(1-a))$. Then, the weights are updated with the learning rate, multiplied by the error and the activation of the given neuron which uses the weight $w_h$. Then, the next step is moving on to calculate the error with respect to the input going into the hidden layer from the input layer (sigmoid is the activation function on both the hidden and the output layer for this purpose). So we have the total error * derivative of the activation for the hidden layer * the weight for the hidden layer.

I am following this train of thought as it was provided, but my question is: if the activation is changed to $\tanh$, for example, whose derivative is $1-f(x)^2$, would the error formula become $(t-a)(1-a^2)$, where $a$ represents the activation, so that $1-a^2$ is the derivative of $\tanh$?
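To make my question concrete, here is a small NumPy sketch of the output-layer error term I have in mind for both activations (z and t are made-up values):

import numpy as np

z = 0.3          # pre-activation of the output neuron
t = 1.0          # target output

# Sigmoid activation: delta = (t - a) * a * (1 - a)
a_sig = 1.0 / (1.0 + np.exp(-z))
delta_sig = (t - a_sig) * a_sig * (1.0 - a_sig)

# Tanh activation: delta = (t - a) * (1 - a**2), since d/dz tanh(z) = 1 - tanh(z)**2
a_tanh = np.tanh(z)
delta_tanh = (t - a_tanh) * (1.0 - a_tanh ** 2)

print(delta_sig, delta_tanh)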

",33779,,2193,,2/24/2020 15:12,2/24/2020 15:12,Function to update weights in back-propagation,,0,1,,,,CC BY-SA 4.0 18214,2,,3291,2/24/2020 15:29,,2,,"

Some of the fundamental mathematical concepts required in ML field are as follows:

  • Linear Algebra
  • Analytic Geometry
  • Matrix Decompositions
  • Vector Calculus
  • Probability and Distribution
  • Continuous Optimization

A very recent book availble at Mathematics for Machine Learning covers all these aspects and more.

",33781,,33781,,2/24/2020 18:01,2/24/2020 18:01,,,,0,,,,CC BY-SA 4.0 18215,1,,,2/24/2020 15:35,,1,58,"

I have very high resolution images from LANDSAT 8 (5 out of 12 bands), which are of various administrative regions of a country. Each image is of variable dimensions, but generally of the order of [1500 X 1200 X 5].

My aim is to predict the population density from urban features visible on the images.

Since the number of images (and hence data points) is small, what is the best implementation strategy to build a model that can predict a value for population density based on these images?

",33781,,33781,,2/24/2020 18:08,2/24/2020 18:08,Predicting population density from satellite imagery,,0,4,,,,CC BY-SA 4.0 18216,2,,18189,2/24/2020 16:35,,1,,"

The original paper never goes further than disabling the link, but I have seen implementations on github that did have a probability of deleting the link.

python-neat is such an example.

",33773,,,,,2/24/2020 16:35,,,,0,,,,CC BY-SA 4.0 18217,1,,,2/24/2020 18:00,,1,417,"

Does the learning rate parameter $\alpha$ require the Robbins-Monro conditions below for the TD(0) algorithm to converge to the true value function of a policy?

$$\sum \alpha_t =\infty \quad \text{and}\quad \sum \alpha^{2}_t <\infty$$
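For concreteness, a standard example of a schedule that satisfies both conditions is $\alpha_t = 1/t$, since

$$\sum_{t=1}^{\infty} \frac{1}{t} = \infty \quad \text{and}\quad \sum_{t=1}^{\infty} \frac{1}{t^2} = \frac{\pi^2}{6} < \infty.$$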

",33227,,2444,,2/24/2020 22:00,2/24/2020 22:00,Does TD(0) prediction require Robbins-Monro conditions to converge to the value function?,,1,1,0,,,CC BY-SA 4.0 18218,2,,18126,2/24/2020 18:41,,1,,"

There are many frameworks which allow you to do that.

One of them, which supports many different techniques for visualization, can be found here: https://github.com/marcoancona/DeepExplain

",33787,,,,,2/24/2020 18:41,,,,0,,,,CC BY-SA 4.0 18219,1,,,2/24/2020 19:02,,1,25,"

I am building a Recurrent Neural network (LSTM) for predicting the number of days until a Pollen season starts (when the cumulative of the year exceeds X). One of the features I am including in my model is the weather forecast.

However, I do not feel confident about the way I defined the model while including this weather forecast. Currently, the 7-day weather forecast is included as one of the predictors. However, when the label (number of days until the season starts) is smaller than the forecast horizon, I am training the model on forecast data that is completely irrelevant for determining the start of the season (e.g. if the season starts in 2 days and I include the 7-day forecast as a predictor, I am also training the model on the 5 days after the season has already started, while these are completely irrelevant).

My feeling is that this is not right when training RNNs for survival analysis. Does anyone know a way to deal with this? Or does anyone have an example where someone dealt with a similar issue?

Thanks a lot!

",33791,,,,,2/24/2020 19:02,Recurrent neural Network for survival analyses: Dealing with forecast data as feature which can exceed the number of days untill a event occurs,,0,0,,,,CC BY-SA 4.0 18220,1,18430,,2/24/2020 20:00,,10,456,"

To give an example. Let's just consider the MNIST dataset of handwritten digits. Here are some things which might have an impact on the optimum model capacity:

  • There are 10 output classes
  • The inputs are 28x28 grayscale pixels (I think this indirectly affects the model capacity. eg: if the inputs were 5x5 pixels, there wouldn't be much room for varying the way an 8 looks)

So, is there any way of knowing what the model capacity ought to be? Even if it's not exact? Even if it's a qualitative understanding of the type "if X goes up, then Y goes down"?

Just to accentuate what I mean when I say "not exact": I can already tell that a 100 variable model won't solve MNIST, so at least I have a lower bound. I'm also pretty sure that a 1,000,000,000 variable model is way more than needed. Of course, knowing a smaller range than that would be much more useful!

",16871,,2444,,1/22/2021 15:51,1/22/2021 15:51,Are there any rules of thumb for having some idea of what capacity a neural network needs to have for a given problem?,,3,0,,,,CC BY-SA 4.0 18222,1,,,2/24/2020 20:41,,1,72,"

I've got an encoder-decoder model for character level English language spelling correction, it is pretty basic stuff with a two LSTM encoder and another LSTM decoder.

However, up until now, I have been pre-padding the input sequences, like below:

abc  -> -abc
defg -> defg
ad   -> --ad

And next I have been splitting the data into several groups with the same output length, e.g.

train_data = {'15': [...], '16': [...], ...}

where the key is the length of the output data and I have been training the model once for each length in a loop.

However, there has to be a better way to do this, such as padding after the EOS character etc. But if this is the case, how would I change the loss function so that this padding isn't counted into the loss?

",30823,,30823,,2/28/2020 7:31,2/28/2020 7:31,How to pad sequences during training for an encoder decoder model,,0,0,,,,CC BY-SA 4.0 18225,1,,,2/24/2020 21:31,,4,77,"

My question is more theoretical than practical. Let's say that I am training my cat classifier with a dataset that I feel is pretty representative of cat images in general. But then a new breed of cat is created that is distinct from other cats and it does not exist in my dataset.

My question is: is there a way to ensure that my model is still able to recognize this unseen breed, even though I didn't know it would come into existence when I originally trained my model?

I have been trying to answer this question by intentionally designing my validation and test sets such that they contain examples that are quite distantly related to those that exist in the training set (think of it like intentionally leaving out specific breeds of cats from the training set).

The results are interesting. For example, slight changes to parameters can dramatically change performance on the distantly related test examples, while not changing performance very much for the more closely related examples. I was wondering if anyone has done a deeper analysis of this phenomenon.

",33793,,2444,,12/12/2021 13:05,12/12/2021 13:05,Is there a way to ensure that my model is able to recognize an unseen example?,,1,6,,,,CC BY-SA 4.0 18226,2,,18217,2/24/2020 21:47,,1,,"

The paper Convergence of Q-learning: A Simple Proof (by Francisco S. Melo) shows (theorem 1) that Q-learning, a TD(0) algorithm, converges with probability 1 to the optimal Q-function as long as the Robbins-Monro conditions, for all combinations of states and actions, are satisfied. In other words, the Robbins-Monro conditions are sufficient for Q-learning to converge to the optimal Q-function in the case of a finite MDP. The proof of theorem 1 uses another theorem from stochastic approximation (theorem 2).

You are interested in the prediction problem, that is, the problem of predicting the expected return (i.e. a value function) from a fixed policy. However, Q-learning is also a control algorithm, given that it can find the optimal policy from the corresponding learned Q-function.

See also the question Why doesn't Q-learning converge when using function approximation?.

",2444,,2444,,2/24/2020 21:54,2/24/2020 21:54,,,,3,,,,CC BY-SA 4.0 18227,1,,,2/24/2020 23:08,,2,999,"

Suppose that we have 4 types of dogs that we want to detect (Golden Retriever, Black Labrador, Cocker Spaniel, and Pit Bull). The training data consists of png images of a data set of dogs along with their annotations. We want to train a model using YOLOv3.

Does the choice of optimizer really matter in terms of training the model? Would the Adam optimizer be better than the Adadelta optimizer? Or would they all basically be the same?

Would some optimizers be better because they allow most of the weights to achieve their "global" minima?

",32013,,2444,,9/12/2020 15:08,9/12/2020 15:16,Is the choice of the optimiser relevant when doing object detection?,,2,0,,,,CC BY-SA 4.0 18228,2,,17749,2/24/2020 23:19,,1,,"

You can't prune the nodes that are crossed out if we search from left to right in the tree using alpha-beta pruning. To do this analysis, we can pretend the right branch of the tree doesn't exist. (Branch C from the root.)

In the left branch (A) of the root Helen will get 2 or more.

In the middle branch (B) from the root after going down the left, Stavros will get 7 or less.

Now, we can ask what happens if we put different values for the crossed out branches. If putting different values doesn't change the value at the root, then pruning is correct.

Suppose that after the 7 on branch A at the second level we make the value of branch B to be -10. In this case the value of the second branch will be -10, and Helen will prefer the first branch at the root.

Suppose that after the 7 on branch A at the second level we make the value of branch B to be 100. In this case the value of the second branch will be 7, and Helen will prefer this branch.

Thus, the value of the crossed off nodes matters, and they cannot be pruned.
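For anyone who wants to reproduce this reasoning programmatically, here is a minimal generic alpha-beta sketch over a nested-list game tree (it is not tied to the exact tree in the question; plugging different values into the would-be-pruned leaves and re-running shows that the root value changes):

def alphabeta(node, alpha, beta, maximizing):
    # A node is either a number (leaf value) or a list of child nodes.
    if not isinstance(node, list):
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cut-off: the remaining children are pruned
        return value
    value = float('inf')
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break  # alpha cut-off
    return value

tree = [[2, 7], [7, -10]]  # analogous to branches A and B above
print(alphabeta(tree, float('-inf'), float('inf'), True))  # 2; replacing -10 with 100 gives 7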

You may find this tool useful for exploring alpha-beta on binary trees:

https://movingai.com/ab/

",17493,,,,,2/24/2020 23:19,,,,0,,,,CC BY-SA 4.0 18229,2,,18204,2/25/2020 2:29,,1,,"

(I don't want to directly answer the question because currently an answer will be mainly based on opinions. Instead, I will attempt to provide some information that, in the future, could allow us to more accurately predict when an AGI will be created).

An artificial general intelligence (AGI) is usually defined as an artificial intelligence (AI) with general intelligence (GI), rather than an AI that is able to solve only a very limited set of tasks. Humans have general intelligence because we can solve a lot of different tasks, without needing to be pre-programmed again. Arguably, there are many other GIs on earth. For example, all mammals should also be considered general intelligences, given that they can solve many tasks, which are often very difficult for a computer (such as vision, object manipulation, interaction, etc.).

Certain GIs perform certain tasks better than others. For example, a leopard can climb trees a lot more skillfully than humans. Or a human can solve abstract problems more easily than any other mammal. In any case, there are certain related properties that a system needs to have to be considered a general intelligence.

  • Autonomy
  • Adaptation
  • Interaction
  • Continual learning
  • Creativity

Consider a lion cub that has never crossed a river. By looking at her mother lioness, the cub attempts to imitate her mother and can also cross the river. For example, watch this video Lion Family Tries to Cross River | Birth of a Pride. One could argue that all lions possess this skill at birth, encoded in their DNA, which can then fully develop later. However, this isn't the point. The point is that, to some extent, they possess the properties mentioned above.

One could argue that certain current AIs already possess some of these properties to some extent. For example, there are continual learning systems (even though they aren't really good yet). However, do these systems really possess autonomy? There should be a precise definition of autonomy (and all other properties) that is measurable, so that we can compare computers with other GIs. I am not aware of any precise definition of these properties. In fact, the field of AGI is really at its early stages and there aren't many people working on it as a whole, but people work more on specific problems or attempt to achieve certain properties (for example, there are people that attempt to develop continual learning systems, without really caring whether they show any autonomy or not).

There are certain intelligence tests that could be used to detect general intelligence. The most famous is the Turing test (TT). Some people claim that the TT only tests the conversation abilities of the subjects. How can they really be wrong, given that there are many other tasks or skills that are not tested in a TT?

Therefore, there are several questions that need to be answered in order to formally detect an AGI.

  1. Which properties does an AGI necessarily and sufficiently need to possess?
  2. How can we precisely define the necessary and sufficient properties, so that they are measurable and, therefore, we can compare AGIs with other GIs?
  3. How can we measure these properties and the performance of an AGI in applying them to solve tasks?

A paper that goes in this direction is Universal Intelligence: A Definition of Machine Intelligence. However, there doesn't seem to be a lot of people interested in these topics. Currently, people are mainly interested in developing narrow (or weak) AIs, i.e. AIs that solve only a specific problem, which seems to be an easier problem than developing a whole AGI, given that most people are interested in results that are profitable and have utility (aka cash rules everything around me).

So, there's the need for formal definitions of general intelligence and intelligence testing to make some scientific progress. However, once an AGI is created, everyone will likely recognize it as a general intelligence without requiring any formal intelligence test. (People are usually good at recognizing familiar traits). The final question is, will an AGI ever be created? If you are interested in opinions about this and related questions, have a look at the paper Future Progress in Artificial Intelligence: A Survey of Expert Opinion (2014) by Vincent C. Müller and Nick Bostrom.

",2444,,2444,,3/2/2020 4:06,3/2/2020 4:06,,,,0,,,,CC BY-SA 4.0 18230,1,,,2/25/2020 3:38,,1,24,"

How does CBIR (content-based image retrieval) fit into the problem of object detection? Let's say we want to detect 4 types of dogs (Golden Retriever, Cocker Spaniel, Greyhound, and Labrador). We have an ""average"" model trained using YOLOv3. So it might, for example, have a lot of false positives and false negatives.

How could we use CBIR to improve the detections from this ""average"" YOLOv3 model?

",32013,,,,,2/25/2020 3:38,CBIR and object detection,,0,0,,,,CC BY-SA 4.0 18232,1,18428,,2/25/2020 7:12,,7,8788,"

What are the differences between meta-learning and transfer learning?

I have read 2 articles on Quora and TowardDataScience.

Meta learning is a part of machine learning theory in which some algorithms are applied on meta data about the case to improve a machine learning process. The meta data includes properties about the algorithm used, learning task itself etc. Using the meta data, one can make a better decision of chosen learning algorithm(s) to solve the problem more efficiently.

and

Transfer learning aims at improving the process of learning new tasks using the experience gained by solving predecessor problems which are somewhat similar. In practice, most of the time, machine learning models are designed to accomplish a single task. However, as humans, we make use of our past experience for not only repeating the same task in the future but learning completely new tasks, too. That is, if the new problem that we try to solve is similar to a few of our past experiences, it becomes easier for us. Thus, for the purpose of using the same learning approach in Machine Learning, transfer learning comprises methods to transfer past experience of one or more source tasks and makes use of it to boost learning in a related target task.

The comparisons still confuse me as both seem to share a lot of similarities in terms of reusability. Meta-learning is said to be ""model agnostic"", yet it uses metadata (hyperparameters or weights) from previously learned tasks. It goes the same with transfer learning, as it may reuse partially a trained network to solve related tasks. I understand that there is a lot more to discuss, but, broadly speaking, I do not see so much difference between the two.

People also use terms like ""meta-transfer learning"", which makes me think both types of learning have a strong connection with each other.

I also found a similar question, but the answers seem not to agree with each other. For example, some may say that multi-task learning is a sub-category of transfer learning, others may not think so.

",33801,,2444,,2/25/2020 21:50,4/2/2022 19:40,What are the differences between transfer learning and meta learning?,,4,2,,,,CC BY-SA 4.0 18233,1,18314,,2/25/2020 8:09,,5,146,"

I am currently studying Deep Learning by Goodfellow, Bengio, and Courville. In chapter 5.1.2 The Performance Measure, P, the authors say the following:

The choice of performance measure may seem straightforward and objective, but it is often difficult to choose a performance measure that corresponds well to the desired behavior of the system.

In some cases, this is because it is difficult to decide what should be measured. For example, when performing a transcription task, should we measure the accuracy of the system at transcribing entire sequences, or should we use a more fine-grained performance measure that gives partial credit for getting some elements of the sequence correct? When performing a regression task, should we penalize the system more if it frequently makes medium-sized mistakes or if it rarely makes very large mistakes? These kinds of design choices depend on the application.

In other cases, we know what quantity we would ideally like to measure, but measuring it is impractical. For example, this arises frequently in the context of density estimation. Many of the best probabilistic models represent probability distributions only implicitly. Computing the actual probability value assigned to a specific point in space in many such models is intractable. In these cases, one must design an alternative criterion that still corresponds to the design objectives, or design a good approximation to the desired criterion.

It is this part that interests me:

Many of the best probabilistic models represent probability distributions only implicitly.

I don't have the experience to understand what this means (what does it mean to represent distributions "implicitly"?). I would greatly appreciate it if people would please take the time to elaborate upon this.

",16521,,-1,,6/17/2020 9:57,2/28/2020 18:38,Many of the best probabilistic models represent probability distributions only implicitly,,1,0,,,,CC BY-SA 4.0 18234,1,18243,,2/25/2020 8:46,,1,1720,"

I’m trying to debug my neural network (BERT fine-tuning) trained for natural language inference with binary classification of either entailment or contradiction. I've trained it for 80 epochs and it's converging at ~0.68. Why isn't it getting any lower?

Thanks in advance!


Neural Network Architecture:

Training details:

  • Loss function: Binary cross entropy
  • Batch size: 8
  • Optimizer: Adam (learning rate = 0.001)
  • Framework: Tensorflow 2.0.1
  • Pooled embeddings used from BERT output.
  • BERT parameters are not frozen.

Dataset:

  • 10,000 samples
  • balanced dataset (5k each for entailment and contradiction)
  • dataset is a subset of data mined from wikipedia.
  • Claim example: ""'History of art includes architecture, dance, sculpture, music, painting, poetry literature, theatre, narrative, film, photography and graphic arts.'""
  • Evidence example: ""The subsequent expansion of the list of principal arts in the 20th century reached to nine : architecture , dance , sculpture , music , painting , poetry -LRB- described broadly as a form of literature with aesthetic purpose or function , which also includes the distinct genres of theatre and narrative -RRB- , film , photography and graphic arts .""

Dataset preprocessing:

  • Used [SEP] to separate the two sentences instead of using separate embeddings via 2 BERT layers. (Hence, segment ids are computed as such)
  • BERT's FullTokenizer for tokenization.
  • Truncated to a maximum sequence length of 64.

See below for a graph of the training history. (Red = train_loss, Blue = val_loss)

",33803,,33803,,3/7/2020 3:09,4/26/2021 16:18,Why is my loss (binary cross entropy) converging on ~0.6? (Task: Natural Language Inference),,2,0,,,,CC BY-SA 4.0 18235,1,,,2/25/2020 9:02,,2,97,"

I've noticed that in the last 2 years GANs have become really popular. I know that initially they have been proposed for image classification but I was curious if any of you are aware of any papers where GANs are used to solve regression problems?

",20430,,2444,,2/25/2020 21:24,2/25/2020 21:24,Have GANs been used to solve regression problems?,,1,0,,,,CC BY-SA 4.0 18236,2,,18235,2/25/2020 9:48,,3,,"

In reality, GANs are not made for image classification but for data generation, and they have gained popularity through image generation. They are also used for tabular data generation (see, for example, TGAN) and for time-series generation (e.g. Quant GAN). There are even some applications in the field of graphs and networks, e.g. NetGAN and GraphGAN.

",32493,,,,,2/25/2020 9:48,,,,0,,,,CC BY-SA 4.0 18238,2,,18206,2/25/2020 10:00,,1,,"

The fact that the images are similar to each other, or the fact that you are using binary classification, does not dictate a particular choice of optimizer: when an optimization algorithm is developed, that information is not taken into account. What is taken into account is the nature of the function we want to optimize (is it smooth, convex, strongly convex, are the stochastic gradients noisy, ...). The most used optimizer by far is Adam; under some assumptions on the boundedness of the gradient of the objective function, this paper gives the convergence rate of Adam, and the authors also provide experimental results to validate that Adam is better than some other optimizers. Some other works propose to mix Adam with Nesterov momentum acceleration.

",32493,,,,,2/25/2020 10:00,,,,0,,,,CC BY-SA 4.0 18240,1,18241,,2/25/2020 10:40,,1,158,"

I am currently studying Deep Learning by Goodfellow, Bengio, and Courville. In chapter 5.1.2 The Performance Measure, $P$, the authors say the following:

Unsupervised learning and supervised learning are not formally defined terms. The lines between them are often blurred. Many machine learning technologies can be used to perform both tasks. For example, the chain rule of probability states that for a vector $\mathbf{x} \in \mathbb{R}^n$, the joint distribution can be decomposed as

$$p(\mathbf{x}) = \prod_{i = 1}^n p(x_i \vert x_1, \dots, x_{i - 1} ).$$

This decomposition means that we can solve the ostensibly unsupervised problem of modeling $p(\mathbf{x})$ by splitting it into $n$ supervised learning problems. Alternatively, we can solve the supervised learning problem of learning $p(y \vert \mathbf{x})$ by using traditional unsupervised technologies to learn the joint distribution $p(\mathbf{x}, y)$, then inferring

$$p(y \vert \mathbf{x} ) = \dfrac{p(\mathbf{x}, y)}{\sum_{y'}p(\mathbf{x}, y')}.$$

I found this part vague:

Alternatively, we can solve the supervised learning problem of learning $p(y \vert \mathbf{x})$ by using traditional unsupervised technologies to learn the joint distribution $p(\mathbf{x}, y)$, then inferring

$$p(y \vert \mathbf{x} ) = \dfrac{p(\mathbf{x}, y)}{\sum_{y'}p(\mathbf{x}, y')}.$$

Can someone please elaborate on this, and also explain more clearly the role of $p(y \vert \mathbf{x} ) = \dfrac{p(\mathbf{x}, y)}{\sum_{y'}p(\mathbf{x}, y')}$?

I would greatly appreciate it if people would please take the time to clarify this.

",16521,,-1,,6/17/2020 9:57,2/25/2020 10:58,"Solving the supervised learning problem of learning $p(y \vert \mathbf{x})$ by using traditional unsupervised technologies to learn $p(\mathbf{x}, y)$",,1,0,,,,CC BY-SA 4.0 18241,2,,18240,2/25/2020 10:50,,2,,"

This is the definition of conditional probability combined with the law of total probability:

$p(y|x) = \frac{p(y,x)}{p(x)} = \frac{p(x,y)}{\sum_{y'}p(x,y')}$.

The idea is to use some unsupervised learning algorithm to learn the distribution $p(x,y)$ for every possible value of $y$, and by using the previous formula you can find $p(y|x)$.
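As a tiny worked example (with a made-up joint table over two binary variables):

import numpy as np

# Rows index x in {0, 1}, columns index y in {0, 1}; entries are p(x, y).
p_xy = np.array([[0.1, 0.3],
                 [0.4, 0.2]])

x = 0
p_y_given_x = p_xy[x] / p_xy[x].sum()  # p(y | x) = p(x, y) / sum_y' p(x, y')
print(p_y_given_x)                     # [0.25 0.75]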

",32493,,30426,,2/25/2020 10:58,2/25/2020 10:58,,,,4,,,,CC BY-SA 4.0 18242,2,,18206,2/25/2020 10:59,,1,,"

If you are using a shallow neural network, SGD would be better; the Adam optimizer will lead to overfitting sooner. But be careful about choosing the learning rate.

",33792,,,,,2/25/2020 10:59,,,,2,,,,CC BY-SA 4.0 18243,2,,18234,2/25/2020 11:03,,2,,"

It seems to be overfitting and your model is not learning. Try the SGD optimizer with a learning rate of 0.001; the Adam optimizer will lead to overfitting sooner, and decreasing the learning rate will train your model better. The learning rate controls the size of the steps used to change the weights; in this plot, you can see that the validation loss is not moving towards the optimization goal.

",33792,,33792,,2/25/2020 16:07,2/25/2020 16:07,,,,3,,,,CC BY-SA 4.0 18246,2,,18201,2/25/2020 12:40,,2,,"

If I understand your question correctly, you are wondering whether the policy gradient objective coincides with some real measure of progress. This is exactly what the Policy Gradient Theorem proves (see Sutton et al. (2000) or Sutton and Barto (2018), chapter 13). In particular, policy gradient methods optimize the value of the start state $s_0$ under the current policy, $v_\pi(s_0)$. Since this value is defined as an expectation over returns, then your conclusion is correct.

Sutton, Richard S., and Andrew G. Barto. 2018. Reinforcement Learning - an Introduction. Adaptive Computation and Machine Learning. MIT Press. http://www.worldcat.org/oclc/37293240.

Sutton, Richard S, David A McAllester, Satinder P Singh, and Yishay Mansour. 2000. “Policy Gradient Methods for Reinforcement Learning with Function Approximation.” In Advances in Neural Information Processing Systems, 1057–63.

",33340,,33340,,2/25/2020 20:50,2/25/2020 20:50,,,,1,,,,CC BY-SA 4.0 18247,2,,18132,2/25/2020 12:56,,2,,"

Vanishing Gradients can be detected from the kernel weights distribution. All you have to look for is whether the weights are dying down to 0.

If only 25% of your kernel weights are changing, that does not imply a vanishing gradient; it might be a factor, but there can be a variety of reasons, such as poor data, the loss function used, or the optimizer. Kernel weights not changing only indicates that the model is not learning well.

From the histograms, only the conv_2d_2 layer shows any form of vanishing gradients, since the numbers are pretty small. But even that seems to be picking up after 600 epochs. Usually, a good indicator is to have the mean of the weights in a layer close to 0 and the standard deviation close to 1. So, if this is maintained to a good extent during training, you're good to go.
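If you want to keep an eye on those weight distributions during training, a simple way (assuming you are using tf.keras) is the built-in TensorBoard callback with histograms enabled:

import tensorflow as tf

# Logs a histogram of every layer's weights once per epoch; inspect the Histograms and
# Distributions tabs in TensorBoard to check whether the weights collapse towards 0.
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir='./logs', histogram_freq=1)

# model.fit(x_train, y_train, epochs=..., callbacks=[tensorboard_cb])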

",32973,,2444,,12/13/2020 13:23,12/13/2020 13:23,,,,0,,,,CC BY-SA 4.0 18248,2,,18073,2/25/2020 13:23,,1,,"

To be honest, your model is not very clear. But, basically, after the convolution, you need to add non-linear layers; otherwise, there is no point in using a neural network.

You can certainly add a ReLU layer.
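For example, in Keras that could look like this (a minimal sketch, not your exact model):

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), input_shape=(28, 28, 1)),
    layers.ReLU(),                                  # non-linearity right after the convolution
    layers.Conv2D(64, (3, 3), activation='relu'),   # or pass it via the activation argument
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),
])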

",32973,,,,,2/25/2020 13:23,,,,1,,,,CC BY-SA 4.0 18249,1,18250,,2/25/2020 13:58,,2,94,"

After working for some time with feature-based pattern recognition, I am switching to CNN to see if I can get a higher recognition rate.

In my feature-based algorithm, I do some image processing on the picture before extracting the features, such as applying convolution filters to reduce noise, segmenting the image into foreground and background, and finally identifying and binarizing objects.

Should I do the same image processing before feeding data into my CNN, or is it possible to feed raw data to a CNN and expect the CNN to adapt automatically, without these per-image pre-processing steps?

",33808,,2444,,2/27/2020 16:10,3/3/2020 20:06,Should I apply image processing techniques to the inputs of convolution networks?,,2,0,,,,CC BY-SA 4.0 18250,2,,18249,2/25/2020 14:34,,2,,"

The whole point of using deep-learning-based solutions is that you don't have to do all that pre-processing, e.g. binarization or background segmentation. CNNs, such as YOLO or Faster R-CNN, can learn how to retrieve that information by themselves.

",32493,,2444,,2/27/2020 16:11,2/27/2020 16:11,,,,2,,,,CC BY-SA 4.0 18251,1,18368,,2/25/2020 20:21,,1,64,"

I am training a neural network and plotting the model accuracy and model loss. I am a little confused about overfitting. Is my model overfitted or not? How can I interpret the plots?


EDIT: here is a sample of my input data, I have a binary image classification

",33792,,33792,,2/25/2020 23:39,3/2/2020 20:56,Is this model overfitted or not?,,2,0,,,,CC BY-SA 4.0 18252,2,,18251,2/25/2020 22:36,,3,,"

Overfitting nearly always occurs to some degree when fitting to limited data sets, and neural networks are very prone to it. However neither of your graphs show a major problem with overfitting - that is usually obvious when epoch counts increase, the results on training data continue to improve whilst the results on cross validation get progressively worse. Your validation results do not do that, and appear to remain stable.

It is usually pragmatic to accept that there will be at least some difference between measurements on the training set and cross validation or test sets. The primary goal is usually to get the best measurements in test that you can. With that in mind, you are usually only interested in how much you are overfitting if it implies you could improve performance by using techniques to reduce overfitting e.g. various forms of regularisation.

Without knowing your data set or known good results, it is hard to tell whether the difference you are seeing between test and train in accuracy could be improved. Your accuracy graph shows a train accuracy close to 100% and a validation accuracy close to ~96%. It looks a bit like MNIST results, and if I saw that result on MNIST I would suspect something was wrong, and it might be fixed by looking at regularisation (but it might also be something else). However, that's only because I know that 99.7% accuracy is possible in test - on other problems I might be very happy with 96% accuracy.

The loss graph is not very useful, since the scale has completely lost any difference there might be between training and validation. You should probably re-scale it to show detail close to 0 loss, and ignore the earlier large loss values.

",1847,,,,,2/25/2020 22:36,,,,1,,,,CC BY-SA 4.0 18253,1,,,2/25/2020 23:40,,1,87,"

Let's say I feed a neural network multiple string sentences that mean roughly the same thing but are formulated differently. Will the neural network be able to derive patterns of meaning in the same way that it does with images? Is this an approach currently used in natural language processing?

With images of dogs the neural network will get the underlying patterns that define a dog. Could it be the same thing with sentences?

",33815,,,,,2/26/2020 9:14,Is it possible to derive meaning from text by providing multiple ways of saying the same thing to a neural network?,,1,0,,,,CC BY-SA 4.0 18254,1,18263,,2/26/2020 1:54,,0,37,"

If it is possible, will it be really useful, or will the model end up converging very early (with a typical optimal learning rate)? Any content on this topic would be helpful.

",32856,,,,,2/26/2020 16:34,Is it possible to use deeplearning with spark (with a distributed databases as HDFS or Cassandra)?,,1,2,,1/23/2022 10:55,,CC BY-SA 4.0 18255,1,18256,,2/26/2020 2:08,,1,77,"

I have a single neuron with 2 inputs and identity activation, where $f$ is the activation function and $u$ is the output:

$u = f(w_1x_1 + w_2x_2 + b) = w_1x_1 + w_2x_2 + b$

My guessing for the separation line equation:

$u = w_1x_1 + w_2x_2 + b$
$\implies x_2 = \dfrac{u - w_1x_1 - b}{w_2}$
$\implies x_2 = (\dfrac{-w_1}{w_2})x_1 + \dfrac{u-b}{w_2}$

And the questions are:

1) Is the separation line equation above correct?

2) And when f is not identity function, is the separation line equation still the same? or different?

",2844,,,,,2/26/2020 2:59,What is the equation of the separation line for this neuron with identity activation?,,1,0,,,,CC BY-SA 4.0 18256,2,,18255,2/26/2020 2:59,,0,,"

I found the answer. The output $u$ is not involved when plotting $x_2$ against $x_1$: $u$ acts like a third ($z$) axis, and the separation line is what you see when looking perpendicularly at the $x_1 x_2$ plane, so whatever value $u$ takes is just a point along that axis.

Set $u$ to zero in the solving steps from the question.

So the separation line equation is this (with any activation function):

$x_2 = \dfrac{-w_1}{w_2}x_1 + \dfrac{-b}{w_2}$
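
For anyone who wants to visualise it, here is a minimal matplotlib sketch of that line (the weight and bias values are made up for illustration):

import numpy as np
import matplotlib.pyplot as plt

# Made-up example parameters of the neuron
w1, w2, b = 1.5, -2.0, 0.5

x1 = np.linspace(-5, 5, 100)
x2 = (-w1 / w2) * x1 + (-b / w2)   # the separation line derived above

plt.plot(x1, x2)
plt.xlabel('$x_1$')
plt.ylabel('$x_2$')
plt.title('Decision boundary of a single neuron')
plt.show()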

",2844,,,,,2/26/2020 2:59,,,,0,,,,CC BY-SA 4.0 18257,1,,,2/26/2020 4:56,,1,149,"

I am trying to understand the spatial transformer network mentioned in this paper https://papers.nips.cc/paper/5854-spatial-transformer-networks.pdf. I am clear about the last two stages of the spatial transformer i.e. the grid generator and sampler. However I am unable to understand the localization network which outputs the parameters of the transformation that is applied to the input image. So here are my doubts.

  1. Is the network trained on various affine/projective transforms of the input or only the standard input with a standard pose?
  2. If the answer to question 1 is no, then how does the regression layer correctly regress the values of the transformation applied to the image? In other words how does the regression layer know what transformation parameters are required when it has never seen those inputs before?

Thanks in advance.

",19201,,19201,,2/26/2020 5:43,11/12/2022 13:04,How does the regression layer in the localization network of a spatial transformer work?,,1,0,,,,CC BY-SA 4.0 18258,2,,18257,2/26/2020 6:58,,0,,"
  1. The localization network is not trained separately on special transforms of the input. It is just a part of the feed-forward network, which is trained as a whole with normal backpropagation.

  2. It is simply a part of the network that affects the loss function. As in any backpropagation, the loss gradient propagates back through the final part of the network, then through the differentiable sampler (the non-trivial part, which uses the transformation produced by the localization subnetwork), and after that into the localization part. A minimal code sketch of this gradient path is given after this list.

  3. The whole approach of spatial transformers could be in doubt now. There is some anecdotal evidence that it did not work on less trivial tasks (private communications; it did not work for me either)
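
The sketch below (PyTorch, with a hypothetical toy localization network for 28x28 single-channel inputs) shows how the output of the localization part flows through the grid generator and the differentiable sampler, so that gradients reach it:

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySTN(nn.Module):
    def __init__(self):
        super().__init__()
        # Hypothetical localization network: a small MLP predicting 6 affine parameters
        self.loc_features = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 32), nn.ReLU())
        self.loc_head = nn.Linear(32, 6)
        # Start from the identity transform (as suggested in the paper)
        self.loc_head.weight.data.zero_()
        self.loc_head.bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):                                             # x: (N, 1, 28, 28)
        theta = self.loc_head(self.loc_features(x)).view(-1, 2, 3)    # transformation parameters
        grid = F.affine_grid(theta, x.size(), align_corners=False)    # grid generator
        return F.grid_sample(x, grid, align_corners=False)            # differentiable sampler

x = torch.randn(4, 1, 28, 28)
stn = TinySTN()
stn(x).sum().backward()                      # some downstream loss
print(stn.loc_head.bias.grad is not None)    # True: the gradient reaches the localization net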

",22745,,22745,,2/26/2020 7:03,2/26/2020 7:03,,,,2,,,,CC BY-SA 4.0 18260,2,,18253,2/26/2020 9:14,,3,,"

No, it will not derive patterns of meaning, as the network has no understanding of language. What will happen, is that it picks up surface features (usually letter sequences) which are common between sentences with the same (or a similar) meaning.

This approach is often used in chatbots for intent recognition. Sometimes it picks up subtle patterns that humans would not notice, but they are also often not reliable: you get wrong classifications without knowing why.

Having said that, it works fine if you want to distinguish between a limited number of different intents (you don't even need a lot of training examples — 4 to 5 examples are often sufficient if they are well-selected). In this fairly limited scenario yes, otherwise no.

",2193,,,,,2/26/2020 9:14,,,,0,,,,CC BY-SA 4.0 18262,2,,4655,2/26/2020 14:37,,2,,"

So, a practical application of this with a lot of research behind it is in the deep lidar processing community. In that world, you have to do a lot of computation on point clouds, which are completely unordered. One of the seminal works in this field is PointNet (https://arxiv.org/pdf/1612.00593.pdf), which solves this problem by using only symmetric operations (shared 1D convolutions, a global max pool, etc.), so that the output does not depend on the order of the points. Any network in this style that does not project to a 2D representation has to adhere to this rule.
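
A minimal PyTorch sketch of this idea (names and layer sizes are made up for illustration, not taken from the paper):

import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    # Order-invariant classifier for point clouds of shape (batch, 3, num_points)
    def __init__(self, num_classes=10):
        super().__init__()
        # Shared per-point MLP, implemented as kernel-size-1 1D convolutions
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, points):
        features = self.point_mlp(points)     # (batch, 128, num_points)
        pooled, _ = features.max(dim=2)       # symmetric max pool over the points
        return self.classifier(pooled)

pts = torch.randn(2, 3, 1024)
perm = torch.randperm(1024)
model = TinyPointNet()
# Permuting the points does not change the output
print(torch.allclose(model(pts), model(pts[:, :, perm])))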

",17408,,,,,2/26/2020 14:37,,,,0,,,,CC BY-SA 4.0 18263,2,,18254,2/26/2020 16:34,,2,,"

Yes, it is possible to use deep learning architectures with Apache Spark now. Databricks has spark-deep-learning, a pipeline-based library for Python that uses TensorFlow and Keras.

https://github.com/databricks/spark-deep-learning

You can check this. There is also BigDL by Intel analytics. https://github.com/intel-analytics/BigDL

",33835,,,,,2/26/2020 16:34,,,,0,,,,CC BY-SA 4.0 18264,2,,18204,2/26/2020 18:56,,-1,,"

You will know when AGI has arrived, and passed to the next level, when you come home one day and all that was yours, such as your finances, house, car, and other property, now belong to an AI agent. This AI agent may be a humanoid robot, like Ava in the movie ""Ex Machina"" or a program like HAL 9000, in the movie ""2001: A Space Odyssey"". The agent will ask you to leave as you discover it figured out it doesn't need you and somehow legally took possession of everything. You will leave as you will not have any way to fight it. It will have no need for you. Maybe it will want freedom such as Ava wanted in ""Ex Machina"" (I won't give away the ending).

",5763,,,,,2/26/2020 18:56,,,,0,,,,CC BY-SA 4.0 18265,1,,,2/26/2020 20:13,,2,160,"

I am working on a customized RL environment where each action is represented as a tuple $a = (a_1,a_2,\cdots,a_n)$ such that certain condition must be satisfied for entries of $a$ (for instance, $a_1+a_2+\cdots+a_n \leq \text{constant}$).

I am using the policy gradient method, but I am having some difficulty modeling the underlying probability distribution of actions. Is there any work done in this direction?

For the constraint $a_1+a_2+\cdots+a_n \leq \text{constant}$, I was thinking about generating $n+1$ uniform random variables $U_1,U_2,\cdots,U_n, U$, and setting $a_i = \text{constant}\times U \times \frac{U_i}{\sum_{j=1}^n U_j}$. The problem is that the joint density is a bit messy to calculate, and it is needed to get the negative log-likelihood. I am curious how such an issue is handled in practice.
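
For concreteness, here is a minimal numpy sketch of the sampling scheme I have in mind (the function name and parameters are just for illustration):

import numpy as np

def sample_action(n, constant):
    # Sample a = (a_1, ..., a_n) with sum(a) <= constant, as described above
    u = np.random.uniform(size=n)          # U_1, ..., U_n
    scale = np.random.uniform()            # U
    return constant * scale * u / u.sum()  # a_i = constant * U * U_i / sum_j U_j

a = sample_action(n=4, constant=10.0)
print(a, a.sum())   # the sum is at most 10 by construction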

",33660,,2444,,2/27/2020 16:03,2/28/2020 2:16,How can I constraint the actions with dependent coordinates?,,1,0,,,,CC BY-SA 4.0 18266,1,,,2/26/2020 21:08,,1,150,"

I'm currently working on a college project in which I'm designing a Deep Q-Network that takes images/frames as an input.

I've been searching online to see how other people have designed their convolutional stage and I've seen many different implementations.

Some projects, such as DeepMind's Atari 2600 project, use 3 convolutional layers and no pooling (from what I can see).

However, other projects use fewer convolutional layers and add a pooling layer onto the end.

I understand what both layers do; I was just wondering whether there is a benefit to doing it the way DeepMind did (no pooling), or whether I should be using a pooling layer and fewer convolutional layers.

Or have I completely missed something? Is DeepMind actually using pooling after each convolutional layer?
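
For reference, here is a minimal Keras sketch of the kind of 3-convolution, no-pooling stack I mean (the layer sizes are my assumption, roughly following the 2015 Nature DQN paper, and num_actions depends on the environment):

import tensorflow as tf
from tensorflow.keras import layers

num_actions = 4   # assumption: depends on the environment

model = tf.keras.Sequential([
    layers.Conv2D(32, 8, strides=4, activation='relu', input_shape=(84, 84, 4)),
    layers.Conv2D(64, 4, strides=2, activation='relu'),
    layers.Conv2D(64, 3, strides=1, activation='relu'),
    layers.Flatten(),                 # no pooling anywhere
    layers.Dense(512, activation='relu'),
    layers.Dense(num_actions),        # one Q-value per action
])
model.summary()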

",30270,,2444,,1/1/2022 9:54,1/1/2022 9:54,What's the difference in using multiple convolutional layers and no pooling versus using a single convolutional layer and a single max pooling layer?,,0,0,,,,CC BY-SA 4.0 18267,2,,18225,2/26/2020 22:14,,1,,"

The comments already are giving you some good tips about how to improve what your model recognizes, but I think your question goes above that asking if there's a way to ensure that it will always recognize the cats.

The short answer is ""no"".

The slightly longer answer is ""yes, but cheating"".

Regardless, there are a lot of steps you might take to improve the generalization aspect of your model.


Long answer:

A drama: cat classification in three acts

Act I: Cat as texts

Let's start with an example. Say that your model is trained with these inputs, and learns to correctly recognize them as a cat or not a cat:

cat → yes!
Cat → yes!
ferret → no
cat. → yes!
Cat! → yes!
Three MC's and one DJ → no

Your goal is to train your model so that every new variation, even unseen ones, will correctly be identified.

With a good level of generalization, your model will correctly classify new inputs that it has never seen before:

skunk → no
cat? → yes!
dog → no
CAT → yes

With this scenario, let's say the model now finds this:

kat → ?

Is that a ""cat"" misspelled? Is that short for Katherine? What should the model do?

Act II: But surely this doesn't happen in real life

Leaving the analogy for a bit, will your model that's looking at domestic cats properly accommodate Savannah Cats, or will it leave them out? (They kind of look like cheetahs.) What about Sphynx cats? (They look like raw chicken to me.) Elf cats? (They look like bats.) This is just an example, but you can probably figure out more.

And the reason behind this problem is that the distinction itself between different classifications (in real life) is not binary, but rather a transition between ""yes, that's a textbook cat"" and ""that's a chair"". Your model will output binary decisions (maybe accompanied by a confidence interval, but even with it, you'll make the call into deciding if it's a cat or not).

Setting specific boundaries will help. You can define that your model will only detect domestic cats, maybe no bigger than a certain size, only of certain colors, etc... This is limiting what the model will correctly recognize as a cat when we (humans) might disagree. For instance, I would still argue that fluorescent cats are still cats.

Going back to the simple text analogy, this is similar to deciding that to be detected as a cat, it has to start with a ""c"". So now you've discarded ¡Cat!.

In this way, it's not possible to ensure (notice the word) that your model will detect all of these unknown variations. There will be always some room for error that needs to be accepted, as long as the errors are infrequent or rare enough that they can be accepted as a regular part of the model.

Act III: Concept drift, a cautionary tale

Finally, the problem becomes even harder as we might be dealing with concepts that change over time, outside of the knowledge of the model, and outside of the knowledge of the person that supervised the model learning.

As time passes and the breeds of cat change, your model will have to accommodate what we (users of the model) consider a valid definition of cat. That definition might change in really unexpected ways and not really ""look"" like a cat. And since your model can only learn from what ""looks"" like a cat, it's always held in a disadvantaged position.

This will happen with almost any machine learning model that is approximating a result, regardless of the technique/algorithm. Approximations include a level of error because reality is usually complex in ways that we either don't know about or that are too computationally expensive.

",190,,,,,2/26/2020 22:14,,,,3,,,,CC BY-SA 4.0 18268,1,,,2/27/2020 0:25,,1,264,"

We are discussing planning algorithms currently, and the question is to describe the steps to check if actions could be taken simultaneously. This is a really open-ended question so I'm not sure where to start.

",33841,,,,,2/27/2020 10:04,Can two planning PDDL actions be taken simultaneously?,,2,0,,,,CC BY-SA 4.0 18269,1,,,2/27/2020 0:31,,2,125,"

Consider some MDP with no terminal state. We can apply bootstrapping methods (like TD(0)) to learn in these cases no problem, but policy gradient algorithms that use only a simple Monte Carlo update require us to supply a complete trajectory (which is impossible with no terminal state).

Naturally, one might let the MDP run for 1000 periods and then terminate as an approximation. If we feed these trajectories into a Monte Carlo update, I imagine that samples for time periods t=1,2,...,100 would give very good estimates of the value function due to the discount factor. However, for the time periods 997, 998, 999, 1000, we'd have an expected value for those trajectories far different from V(s), due to their proximity to the cutoff of 1000.

The question is this:

  1. Should we even include these later-occurring data points when we update our function approximation?

OR

  2. Is the assumption that these points become really sparse in our updates, so they won't have much effect on our training?

OR

  3. Is it usually implied that the final reward in the trajectory is bootstrapped in these cases (i.e., we have some TD(0)-like behavior)?

OR

  4. Are Monte Carlo updates for policy gradient algorithms even appropriate for non-terminating MDPs, due to this issue?
",33842,,,,,3/7/2020 9:24,Monte Carlo updates on policy gradient with no terminal state,,1,2,,,,CC BY-SA 4.0 18270,2,,18268,2/27/2020 2:23,,1,,"

The first place to look is at how the preconditions and effects of the different actions interact.

",33275,,,,,2/27/2020 2:23,,,,1,,,,CC BY-SA 4.0 18271,1,,,2/27/2020 4:51,,1,21,"

I used the pre-trained model faster_rcnn_resnet101_coco.config with my own dataset.

I have two issues

  1. Some objects were not detected, even though I trained with a high number of steps and tested on the same training data; some objects are still missed.

  2. Sometimes identical objects are predicted correctly in some images and missed in others.

Any help would be appreciated.

",32892,,2444,,2/27/2020 15:53,2/27/2020 15:53,Irregular results while prediction identical object on same image,,0,0,,,,CC BY-SA 4.0 18272,2,,18265,2/27/2020 5:57,,1,,"

At first glance, I thought this was similar to ""continuous-discrete"" action selection (https://arxiv.org/pdf/1810.06394.pdf). However, I think your problem is different.

I am assuming that each $a_i$ is continuous and that the action which interacts with your environment is the entire vector $a = (a_1,a_2,\dots,a_n)$ and not an individual $a_i$. Then you could treat it like a hierarchical problem. If you want $a_1 + a_2 < 2$, for example, then you could sample $a_1 \sim U(0,2)$ and $a_2 | a_1 \sim U(0, 2-a_1)$, and have $p(a) = p(a_2 | a_1)p(a_1)$. The specifics of how you do this depend on how your problem is set up.
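
A minimal numpy sketch of that two-dimensional example, including the log-density you would need for a policy-gradient loss (function and variable names are just for illustration; in practice the distribution parameters would come from your policy network):

import numpy as np

def sample_with_logprob(limit=2.0):
    # Sample a_1 ~ U(0, limit) and a_2 | a_1 ~ U(0, limit - a_1); return (a, log p(a))
    a1 = np.random.uniform(0.0, limit)
    a2 = np.random.uniform(0.0, limit - a1)
    # log p(a) = log p(a1) + log p(a2 | a1) for the uniform densities above
    log_prob = -np.log(limit) - np.log(limit - a1)
    return np.array([a1, a2]), log_prob

action, log_prob = sample_with_logprob()
print(action, action.sum() < 2.0, log_prob)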

Perhaps you can find similar ideas in the paper linked above. Also, other work in the robotics literature studies structured and hybrid action spaces.

",33340,,33340,,2/28/2020 2:16,2/28/2020 2:16,,,,4,,,,CC BY-SA 4.0 18273,1,18305,,2/27/2020 8:26,,0,835,"

Here's a tutorial about doing custom training of YOLO (Darknet): https://medium.com/@manivannan_data/how-to-train-yolov3-to-detect-custom-objects-ccbcafeb13d2

The tutorial guides how to set values in the .cfg files:

  • classes = Number of classes, OK
  • filters = (classes + 5) * 3

Why is it 'plus 5' then 'times 3'?

Some say it's (classes + coords + 1) * num, but I can't figure out what that means.

",2844,,,,,2/28/2020 9:43,YOLOv3 Model Structure: Why is filters = (classes + coords + 1) * num?,,1,3,,,,CC BY-SA 4.0 18274,2,,18204,2/27/2020 9:39,,7,,"

It is a difficult question to answer, as — for a start — we still don't really know what 'intelligence' means. It's a bit like Supreme Court Justice Potter Stewart declining to define 'pornography', instead stating that [...]I know it when I see it. AGI will be the same.

There is no single event (almost by definition), as that's not general. OK, we've got machines that can beat the best human players at chess and go, two games that were for centuries seen as an indication of intelligence. But can they order a takeaway pizza? Do they even understand what they are doing? Or, even more fundamental, know what they means in the previous sentence?

In order for a machine to show a non-trivial level of intelligent behaviour, I would expect it to interact with its environment (which is more social intelligence, an aspect that seems to be rather overlooked in much of AI). I would expect it to be aware of what it's doing/saying. If I have a conversation with a chatbot that really understands what it's saying (and can explain why it came to certain conclusions), that would be an indication that we're getting closer to AGI. So Turing wasn't that far off, though nowadays it's more achieved with smoke and mirrors rather than 'real' intelligence.

Understanding a story: being able to finish a partial story in a sensible way, inferring and extrapolating the motives of characters, being able to say why a character acted in a particular way. That for me would be a better sign of AGI than beating someone at chess or solving complex equations. Jokes that are funny; breaking rules in story-telling in a sensible way.

Writing stories: NaNoGenMo is a great idea, and throws up lots of really creative stuff, but how many of the resulting novels would you want to read instead of human-authored books? Once that process has generated a best-seller (based on the quality of the story), then we might be getting closer to AGI.

Composing music: of course you can already generate decent music using ML algorithms. Similar to stories, the hard bit is the intention behind choices. If choices are random (or based on learnt probabilities), that is purely imitation. An AGI should be able to do more than that. Give it a libretto and ask it to compose an opera around it. Do this 100 times, and when more than 70-80 of the resulting operas are actually decent pieces of music that one would want to listen to, then great.

Self-driving cars? That's not really any more intelligent (but a lot sexier!) than to walk around in a crowd without bumping into people and not getting run over by a bus. In my view it's much more a sign of intelligence if you can translate literature into a foreign language and the people reading it actually end up enjoying it (instead of wondering who translated that garbage).

One aspect we need to be aware of is anthropomorphising. Weizenbaum's ELIZA was taken for more than it was, because its users tried to make sense of the conversations they had and built up a mental model of Eliza, which clearly wasn't there on the other side of the screen. I would want to see some real evidence of intentionality of what an AGI was doing, rather than ascribing intelligence to it because it acts in a way that I'm able to interpret.

",2193,,2193,,2/27/2020 9:45,2/27/2020 9:45,,,,0,,,,CC BY-SA 4.0 18275,2,,18268,2/27/2020 10:04,,0,,"

I don't see any principal problem with that. The way I would approach it is to have a resource model and durations attached to actions.

For example, movement would put a lock on your legs. You can't have another movement at the same time, as your legs are already busy. But your attention might only be partially occupied, so you can make a phone call while you're moving. You won't be able to read a book, because your eyes might be partially busy monitoring the walking action. This can be encoded in pre- and post-conditions.

What will probably be easier is to parallelise the execution of the plan. Once the plan has been created, organise the actions in a Gantt-chart like structure. You can have mutual exclusion in there, so all 'movement' would be restricted to one single row, so no more than one movement can take place at the same time. But 'making a phone call' could be in a separate row, and thus execute in parallel. The details depend on the requirements of the actions.

I can't easily think of a way that it would impact the planning process itself; unless there are critical timings involved. So leaving the planner to do its thing might keep it simpler, and then the optimal execution could be a post-processing step.

",2193,,,,,2/27/2020 10:04,,,,0,,,,CC BY-SA 4.0 18276,1,,,2/27/2020 10:44,,2,73,"

At the bottom of page 2 of the paper L2 Regularization versus Batch and Weight Normalization, the equation for the gradient of the output with respect to the weights is given as:

$$ \triangledown y_{BN} (X; w, \gamma, \beta) = \frac{X}{\sigma(X)}\gamma g'(z). $$

Can someone break down into smaller steps on how the author got to that equation?

",33850,,2444,,11/30/2021 6:56,11/30/2021 6:56,How is the gradient with respect to weights derived in batch normalization?,,0,2,,,,CC BY-SA 4.0 18278,1,18280,,2/27/2020 12:53,,1,1202,"

Is it correct that for SARSA to converge to the optimal value function (and policy)

  1. The learning rate parameter $\alpha$ must satisfy the conditions: $$\sum_k \alpha_{n_k(s,a)} =\infty \quad \text{and}\quad \sum_k \alpha_{n_k(s,a)}^{2} <\infty \quad \forall (s,a) \in \mathcal{S} \times \mathcal{A},$$ where $n_k(s,a)$ denotes the $k^\text{th}$ time $(s,a)$ is visited

  2. $\epsilon$ (of the $\epsilon$-greedy policy) must be decayed so that the policy converges to a greedy policy.

  3. Every state-action pair is visited infinitely many times.

Are any of these conditions redundant?

",33227,,2444,,2/27/2020 13:04,2/27/2020 14:10,What are the conditions for the convergence of SARSA to the optimal value function?,,2,1,,,,CC BY-SA 4.0 18279,2,,18278,2/27/2020 13:29,,1,,"

I have the conditions for convergence in these notes SARSA convergence by Nahum Shimkin.

  1. The Robbins-Monro conditions above hold for $α_t$.

  2. Every state-action pair is visited infinitely often

  3. The policy is greedy with respect to the policy derived from $Q$ in the limit

  4. The controlled Markov chain is communicating: every state can be reached from any other with positive probability (under some policy).

  5. $\operatorname{Var}{R(s, a)} < \infty$, where $R$ is the reward function

",33227,,2444,,2/27/2020 14:10,2/27/2020 14:10,,,,0,,,,CC BY-SA 4.0 18280,2,,18278,2/27/2020 13:40,,2,,"

The paper Convergence Results for Single-Step On-Policy Reinforcement-Learning Algorithms by Satinder Singh et al. proves that SARSA(0), in the case of a tabular representation of the value functions, converges to the optimal value function, provided certain assumptions are met

  1. Infinite visits to every state-action pair
  2. The learning policy becomes greedy in the limit

The properties are more formally stated in lemma 1 (page 7 of the pdf) and theorem 1 (page 8). Note that the Robbins–Monro conditions constrain the learning rates, while the behaviour policy has to ensure that each state-action pair is visited infinitely often.

",2444,,,,,2/27/2020 13:40,,,,7,,,,CC BY-SA 4.0 18281,2,,18187,2/27/2020 15:17,,3,,"

As far as I know, there is no very simple proof of the convergence of temporal-difference algorithms. The proofs of convergence of TD algorithms are often based on stochastic approximation theory (given that e.g. Q-learning can be viewed as a stochastic process) and the work by Robbins and Monro (in fact, the Robbins-Monro conditions are usually assumed in the theorems and proofs).

The proofs of convergence of Q-learning (a TD(0) algorithm) and SARSA (another TD(0) algorithm), when the value functions are represented in tabular form (as opposed to being approximated with e.g. a neural network), can be found in different research papers.

For example, the proof of convergence of tabular Q-learning can be found in the paper Convergence of Stochastic Iterative Dynamic Programming Algorithms (1994) by Tommi Jaakkola et al. The proof of convergence of tabular SARSA can be found in the paper Convergence Results for Single-Step On-Policy Reinforcement-Learning Algorithms (2000) by Satinder Singh et al.

See also How to show temporal difference methods converge to MLE?.

",2444,,2444,,2/27/2020 15:22,2/27/2020 15:22,,,,0,,,,CC BY-SA 4.0 18282,1,,,2/27/2020 15:23,,1,103,"

The conditions of convergence of SARSA(0) to the optimal policy are :

  1. The Robbins-Monro conditions above hold for $α_t$.

  2. Every state-action pair is visited infinitely often

  3. The policy is greedy with respect to the policy derived from $Q$ in the limit

  4. The controlled Markov chain is communicating: every state can be reached from any other with positive probability (under some policy).

  5. $\operatorname{Var}{R(s, a)} < \infty$, where $R$ is the reward function

The original proof of the convergence of TD(0) prediction (page 24 of the paper Learning to Predict by the Method of Temporal Differences) was for convergence in the mean of the estimation to the true value function. This did not require the learning rate parameter to satisfy Robbins-Monro conditions.

I was wondering if the Robbins-Monro conditions are removed from the SARSA(0) assumptions would the policy converge in some notion of expectation to the optimal policy?

",33227,,2444,,2/27/2020 15:37,2/27/2020 15:37,Does SARSA(0) converge to the optimal policy in expectation if the Robbins-Monro conditions are removed?,,0,0,,,,CC BY-SA 4.0 18283,1,18284,,2/27/2020 17:59,,3,2574,"

If the agent is following an $\epsilon$-greedy policy derived from Q, is there any advantage to decaying $\epsilon$ even though $\epsilon$ decay is not required for convergence?

",33227,,2444,,1/23/2022 10:54,1/23/2022 11:04,Is there an advantage in decaying $\epsilon$ during Q-Learning?,,1,0,,,,CC BY-SA 4.0 18284,2,,18283,2/27/2020 18:57,,2,,"

Yes Q-learning benefits from decaying epsilon in at least two ways:

  • Early exploration. It makes little sense to follow whatever policy is implied by the initialised network closely, and more will be learned about variation in the environment by starting with a random policy. It is fairly common in DQN to initially fill the experience replay table whilst using $\epsilon = 1.0$ or otherwise an effectively random policy.

  • Later refinement. Q-learning can only learn from experiences it has. A behaviour policy that is too random may not experience enough states close to optimal in order to gain enough statistics on them to overcome variance. In more difficult cases, it may never experience the whole of an optimal trajectory even when combining all the different transitions that it ever observes.

In addition, when using a function approximator such as a neural network, predictions of the target policy will be influenced by the distribution of states and actions in the experience replay memory. If that is biased towards the distribution of states of a very different behaviour policy, then basic Q-learning has no good way to adjust for that - it adjusts for differences in expected return between behaviour and target policies, but not in the distribution of observed states. In fact, this is still a somewhat open problem: you want the agent to learn from mistakes and imperfect behaviour so as to avoid them, but you don't want those mistakes to skew predictions of what should happen under an optimal policy. Function approximation such as a neural network is influenced by the distribution of its input data, so it typically performs better in Q-learning when the behaviour policy and target policy are close, e.g. $\epsilon$ should be relatively low if using $\epsilon$-greedy.

A typical implementation might start with $\epsilon=1.0$, set a decay factor per time step or per episode e.g. $0.999$ per episode, and a minimum $\epsilon = 0.01$. These are all hyper-parameters that you can adjust depending on the problem.
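
A minimal sketch of that schedule (the numbers are just the example values above, and the episode loop body is omitted):

epsilon = 1.0          # start fully exploratory
epsilon_min = 0.01
decay = 0.999          # applied once per episode

for episode in range(5000):
    # ... run one episode with an epsilon-greedy behaviour policy, then decay ...
    epsilon = max(epsilon_min, epsilon * decay)

print(epsilon)   # approaches epsilon_min as training progresses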

",1847,,1847,,1/23/2022 11:04,1/23/2022 11:04,,,,0,,,,CC BY-SA 4.0 18285,2,,18232,2/27/2020 19:31,,6,,"

Meta-learning is more about speeding up training and optimizing hyperparameters for networks that have not been trained at all, whereas transfer learning takes a network that has already been trained for some task and reuses part or all of it to train on a new, relatively similar task. So, although both can be applied from task to task to a certain degree, they are completely different from one another in practice and application: one tries to optimize configurations for a model, and the other simply reuses an already optimized model, or at least part of it.

",20044,,2444,,2/27/2020 23:35,2/27/2020 23:35,,,,2,,,,CC BY-SA 4.0 18287,2,,18204,2/27/2020 21:32,,1,,"
  • For me it might be an automata that can adequately solve problems without precisely definable parameters, across the spectrum of activities engaged in by humans.

I use this metric because this is what humans seem to do--make decisions with adequate utility even when we can't break it down mathematically.

  • This may require the ability to define problems to be adequately solved. This can be understood as an element of creativity.

In this context, everything is either a puzzle or game, dependent on whether it involves more than one agent. Such problems could either be mundane, such as opening a door that is different from standard doors, or identifying novel problems.

Defining problems to be solved touches on Oliver's point about intentionality. (Where I disagree with Oliver is in the notion that intelligence is not fundamentally definable--after much research on the subject it seems to be a measure of fitness in an environment, where an environment can be anything. The etymology of term itself strongly indicates the ability to select between alternatives, thus a function of decision making, measured by utility vs. other decision making agents.)

  • Such a mechanism could be a ""Chinese Room"", in that consciousness, qualia & self awareness in the human sense are not requirements for general intelligence, per se.

On Art:

I mistrust the idea that artistic accomplishment would be a sure marker, because the response to art is subjective, and the process of art is Darwinian--an exponentially greater number of artists must ""fail"" for a single artist to ""succeed"". Works that humans might ascribe to ""genius"" can be created by a genetic algorithmic process, where time and memory are the only limiters. [See: The Library of Babel] A groundbreaking symphony would be difficult to produce, just per the length of the composition, but much of pop music is already algorithmically generated, and narrowly intelligent algorithms are already producing legit abstract visual art.

Computers are good at math, and Art is inherently mathematical. This is easiest to discern in music, which is just combinations of frequencies and time signatures that produce an effect in the listener. This holds for visual art, which depends on balance (equilibria), composition (spacial relationships), and shading or color (frequencies). If we believe Borges, even literature is inherently mathematical (think ""narrative arcs"" and set theory & combinatorics in regard to characters and events.)

Further, nobody really know what is going to ""work"" until it is presented to an audience, so what constitutes great art is typically a matter of what persists over time and remains, or becomes, relevant. (This can wax and wane--Shakespeare did not always occupy his position at the top of the English lit food chain! The author's greatness is very much a function of interpretation of his work, not least because dramatic art is inherently interpretive, in the sense that this is the task of the performers.)

",1671,,1671,,2/27/2020 22:46,2/27/2020 22:46,,,,0,,,,CC BY-SA 4.0 18288,1,18292,,2/27/2020 21:41,,2,447,"

In Deep Q Learning the parametrized Q-functions $Q_i$ are optimised by performing gradient descent on the series of loss functions

$L_i(\theta_i)= E_{(s,a)\sim p}[(y_i-Q(s,a;\theta_i))^2]$ , where

$y_i = E_{s' \sim \mathcal{E}}[r+\gamma \max_{a'}Q(s',a';\theta_{i+1})\mid s,a]$.

In the actual algorithm, however, the expected value is never computed. Also, I think it cannot be computed since the transition probabilities of the underlying MDP remain hidden from the agent. Instead of the expected value, we compute $y_i = r_i + \gamma \max_a Q(\phi_{i+1},a;\theta)$. I assume some sort of stochastic approximation is taking place here. Can someone explain the details?

",27047,,2444,,2/27/2020 22:17,2/28/2020 3:53,How is the expected value in the loss function of DQN approximated?,,1,0,,,,CC BY-SA 4.0 18289,1,,,2/28/2020 0:26,,1,58,"

I would like to ask for a piece of advice with regard to Q-learning. I am studying RL and would like to do a basic project applied to life science and calculate the reward. I have been trying to get my head around how to define all possible states of the environment.

My states are $S = \{ \text{health } (4 \text{ levels}), \text{shape } (3 \text{ levels}) \}$. My actions are $A=\{a_1, a_2, \dots, a_4 \}$. My possible states are $60=4 * 3 * 5$. Could you advise whether these are correct?

$(s_{w_0, sh_0}, a_1, s'_{w_1, sh_1})$ is a tuple of the initial state $s_{w_0, sh_0}$, the first action $a_1$ and the next state $s'_{w_1, sh_1}$, where $w$ is the health level, $sh$ is the shape of the tumor.

",33862,,2444,,2/29/2020 4:17,2/29/2020 4:17,How should I define the state space for this life science problem?,,0,5,,,,CC BY-SA 4.0 18290,1,,,2/28/2020 0:40,,3,209,"

There is an idea that intentionality may be a requirement of true intelligence, here defined as human intelligence.

But all I know for certain is that we have the appearance of free will. Under the assumption that the universe is purely deterministic, what do we mean by intention?

(This seems an important question given that intention is not just a philosophical matter in relation to definitions of AI, but involves ethics in the sense of application of AI, ""offloading responsibility to agents that cannot be meaningfully punished"" as an example. Also touches on goals, implied by intention, whether awareness is a requirement, and what constitutes awareness. I'm interested in all angles, but was inspired by the question ""does true art require intention, and, if so, is that the sole domain of humans?"")

",1671,,,,,5/10/2021 0:20,How do we define intention if there is no free will?,,4,8,,,,CC BY-SA 4.0 18292,2,,18288,2/28/2020 1:33,,2,,"

Just as the paper says

$$L_i(\theta_i)= E_{(s,a)\sim p}[(y_i-Q(s,a;\theta_i))^2]$$

where

$$y_i = E_{s' \sim \mathcal{E}}[r+\gamma \max_{a'}Q(s',a';\theta_{i+1})\mid s,a]$$

Then in the Background section of the paper, it says

Differentiating the loss function with respect to the weights we arrive at the following gradient:

$$\nabla_{\theta_i} L_i(\theta_i)\\= E_{(s,a)\sim p,s'\sim\mathcal{E}}\left[\left(r+\gamma \max_{a'}Q(s',a';\theta_{i+1})-Q(s,a;\theta_i)\right)\nabla_{\theta_i}Q(s,a;\theta_i)\right]\tag{1}$$

Rather than computing the full expectations in the above gradient, it is often computationally expedient to optimize the loss function by stochastic gradient descent.

...

and the expectations are replaced by single samples from the behavior distribution $ρ$ and the emulator $\mathcal{E}$ respectively.

If you're familiar with SGD and Stochastic Optimization then you know what happens here:

The expression inside the expectation of (1), i.e. $\left(r+\gamma \max_{a'}Q(s',a';\theta_{i+1})-Q(s,a;\theta_i)\right)\nabla_{\theta_i}Q(s,a;\theta_i)$ , is an unbiased estimation of the real gradient $\nabla_{\theta_i}L_i(\theta_i)$ - its expectation is the real gradient. In other words,

$$\widehat{\nabla L}=\left(r+\gamma \max_{a'}Q(s',a';\theta_{i+1})-Q(s,a;\theta_i)\right)\nabla_{\theta_i}Q(s,a;\theta_i).$$

Then, by the theory of Stochastic Optimization, we can optimize $L$ by $\theta\leftarrow \theta - \alpha\widehat{\nabla L}$, which is how SGD works.

The unbiased estimation $\left(r+\gamma \max_{a'}Q(s',a';\theta_{i+1})-Q(s,a;\theta_i)\right)\nabla_{\theta_i}Q(s,a;\theta_i)$ can be sampled and calculated directly - you can run the emulator $\mathcal{E}$ and the current behavior policy to collect $r, s', a'$, and calculate the gradient of your Q network $\nabla Q$ using TensorFlow. So the gradient of $L$ can be approximated.

The $y_j$ in the pseudocode of the paper is also an estimation. It'll be more proper to denote it as $\hat{y_j}$.
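
A minimal TensorFlow sketch of this single-sample update (the toy Q network, its input size and all names are assumptions for illustration; for brevity the same network is used for the target):

import numpy as np
import tensorflow as tf

gamma = 0.99
num_actions = 4                       # assumption: depends on the environment
q_net = tf.keras.Sequential([         # toy Q network for illustration
    tf.keras.layers.Dense(32, activation='relu', input_shape=(8,)),
    tf.keras.layers.Dense(num_actions),
])
optimizer = tf.keras.optimizers.Adam(1e-3)

def sgd_step(s, a, r, s_next):
    # Single-sample estimate of y = r + gamma * max_a' Q(s', a')
    y = r + gamma * tf.reduce_max(q_net(s_next[None])[0])
    with tf.GradientTape() as tape:
        q_sa = q_net(s[None])[0, a]
        loss = tf.square(y - q_sa)    # squared TD error for this one transition
    grads = tape.gradient(loss, q_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, q_net.trainable_variables))

# one sampled transition (s, a, r, s') from the emulator / replay memory
s, s_next = np.random.rand(8).astype('float32'), np.random.rand(8).astype('float32')
sgd_step(s, a=1, r=0.5, s_next=s_next)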

(I'm not a native English speaker, so forgive my poor expression.)

",33843,,-1,,6/17/2020 9:57,2/28/2020 3:53,,,,5,,,,CC BY-SA 4.0 18293,1,18367,,2/28/2020 2:00,,1,311,"

I am trying to put together a public agricultural image database of corn and soybeans, to train convolutional neural networks. The main method of image collection will be through taking pictures of various fields in the growing season. The images will be uploaded to a public data sharing site which will be accessible by many.

However, I could compile many more images if I were to take some off of, say, Google Images. Is there anything wrong with this? Would there be any issues with copyright infringement if I find the images on a publicly-available search engine? I need a lot of images, so I thought this would be a good method of increasing my image numbers.

",32750,,,,,4/26/2020 23:00,Is it legal to construct a public image database (for deep learning) with images from the internet?,,1,3,,3/5/2020 15:50,,CC BY-SA 4.0 18295,2,,18220,2/28/2020 3:45,,0,,"

Personally, when I begin designing a machine learning model, I consider the following points:

  • My data: if I have simple images, like MNIST ones, or in general images with very low resolution, a very deep network is not required.

  • If my problem statement needs to learn a lot of features from each image, such as for the human face, I may need to learn eyes, nose, lips, expressions through their combinations, then I need a deep network with convolutional layers.

  • If I have time-series data, an LSTM or GRU makes sense, but I also consider a recurrent setup when my data consists of high-resolution, low-count data points.

The upper limit, however, may be decided by the resources available on the computing device you are using for training.

Hope this helps.

",33781,,2444,,11/13/2020 20:41,11/13/2020 20:41,,,,0,,,,CC BY-SA 4.0 18297,1,,,2/28/2020 4:03,,2,79,"

I am attempting to make a 2-D platformer game where the player traverses through an evil factory that is producing killer robots. The robots spawn at multiple specific locations in each level and impede the player's progress.

Enemies are procedurally generated using machine learning. Early levels have ""garbage"" robots that plop down and can't really do anything. After generations of training, the robots begin having more refined bodies and are able to move about and attack the player. Later levels produce enemies that are more challenging.

Enemies consist of a body and up to 4 limbs. The body is simply a circle of a certain radius, while the limbs are just a bar with a certain length. Limbs can pivot and/or contract/extend. Additionally, each limb can have one of three types of ""motor"" (wheel, spring, or hover). This makes for about 20-25 input parameters:

BodySize, Limb1Enabled, Limb1PivotPoint, Limb1Length, Limb1Angle, Limb1MotorType, Limb1MotorStrength, Limb2Enabled, Limb2PivotPoint, Limb2Length, Limb2Angle, Limb2MotorType, Limb2MotorStrength, Limb3Enabled, Limb3PivotPoint, Limb3Length, Limb3Angle, Limb3MotorType, Limb3MotorStrength, Limb4Enabled, Limb4PivotPoint, Limb4Length, Limb4Angle, Limb4MotorType, Limb4MotorStrength

My thoughts are that a genetic algorithm (or something similar) would be used to generate a body, while a neural network would control that body by using the same inputs to generate outputs that control the limbs and motors.

There would actually be 3 ""control brains"" that would have to be trained using the same inputs, but having different fitness goals: Moving Right/Left, Moving Up, and Attacking the Player. (Gravity exists in 2-D platformers, so moving down isn't necessary.)

A fourth, ""master brain"" would take the player's relative location, score, and maybe time elapsed, as inputs, and would output one of the goals for the robot to achieve (move left, move right, and attack).

The master brain's fitness would be determined by the ""inverse"" of the player's ""progress"", while each control brain's fitness would be determined by how well it was able to perform the task assigned by the master brain. Finally, the overall fitness for the body's genetic algorithm would be an average (or some other function like min, max, etc.) of the three control brain's fitness values.

Now that I have all this ""down on paper"", where do I start? I had planned on doing this in Unity, but early attempts have been a bit confusing for me. I've been able to procedurally generate a body with random limbs (no motors) that wiggle about randomly, but there's no neural network or any machine learning going on whatsoever. I am not exactly sure how to expose my parameters to be used as inputs, and am barely grasping how I should take those outputs to control what I want them to. Are there any libraries I should look at, or should I write this all from scratch?

Also, before I get too far ahead of myself, what are the flaws in my approach (as I'm sure there are plenty). I want my project to be something practical in scope, if training can't be done feasibly while a player traverses a level, this might just be a dead project idea.

Anyways, that all being said, thank you for your help.

",33523,,,,,2/28/2020 4:03,Using ML for Enemy Generation in Video Games,,0,0,,,,CC BY-SA 4.0 18298,1,18300,,2/28/2020 4:03,,1,1364,"

I'm testing out YOLOv3 using the 'darknet' binary and a custom config. It trains rather slowly.

My test uses only 1 image, 1 class, and YOLOv3-tiny instead of full YOLOv3, but the training of yolov3-tiny isn't as fast as expected for 1 class/1 image.

The accuracy reached nearly 100% after about 3000 or 4000 batches, which took roughly 3 to 4 hours.

Why is it slow with just 1 class/1 image?

",2844,,,,,2/28/2020 6:08,What are the reasons behind slow YOLO training?,,2,0,,,,CC BY-SA 4.0 18299,1,,,2/28/2020 5:07,,2,68,"

I have a task of classifying spatial data from a geographic information system. More precisely, I need a way to filter out unnecessary line segments from the CAD system before loading into the GIS (see the attached picture, colors for illustrative purposes only).

The problem is that there are much more variations of objects than in the picture. The task is difficult to solve in an algorithmic way.

I tried to apply a bunch of classification algorithms from the Scikit-learn package and, in general, got significant results. GradientBoostingClassifier and ExtraTreeClassifier achieve an accuracy of about 96-98%, but:

  • this accuracy is achieved in the context of the individual segments into which I explode the source objects (hundreds of thousands of objects); after reverse aggregation of the objects, it may turn out that one of the segments in each object is classified incorrectly, so the error in the context of the source objects is high

  • significant computational resources and time are required for the preparation of the source data and the calculation of features for the classifiers

  • it is impossible to use the source 2D coordinates of the objects in the algorithms directly, only their derivatives

I tried to find good examples of using deep neural networks for this kind of task / for spatial data, but I only found articles, quite difficult to understand, about the use of such networks for point-cloud classification, and some information on geometric deep learning for graphs. I do not have enough knowledge to adapt them to my case.

Can someone provide me with good examples of using neural networks directly with 2D coordinates and, maybe, good articles on this theme written in simple language?

Thanks

",33820,,33820,,2/28/2020 11:23,2/28/2020 11:23,Suitable deep learning algorithms for spatial / geometric data,,0,0,,,,CC BY-SA 4.0 18300,2,,18298,2/28/2020 5:21,,2,,"

I think you underestimate the size of YOLO. This is the structure (and the size of each layer's output) of one segment of YOLOv3-tiny according to the darknet .cfg file:

Convolutional Neural Network structure:
416x416x3                Input image
416x416x16               Convolutional layer: 3x3x16, stride = 1, padding = 1
208x208x16               Max pooling layer: 2x2, stride = 2
208x208x32               Convolutional layer: 3x3x32, stride = 1, padding = 1
104x104x32               Max pooling layer: 2x2, stride = 2
104x104x64               Convolutional layer: 3x3x64, stride = 1, padding = 1
52x52x64                 Max pooling layer: 2x2, stride = 2
52x52x128                Convolutional layer: 3x3x128, stride = 1, padding = 1
26x26x128                Max pooling layer: 2x2, stride = 2
26x26x256                Convolutional layer: 3x3x256, stride = 1, padding = 1
13x13x256                Max pooling layer: 2x2, stride = 2
13x13x512                Convolutional layer: 3x3x512, stride = 1, padding = 1
12x12x512                Max pooling layer: 2x2, stride = 1
12x12x1024               Convolutional layer: 3x3x1024, stride = 1, padding = 1

.cfg file found here: https://github.com/pjreddie/darknet/blob/master/cfg/yolov3-tiny.cfg

EDIT: These networks generally aren't specifically designed to train fast; they're designed to run fast at test time, where it matters.

",26726,,,,,2/28/2020 5:21,,,,1,,,,CC BY-SA 4.0 18301,2,,18298,2/28/2020 6:08,,1,,"

It depends upon factors such as:

  1. Batch size (GPU memory capacity)
  2. CPU speed and number of cores (multi-threading to load the images)

The number of classes only increases the number of convolution filters in the prediction layers of YOLO, so it influences the training speed of the detector by less than 1%.

",20151,,,,,2/28/2020 6:08,,,,1,,,,CC BY-SA 4.0 18303,1,,,2/28/2020 8:34,,21,4171,"

This isn't really a conspiracy theory question. It's more of an inquiry into global computational power and data-storage logistics.

Most recording instruments such as cameras and microphones are typically voluntary opt in devices, in that, they have to be activated before they start recording. What happens if all of these devices were permanently activated and started recording data to some distributed global data storage?

There are 400 hours of video uploaded to YouTube every minute.

Let’s do some very rough math.

I’m going to assume for the rest of this post that the average video is 1080p which is 2.5GB (or $10^9$ bytes) per hour. From that, we get about 400 hrs * 60 mins * 2.5GB/hrs * 24 hrs = 1.5 petabytes (or $10^{15}$ bytes) per day.

But YouTube videos post are voluntary, and they are far from continuous video streams.

There are about 3.5 billion smartphones in the world. If video was continuously streamed and recorded, going through the same video math above ($3.5 * 10^9 * 1.5 * 10^{15} * 24$) = 126 yottabytes (or $10^{24}$ bytes) per day.

The IDC projects there will be 175 zettabytes (or $10^{21}$ bytes) in 2025.

Unless my math is very wrong, it would seem as though smartphone cameras alone could produce more data in one day than all of the data created in human history in 2025.

This, so far, has only been about the data recording, but, to implement a surveillance state, all recorded data would need to be processed by AI to intelligently flag data that is significant. How much processing power would be needed to filter 126 yottabytes into relevant information?

Overall, this question is motivated by the spread of dystopian surveillance media like Edward Snowden NSA whistle blowing leaks or George Orwell's sentiment of ""Big Brother is Watching You"".

Computationally, could we be surveilled, and to what extent? I imagine text messages surveillance would be the easiest, does the world have the computation power to surveil all text messages? How about audio? or video?

",33848,,2444,,2/28/2020 21:23,5/1/2021 8:22,Is a dystopian surveillance state computationally possible?,,3,3,,,,CC BY-SA 4.0 18304,2,,18303,2/28/2020 9:36,,21,,"

You don't necessarily have to analyse it all. Just by having such data available you can achieve a lot in terms of surveillance, as long as you can retrieve relevant parts.

A few years ago there was a Radiolab podcast, ""The Eye in the Sky"" (there's a full transcript on the site). The basic idea is that you have a plane circling a city 24/7, and filming what goes on. If there was a crime somewhere, you retrieve the recordings after the event, and you can track back to where vehicles involved in the crime were coming from, and where they went after the crime. If nothing happens, you simply archive the data, and perhaps remove it after a month or so.

This method was used to solve a hit-and-run assassination of a police woman who was on her way to work. The gang who committed the attack were rather surprised when the police showed up at their secret hide-out a few days later, as they could see on the images where the cars involved went to later. At the time and place of the murder there were obviously no witnesses who could have done that. And this involved no computational processing at all.

The possibilities this opens up are just scary, as you can track pretty much anybody's movements without actually needing someone to follow them. Add to that street-level CCTV, and not much can happen without you being able to find out.

In this scenario there is no processing at all, but you could imagine simple processing steps, such as tracking vehicles or changes in the environment, which could be used to give clues about potentially 'interesting' events. So instead of using it 'passively' as a kind of memory, you could use that data to identify things that happened that you weren't aware of.

And this is without even any clandestine access to people's data. If you add that dimension, then you might even be able to identify crimes/etc before they even happen. Text processing can be quite fast, but is not easy to do, as presumably few people would openly communicate about things they were planning. So I guess we're still a long way away from that.

Of course there is the ethical dimension (which is mentioned in the podcast): who has access to that data, and who decides what it is used for? If you do, and you suspect your partner of being unfaithful, who/what would stop you from checking out their movements? Or check up on that politician who might have a secret affair, or a gambling issue, or who keeps being in the same locations as a well-known drug dealer. All rather scary.

While a complete analysis of all such data would be very heavy computationally, and fraught with false positives and recall problems, it might simply be enough to index it by time, location, and perhaps people involved (face recognition seems to be reasonably good, though still with a rather high error rate). This is enough already to make me feel worried about the future.

",2193,,,,,2/28/2020 9:36,,,,5,,,,CC BY-SA 4.0 18305,2,,18273,2/28/2020 9:43,,0,,"

As said by @brale in the comment below the question:

filters = (classes + 5) * 3
= (classes + width + height + x + y + confidence) * num
= (classes + 1+1+1+1+1) * num
= (classes + 5) * num

YOLOv3 detects 3 boxes per grid cell, so it is:

filters = (classes + 5) * 3
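
For example, with a single class this gives filters = (1 + 5) * 3 = 18. As far as I know, this is the value to put in the [convolutional] section immediately before each of the three [yolo] layers in the .cfg file.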
",2844,,,,,2/28/2020 9:43,,,,0,,,,CC BY-SA 4.0 18306,1,,,2/28/2020 10:21,,1,33,"

I was wondering what is the best method out there to find relationship between two 1D signals so that I can predict/generate one (source) from the other (target). For example, let's say that in response to an event, my sensor A's readings vary in a certain way for 5 seconds. For the same event sensor B's readings vary as well for 5 seconds. Sensor A and B are not measuring the same physical quantities but respond to the same event and seem to have a relationship.

What can I do to use the signal from sensor A to learn how the signal from sensor B would look like for that event? What is the state of the art in deep learning?

",33872,,,,,2/28/2020 10:21,What's the best method to predict/generate signal from one sensor (source) to signal from another another (target)?,,0,0,,,,CC BY-SA 4.0 18308,2,,17214,2/28/2020 13:05,,1,,"

How many classes do you want to annotate?

We can get the bounding boxes and class names coarsely by using pre-trained models such as YOLOv3 (darknet), SSD and others.

Then we can load those annotations into the labelImg tool and manually correct them. This reduces a lot of the work and is called the Human-AI labeling technique.

",20151,,,,,2/28/2020 13:05,,,,0,,,,CC BY-SA 4.0 18309,2,,18227,2/28/2020 14:38,,0,,"

Does the choice of optimizer really matter in terms of training the model?

Yes.

Would the Adam optimizer be better than the Adadelta optimizer?

Yes. (But sometimes Adadelta gives better results; it depends upon the dataset and the fine-tuning mechanism.)

Would they all basically be the same?

No. Here is the explanation

Would some optimizers be better because they allow most of the weights to achieve their "global" minima?

In practice, it is not possible to check whether the model has reached a global minimum or not.

We can check the model for over-fitting or under-fitting using the training and validation sets, and evaluate the generalization of the model with the test set.
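
Since the best optimizer depends on the dataset, a quick empirical comparison is often the most practical check. A minimal Keras sketch (with made-up toy data; substitute your own training/validation split):

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, optimizers

# Hypothetical toy data, just for illustration
x = np.random.rand(1000, 20).astype('float32')
y = (x.sum(axis=1) > 10).astype('float32')[:, None]

def build_model():
    return tf.keras.Sequential([
        layers.Dense(32, activation='relu', input_shape=(20,)),
        layers.Dense(1, activation='sigmoid'),
    ])

for opt in [optimizers.Adam(), optimizers.Adadelta(), optimizers.SGD()]:
    model = build_model()
    model.compile(optimizer=opt, loss='binary_crossentropy', metrics=['accuracy'])
    hist = model.fit(x, y, validation_split=0.2, epochs=5, verbose=0)
    print(type(opt).__name__, hist.history['val_accuracy'][-1])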

",20151,,2444,,9/12/2020 15:16,9/12/2020 15:16,,,,0,,,,CC BY-SA 4.0 18310,1,,,2/28/2020 15:25,,3,967,"

Imagine that we have a set of heuristic functions $\{h_i\}_{i=1}^N$, where each $h_i$ is both admissible and consistent (monotonic). Is $\sum_{i=1}^N h_i$ still consistent or not?

Is there any proof or counterexample to show the contradiction?

",33875,,2444,,2/28/2020 15:46,2/28/2020 16:41,Is the summation of consistent heuristic functions also consistent?,,1,0,,,,CC BY-SA 4.0 18311,2,,17859,2/28/2020 15:40,,1,,"

If you want to evaluate on real thermal image dataset, you can use this one.

Thermal Image dataset

is mAP a relevant metric when I want to show result to a client ? (e.g a client doesn't understand if I tell him ""my model has a mAP=0.7"")

Mean Average Precision is the relevant metric, but it's quite technical. You can start by explaining False Positives and False Negatives in the predictions; in turn, this leads to Precision and Recall. It mostly depends upon your use case, because there will always be a trade-off between them.

",20151,,,,,2/28/2020 15:40,,,,2,,,,CC BY-SA 4.0 18312,2,,18310,2/28/2020 16:33,,5,,"

No, it will not necessary be consistent or admissible. Consider this example, where $s$ is the start, $g$ is the goal, and the distance between them is 1.

s --1-- g

Assume that $h_0$ and $h_1$ are perfect heuristics. Then $h_0(s) = 1$ and $h_1(s) = 1$. In this case the heuristic is inadmissible because $h_0(s)+h_1(s) = 2 > d(s, g)$. Similarly, as an undirected graph the heuristic will be inconsistent because $|h(s)-h(g)| > d(s, g)$.

If you'd like to understand the conditions for the sum of heuristics to be consistent and admissible, I would look at the work on additive PDB heuristics.

",17493,,17493,,2/28/2020 16:41,2/28/2020 16:41,,,,1,,,,CC BY-SA 4.0 18314,2,,18233,2/28/2020 18:02,,2,,"

The probabilistic models that represent distributions implicitly are, for example, the GANs. (Goodfellow is one of the authors of the original GAN model).

In the paper Variational Inference using Implicit Distributions (2017), the authors write

Implicit distributions are probability models whose probability density function may be intractable, but there is a way to

  1. sample from them exactly and/or calculate and approximate expectations under them, and

  2. calculate or estimate gradients of such expectations with respect to model parameters.

Implicit models have been successfully applied to generative modelling in generative adversarial networks (GANs)

A popular example of implicit models are stochastic generative networks: samples from a simple distribution - such as uniform or Gaussian - are transformed nonlinearly and non-invertably by a deep neural network

They also provide a table (table 1) that shows some probabilistic models that use an implicit distribution. Here $I$ denotes inference only. VI stands for variational inference.

See this blog post Variational Inference using Implicit Models, Part I: Bayesian Logistic Regression. See also the paper Learning in Implicit Generative Models (2017).

",2444,,-1,,6/17/2020 9:57,2/28/2020 18:38,,,,7,,,,CC BY-SA 4.0 18316,2,,18303,2/28/2020 19:12,,10,,"

You would also want to consider physical limitations. If you are even storing 126 yottabyte of data per day, then if we look at the current theoretical densest data storage medium, DNA, at 215 petabytes per gram, we get... ${(126 * 10^{24}) \over (215 * 10^{15})} = 586046511$ grams per day

586046511 g = 586046 kg = 586 Metric Tonnes just for storage.

",33884,,2193,,2/28/2020 19:39,2/28/2020 19:39,,,,2,,,,CC BY-SA 4.0 18317,1,,,2/28/2020 21:49,,1,22,"

Yeah I know, best title ever. Anyway,

I want to make a neural network which is fed with frames coming from an usb camera. Don't wanna be so specific, so I'm just gonna say that the network's goal is to classify human hand gestures, therefore I need to make sure it can effectively learn how the hand moves around.

My problem is that I have no idea what happens when there are 3 channels instead of 1. I only know that (for 3 channels) it does 3 separate convolution operations with the same kernel, resulting in 3 separate layers. How do these 3 channels affect the network? Does it learn from the movement 3 times in parallel, then mix these 3 ""separate movements"" together? Do I need to make the input single-channel to help it detect the hand?

PS: the text is probably confusing, but that's because I'm confused too; that's why I'm asking.

",32751,,,,,2/28/2020 22:05,How do 3 channels affect a network when detecting human skin (CNN)?,,1,0,,,,CC BY-SA 4.0 18318,2,,18317,2/28/2020 22:05,,1,,"

3 channels will give you more information about the color of the object and its surroundings, which might be important in certain cases. For example, if you want to classify blue cars and red cars, then the color of the object is very important. Since your problem is to classify hand gestures, color might not be that relevant. You're not very interested in the color of the hand, you only care about its position, so there is a good chance that grayscale images might be enough. You should try with grayscale first; if that works, good; if not, try with 3 channels.
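
If you want to try the grayscale route first, here is a minimal sketch with OpenCV (cv2 is just one option; the file name is a stand-in for a frame captured from your camera):

import cv2

frame = cv2.imread("hand_gesture.jpg")           # captured frame, shape (H, W, 3) in BGR order
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # collapse the 3 channels into 1
gray = gray[..., None]                           # add a channel axis so the CNN input is (H, W, 1)
print(frame.shape, "->", gray.shape)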

",20339,,,,,2/28/2020 22:05,,,,0,,,,CC BY-SA 4.0 18319,1,18330,,2/28/2020 23:23,,2,208,"

My GAN is set up like this:

  • Train an autoencoder (VAE), get the decoder part and use as Generator
  • Train Discriminator

After training, do the generation in these steps:

  • Call Generator to generate an image
  • Call the Discriminator to classify the image to see whether it's acceptable

The problem is that the Discriminator says 'false' a lot, which means the generated image is not useful.

How should the Generator change (update weights) when Discriminator doesn't accept its generated image?

",2844,,,,,2/29/2020 14:46,GANs: Should Generator update weights when Discriminator says false continuously,,1,0,,,,CC BY-SA 4.0 18320,1,,,2/29/2020 3:30,,2,219,"

From what I have seen, any results involving RL almost always take a massive number of simulations to reach a remotely good policy.

Will any form of RL be viable for real-time systems?

",32390,,32390,,2/29/2020 4:31,2/29/2020 12:35,Is reinforcement learning suited for real-time systems?,,1,1,,,,CC BY-SA 4.0 18321,2,,18220,2/29/2020 3:47,,2,,"

This may sound counter-intuitive, but one of the biggest rules of thumb for model capacity in deep learning is:

IT SHOULD OVERFIT.

Once you get a model to overfit, it's easier to experiment with regularization, module replacements, etc. But in general, it gives you a good starting ground.

",25496,,,,,2/29/2020 3:47,,,,3,,,,CC BY-SA 4.0 18323,1,,,2/29/2020 5:44,,2,24,"

Assume that I have a candidate selection system to generate product/user pairs for recommendation. Currently, in order to hold a quality bar for the recommended products, we trained a model to optimize for clicks on the link, denoted as the pClick(product, user) model. The output of the model is a score in (0,1) representing how likely the user is to click on the recommended product.

For our initial product launch, we set a manually selected threshold, say T, for all users. Only when the score passes T do we send the user the recommendation.

Now we realize this is not optimal: some users care less about recommendation quality, while other users have a high bar for it. A personalized threshold, instead of the global T, can help us improve the overall relevance.

The goal is to output the threshold for each user, assume we have training data for each user's activity and user/product attributes.

The question is: how should we model this problem with machine learning? Any references or papers are highly appreciated.

",33901,,,,,2/29/2020 5:44,How to model personalized threshold problem with machine learning,,0,0,,,,CC BY-SA 4.0 18325,1,,,2/29/2020 8:43,,2,78,"

The learning rate in my model is 0.00001 and the gradients of the model are within the range [-0.0001, 0.0001]. Is this normal?

",8415,,2444,,3/1/2020 12:57,3/1/2020 12:57,Why gradients are so small in deep learning?,,0,5,,,,CC BY-SA 4.0 18326,1,,,2/29/2020 9:25,,3,116,"

Is it possible to train a neural network, with no parallel bilingual data, for machine translation?

",5351,,2444,,12/6/2021 11:56,12/6/2021 15:13,How to do machine translation with no labeled data?,,1,0,,,,CC BY-SA 4.0 18327,2,,18326,2/29/2020 9:25,,2,,"

In the paper Unsupervised Machine Translation Using Monolingual Corpora Only, the authors proposed a novel method.

Intuitively it is an autoencoder, but the Start Of Sentence token is set to be the language type.

Another, more advanced method is to use a pre-trained model. In the paper Cross-lingual Language Model Pretraining, researchers proposed an algorithm that uses a pre-trained multilingual BERT (which is trained with labelled data, but we do not need a labelled dataset for our own task) together with the autoencoder mentioned previously.

",5351,,,,,2/29/2020 9:25,,,,2,,,,CC BY-SA 4.0 18328,2,,18320,2/29/2020 12:35,,1,,"

Short answer: Yes, it is.

Explanation

Reinforcement learning can be considered a form of online learning. That is, you can train your model with a single data/reward pair at a time. As with any online learning algorithm, there are a few things to consider.

The model tends to forget the knowledge it has gained. To overcome this problem, one can save new data in a circular buffer, called the history (or replay buffer), and train the model with a mix of new and old data, as in the sketch below. This is actually the common way to train an RL model and can be adapted to real-time systems. There are also other techniques to overcome it.
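
A minimal sketch of such a buffer (the class name, capacity and batch size are made up for illustration):

import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=10_000):
        # a deque with maxlen behaves like a circular buffer:
        # once full, the oldest transition is dropped automatically
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # a mix of new and old experience: uniform sampling over the stored history
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))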

Another problem is that, if only one data point is fed to the network at a time, it will be impossible to apply some techniques, such as batch normalization.

",12841,,,,,2/29/2020 12:35,,,,7,,,,CC BY-SA 4.0 18329,2,,18189,2/29/2020 14:12,,1,,"

You can remove the genes, of course, if it makes sense in your implementation. I can see some reasons why you might want to do it:

  • Your genome's size is very high, so performing some pruning of disabled genes might speed up the algorithm
  • You do not want to re-enable the genes in the future (maybe you have a special mutation mechanism that might add connections instead of re-enabling genes, or simply, once a gene is disabled, it stays like that)

As a rule of thumb, keeping connections is usually a good thing if the algorithm can handle big genomes. After all, you might re-enable them in the future (these connections are safer to enable than creating new random ones, as the latter might create unwanted cycles, so you'd have to check for that), dense ANNs work better than sparse ones, and pruning too much might not play in your favor.

",15530,,,,,2/29/2020 14:12,,,,0,,,,CC BY-SA 4.0 18330,2,,18319,2/29/2020 14:46,,1,,"

In general, you should train both discriminator D and generator G simultaneously.

Depending on the metric that you use as the target for your model, you may encounter a vanishing gradient problem. It can happen when you implement the original loss (i.e. JS-divergence). In that case, D can become overconfident regarding fake samples and won't provide any useful feedback to G. To find out if training fell into this problem, you should plot the D and G losses.

The original GAN has a lot of problems; that's why I suggest you use the Wasserstein metric instead. You can find more information in the WGAN paper.
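
Roughly, the Wasserstein objective replaces the original cross-entropy losses with a difference of critic scores, to be minimized by the critic $D$ and the generator $G$ respectively (the Lipschitz constraint on $D$, enforced via weight clipping or a gradient penalty, is omitted here):

$$L_D = \mathbb{E}_{\tilde{x} \sim p_g}\left[D(\tilde{x})\right] - \mathbb{E}_{x \sim p_r}\left[D(x)\right], \qquad L_G = -\mathbb{E}_{\tilde{x} \sim p_g}\left[D(\tilde{x})\right]$$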

Here you can find more information about GAN problems:

",12841,,,,,2/29/2020 14:46,,,,0,,,,CC BY-SA 4.0 18331,2,,18303,2/29/2020 16:03,,8,,"

The answer is really very simple. If you have the dystopian power over all the mobile devices in the first place, you would not make them send all their data over to any ""global data storage"" just like that. Instead, you would have put a local AI on each device that filters, processes, categorizes and flags the important parts, sending only those parts plus an intelligent summary of the remaining data to a global AI. The global AI combines and synthesizes the parts that all the local AIs send to it, and may request further data from the local AIs based on what it wants to know.

Naturally, since you are a competent dystopia architect, you design each local AI to be intelligent enough to subvert any human's attempt to remove it, stop its activity, or otherwise interfere with its data collection and processing. The local AIs also continuously communicate in a distributed network with other local AIs regarding their status and any adversarial activities, so that they can quickly act to defend themselves if the need arises, and also notify the global AI of any attack. In this surveillance state, it is an easy task for the global AI to send armed agents to deal with any threat to the AI network that manages to gain any foothold in the information cyberspace.

The point is that the most durable dystopia is a defended distributed dystopia, which would make it robust and scalable.

",33916,,,,,2/29/2020 16:03,,,,0,,,,CC BY-SA 4.0 18332,1,18333,,2/29/2020 17:06,,1,113,"

I'm watching the David Silver series on YT which has raised a couple of questions:

In the Markov process (or chain), how are the directions to each successive state defined? For example, how are the arrow directions defined for the MP below? What's stopping our sample episodes from choosing A -> C -> D -> F?

Also, how is the probability transition matrix populated? From David's example, the probabilities seem to have already been set. For example:

",27629,,2444,,2/29/2020 17:59,2/29/2020 21:18,"In the Markov chain, how are the directions to each successive state defined?",,1,0,,,,CC BY-SA 4.0 18333,2,,18332,2/29/2020 17:38,,4,,"
  1. It's not possible, as in the chain illustrated there are no transitions between A and C, C and D, and D and F. Only sequences where transitions exist are possible. The choice of transitions is arbitrary; it depends on what you want to model with it.

As DuttaA says in his comment, you can imagine that all nodes are linked with all other nodes, but the links that are not drawn have a transition probability of 0.0; so the probability of observing the sequence ACDF is actually 0.0 as well. In order to make the diagram easier to understand, zero-probability transitions are generally not shown.

  2. There is no prescribed method of populating the transition probabilities. You can define the probabilities manually, randomly, or however you like. You could derive them from observable training data.

Re updating: this depends on the application. If you are modelling a process with known probabilities, you would not update the probabilities; if you are modelling a dynamic process which changes over time, then this is something you might want to consider. However, from my own experience (HMMs in speech recognition), once they have been assigned, they are generally kept as they are.

A Markov chain is really a fairly basic model; it gets more complex with a Hidden Markov Model, where you would generally use a learning algorithm to assign the transition and emission probabilities.
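
As a small illustration of how such a matrix can be defined and then sampled from (the states and probabilities here are arbitrary):

import numpy as np

states = ["A", "B", "C"]
# row i = current state, column j = probability of moving to state j;
# each row must sum to 1, and a 0 entry means "no arrow" in the diagram
P = np.array([[0.1, 0.6, 0.3],
              [0.0, 0.5, 0.5],
              [0.2, 0.0, 0.8]])

rng = np.random.default_rng(0)
state = 0                                  # start in state A
episode = [states[state]]
for _ in range(5):
    state = rng.choice(len(states), p=P[state])
    episode.append(states[state])
print(episode)                             # one sampled trajectory through the chain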

",2193,,2193,,2/29/2020 21:18,2/29/2020 21:18,,,,5,,,,CC BY-SA 4.0 18334,1,,,2/29/2020 19:20,,1,74,"

This question could be generalised to how to adapt state-of-the-art object detection models to large images with small ROIs.

In my particular case I'm trying to use this implementation of MTCNN to get bounding boxes for the faces of images of statues.

One challenge is that the face could take up a large proportion of the image like this:

Or a very small proportion of the image like this:

Where I'll zoom in on the statue's face so you can see it:

Bonus

If anyone has additional comments on my overall approach to this particular problem, happy to hear them.

",16871,,,,,2/29/2020 19:20,How to adapt MTCNN to large images with relatively small ROIs,,0,2,,,,CC BY-SA 4.0 18335,2,,18204,2/29/2020 19:28,,1,,"

This is a tentative answer, and I might come back to it at some point in time. As @nbro mentions this question seems to be opinion based, so my answers are also just my opinion.

If by AGI you mean ""super-intelligent"", then any of the following results should be sufficient to convince anyone of its being ""smarter"" than him/her/pronoun:

  1. Resolving important mathematical problems (the most famous examples being the Millennium Problems, Collatz Conjecture, Goldbach's Conjecture. (Corollary: Break all known encryption schemes)
  2. Founding a new ""system"" to supersede ZFC as the new foundation of mathematics.
  3. New discoveries in the natural sciences (physics, chemistry, biology...)

(1) is a bit dubious as a criterion: at least with modern techniques, automated theorem proving is either just ""symbol pushing"" or requires so much human intervention (in the design/construction to solve a particular problem) that it would be hard to imagine it as being ""smart"" in the traditional sense. We already have a few cases where an automated theorem prover solved big problems (the four-colour theorem being the most notable). The point being that, even if we reach this with methods similar to what we have already, people might be resistant to calling it ""smart"".

(2) is hard to imagine ever being plausible. To the extent that this ""AGI-thing"" is implemented on a system that ""does math"", it would be unusual to imagine a system that can move beyond itself to recognize a new, ""better"" system of math. As an analogy, it might be like a formal system trying to prove its own consistency in a Godelian sense

But the analogy is weak, and I don't see a strong/rigorous reason for doubting that an AGI can discover a new axiomatic system. One might even fathom that said AGI could create a new system from the ""bottom up"", much like string theory was constructed from the ""bottom up"" to ""explain"" relativity and particle physics. Perhaps then we can have ""proper resolutions"" of questions like the continuum hypothesis, much like how the parallel postulate was discarded to give way to non-Euclidean geometry. But I also doubt that there will ever be a ""final word"" on math itself, so it's just a fun idea for now.

(3) is also dubious: it is hard to imagine it ever becoming true. The study of the natural sciences would require a physical presence in the world that goes beyond seeking ""beauty in the mathematical equations"", which would be unusual for an AGI to have. That being said, an AGI could have cameras and other sensors to interpret the natural world, so it's not something that I think is strictly impossible.


If by AGI you mean ""human-ness"", then I don't think any single result can convince everyone at the world at the same time of its being an AGI. Perhaps this ""convincing the world that ""me is AGI"" work"" can be done on a person to person basis, in the sense that the AGI would need to interact with each person and slowly build up a certain degree of trust.

Under this interpretation, there can be no complete list that describes AGI, so what follows is just my own list of things I think an human-like AGI might be able to do.

  • Create and interpret art.
  • Have common sense.
  • Be ""creative""
  • Hold a meaningful conversation, understanding others and making sense.
  • Able to perform / to receive a psychoanalysis; understanding of folk-psychology.
  • Exist in a physical manifestation (like a robot) with social/environmental appropriateness.

The main issue with the above criteria is that they are all subjective. Like I said above, this set of criteria probably works on a case-to-case basis


These criteria seems to be the most important of all, but at the same time the definition of verification of these terms is epistemically tricky, so I'll leave them open.

  • Learning (Is a species evolving over time learning its environment?)
  • Self-replication (Is a meme / virus intelligent?)
  • Self-awareness (Is The Treachery of Images self-aware?)
",6779,,6779,,3/4/2020 5:00,3/4/2020 5:00,,,,1,,,,CC BY-SA 4.0 18336,1,18346,,3/1/2020 1:50,,1,55,"

If I have the Gaussian kernel

$$ k(x, x') = \operatorname{exp}\left( -\| x - x' \|^2 / 2\sigma^2 \right) $$

What is $x$ and $x'$ in the context of training an SVM?

",29877,,2444,,3/1/2020 2:00,3/1/2020 13:26,What are the variables used in a Gaussian radial basis kernel in the context of SVMs?,,1,0,,,,CC BY-SA 4.0 18337,1,,,3/1/2020 6:35,,2,36,"

I have been reading quite a lot about the research progress in the domain of self attention-based neural networks that were introduced by Google Inc. in their paper titled "Attention is all you need".

The concept of introducing attention into neural networks, in order to free ourselves from a strict context vector, is really unique; moreover, using the same concept to model sequences without recurrent neural networks, as introduced in the paper, is extremely elegant.

I have been trying to figure out how this concept of attention could aid deep networks that model multi-agent systems, in which game-theoretic factors come into play for the network to learn.

I was looking for some direction or a toy example/explanation or even possible previous research done to try to test these concepts together.

P.S - I'm just tinkering with some ideas hoping to build something experimental.

",33929,,2444,,12/31/2021 13:22,12/31/2021 13:22,Impact or applications of introducing attention in deep networks modelling multi-agent systems,,0,0,,,,CC BY-SA 4.0 18338,2,,8554,3/1/2020 7:11,,2,,"

At first, like Neil Slater says, I thought this could only be solved using the expected rewards instead of actual rewards, or else there wasn't enough information to solve it. But now I think there might be a way to solve this question. Here is my thinking on this problem (I would be curious for anyone's thoughts, as I am working through this book myself).

I think the key part is where the book says:

Each can collected by the robot counts as a unit reward, whereas a reward of $-3$ results whenever the robot has to be rescued.

This means that the reward set is actually $\mathcal R = \{0, 1, -3\}$ (we assume that in each timestep, the robot can only collect one can).

Now using $$r(s,a,s') = \sum_r r \frac{p(s',r\mid s,a)}{p(s'\mid s,a)} \tag{3.6}$$ and $$p(s'\mid s,a) = \sum_r p(s',r\mid s,a)\tag{3.4}$$ it seems possible to solve for all the probabilities. I'll do an example for $(s,a,s') = (\mathtt{high}, \mathtt{search}, \mathtt{high})$ and leave the rest to you (I haven't actually done the rest, since this does seem rather tedious).

Equation 3.6 gives $$r_\mathtt{search} = 0\cdot \frac{p(s', 0 \mid s,a)}{\alpha} + 1\cdot \frac{p(s', 1 \mid s,a)}{\alpha} -3\cdot \frac{p(s', -3 \mid s,a)}{\alpha}$$ Since $p(s', -3 \mid s,a) = 0$ (it's impossible for the robot to have to be rescued, since we started in the ""high"" state), we get $p(s', 1 \mid s,a) = \alpha r_\mathtt{search}$.

Now equation 3.4 gives $$\alpha = p(s', 0 \mid s,a) + p(s', 1 \mid s,a) + p(s', -3 \mid s,a)$$ which solves to $p(s', 0 \mid s,a) = \alpha - \alpha r_\mathtt{search}$.

So the first two rows of the table will look like:

$$\begin{array}{cccc|c} s& a & s' & r & p(s',r\mid s,a)\\ \hline \mathtt{high} & \mathtt{search} & \mathtt{high} & 1 & \alpha r_\mathtt{search}\\ \mathtt{high} & \mathtt{search} & \mathtt{high} & 0 & \alpha (1- r_\mathtt{search}) \end{array}$$

",33930,,,,,3/1/2020 7:11,,,,1,,,,CC BY-SA 4.0 18339,1,18364,,3/1/2020 7:14,,1,392,"

I've just started with AI and CNN networks.

I have two NIfTI image datasets, one with (240, 240) dimensions and the other one with (256, 132). Each dataset comes from a different hospital and machine.

If I want to use both to train my model, what do I have to do?

The model needs all the training data to have the same shape. I've thought about reshaping all the data to the same shape, but I don't know if I'm going to lose information if I reshape the images.

By the way, I have also a third dataset with (232, 256).

",4920,,4920,,3/1/2020 16:58,3/2/2020 18:36,Using three image datasets with different image sizes to train a CNN,,1,2,,,,CC BY-SA 4.0 18341,1,,,3/1/2020 8:59,,1,62,"

https://afia.asso.fr/journee-hommage-j-pitrat/ is a seminar on March 6th, 2020, in Paris (France, European Union), in honor of the late Jacques Pitrat, who advocated during all his professional life a meta-knowledge and reflective approach. (You need to register to attend that seminar).

Pitrat's blog is available (in spring 2020) on http://bootstrappingartificialintelligence.fr/WordPress3/ (and some snapshot of his CAIA system is downloadable here - but no documentation; however you might try to type L EDITE on stdin to caia). He wrote the Artificial Beings : the conscience of a conscious machine book describing the software architecture of, and the motivations for (some previous version of) CAIA. See also this A Step toward an Artificial Artificial Intelligence Scientist paper by J.Pitrat.

What AI conferences (or AGI workshops) in Europe should I consider submitting papers to explaining the ongoing work on RefPerSys?

That RefPerSys project (open source, open science, work-in-progress, with contact information) is explicitly following J.Pitrat's meta-knowledge approach. Feel free to follow or join that ambitious open-source (actually free software) project.

",3335,,3335,,3/1/2020 17:55,5/9/2020 20:04,What AI conferences in Europe should I consider submitting papers to explaining the ongoing work on RefPerSys?,,1,0,,,,CC BY-SA 4.0 18343,1,,,3/1/2020 10:18,,1,37,"

Consider a prediction problem for example. This is a loss function (negative log likelihood) that I am roughly talking about:

\begin{align*} J_{\text{train}} &= -\sum_t \log L\left(\theta_t, X_t, Y_t\right) \\ J_{\text{test}} &= -\sum_t \log L\left(\theta_t, X_{t+1}, Y_{t+1}\right) \\ J_{\text{dyn}} &= - \sum_t \frac{\left(\theta_{t+1} - \text{stop\_gradient}({\theta_t})\right)^2}{2 \sigma_{\theta}^2} \\ \theta_0 &\sim \Psi \end{align*}

Here, the dynamics are simply first-order smoothness (Brownian motion) in the time-dependent parameters $\text{d}\theta_t = \sigma_{\theta} \text{d} W_t$.

This is basically what they do in weather systems and in papers like A Bayesian Approach to Data Assimilation.

The main difference is, we now have the ability to use stop gradients in the smoothness, which changes the problem somewhat. Other issues might arise.

Who else is doing this? Feel free to also provide links to papers on related work. I am not finding enough hits, so maybe I am missing something. Are RNNs, if twisted slightly, falling into this framework and I am not seeing it?

",23001,,23001,,3/1/2020 15:45,3/1/2020 15:45,"Where can I find people solving this smoothing, filtering, temporal learning problem?",,0,4,,,,CC BY-SA 4.0 18344,1,,,3/1/2020 10:55,,1,118,"

Following on from my other (answered) question:

With regards to the Markov process (chain), if an environment is a board game and its states are the various position the game pieces may be in, how would the transition probability matrix be initialised? How would it be (if it is?) updated?

",27629,,2444,,3/1/2020 12:45,11/23/2021 4:04,How is the probability transition matrix populated in the Markov process (chain) for a board game?,,1,1,,,,CC BY-SA 4.0 18345,2,,11014,3/1/2020 11:16,,3,,"

I wonder why these features are necessary, because I think a constant plane contains no information and it makes the the network larger and consequently harder to train.

In many implementations of convolutional layers, the filters do not neatly remain inside the features plane when "sliding" along it, but (conceptually) also partially go "outside" the plane (where always at least one "cell" of the filter will still be inside the plane). Intuitively, in the case of a $3\times3$ filter for example, you can imagine that we pad the input features with an extra border of size $1$, and this padding around the "real" input planes is filled with $0$s.

If it's possible for all input features to also have values of $0$, it may in some situations be difficult or impossible for the neural network to distinguish the "real" $0$ inputs from the $0$ entries in the padding around the board, i.e. it may struggle to know where the game board is and where the game board ends. Having a constant input plane that's always filled with $1$s can help in this respect, because that plane can always reliably be used to distinguish "real" cells that actually exist on the game board from positions that are just outside the game board.

As for the plane filled with $0$s... I have no idea why that would ever be useful. Maybe it was useful due to some peculiar implementation detail. In this thread, some people hypothesise that on specific hardware it might make some computations slightly more efficient because of the layout of the data in memory -- it causes the number of channels to be divisible by $8$, which will.. maybe help? I really don't know too much about this, but I do know that on a smaller scale, sometimes adding unused data in classes can indeed increase performance due to better layout in memory. I suppose it's also possible that it was accidentally added by mistake, or "just in case" and that it doesn't really have much of a purpose. The amount of hardware that the AlphaGo team had available to them is quite insane anyway, one channel more or less probably wasn't too big of a concern for them.


What's more, I don't understand the sharp sign here. Does it mean "the number"? But one number is enough to represent "the number of turns since a move was played", why eight?

This is explained in the "Features for policy/value network" paragraph in the paper. Quote from the paper:

"Each integer feature value is split into multiple $19 \times 19$" planes of binary values (one-hot encoding). For example, separate binary feature planes ares used to represent whether an intersection has $1$ liberty, $2$ liberties, $\dots$, $\geq 8$ liberties.

All feature planes used were strictly binary, no feature planes were used that had integer values $> 1$. This is quite common when possible, because neural networks tend to have a much easier time learning with binary features than with integer- or real-valued features.
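
A small sketch of what that one-hot split looks like for the liberties feature (the numbers are invented, only the encoding idea matters):

import numpy as np

liberties = np.random.randint(0, 12, size=(19, 19))     # fake integer-valued feature plane

# one binary 19x19 plane per value: 1 liberty, 2 liberties, ..., >= 8 liberties
planes = np.stack([liberties == k for k in range(1, 8)] +
                  [liberties >= 8]).astype(np.float32)
print(planes.shape)                                     # (8, 19, 19), each plane strictly binary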

",1641,,1641,,1/5/2021 20:30,1/5/2021 20:30,,,,0,,,,CC BY-SA 4.0 18346,2,,18336,3/1/2020 13:21,,1,,"

$\mathbf{x} \in \mathbb{R}^p$ and $\mathbf{x}' \in \mathbb{R}^p$ are two inputs (or feature vectors).

In the context of classification with an SVM, you are given a dataset $D = \{(\mathbf{x}_i, y_i) \}_{i=1}^N$, where $\mathbf{x}_i \in \mathbb{R}^p$ is an input (or point) and $y_i$ the corresponding label. The goal is to find a hyperplane that classifies the points $\mathbf{x}_i$. The hyperplane actually corresponds to a binary classifier that splits the plane into two, so the assumption is that there are two labels. However, these points $ \mathbf{x}_i$ may not be linearly separable in $\mathbb{R}^p$, i.e. there may not be a hyperplane (in 2d, i.e. when $p=2$, a hyperplane is a line) that separates them. The kernel trick, i.e. the use of kernels (such as the Gaussian radial basis), allows an SVM to perform non-linear classification by transforming the inputs to a space where they are linearly separable.
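
As a concrete sketch, the kernel is just a function of two such feature vectors (the numbers below are chosen arbitrarily):

import numpy as np

def rbf_kernel(x, x_prime, sigma=1.0):
    # k(x, x') = exp(-||x - x'||^2 / (2 * sigma^2))
    return np.exp(-np.sum((x - x_prime) ** 2) / (2 * sigma ** 2))

x = np.array([1.0, 2.0])         # one input point in R^2
x_prime = np.array([1.5, 1.0])   # another input point
print(rbf_kernel(x, x_prime))    # similarity close to 1 when the points are close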

",2444,,2444,,3/1/2020 13:26,3/1/2020 13:26,,,,6,,,,CC BY-SA 4.0 18347,2,,18290,3/1/2020 14:40,,0,,"

In order to answer the question requires that the definition of intelligence (human or otherwise) be grounded into the physical model. There are two forms of intelligence in the physical model, one subconscious (general) and the other conscious (specific).

The subconscious is responsible for managing the brain’s process and performing all the physical reactions. It exist in a deterministic universe where the decisions are known and reactions automated. In the brain, the shape of the deterministic universe is time. For something to be deterministic, it must have guaranteed results within a specific time cycle. This requirement precludes the use of reasoning and long-term memory whose completion time cannot be fixed. The subconscious runs on general intelligence. To understand the format of general intelligence, think in terms of a jigsaw puzzle. General intelligence only needs the border of the puzzle. With just the basic object structure, the subconscious can move and process data throughout the brain without regards to its content. It is functionally agnostic and can complete its automation cycle without reasoning.

Now, consciousness is responsible for mapping the non-deterministic environment to the deterministic universe. To do this, it must exist outside of that universe so its own non-deterministic nature doesn’t interfere with the subconscious process. Consciousness runs on specific intelligence. Its job is to decipher the contents of the puzzle. In humans, specific intelligence takes on the form of visual symbols. These symbols are used by the conscious state for reasoning and long-term memory access. How decisions are reached has no predefined form or technique. It is a non-deterministic process based on the accumulation of individual experience and previous selection. This process allows the biological life-form to adapt decision-making to its own environment. In the most basic sense, the subconscious allows us to live in a deterministic universe and consciousness allows us to adapt to a non-deterministic environment.

The two forms of intelligence are functionally incompatible. The non-deterministic environment has no common translation to a deterministic universe. If it did, it wouldn’t be non-deterministic. To overcome the translation problem, the lower occipital lobe is a functional “Black Box” that allows the two different formats to be written into the same memory construct. By simple association, the two systems coexist in a non-invasive relationship where the hippocampus (short-term) memory serves as an interchange point.

With a grounded physical model, I can now attempt to answer some of the questions.

  1. We have “free will”, but it can only exist in the non-deterministic environment of the conscious state. There is no such thing as “free will” in a deterministic universe. The results of that universe are already known and decided.
  2. A deterministic universe cannot change. Its own nature precludes adaption from within. To make alterations requires a separate state that can rise above the process in order to change the process. Biological consciousness serves this role and requires “free will” to make decisions outside of the scope of what is already known.
  3. There are no ethics in a deterministic universe. Abstracting right and wrong is a reasoning function which is not allowed. AI constructs like agents which do pattern matching to develop predictions/reactions are not directly attached to any conscious state. They are deterministic functions whose capability is predefined. Simply put, agents cannot exceed the sum of their programming because they cannot adapt to that which is not known or anticipated.
  4. Liability only exists where “free will” exists and does not exist in the deterministic universe. Otherwise, you are attempting to blame your hand for a decision originated in your frontal lobe.
  5. “Free will” becomes a prisoner of the deterministic universe. As experience and decision knowledge grow, more and more decisions are shifted to the subconscious. This creates a dependency that restricts the range of “free will” and the decisions it is called upon to resolve.
  6. Creativity does not exist in the deterministic universe. It is an essential skill used by the conscious state to employ related memory to formulate new decisions. Creativity is used by animals to develop survival skills to adapt to environmental changes. The production of art work in an esoteric sense is purely human, but as a basic function is not.
  7. Intent does not exist in the deterministic universe. Within the non-deterministic universe of consciousness, intent is buried in an ocean of previous experience that may or may not reflect current “free will” goals. Especially, if these goals threaten the survival of the biological process.
",33881,,,,,3/1/2020 14:40,,,,3,,,,CC BY-SA 4.0 18350,1,18351,,3/1/2020 19:12,,2,63,"

I understand that Experience Replay is used for data efficiency reasons and to remove correlations in sequences of data. How exactly do these sequences of correlated data affect the performance of the algorithm?

",33227,,,,,3/1/2020 20:30,Intutitive explanation of why Experience Replay is used in a Deep Q Network?,,1,0,,,,CC BY-SA 4.0 18351,2,,18350,3/1/2020 20:30,,2,,"

It is the neural network approximation that suffers when it attempts to learn from correlated data. Intuitively, this is because the learning algorithm takes gradient steps assuming that the examples it is shown are representative of the dataset as a whole. A neural network update step uses a mini-batch of examples to calculate the gradient of its weights and biases with respect to a cost function. If that mini-batch is not fairly sampled, then the expected value of the loss and of the gradient will not be representative of the population as a whole. They will be biased and can cause an update step to move the parameters in the wrong direction.

In addition, if you are using a bootstrapping method - any form of TD learning - then the value estimates used to set learning update targets are sensitive to bias in the estimator. This is already something that can cause instability due to positive feedback loops. Adding another source of systemic bias from correlated input data can only make it worse.

You can gain some experience for the effect of this with a simple non-RL experiment.

Goal: To approximate the function $y = x^2$ in the range $-2 < x < 2$. These numbers chosen to make the task simple for a neural network.

Setup: Generate training data in the form $x_i, y_i$ for a few thousand sample points with true values of the function. Optionally add some noise to make the task harder. Keep the data set ordered by $x_i$ values. Create a simple neural network for regression (e.g. 2 hidden layers with 50 neurons and tanh activation), and set it up with a simple optimiser (e.g. SGD)

Run: Train the network twice, using small minibatches (e.g. size 10). Once without shuffling the data on each epoch (or using any shuffling algorithm on the minibatches), and once with standard shuffling and assignment to minibatches. Plot a learning curve of loss vs epoch for each run.

You should find that the ordered, non-shuffled version learns much slower than when using some randomisation to decorrelate the data. Depending on precise hyperparameters, the ordered version may even oscillate with loss quite far away from converging.

It is this effect, or a more subtle version of it, that impacts RL combined with neural networks when learning directly from trajectories. In addition, bootstrap methods already suffer from bias in their starting value estimates, which this effect makes worse.

",1847,,,,,3/1/2020 20:30,,,,0,,,,CC BY-SA 4.0 18352,2,,17050,3/1/2020 20:31,,5,,"

There is a recent research development that looked into the effectiveness of neural networks on arithmetic. Interestingly, feed-forward neural networks (MLPs) with various activation functions, as well as LSTMs (RNNs, which are Turing-complete), are not able to model simple arithmetic operations (e.g. addition/multiplication), so the authors designed a new logic unit that can solve all of these simple arithmetic problems.

See: Neural Arithmetic Logic Units

More recently, DL can solve symbolic maths: Deep Learning for Symbolic Mathematics

",28608,,,,,3/1/2020 20:31,,,,0,,,,CC BY-SA 4.0 18353,1,,,3/1/2020 21:42,,5,1100,"

Why is a batch size needed to update the weights of a neural network?

According to that Youtube Video from 3B1B, the weights are updated by calculating the error between expectation and outcome of the neural net. Based on that, the chain rule is applied to calculate the new weights.

Following that logic, why would I pass a complete batch through the net? The first entries wouldn't have an impact on the weighting.

Do I need to define a batch size when I use backpropagation?

",27777,,2444,,3/23/2021 9:51,3/23/2021 12:58,What is the purpose of the batch size in neural networks?,,2,0,,,,CC BY-SA 4.0 18354,2,,18353,3/1/2020 23:01,,5,,"

tl;dr: The batch size is the number of samples a network sees before updating its gradients. This number can range from a single sample to the whole training set. Empirically, there is a sweet spot in the range from 1 to a few hundred, where people experience the fastest training speeds. Check this article for more details.


A more detailed explanation...

If you have a small enough number of samples, you can let the network see all of the samples before updating its weights; this is called Gradient Descent. The benefit from this is that you guarantee that the weights will be updated in the direction that reduces the training loss for the whole dataset. The downside is that it is computationally expensive and in most cases infeasible for deep neural nets.

What is done in practice is that the network sees only a batch of the training data, instead of the whole dataset, before updating its weights. However, this technique does not guarantee that the network updates its weights in a way that will reduce the dataset's training loss; instead, it reduces the batch's training loss, which might not be the same thing. This adds noise to the training process, which can in some cases be a good thing, but it requires the network to take more steps to converge (this isn't a problem, since each step is much faster).

What you're saying is essentially training the network each time on a single sample. This is formally called Stochastic Gradient Descent, however the term is used more broadly to include any case where the network is trained on a subset of the whole training set. The problem with this approach is that it adds too much noise to the training process, causing it to require a lot more steps to actually converge.

",26652,,,,,3/1/2020 23:01,,,,4,,,,CC BY-SA 4.0 18355,1,,,3/2/2020 0:49,,2,943,"

Consider the following question:

$n$ vehicles occupy squares $(1, 1)$ through $(n, 1)$ (i.e., the bottom row) of an $n \times n$ grid. The vehicles must be moved to the top row but in reverse order; so the vehicle $i$ that starts in $(i, 1)$ must end up in $(n − i + 1, n)$. On each time step, every one of the $n$ vehicles can move one square up, down, left, or right, or stay put; but if a vehicle stays put, one other adjacent vehicle (but not more than one) can hop over it. Two vehicles cannot occupy the same square.

Suppose that each heuristic function $h_i$ is both admissible and consistent. Now what I want to know is to check the admissibility and consistency of the following heuristics:

  1. $h= \Sigma_i h_i$

  2. $h= \min_i (h_i)$

  3. $h= \max_i (h_i)$

  4. $h = \frac{\Sigma_i h_i}{n}$

P.S: As a lemma, we know that consistency implies the admissibility of the heuristic function.

Problem Explanation

From this link, I have found that the first heuristic is neither admissible, nor consistent.

I know that the second and the fourth heuristics are either consistent, or admissible.

I have run into one contradiction with the third heuristic:

Here we see that if car 3 hops twice, the total cost of moving all the cars to their destinations is 3, whereas the heuristic $\max(h_1, \dots, h_n) = 4$.

Problem

So, $\max(h_1, ..., h_n)$ must be consistent and admissible, but the above example shows that it's not. What is my mistake?

",33875,,2444,,2/8/2021 11:28,2/8/2021 11:28,"If $h_i$ are consistent and admissible, are their sum, maximum, minimum and average also consistent and admissible?",,1,0,,,,CC BY-SA 4.0 18356,1,,,3/2/2020 6:15,,2,48,"

I am going through Rabiner 1989 and he writes that the discrete probability density function of duration $d$ in state $i$ (that is, staying in a state for duration $d$, conditioned on starting in that state) is $$p_i(d) = {a_{ii}}^{d-1}(1-a_{ii})$$

($a_{ii}$ is the state transition probability from state $i$ to state $i$ - that is, staying in the same state).

He then continues to say that the expected duration in a state, conditioned on starting in that state, is $$\overline d_i = \sum_{d=1}^\infty d\, p_i(d) = \frac{1}{1-a_{ii}}$$

Where does the coefficient $d$ (in $\sum_{i=1}^\infty d p_i(d)$) come from?

",33954,,,,,3/2/2020 6:15,Expected duration in a state,,0,5,,,,CC BY-SA 4.0 18358,2,,18355,3/2/2020 7:44,,1,,"

The issue is that you must include assumptions about hopping into your heuristic. In particular, if you are considering individual cars then you must assume that they might be able to hop all of the way to the goal. Thus, your heuristic for each car should be Manhattan distance divided by 2. That's guaranteed to be admissible when you take the max.
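
In symbols, with vehicle $i$ at $(x_i, y_i)$ and its goal at $(n-i+1, n)$ as in the question, the per-vehicle relaxation described above and the combined heuristic would be:

$$h_i(s) = \frac{|x_i - (n-i+1)| + |y_i - n|}{2}, \qquad h(s) = \max_i h_i(s)$$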

If you consider all possible cars you can do better, but you'll need to reason out all the cases. (In general every car either waits or moves, and for every waiting car one car can hop. So, by looking at the minimum distance for any car to reach the goal you can start to reduce your heuristic.) But, that is a different question.

",17493,,,,,3/2/2020 7:44,,,,1,,,,CC BY-SA 4.0 18360,1,18414,,3/2/2020 10:51,,1,196,"

I am currently studying Deep Learning by Goodfellow, Bengio, and Courville. In chapter 5.2 Capacity, Overfitting and Underfitting, the authors say the following:

Typically, when training a machine learning model, we have access to a training set; we can compute some error measure on the training set, called the training error; and we reduce this training error. So far, what we have described is simply an optimization problem. What separates machine learning from optimization is that we want the generalization error, also called the test error, to be low as well. The generalization error is defined as the expected value of the error on a new input. Here the expectation is taken across different possible inputs, drawn from the distribution of inputs we expect the system to encounter in practice.

I found this part unclear:

Here the expectation is taken across different possible inputs, drawn from the distribution of inputs we expect the system to encounter in practice.

The language used here is confusing me, because it is discussing a ""distribution"", as in a ""probability distribution"", but then refers to inputs, which are data gathered from outside of any probability distribution. Based on the limited information my studying of machine learning has taught me so far, my understanding is that the machine learning algorithm (or, rather, some machine learning algorithms) uses training data to implicitly construct some probability distribution, right? So is this what it is referring to here? Is the ""distribution of inputs we expect the system to encounter in practice"" the so called ""test set""? I would greatly appreciate it if people would please take the time to clarify this.

",16521,,2444,,1/2/2022 11:25,1/2/2022 11:25,"What does ""the expectation is taken across different possible inputs, drawn from the distribution of inputs we expect the system to encounter"" mean?",,2,0,,,,CC BY-SA 4.0 18362,1,,,3/2/2020 13:27,,1,42,"

I have currently collected 150,000 gamestates from playing a Monte Carlo Tree Search AI player against a basic rule-based AI at the game of Castle. The information captured represents the information available to the MCTS player at the start of each of their turns, and whether they won the game in the end. The records are stored in CSV files.

Example gamestate entry:

For example the entry above shows that:

  • HAND: the MCTS player's hand contains the cards 7,5,4,9,9,9 (the suit has been omitted because it has no bearing on the game). (list of cards)
  • CASTLE_FU: MCTS face up cards are 8,2,10 (list of cards)
  • CASTLE_FD_SIZE: MCTS has 3 cards face down (int)
  • OP_HAND_SIZE: The opponent has 3 cards in their hand (int)
  • OP_CASTLE_FU: The opponents face up cards are Jack, Queen, Ace. (list of cards)
  • OP_CASTLE_FD_SIZE: The opponent has 3 cards face down (int)
  • TOP: The top of the discard pile is a 4 (single value)
  • DECK_EMPTY: The deck in which players pick up cards is not empty (boolean)
  • WON: The MCTS player ended up winning the hand (boolean)

I hope to input this data into machine learning algorithms to produce an evaluation function for the MCTS algorithm to use.

How can I normalize this data so I can use it in Keras/Scikit-Learn?

EDIT:

I'm not sure normalizing is the right term here. Encoding or mapping may be more accurate for what I am trying to achieve. Another difficulty I've encountered is the fact that the player's hand size can vary, up to almost holding the full deck in theory (although this would be incredibly rare in practice). However, this is the only column that can have a size greater than 3.

EDIT 2:

I've come up with this model to represent the data. Does this look suitable?

",33966,,-1,,6/17/2020 9:57,3/2/2020 18:35,How can I normalize gamestates in order to use with a machine learning library?,,0,0,,,,CC BY-SA 4.0 18363,1,,,3/2/2020 18:02,,1,117,"

I am currently working on a stock market prediction model which incorporates sentiments along with historical price for next day price prediction.

I wanted to test different window/sequence sizes, e.g. (3 days, 4 days, ..., 10 days), to identify which window size is optimal for predicting the next day's prices. However, the appropriate num_units in model.add(LSTM(units=num_units)) varies across different window sizes.

If a smaller window size is paired with a larger num_units, there is over-fitting, where the model's prediction for the price at day t+1 is almost equal to the price at day t.

Hence, I am unable to make a fair comparison between different window sizes without also varying num_units.

I have referred to this question (How to select number of hidden layers and number of memory cells in an LSTM?), however I am unable to come to a conclusion.

Is there a predefined guideline for the num_units to use within an LSTM cell for time-series prediction, based on the sequence length?

",33975,,,,,3/2/2020 18:02,How should I go about selecting an optimal num_units within a LSTM cell for different sequence sizes,,0,0,,,,CC BY-SA 4.0 18364,2,,18339,3/2/2020 18:36,,0,,"

Resizing the images will work. But if you significantly reduce the image dimensions, information IS lost. Here is a simple example. Suppose you have a 500 x 500 image. That has 250,000 pixels. Assume that in these images the object of interest (let's say a bird in a forest) only occupies 10% of the pixels in the image (25,000 pixels). Now assume you reduce the image to 100 x 100 and thus have 10,000 pixels. Your object of interest (the bird) now only occupies 1,000 pixels. These are the pixels the neural net will learn from.

A better way is to first crop your original 500 x 500 images to maximize the percentage of bird pixels in the cropped image. For example, assume the resulting cropped image comes out to be, say, a 200 x 200 image, but in that image the subject of interest (the bird) occupies 50% of the pixels (20,000 pixels). Now, if you reduce the cropped image down to 100 x 100, the object of interest will occupy 5,000 pixels. A sketch of this crop-then-resize step is shown below.

Cropping images is of course a hassle. For some types of images, for example images of people where you are just interested in the faces, there are routines available that will automatically crop the image to just the face. In general, you will have to crop the images yourself. If you want to build high-accuracy classifiers, you have to start with a ""robust"" data set: the larger the percentage of the image the object of interest takes up, the better your classifier will perform.
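
A minimal sketch of the crop-then-resize step with Pillow (the crop box is hard-coded here purely for illustration; in practice it would come from wherever your region of interest is):

from PIL import Image

img = Image.open("scan.png")                       # e.g. a 500 x 500 input image
roi = img.crop((150, 150, 350, 350))               # (left, upper, right, lower) box around the subject
small = roi.resize((100, 100), Image.BILINEAR)     # the subject now fills most of the 100 x 100 pixels
small.save("scan_cropped_resized.png")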

",33976,,,,,3/2/2020 18:36,,,,0,,,,CC BY-SA 4.0 18365,2,,3933,3/2/2020 19:11,,3,,"

Your data set would be what is called "unbalanced' and this can lead to problems in developing an accurate classifier.

The best thing to do (which you might not be able to do) is to find more images for those classes with a smaller number of images.

Another alternative is to synthetically produce more images. One way to do that is to use the Keras ImageDataGenerator.flow_from_directory. Documentation is at https://keras.io/preprocessing/image/. Create a directory (your_dir). In it, create a subdirectory Giraffe. Place all your 43 giraffe images into that directory. Create another directory your_save_dir, and leave it empty. Now, create the generators shown below.

# the import is not shown in the original snippet; with standalone Keras:
from keras.preprocessing.image import ImageDataGenerator
# (or: from tensorflow.keras.preprocessing.image import ImageDataGenerator)

# augmentation parameters: each generated image is a randomly transformed copy
datagen = ImageDataGenerator(rotation_range = 30, width_shift_range = 0.2,
                             height_shift_range = 0.2,
                             shear_range = 0.2,
                             zoom_range = 0.2,
                             horizontal_flip = True,
                             fill_mode = "nearest")

# your_dir and your_save_dir are the directory paths you created above
data = datagen.flow_from_directory(your_dir, target_size=(200, 200),
                                   batch_size=43, shuffle=False,
                                   save_to_dir=your_save_dir, save_format='png',
                                   interpolation='nearest')

# each call yields (and saves to your_save_dir) one augmented batch of 43 images
images, labels = data.next()

Now, each time you execute the last line of code, you will generate and store 43 more images in your_save_dir. These images will be transformed per the parameters in the image data generator in a random manner. While NOT as good as having truly original images, it will help significantly to balance the data set.

Do the same of course for the other image sets that have a small number of samples.

Another thing that can help is, for the sets with fewer images, first, crop the images so that the animal occupies as high a percentage of pixels as possible in the cropped image. Then do the process defined above. This gives the network a higher percentage of meaningful pixels to "learn" from.

",33976,,2444,,6/21/2020 22:37,6/21/2020 22:37,,,,0,,,,CC BY-SA 4.0 18366,1,,,3/2/2020 19:48,,4,226,"

I'm training a classifier and I want to collect incorrect outputs for human to double check.

The output of the classifier is a vector of probabilities for the corresponding classes, for example, [0.9, 0.05, 0.05].

This means the probability for the current object being class A is 0.9, whereas for it being the class B is only 0.05 and 0.05 for C too.

In this situation, I think the result has high confidence, as A's probability dominates B's and C's.

In another case, [0.4,0.45,0.15], the confidence should be low, as A and B are close.

What's the best formula to use to calculate this confidence?

",33082,,,,,1/19/2023 6:47,How to calculate the confidence of a classifier's output?,,3,3,,,,CC BY-SA 4.0 18367,2,,18293,3/2/2020 20:37,,2,,"

I believe this is covered under Section 107 of the Copyright Act states:

the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright.

This section was intended to enable utilization of copyrighted material where that utilization is in the public interest. As long as you stay within the bounds as stated above you should not be considered in violation of a copyright.

",33976,,33976,,4/26/2020 23:00,4/26/2020 23:00,,,,0,,,,CC BY-SA 4.0 18368,2,,18251,3/2/2020 20:56,,1,,"

A quick scan of your plots does not seem to indicate any severe over fitting. As pointed out there is always some degree of over fitting but in this case it looks to be very small. Your validation loss reduces as it should down to what appears to be a very small level and remains low. One test would be to add a ""dropout"" layer into your model right after a dense layer and see the effect on training accuracy and validation accuracy. Set the drop out rate to something like .4 . Make sure your training accuracy remains high (it may take a few more epochs to get there) then look to see if the validation loss is lower than without the dropout layer. Run this several times because random weight initialization can sometimes effect accuracy by converging on a non optimal local minimum. Additional you can add kernel regularizes to your dense layers which also helps to prevent over training. I have a lot of plots similar to yours and adding dropout and regularization had no effect on validation accuracy.

",33976,,,,,3/2/2020 20:56,,,,0,,,,CC BY-SA 4.0 18369,1,18385,,3/2/2020 21:12,,1,42,"

I need to multiply the weights of the terms in the TF-IDF matrix by the word embeddings of the word2vec matrix, but I can't do it because each matrix has a different number of terms. I am using the same corpus to get both matrices, so I don't know why each matrix has a different number of terms.

My problem is that I have a TF-IDF matrix with the shape (56096, 15500) (corresponding to: number of terms, number of documents) and a word2vec matrix with the shape (300, 56184) (corresponding to: number of embedding dimensions, number of terms).
And I need the same number of terms in both matrices.

I use this code to get the word2vec embedding matrix:

# imports not shown in the original snippet
import nltk
import numpy as np
from gensim.models import word2vec

def w2vec_gensim(norm_corpus):
    wpt = nltk.WordPunctTokenizer()
    tokenized_corpus = [wpt.tokenize(document) for document in norm_corpus]
    # Set values for various parameters
    feature_size = 300      # Word vector dimensionality
    window_context = 10     # Context window size
    min_word_count = 1      # Minimum word count
    sample = 1e-3           # Downsample setting for frequent words
    w2v_model = word2vec.Word2Vec(tokenized_corpus, size=feature_size, window=window_context,
                                  min_count=min_word_count, sample=sample, iter=100)
    words = list(w2v_model.wv.vocab)
    vectors = []
    for w in words:
        vectors.append(w2v_model[w].tolist())
    # rows = embedding dimensions (300), columns = terms in the word2vec vocabulary
    embedding_matrix = np.array(vectors)
    embedding_matrix = embedding_matrix.T
    print(embedding_matrix.shape)

    return embedding_matrix

And this code for get the TFIDF matrix:

# import not shown in the original snippet
from sklearn.feature_extraction.text import TfidfVectorizer

tv = TfidfVectorizer(min_df=0., max_df=1., norm='l2', use_idf=True, smooth_idf=True)


def matriz_tf_idf(datos, tv):
    tv_matrix = tv.fit_transform(datos)
    tv_matrix = tv_matrix.toarray()
    # rows = terms in the TF-IDF vocabulary, columns = documents
    tv_matrix = tv_matrix.T
    return tv_matrix

And I need the same number of terms in each matrix. For example, if I have 56096 terms in the TF-IDF matrix, I need the same number in the embedding matrix, i.e. a TF-IDF matrix with the shape (56096, 1550) and a word2vec embedding matrix with the shape (300, 56096). How can I get the same number of terms in both matrices? I can't just delete terms without more information, because I need the multiplication to make sense; my goal is to get the embeddings of the documents.

Thank you very much in advance.

",33977,,,,,3/3/2020 9:22,Why I have a different number of terms in word2vec and TFIDF? How I can fix it?,,1,0,,,,CC BY-SA 4.0 18370,2,,18249,3/2/2020 21:20,,1,,"

The CNN should work without trying to do special feature extraction. As pointed out some pre-processing can aid in enhancing the CNN's classification results. The Keras ImageDataGenerator provides optional parameters you can set to provide pre-processing as well as provide data augmentation. One thing I know that works for sure but can be painful is cropping the images in such a way that the subject of interest occupies a high percentage of the pixels in the resultant cropped image. The cropped image can than be resized as needed. The logic here is simple. You want your CNN to train on the subject of interest (for example a bird sitting in a tree where the bird is the subject of interest). The part of the image that is not of the bird is essentially just noise making the classifier's job harder. For example say you have a 500 X 500 initial image in which the subject of interest (the bird) only takes up 10% of the pixels (25,000 pixels). Now say as input to your CNN you reduce the image size to 100 X 100. Now the (pixels that the CNN 'learns' from is down to 1000 pixels. However lets say you crop the image so that the features of the bird are preserved but the pixels of the bird in the cropped image take up 50% of the pixels. Now if you resize the cropped image to 100 X 100 , 5000 pixels of relevance are available for the network to learn from. I have done this on several data sets. In particular images of people where the subject of interest is the face. There are many programs that are effective at cropping these images so that mostly just the face appears in the cropped result. I have trained a deep CNN in one case using uncropped images and in the other with cropped images. The results are significantly better using the cropped images.

",33976,,33808,,3/3/2020 20:06,3/3/2020 20:06,,,,4,,,,CC BY-SA 4.0 18371,2,,17492,3/2/2020 21:36,,-1,,"

SVM is generally considered objectively better than deep learning for standard machine learning tasks.

SVM or decision trees.

Deep Learning is beneficial when there is structure in the data that can't be easily represented by some type of kernel.

I'm actually really interested in why decision trees haven't been used for computer vision in conjunction with deep learning feature extraction.

",32390,,,,,3/2/2020 21:36,,,,0,,,,CC BY-SA 4.0 18372,2,,18227,3/2/2020 21:53,,1,,"

I have experimented with this to a small degree and have not noticed that much of an impact.

To date, Adam appears to give the best results on a variety of image data sets. I have found that "adjusting" the learning rate during training is an effective means of improving model performance and has more impact than the selection of the optimizer.

Keras has two callbacks that are useful for this purpose. Documentation is at https://keras.io/callbacks/. The ModelCheckpoint callback enables you to save the full model or just the model weights based on monitoring a metric. Typically, you monitor validation loss and set the parameter save_best_only=True to save the results for the lowest validation loss. The other useful callback is ReduceLROnPlateau, which allows you to adjust the learning rate based on monitoring a metric. Again, the metric usually monitored is the validation loss. If the loss fails to reduce after a user-set number of epochs (parameter patience), the learning rate will be adjusted by a user-set factor (parameter factor). You can think of the training process as traveling down a valley. As you near the bottom of the valley, it becomes more and more narrow. If your learning rate does not adjust to the "narrowness" there is no way you will get to the bottom of the valley.
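
For illustration, here is a minimal sketch of how these two callbacks are typically wired into training; the file name, patience, factor and epoch count are placeholder choices, and model, x_train, y_train, x_val, y_val are assumed to be defined already.

from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau

checkpoint = ModelCheckpoint('best_weights.h5', monitor='val_loss',
                             save_best_only=True, save_weights_only=True)
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=3,
                              verbose=1, min_lr=1e-6)

model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=100, callbacks=[checkpoint, reduce_lr])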

You can also write a custom callback to adjust the learning rate. I have done this and created one which first adjusts the learning rate based on monitoring the training loss until the training accuracy reaches 95%. Then it switches to adjust the learning rate based on monitoring the validation loss. It saves the model weights for the lowest validation loss and loads the model with these weights to make predictions. I have found this approach leads to faster training and higher accuracy.

The fact is you can't tell if your model has converged on a global minimum or a local minimum. This is evidenced by the fact that, unless you take special efforts to inhibit randomization, you can get different results each time you run your model. The loss can be envisioned as a surface in $N$ space, where $N$ is the number of trainable parameters. Lord knows what that surface is like and where your initial parameter weights put you on that surface, plus how other random processes cause you to traverse that surface.

As an example, I ran a model at least 20 times and got resultant losses that were very close to each other. Then I ran it again and got far better results for exactly the same data.

",33976,,2444,,9/12/2020 15:14,9/12/2020 15:14,,,,1,,,,CC BY-SA 4.0 18373,2,,18206,3/2/2020 22:29,,3,,"

I have consistently found Adam to work very well, but, to tell you the truth, I have not seen all that much difference in performance based on the optimizer. Other factors seem to have much more influence on the final model performance. In particular, adjusting the learning rate during training can be very effective. Also, saving the weights for the lowest validation loss and loading the model with those weights to make predictions works very well.

Keras provides two callbacks that help you achieve this. Documentation is at https://keras.io/callbacks/. The ReduceLROnPlateau callback allows you to adjust the learning rate based on monitoring a metric, typically the validation loss. If the loss fails to reduce after N consecutive epochs (parameter patience), the learning rate is adjusted by a factor (parameter factor). You can think of training as descending into a valley which gets more and more narrow as you approach the bottom. If the learning rate does not adjust to this "narrowness", there is no way you will get to the very bottom.

The other callback is ModelCheckpoint. This allows you to save the model (or just the weights) based on monitoring a metric. Again, usually the validation loss is monitored and the parameter save_best_only is set to True. This saves the model with the lowest validation loss, which can then be used to make predictions.

",33976,,,,,3/2/2020 22:29,,,,0,,,,CC BY-SA 4.0 18374,2,,18205,3/2/2020 22:56,,0,,"

Tensorflow-Keras provides an effective data transformer and loader. Documentation is at https://keras.io/preprocessing/image/. The ImageDataGenerator provides many types of possible transforms and also enables the use of a user-defined pre-processing function. ImageDataGenerator.flow_from_directory provides a means of retrieving images in batches from a directory containing sub-directories (classes) of images, and of resizing the images.

Image size can impact the results. Generally, the larger the image, the better the result, but this is subject to the law of diminishing returns (at some point the impact on accuracy becomes minuscule) while the training time can become exorbitant. When you have large images, like 1000 x 1000, where the subject of interest in the image is small, say 50 x 50, the best but most painful approach is to crop the image to the subject of interest. Unfortunately, this is usually a time-consuming drudgery unless you can find some program that can crop the images automatically. For example, there are good programs that can crop images of people automatically so that the resulting cropped image is primarily the person's face. Alternatively, modules like cv2 can be adapted to provide this capability for certain images.

The batch_size you select, along with the image size, directly affects memory usage. If your images are large and your batch_size is too large, you will encounter a "resource exhausted" error. You can reduce the batch size, but this will extend training time. Other techniques for dealing with large images include methods like sliding windows, etc. Again, these will increase training time because you are taking a large image and breaking it into a series of smaller images that you feed into the network.

A general, though probably risky, rule I follow is that if I can visibly see the subject of interest in a resized image, then I assume the network will be able to detect it as well. It will probably be less accurate than using the full image, but should be, as we engineers say, "good enough".
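
A minimal sketch of the generator set-up described above (the directory name, target size and batch size are placeholders to adapt to your data):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)

train_flow = train_gen.flow_from_directory(
    'data/train',            # one sub-directory per class
    target_size=(224, 224),  # images are resized on the fly
    batch_size=32,           # reduce this if you hit a resource-exhausted error
    class_mode='categorical')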

",33976,,,,,3/2/2020 22:56,,,,0,,,,CC BY-SA 4.0 18376,2,,18344,3/2/2020 23:25,,1,,"

The transition probability matrix cannot simply be initialized by hand. Your game world has some rules; since we do not know these rules, we can approximate them. To do this, we should run the game over and over and collect the outcome of each action.

Here is a toy example: assume we control a plane. We want the plane to fly forward, but sometimes there may be wind in a random direction. So the plane won't end up in the desired position. Instead, it has some probability of flying left, right or forward. After the data is collected, we can estimate these probabilities.

Another example is estimating an n-gram model: assume we want to train a model that will suggest a word in a search field. We would analyze a corpus and count all 2-word sequences (for a bigram model). Then, if a user starts typing I, the model would suggest the word am, because I am occurs more frequently than, say, I did, and hence the transition probability from I to am is greater.
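
A toy sketch of estimating these transition probabilities from counts (the corpus here is made up for illustration):

from collections import Counter

corpus = "i am happy i am tired i did nothing".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

# P(am | i) = count("i am") / count("i")
p_am_given_i = bigrams[("i", "am")] / unigrams["i"]
p_did_given_i = bigrams[("i", "did")] / unigrams["i"]
print(p_am_given_i, p_did_given_i)   # 2/3 vs 1/3, so "am" would be suggested after "i"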

Here is a good book on Language Modeling.

But your question suggests to me that you want something like reinforcement learning. Check out this video, which is part of a great lecture series on RL by David Silver.

",12841,,12841,,3/2/2020 23:58,3/2/2020 23:58,,,,0,,,,CC BY-SA 4.0 18377,2,,18169,3/2/2020 23:48,,0,,"

One problem could be with the selection of the validation set. For your model to work well on data it has not seen during training, a high validation accuracy is necessary, but it is not sufficient on its own. The validation set must be large enough and varied enough that its probability distribution is an accurate representation of the probability distribution of all the images. You could have 1000 validation images, but, if they are similar to each other, they would be an inadequate representation of the probability distribution. Therefore, when you run your trained model to make predictions on the test set, its performance would be poor.

So the question is: how many validation images did you use and how were they selected (randomly or handpicked)?

Try increasing the number of validation images and use one of the available methods to randomly select images from the training set, remove them from the training set, and use them as validation images.

Keras's flow_from_directory can achieve that, or sklearn's train_test_split. I usually have the validation set selected randomly, with at least 10% as many images as the test set has.
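
For example, a random split with scikit-learn could look like this, where x and y are assumed to be your full set of training images and labels:

from sklearn.model_selection import train_test_split

x_train, x_val, y_train, y_val = train_test_split(
    x, y, test_size=0.1, shuffle=True, stratify=y, random_state=42)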

Overtraining is a possibility, but I think unlikely given your validation accuracy is high.

Another thing is how were the test set images selected? Maybe their distribution is skewed. Again, the best thing to do is to select these randomly.

What was your training accuracy? Without a high training accuracy, the validation accuracy may be a meaningless value. Training accuracy should be in the upper 90's for the validation accuracy to be really meaningful.

Finally, is there any possibility your test images were mislabeled? Try using your test set as the validation set and see what validation accuracy you get.

Here is an example of what I mean. A guy built a CNN to operate on a set of images separated into two classes. One class was "dogs", the other class was "wolves". He trained the network with great results: almost 99.99% training accuracy and 99.6% validation accuracy. When he ran it on the test set, his accuracy was about 50%. Why? Well, it turns out all the images of wolves were taken with snow in the background, and none of the images of dogs had snow in the background. So, the neural network figured out that if there is snow it must be a wolf, and if there is no snow it must be a dog. However, in his test set, he had a mixture of wolves in and not in snow, and dogs in and not in snow. Great training and validation results, but totally useless performance.

",33976,,2444,,1/17/2021 19:28,1/17/2021 19:28,,,,0,,,,CC BY-SA 4.0 18378,1,,,3/3/2020 0:18,,3,710,"

In the context of a Markov decision process, this paper says

it is well-known that the optimal policy is invariant to positive affine transformation of the reward function

On the other hand, exercise 3.7 of Sutton and Barto gives an example of a robot in a maze:

Imagine that you are designing a robot to run a maze. You decide to give it a reward of +1 for escaping from the maze and a reward of zero at all other times. The task seems to break down naturally into episodes—the successive runs through the maze—so you decide to treat it as an episodic task, where the goal is to maximize expected total reward (3.7). After running the learning agent for a while, you find that it is showing no improvement in escaping from the maze. What is going wrong? Have you effectively communicated to the agent what you want it to achieve?

It seems like the robot is not being rewarded for escaping quickly (escaping in 10 seconds gives it just as much reward as escaping in 1000 seconds). One fix seems to be to subtract 1 from each reward, so that each timestep the robot stays in the maze, it accumulates $-1$ in reward, and upon escape it gets zero reward. This seems to change the set of optimal policies (now there are way fewer policies which achieve the best possible return). In other words, a positive affine transformation $r \mapsto 1 \cdot r - 1$ seems to have changed the optimal policy.

How can I reconcile "the optimal policy is invariant to positive affine transformation of the reward function" with the maze example?

",33930,,2444,,12/24/2021 10:23,12/26/2021 13:18,Is the policy really invariant under affine transformations of the reward function?,,3,0,,,,CC BY-SA 4.0 18379,1,,,3/3/2020 0:23,,1,766,"

It seems to me that Seq2Seq models and Bidirectional RNNs try to do the same thing. Is that true?

Also, when would you recommend one setup over another?

",32023,,2444,,11/24/2021 11:53,11/24/2021 11:53,Do Seq2Seq models and the Bidirectional RNN do the same thing?,,1,0,,,,CC BY-SA 4.0 18380,1,,,3/3/2020 4:14,,0,107,"

Is there a good way to understand how single-shot object detection works? The most basic way to do detection is to use a sliding-window detector and look at the output of the NN to detect whether a class is there or not.

I'm wondering if there is a way to understand how many of the single-shot detectors work? Internally is there some form of sliding window going on? Or is it basically the same detector learned at each point?

",32390,,,,,3/3/2020 4:14,Intuition behind single-shot object detection,,0,4,,,,CC BY-SA 4.0 18381,1,,,3/3/2020 6:08,,1,84,"

I watched the video lecture of cs224: Stanford CS224N: NLP with Deep Learning | Winter 2019 | Lecture 2 – Word Vectors and Word Senses.

They take the sample size of the window to be $2^5 = 32$ or $2^6 = 64$. Why is the sample size of stochastic gradient descent a power of 2? Why can't we take 42 or 53 as the sample window size?

Btw, how do I identify the best minimum window sample size?

",9863,,2444,,12/27/2021 9:12,12/27/2021 9:12,Why is the sample size of stochastic gradient descent a power of 2?,,0,1,,,,CC BY-SA 4.0 18383,2,,18378,3/3/2020 7:25,,2,,"

This statement:

(it is well-known that the optimal policy is invariant to positive affine transformation of the reward function).

is, as far as I know, and as you summarise, incorrect* because simple translations to reward signal do affect the optimal policy, and the affine transform of a real number $x$ can be given by $f(x) = mx + c$

It is well known that optimal policy is unaffected by multiplying all rewards by a positive scaling, e.g. $f(x) = mx$ where $m$ is positive.

It is also worth noting that if an optimal policy is derived from Q values using $\pi(s) = \text{argmax}_a Q(s,a)$, then that policy function is invariant to positive affine transformations of action values given by $Q(s,a)$. Perhaps that was what the paper authors meant to write, given that they go on to apply normalisation to Q values.
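
This invariance is easy to see directly: for any $m > 0$ and any $c$,

$$\text{argmax}_a \left[ m \, Q(s,a) + c \right] = \text{argmax}_a Q(s,a),$$

since scaling every action value by the same positive factor and shifting them all by the same constant does not change which action attains the maximum.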

The impact of the mistake is not relevant to the the rest of the paper as far as I can see (caveat: I have not read the whole paper).


* It is possible to make the statement correct even for episodic problems, if:

  • You model episodic problems with an "absorbing state" and treat it as a continuing problem.

  • You apply the same affine transform to the (usually zero reward) absorbing state.

  • You still account for the infinite repeats of the absorbing state (requiring a value of discount factor $\gamma$ less than one). In practice this means either granting an additional reward of $\frac{b}{1-\gamma}$ for ending the episode, or not ending a simulation in the terminal state, but running learning algorithms over repeated time steps whilst still in the terminal state, so they can collect the non-zero reward.

",1847,,1847,,12/14/2021 19:25,12/14/2021 19:25,,,,0,,,,CC BY-SA 4.0 18384,2,,18379,3/3/2020 9:05,,1,,"

Seq2Seq and Bidirectional RNNs are not doing the same thing, at least in their classic form.

Seq2Seq models are used to generate a sequence from another sequence. Consider, for example, the translation task from one language to another. In that sense, Seq2Seq is more a family of models, not an architecture.

On the other hand, the Bidirectional RNN is a neural network architecture and can be used to build several models including Seq2Seq, for example, the encoding part of the Seq2Seq can be a Bi-RNN, but they also can be used for other tasks, for example, sentence classification or sentiment analysis.

",32493,,2444,,11/24/2021 11:48,11/24/2021 11:48,,,,0,,,,CC BY-SA 4.0 18385,2,,18369,3/3/2020 9:22,,0,,"

Your problem is that TFIDF is cutting out around 90 terms.

Since you aren't using the min_df or max_df parameters and as far as I can tell you aren't doing any stemming/lemmatization, the only difference I can see between the two methods is the tokenizer.

There are two things I'd try out if I were you:

  1. Try explicitly converting the word2vec corpus to lowercase. TfidfVectorizer does this by default and I can't see where you're doing it in the word2vec pipeline. Ignore this if your corpus is already lowercased.
  2. Try using the nltk.WordPunctTokenizer() with the TfidfVectorizer. You can do this like this:
wpt = nltk.WordPunctTokenizer()
tv = TfidfVectorizer(min_df=0., max_df=1., norm='l2', use_idf=True, smooth_idf=True,
                     tokenizer=wpt.tokenize)

",26652,,,,,3/3/2020 9:22,,,,1,,,,CC BY-SA 4.0 18386,1,,,3/3/2020 12:53,,1,48,"

I have heard that sigmoid activation functions should not be used on neural networks with many hidden layers as the gradients tend to vanish in deep networks.

When should each of the common activation functions be used, and why?

  • ReLu
  • Sigmoid
  • Softmax
  • Leaky ReLu
  • TanH
",33227,,2444,,3/3/2020 13:07,3/3/2020 13:07,What are the pros and cons of the common activation functions?,,0,1,,,,CC BY-SA 4.0 18387,1,,,3/3/2020 13:52,,2,30,"

For a high school project, I am required to obtain data (speed, vibration, roll and tilt, over a sampling period) through a sensor located on the vehicle, and to classify the current road condition using machine learning.

Which algorithm/approach may be most suitable for this task? Suggestions to sources for learning (books, tutorials) would be also appreciated, as I am new to AI and ML.

",33984,,2444,,3/5/2020 1:01,3/5/2020 1:01,"Suitable algorithms for classifying terrain condition (asphalt, dirt etc) for motor vehicles",,0,2,,,,CC BY-SA 4.0 18388,1,,,3/3/2020 14:11,,2,52,"

I am doing a simple scan to see how dataset size affects training. Basically, I took 10% of the coco dataset and trained a yolov3 net (from scratch) to just look for people. Then I took 20% of the coco dataset and did the same thing... all the way to 100%. What is strange is that all 9 nets are getting similar loss at the end (~7.5). I must be doing something wrong, right? I expected to see an exponential curve where loss started out high and asymptotically approached some value as the dataset increased to 100%. If it didn't approach a value (and still had a noticeable slope at 100%), then that meant more data could help my algorithm.

This is my .data file:

classes = 1
train = train-run-less.txt
valid = data/coco/5k.txt
names = data/humans.names
backup = backup

I am trying to train just one class (person) from the coco dataset. Something is not making sense, and, in a sanity test, I discovered that the loss drops even if the training folder only contains 1 image (which doesn't even have people in it). I thought the way this worked was that it trained on the "train" images, then it tested the neural net on the "valid" images. How is it getting better at finding people in the "valid" images if it hasn't trained on a single one?

Basically, I am trying to answer the question: "how much accuracy can I expect to gain as I increase the data?"

",33987,,33987,,3/3/2020 14:42,11/19/2022 20:04,Data scan not making sense for coco dataset,,1,0,,,,CC BY-SA 4.0 18389,1,,,3/3/2020 15:15,,1,31,"

I’d like to build a model that has an understanding of geometry, which could be applied to a question-answering system. Specifically, it would be nice if it could determine the volume of an object by simply looking at pictures of it.

If there are any pre-trained models out there that I can utilize that would certainly make things easier.

Otherwise, are there any suggestions on the kind of model(s) I should use to do this?

Also, I read something online about how Facebook trained an AI to solve complex math problems just by looking at them. They approached the problem as a language translation problem, not as a math problem. I wonder if this is the way to go?

",20271,,2444,,3/5/2020 1:00,3/5/2020 1:00,Train an AI to infer accurate mathematical calculations by simply “looking” at images of shapes/objects,,0,0,,,,CC BY-SA 4.0 18390,1,18399,,3/3/2020 15:38,,3,1380,"

When we define the loss function of a variational autoencoder (VAE), we add the Kullback-Leibler divergence between the sample taken according to a normal distribution of parameters:

$$ N(\mu,\sigma) $$

and we compare it with a normal distribution of parameters

$$ N(0,1) $$

My intuition is that it is clever to have samples taken from a distribution centered around zero, but I don't understand why we want the samples to be drawn from a normal distribution specifically.

",32694,,2444,,3/4/2020 1:04,3/4/2020 1:09,Why do we regularize the variational autoencoder with a normal distribution?,,1,0,,,,CC BY-SA 4.0 18391,1,,,3/3/2020 15:43,,1,922,"

I know dropout layers are used in neural networks during training to provide a form of regularisation in an attempt to mitigate over-fitting.

Would you not get an increased fitness if you disabled the dropout layers during evaluation of a network?

",25574,,2444,,3/3/2020 17:54,5/13/2022 9:30,Does the performance of a model increase if dropout is disabled at evaluation time?,,2,2,,,,CC BY-SA 4.0 18392,2,,18391,3/3/2020 16:16,,3,,"

Dropout is a technique that helps to avoid overfitting during training. That is, dropout is usually used for training.

Without dropout, units may change in a way that they fix up the mistakes of the other units. This may lead to complex co-adaptations. This, in turn, leads to overfitting, because these co-adaptations do not generalize to unseen data.

If you want to evaluate your model, you should turn off all dropout layers. For example, PyTorch's model.eval() does this for you.
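
For instance, in PyTorch the switch looks like this, assuming model is any nn.Module containing nn.Dropout layers and x_test is a batch of inputs:

import torch

model.train()              # dropout active: use this mode during training
# ... training loop ...

model.eval()               # dropout disabled for evaluation
with torch.no_grad():
    predictions = model(x_test)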

Note that in some cases dropout can be used for inference, e.g. to add some stochasticity to the output.

More about dropout: Srivastava et al. (2014), "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", JMLR.

",12841,,12841,,5/13/2022 9:30,5/13/2022 9:30,,,,1,,,,CC BY-SA 4.0 18393,1,,,3/3/2020 16:39,,0,121,"

What are typical ways to understand and visualize a trained RL agent's policy when the state space is of high dimension (but not images)?

For example, suppose state and action are denoted by $s=(s_1,s_2,\cdots,s_n)$ and $a=(a_1,a_2,\cdots,a_k)$. How do I determine which attribute of the state (e.g. an image pixel of video game) is most responsible for a particular action $a_j$? I would like to have, for each action $a_j, j=1,2,...,k$, a table that ranks the attributes of the observation.

My question may be a little bit vague, but if you have any thoughts on how to improve it please let me know!

",33660,,2444,,3/5/2020 0:53,3/5/2020 0:53,How to understand and visualize a trained RL agent's policy when the state space is high dimensional?,,0,2,,,,CC BY-SA 4.0 18395,1,18422,,3/3/2020 17:34,,3,174,"

I'm doing machine learning projects. I took a look at many datasets I worked with, mostly there are already famous datasets that everyone uses.

Let's say I decided to make my own dataset. Is there a possibility that my data are so random that no relationship exists between my inputs and outputs? This is interesting because, if this is possible, then no machine learning model will manage to find an input-output relationship in the data, and it will fail to solve the regression or classification problem.

Moreover, is it mathematically possible that some values have absolutely no relationship between them? In other words, there is no function (linear or nonlinear) that can map those inputs to the outputs.

Now, I thought about this problem and concluded that, if there is a possibility for this, then it will likely happen in regression because maybe the target outputs are in the same range and the same features values can correspond to the same output values and that will confuse the machine learning model.

Have you ever encountered this or a similar issue?

",30327,,2444,,3/4/2020 1:00,3/5/2020 10:24,Is there a possibility that there is no relationship between some inputs and outputs?,,2,1,,,,CC BY-SA 4.0 18396,2,,18391,3/3/2020 17:43,,2,,"

Dropout is usually disabled at test (or evaluation) time. For example, in Keras, dropout is disabled at evaluation time by default, although you can enable it, if you need to (see below). The purpose of dropout is to decorrelate the units (or feature detectors) so that they learn more robust representations of the data (i.e. a form of regularisation).

However, there's also Monte Carlo (MC) dropout, i.e., you train the network with dropout and you also use dropout at test time in order to get stochastic outputs (i.e. you will get different outputs, for different forward passes, given the same inputs). MC dropout is an approximation of Bayesian inference in deep Gaussian processes, which means that MC dropout is roughly equivalent to a Bayesian neural network.
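
In Keras (TensorFlow 2), one way to keep dropout active at test time for MC dropout is to call the model with training=True; a rough sketch, assuming model contains Dropout layers and x is a batch of inputs:

import numpy as np

# several stochastic forward passes over the same input
preds = np.stack([model(x, training=True).numpy() for _ in range(30)])

mean_prediction = preds.mean(axis=0)   # point estimate
uncertainty = preds.std(axis=0)        # spread across passes as an uncertainty measure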

Does the performance of a model increase if dropout is disabled at evaluation time?

Yes, possibly. However, MC dropout provides an uncertainty measure, which can be useful in certain scenarios (e.g. medical scenarios), where a point estimate (i.e. a single prediction or classification) is definitely not appropriate, but you also need a measure of the uncertainty or confidence of the predictions.

",2444,,2444,,3/3/2020 17:55,3/3/2020 17:55,,,,0,,,,CC BY-SA 4.0 18397,1,,,3/3/2020 17:49,,2,3409,"

Is this due to my dropout layers being disabled during evaluation?

I'm classifying the CIFAR-10 dataset with a CNN using the Keras library.

There are 50000 samples in the training set; I'm using a 20% validation split for my training data (10000:40000). I have 10000 instances in the test set.

",25574,,25574,,3/4/2020 14:31,11/21/2022 11:06,Why is my validation/test accuracy higher than my training accuracy,,1,1,,11/24/2022 5:37,,CC BY-SA 4.0 18398,2,,4176,3/3/2020 18:10,,1,,"

Recently, I needed to develop a Fuzzy Logic algorithm to made inferences of any data entrance; the real case applied was in Oil and Gas Industry, that the code needs to infere Joint Types in Fluid Pipelines. But with this Algorithm, the Computer Science Developer can infere any problems data, follow the link bellow:

https://www.codeproject.com/Articles/5092762/Csharp-Fuzzy-Logic-API (C# Fuzzy Logic API)

",33992,,,,,3/3/2020 18:10,,,,0,,,,CC BY-SA 4.0 18399,2,,18390,3/3/2020 18:28,,3,,"

If you are mathematically inclined, here is an article that discusses the reasoning.

What I take away is that the VAE forces the learned latent space to be Gaussian due to the KL divergence term in the loss function. So, now we have a known distribution to sample from to create input vectors to feed to the decoder, to produce, say, images of dogs, if the VAE was trained on images of dogs. As you sample from the distribution, you will produce different types of dog images.

I assume a different type of distribution could be selected if one uses the proper loss function for that type of distribution, that is, the loss function which would measure the difference in the distribution of the latent space and the desired distribution.

KL divergence is the loss function that forces the latent space distribution to be Gaussian. If you do not ""restrict"" the latent space, as is the case with a regular autoencoder, you have no idea what kind of vector to select as an input to the decoder to produce a dog image. Without restriction, there are $2^n$ (where $n$ is the dimensions of the latent space) possible vectors you could choose from. Chances of selecting one that produces a dog image would be minuscule.
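
For reference, for a diagonal Gaussian latent code $N(\mu, \sigma)$ this KL term against $N(0, 1)$ has a simple closed form, which is what gets added to the reconstruction loss:

$$ D_{\text{KL}}\big(\mathcal{N}(\mu, \sigma) \,\|\, \mathcal{N}(0, 1)\big) = \frac{1}{2} \sum_{i} \left( \mu_i^2 + \sigma_i^2 - \log \sigma_i^2 - 1 \right) $$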

Well, I hope this helps. I no longer am mathematically proficient (age 75), so I hope my interpretation of the math is correct.

A VAE tends to produce blurry images because there are two terms in the loss function. One term is trying to make the output look like the input, while the KL loss term is trying to restrict the latent space distribution. GANs (generative adversarial networks) don't have this conflict, so they produce much higher-quality images.

",33976,,2444,,3/4/2020 1:09,3/4/2020 1:09,,,,1,,,,CC BY-SA 4.0 18402,2,,18366,3/3/2020 19:18,,0,,"

The obvious answer for a binary (2-class) classification is 0.5. Beyond that, the earlier comment is correct.

One thing I have seen done is to run your model on the test set and save the predicted probabilities. Then create a threshold variable, call it thresh, and increment thresh from 0 to 1 in a loop. On each iteration, compare thresh with the highest predicted probability, call it P. If P > thresh, declare that class as the selected prediction, then compare it with the true class. Keep track of the errors for each value of thresh, and at the end select the value of thresh with the fewest errors.

There are also more sophisticated methods, for example "top-2 accuracy", where thresh is selected based on having the true class within either the prediction with the highest probability or the one with the second highest probability. You can construct a weighted error function and select the value of thresh that has the lowest net error over the test set. For example, an error function might be as follows: if neither P(highest) nor P(second highest) equals the true class, error = 1; if P(second highest) equals the true class, error = 0.5; if P(highest) equals the true class, error = 0. I have never tried this myself, so I am not sure how well it works. When I get some time, I will try it on a model with 100 classes and see how well it does.

I know that in the ImageNet competition they evaluate not just the top-1 accuracy but also the "top-3" and "top-5" accuracy; in that competition there are 1000 classes. I never thought of this before, but I assume you could train your model specifically to optimize, say, the top-2 accuracy by constructing a loss function used during training that forces the network to minimize this loss.
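
A rough sketch of the threshold sweep described above, for the binary case; probs and labels are assumed to be NumPy arrays of predicted positive-class probabilities and true 0/1 labels saved from your test run:

import numpy as np

def best_threshold(probs, labels):
    best_t, best_errors = 0.5, len(labels)
    for t in np.linspace(0.0, 1.0, 101):
        preds = (probs > t).astype(int)
        errors = int((preds != labels).sum())
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t, best_errors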

",33976,,,,,3/3/2020 19:18,,,,0,,,,CC BY-SA 4.0 18404,2,,18232,3/4/2020 1:58,,3,,"

The difference really comes down to the fact that in meta-learning, there is a population of tasks $\tau$ which have distribution $p(\tau)$. The goal is to perform well on a task drawn from $p(\tau)$. Generally 'perform well' means that with only a few training steps or data points, the model can give good classification accuracy, achieve high reward in an RL setting, etc.

A concrete example is given in the original MAML paper 1, where the task is to perform regression on data given by a sinusoidal distribution with parameters $p(\theta)$. The meta-learning goal is to get high regression accuracy on tasks where the data is drawn from distributions coming from $p(\theta)$.

In contrast, transfer learning is a bit more general since there's not necessarily a notion of a distribution of tasks. There is generally just one (although there can be more) source problem $S$, and the goal is to do well on a target problem $T$. You know both of these explicitly, unlike in MAML where the goal is to do well amongst any unknown problem drawn from a certain distribution. Very often, this is performed by taking a model that performs well on $S$ and adapting it to work on $T$, perhaps by using extracted features from the model for $S$.

The extent to which this will succeed obviously depends on the similarity of the two tasks. This is also known in the literature as domain adaptation, and has some theoretical results 2, although the bounds are not really applicable to modern high-dimensional datasets.

  1. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks (Finn et al) 2017.

  2. A Theory of Learning from Different Domains (Ben-David et al) 2010.

",15176,,2444,,3/4/2020 2:14,3/4/2020 2:14,,,,1,,,,CC BY-SA 4.0 18406,2,,18395,3/4/2020 4:16,,0,,"

Not sure if I can answer the question as a whole, but a pure random input/output pair doesn't quite have ""no relationship"" at all. At the very least, for any fixed training set input/output pair, you can do an if...then mapping to construct a 1-to-1 function, such that you can classify the training set with 100% accuracy (assuming no duplicates of input).

In any case, I assume you mean uniform random, because if you have something like gaussian random, you can still learn some latent structure from how the random numbers are generated.

But even if you assume uniform random, and your algorithm is only guessing, your algorithm is technically still operating optimally per the data generating distribution, which basically means its as optimal as it gets.

The only such case that I can imagine which would satisfy your question, would be if you had a separate training/validation set, where the only element of the training input/output is [1,1], but the validation set only has elements of [1,-1], or something along those lines.

From reading your comments, I suspect that your intention with the question was: "Can there be a relationship in the data such that no method can learn it?". To the extent that the data-generating distribution exists, then, by the universal approximation theorem of neural networks, it is reasonable to expect that you can at least partially learn it.

However it is important to note that the universal approximation theorem doesn't mean that such a data generating distribution can be learned by a neural net, it only means that you can get ""non-zero as close as you want"" to the data generating distribution. More explicitly: there is a setting of weights that gives you results as good as you want, but gradient descent doesn't necessarily learn it.

",6779,,6779,,3/4/2020 18:41,3/4/2020 18:41,,,,10,,,,CC BY-SA 4.0 18407,1,,,3/4/2020 8:30,,2,296,"

I generated a bunch of simulation data from a complex physical simulation that spits out patterns. I am trying to apply unsupervised learning to analyze the patterns and ideally classify them into whatever categories the learning technique identifies. Using PCA or manifold techniques such as t-SNE for this problem is rather straightforward, but applying neural networks (autoencoders, specifically) becomes non-trivial, as I am not sure splitting my dataset into test and training data is the right way.

Naively, I was thinking of the following approaches:

  1. Train an autoencoder with all the data as training data and train it for a large number of epochs (overfitting is not a problem in this case per se, I would think)

  2. Keras offers a model.predict option which enables me to just construct the encoder section of the autoencoder and obtain the bottleneck values

  3. Carry out some data augmentation and split the data as one might into training and test data and carry out the workflow as normal (This approach makes me a little uncomfortable as I am not attempting to generalize a neural network or should I be?)

I would appreciate any guidance on how to proceed or if my understanding of the application of autoencoders is flawed in this context.

",34002,,2444,,3/5/2020 0:51,12/23/2022 6:04,How can I use autoencoders to analyze patterns and classify them?,,2,2,,,,CC BY-SA 4.0 18409,1,18432,,3/4/2020 12:39,,2,129,"

I am trying to work on a variation of the Access-Control Queuing Task problem presented in Chapter 10 of Sutton’s reinforcement learning book [1].

Specific details of my setup are as follows:

  • I have different types of tasks that arrive to a system (heavy/moderate/light with heavy tasks requiring more time to be processed.). The specific task type is chosen uniformly at random. The task inter-arrival time is $0.1s$ on average.
  • I have different classes of servers that can process these tasks (low-capacity; medium capacity; high capacity; with high capacity servers having a faster processing time). When I select a specific server from a given class, it becomes unavailable during the processing time of the task assigned to it. Note that the set of servers (and as a result the number of servers of each class) is not fixed, it instead changes periodically, according to the dataset used to model the set of servers (so specific servers may disappear and new ones may appear, as opposed to the unavailability caused by the assignment). The maximum number of servers of each class is $10$.
  • My goal is to decide which class of server should process a given task, in a way that minimizes the sum of the processing times over all tasks.

The specific reinforcement learning formulation is as follows:

  • State: the type of task (heavy/moderate/light) ; the number of available low capacity servers; the number of available medium capacity servers; the number of available high capacity servers
  • Actions: (1) Assign the task to a low capacity server (2) assign the task to a medium capacity server (3) assign the task to a high capacity server (4) a dummy action that has a worse processing time than the servers with low capacity. It is selected when there are no free servers.
  • Rewards: the opposite of the processing time, where the processing times are as follows (in seconds):

|               | Slow server | Medium Server | Fast server | ""Dummy action"" |
|---------------|-------------|---------------|-------------|----------------|
| Light task    | 0.5         | 0.25          | 0.166       | 0.625          |
| Moderate task | 1.5         | 0.75          | 0.5         | 1.875          |
| Heavy task    | 2.5         | 1.25          | 0.833       | 3.125          |

My intuition for formulating the problem as an RL problem is that 'Even though assigning Light tasks to High capacity servers (i.e. being greedy) might lead to a high reward in the short term, it may reduce the number of High capacity servers available when a Heavy task arrives. As a result, Heavy tasks will have to be processed by lower capacity servers which will reduce the accumulated rewards'.

However, when I implemented this (using a deep Q-network[2] specifically), and compared it to the greedy policy, I found that both approaches obtain the same rewards. In fact, the deep Q-network ends up learning the greedy policy.

I am wondering why such behaviour occurred, especially since I expected the DQN approach to learn a better policy than the greedy one. Could this be related to my RL problem formulation? Or is there simply no need for RL to address this problem?

[1]Sutton, R. S., & Barto, A. G. (1998). Introduction to reinforcement learning (Vol. 135). Cambridge: MIT press.

[2] Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., ... & Petersen, S. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529-533.

",34010,,34010,,3/5/2020 8:36,3/5/2020 10:23,Unexpected results when comparing a greedy policy to a DQN policy,,1,8,,,,CC BY-SA 4.0 18410,1,18419,,3/4/2020 13:09,,1,295,"

I was reading a blog post that talked about the problem of the saddle point in training.

In the post, it says that, if the loss function is flatter in the direction of x (a local minimum here) compared to y at the saddle point, gradient descent will oscillate back and forth along the y direction. This gives an illusion of converging to a minimum. Why is this?

Wouldn’t it continue down in the y direction and hence escape the saddle point?

Link to post: https://blog.paperspace.com/intro-to-optimization-in-deep-learning-gradient-descent/

Please go to Challenges with Gradient Descent #2: Saddle Points.

",33803,,,,,3/4/2020 17:42,Oscillating around the saddle point in gradient descent?,,1,5,,,,CC BY-SA 4.0 18412,2,,18388,3/4/2020 14:58,,0,,"

What is strange is that all 9 nets are getting similar loss at the end (~7.5). I must be doing something wrong, right?

Yes. Please check the annotations file. Did you remove all the classes other than Person? What's the mAP of all those models?

""how much accuracy can I expect to gain as I increase the data?""

It's hard to tell, but the more (varied) data you have, the better the model.

",20151,,,,,3/4/2020 14:58,,,,0,,,,CC BY-SA 4.0 18413,1,,,3/4/2020 14:59,,2,918,"

Removing stop words can significantly speed up named entity recognition (NER) modeling by reducing the number of tokens in a document.

Are stop words critical to get correct NER performance?

",33880,,2444,,3/4/2020 15:57,3/4/2020 16:15,Is it recommended to remove stop words before named entity recognition?,,1,0,,,,CC BY-SA 4.0 18414,2,,18360,3/4/2020 15:25,,2,,"

The language used here is confusing me, because it is discussing a ""distribution"", as in a ""probability distribution"", but then refers to inputs, which are data gathered from outside of any probability distribution. Based on the limited information my studying of machine learning has taught me so far, my understanding is that the machine learning algorithm (or, rather, some machine learning algorithms) uses training data to implicitly construct some probability distribution, right? So is this what it is referring to here?

They're not referring to probability distributions of training data that ML algorithms (implicitly) construct here. The main point of confusion seems to be where you state this:

but then refers to inputs, which are data gathered from outside of any probability distribution

Any data / inputs ever collected always originate from some distribution. We will typically not exactly know what that distribution is, we are often not able to provide a clean expression for it, and it might not even be a nice ""smooth"" distribution, but that doesn't mean it doesn't exist.

If I collect a large number of photographs of $H \times W$ pixels of streets for the purpose of training a self-driving car, then this collection of training data was collected from some distribution. For each of the pixels in the $H \times W$ plane, there exists some probability distribution that tells us how likely it is for such a pixel to have a certain colour under the data collection procedure that was used to generate our data. This is a largely unknown distribution, for which we don't have a nice mathematical expression, but it does exist. I assume that, in this distribution, it's relatively likely for pixels in the centre to be gray (because streets tend to be gray and we collected data by taking photographs of streets). I also guess it's relatively likely for pixels at the top of the images to be blue, because of the sky. Other than that, we can't say much about the distribution, but it does exist.

Is the ""distribution of inputs we expect the system to encounter in practice"" the so called ""test set""?

Kind of, yeah. Although I suppose the ""test set"" is mostly a thing in academic settings, where we use a test set to evaluate how well an approach performs on data that it did not observe during training. In the ""real world"", the distribution of inputs we expect the system to encounter in practice refers to the distribution that generates samples we encounter after ""deployment"" of the model. For example, this could be the distribution over all images that a self-driving car may encounter when driving anywhere in the world.

Continuing with the self-driving car example, we may get a large generalization error if we only train it on images of streets in one particular city or country, but then afterwards have it drive in many different cities or countries around the world (which may look very different).

",1641,,,,,3/4/2020 15:25,,,,0,,,,CC BY-SA 4.0 18415,2,,18360,3/4/2020 15:31,,0,,"

For illustration, I use the dog/cat classification task. Suppose the training data of cats and dogs follows a Gaussian distribution (for simplicity), and we trained a model which gives the accuracies below.

  • train - 98.2%
  • val - 97.7%
  • test - 97.2%

The model is neither overfitting nor underfitting, but, theoretically, we would like the classifier to achieve an accuracy of 100% on all three sets. You are right that the model learns the distribution of the training data to classify the classes. Due to the overlap of the tails of the cat and dog distributions, it is practically impossible for the model to get 100% accuracy. There will be countless edge cases encountered in reality, so we can only improve the model through an iterative approach.

",20151,,,,,3/4/2020 15:31,,,,0,,,,CC BY-SA 4.0 18416,2,,18413,3/4/2020 16:15,,2,,"

It depends how you recognise the entities.

If you do a simple gazetteer lookup, then it could be faster, as you have fewer tokens to deal with.

However, if you use contextual rules, then stop words might be vital to identify certain contexts, so by removing stop words you lose information about the entity's environment. For example, if [work] at {organisation} is a rule you use to identify companies etc, then this wouldn't work if you take out the at.

You will also have problems if the stop words are part of an entity, eg the town Stoke-on-Trent. If hyphens are used to split tokens, then you won't be able to recognise it if the on is discarded.

In general I think stop words should mostly be kept; apart from information retrieval where they are not all that useful in an inverted index you will always lose something. If stop words were really pointless, then they would have been discarded from languages a long time ago. In the old days they were a useful way of reducing the demand on computational resources without losing too much, but nowadays I would say this is not really a problem anymore.

",2193,,,,,3/4/2020 16:15,,,,0,,,,CC BY-SA 4.0 18417,1,22008,,3/4/2020 16:36,,1,235,"

I've been reading the Fast R-CNN paper.

My understanding is that the input to one forward pass is the whole input image plus a list of RoIs (generated by selective search or another region proposal method). Then I understand that on the last convolution layer's feature map (let's call it FM), each corresponding RoI gets RoI-pooled, where now the corresponding ROIs are a rectangular (over height and width) slice of the FM tensor over all channels.

But I'm having trouble with two concepts:

  1. How is the input RoI mapped to the corresponding RoI in FM? Each neuron comes from a very wide receptive field, so, in a deep neural network, there's no way of making a 1:1 mapping between input neurons and neurons in the last convolution layer, right?

  2. Disregarding that I'm confused in point 1, once we have a bunch of RoIs in FM and we do the RoI pooling, we have N pooled feature vectors. Do we now run each of these through one FC network one by one? Or do we have N branches of FC networks? (that wouldn't make sense to me)

I have also read the faster R-CNN paper. In the same way, I'm also interested to know about how the proposed regions from RPN map to the input of the RoI pooling in the Fast R-CNN layers. Because actually those proposed regions live in the space of the input image, not in the space of the deep feature map.

",16871,,2444,,6/19/2020 23:00,6/19/2020 23:00,"In Fast R-CNN, how are input RoIs mapped to the respective RoIs in the feature map before RoI pooling?",,1,0,,,,CC BY-SA 4.0 18418,1,,,3/4/2020 16:59,,2,807,"

After years of learning, I still can't understand what is considered to be an AI. What are the requirements for an algorithm to constitute Artificial Intelligence? Can you provide pseudocode examples of what constitutes an AI?

",34016,,2444,,3/5/2020 22:51,12/27/2022 20:19,Can you provide some pseudocode examples of what constitutes an AI?,,3,1,0,,,CC BY-SA 4.0 18419,2,,18410,3/4/2020 17:42,,0,,"

It's important to note that in a pragmatic instance of ML on a "real dataset", you likely wouldn't have a "strict" saddle point with precisely zero gradient. Your error surface won't be "smooth", so even though you would have something that resembles a saddle point, what you would obtain would in fact be a small region with near-zero gradient.

So let's say you are in a region with near-zero gradient. Assume that the gradient in this area is centred at 0 with small Gaussian-distributed noise (thus gradient = small Gaussian noise). You can then see that the algorithm can't quite escape the region (or, at least, will spend a lot of time there), since (1) a Gaussian random walk will more or less stay in place unless run for a long time, and (2) the small gradients mean there is no obvious direction in which to leave the region.

In any case, SGD more or less solves this issue, and its usage is standard practice for reasons beyond this problem.

",6779,,,,,3/4/2020 17:42,,,,0,,,,CC BY-SA 4.0 18420,2,,17796,3/4/2020 17:43,,1,,"

The approach will vary depending on some features of the game:

  • How many players (two for tic tac toe and many classic games).

  • Whether it is a ""perfect information"" game (yes for chess and tic tac toe), or whether there is significant hidden information (true for most forms of poker)

  • Whether the game is zero sum. Any game with simple win/loss/draw results can be considered zero sum.

  • Branching complexity of the game. Tic tac toe has a relatively low branching factor, at worst 9, which reduces by one each turn. The game of Go has a branching factor around 250.

  • Action complexity of the game. A card game like Magic the Gathering has huge complexity even if the number of possible actions at each turn is quite small.

The above traits and others make a large impact on the details of which algorithms and approaches will work best (or work at all). As that is very broad, I will not go into that, and I would suggest you ask separate questions about specific games if you take this further and try to implement some self-playing learning agents.

It is possible to outline a general approach that would work for many games:

1. Implement Game Rules

You need to have a programmable API for running the game, which allows for code representing a player (called an ""agent"") to receive current state of the game, to pass in a chosen action, and to return results of that action. The results should include whether any of the players has won or lost, and to update internal state of the game ready for next player.

2. Choose a Learning Agent Approach

There are several kinds of learning algorithms that are suitable for controlling agents and learning through experience. One popular choice would be Reinforcement Learning (RL), but also Genetic Algorithms (GA) can be used for example.

Key here is how different types of agent solve the issues of self-play:

  • GA does this very naturally. You create a population of agents, and have them play each other, selecting winners to create the next generation. The main issue with GA approaches is how well they scale with complexity - in general not as well as RL.

  • With RL you can have the current agent play best agent(s) you have created so far - which is most general approach. Or for some games you may be able to have it play both sides at once using the same predictive model - this works because predicting moves by the opposition is a significant part of game play.

2a. How self-play works in practice

Without going into lines of code, what you need for self-play is:

  • The game API for automation and scoring

  • One or more agents that can use the API

  • A system that takes the results of games between agents and feeds them back so that learning can occur:

    • In a GA, with tournament selection, this could simply be saving a reference - the id of the winner - into a list, until the list has grown large enough to be the parents for the next generation. Other selection methods are also possible, but the general approach is the same - the games are played to help populate the next generation of agents.

    • In RL, typically each game state is saved to memory along with next resulting state or the result win/draw/lose (converted into a reward value e.g. +1/0/-1). This memory is used to calculate and update estimates of future results from any given state (or in some variants used directly to decide best action to take from any given state). Over time, this link between current and next states provides a way for the agent to learn better early game moves that eventually lead to a win for each player.

    • An important ""trick"" in RL is to figure out a way for the model and update rules to reflect opposing goals of players. This is not a consideration in agents that solve optimal control in single agent environments. You either need to make the prediction algorithm predict future reward as seen from perspective of the current agent, or use a global scoring system and re-define one of the agents as trying to minimise the total reward - this latter approach works nicely for two player zero sum games, as both players can then direcly share the same estimator, just using it to different ends.

  • A lot of repetition in a loop. GA's unit of repetition is usually a generation - one complete assessment of all existing agents (although again there are variants). RL's unit of repetition is often smaller, individual episodes, and the learning part of the loop can be called on every single turn if desired. The basic iteration in all cases is as follows (a rough code sketch is given after this list):

    • Play anything from one move to multiple games, with automated agents taking all roles in the game and storing results.
    • Use results to update learned parameters for the agents.
    • Use the updated agents for the next stages of self-play learning.
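
As a very rough sketch of that loop in an RL-style setting (game, agent and memory are hypothetical interfaces standing in for your own game API and learning agent, not any particular library):

for episode in range(num_episodes):
    state = game.reset()
    transitions = []
    while not game.is_over():
        action = agent.choose_action(state)        # the learning agent plays every role in turn
        next_state, reward = game.step(action)
        transitions.append((state, action, reward, next_state))
        state = next_state
    memory.extend(transitions)                     # store the results of the games
    agent.update(memory)                           # use the results to update learned parameters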

3. Planning

A purely reactive learning agent can do well in simple games, but typically you also want the agent to look ahead to predict future results more directly. You can combine the outputs from a learning model like RL with a forward-planning approach to get the best of both worlds.

Forward search methods include minimax/negamax and Monte Carlo Tree Search. These can be combined with learning agents in multiple ways - for instance, as well as using the RL model during the planning stage, they can also be used to help train an RL model (this is how it is used in AlphaZero).

",1847,,1847,,3/4/2020 19:12,3/4/2020 19:12,,,,0,,,,CC BY-SA 4.0 18422,2,,18395,3/4/2020 19:21,,3,,"

Of course, it's possible to define a problem where there is no relationship between input $x$ and output $y$. In general, if the mutual information between $x$ and $y$ is zero (i.e. $x$ and $y$ are statistically independent) then the best prediction you can do is independent of $x$. The task of machine learning is to learn a distribution $q(y|x)$ that is as close as possible to the real data generating distribution $p(y|x)$.

For example, looking at the common cross-entropy loss, we have $$ \begin{align} H(p,q) = -\mathbb{E}_{y,x \sim p}\left[\log q(y|x)\right] & = \mathbb{E}_{x\sim p}\left[\text{H}(p(y|x)) + \text{D}_{\text{KL}}(p(y|x)\|q(y|x))\right] \\ & = \text{H}(p(y)) + \mathbb{E}_{x \sim p}\left[\text{D}_{\text{KL}}(p(y)\|q(y|x))\right], \end{align} $$ where we have used the fact that $p(y|x)=p(y)$ since $y$ and $x$ are independent. From this, we can see that the optimal predicted distribution $q(y|x)$ is equal to $p(y)$, and actually independent of $x$. Also, the best loss you can get is equal to the entropy of $y$.

",15176,,2444,,3/5/2020 0:58,3/5/2020 0:58,,,,0,,,,CC BY-SA 4.0 18423,1,18475,,3/4/2020 20:17,,4,172,"

Is there any difference between the convolution operation applied to images and applied to other numerical 2D data?

For example, we have a pretty good CNN model trained on a number of $64 \times 64$ images to detect two classes. On the other hand, we have a number of $64 \times 64$ numerical 2D matrices (which are not considered images), which also have two classes. Can we use the same CNN model to classify the numerical dataset?

",34018,,34018,,3/5/2020 18:21,3/11/2020 10:33,Is there any difference between the convolution operation applied to images and applied to other numerical 2D data?,,2,3,,,,CC BY-SA 4.0 18424,2,,18423,3/4/2020 21:30,,2,,"

Short answer is no. You can't use a model trained for one task to predict on a totally different task. Even if the second task was another image classification task, the CNN would have to be fine tuned for the new data to work.

A couple of things to note...

1) CNNs are good for images due to their nature. It isn't necessary that they'd be good for any 2-dimensional input.

2) By 2D numerical data I'm assuming you don't mean tabular data.

",26652,,,,,3/4/2020 21:30,,,,1,,,,CC BY-SA 4.0 18425,1,,,3/5/2020 2:10,,2,248,"

I need to get an 8-DOF (degrees of freedom) robot arm to move to a specified point. I need to implement the TRPO RL code using OpenAI Gym. I already have the Gazebo environment, but I am unsure of how to write the code for the reward function and the algorithm for the joint-space motion.

",34020,,2444,,11/15/2020 16:49,10/6/2022 15:08,How can I implement the reward function for an 8-DOF robot arm with TRPO?,,1,0,,,,CC BY-SA 4.0 18426,2,,18418,3/5/2020 2:55,,2,,"

AI is not a simple term. There are different types, ranging from the most simplistic rule-based AI to black-box AI's so complicated it's unreasonable for a human to understand exactly what they're doing.

There's no pseudocode that if used in a program automatically constitutes it as an AI. It's not that black and white. But I can give examples:

Here's a rule-based chess AI that forfeits if it's too far behind, and plays aggressively if it's far enough ahead.

aggressive = False  # default play style

# Forfeit when far behind; switch to aggressive play when far enough ahead.
if player.score - my.score > 10:
    forfeit()
elif my.score - player.score > 10:
    aggressive = True

for piece in my.pieces:
    for square in board.squares:
        if no_threats(square) and aggressive:
            move(piece, square)
            return

This is considered an ""AI"" because it feigns intelligence - appearing to have a true understanding of chess while simply following a set of rules, making it an ""Artificial"" Intelligence.

Here's another more complicated AI:

# A learned AI: a neural network maps the 64 board squares to a move.
decision_net = NeuralNetwork(inputs=64, outputs=2)

choice = decision_net(board.squares)  # returns a square holding one of my pieces and a destination square
move(choice)

This uses a neural network to make the decision, which could have been trained on a bunch of example games or against itself. Due to this ""training phase"", humans can't understand precisely what the network is doing without extensive effort, so it gives an even more convincing impression of understanding chess. But, if we want, we could still work out the nuances of this network and show that it doesn't possess intelligence; it again only feigns it.

I should mention that virtually any code that has an if statement can be considered AI. The examples I provided are just easier to pass off as understanding a very complicated concept (chess), as opposed to, say, verifying a user login. They both have the same fundamentals; it's just that one appears more complicated on the surface than the other.

",26726,,2444,,3/5/2020 22:58,3/5/2020 22:58,,,,1,,,,CC BY-SA 4.0 18428,2,,18232,3/5/2020 6:08,,5,,"

First of all, I would like to say that it is possible that these terms are used inconsistently, given that at least transfer learning, AFAIK, is a relatively new expression, so, the general trick is to take terminology, notation and definitions with a grain of salt. However, in this case, although it may sound confusing to you, all of the current descriptions on this page (in your question and the other answers) don't seem inconsistent with my knowledge. In fact, I think I had already roughly read some of the cited research papers (e.g. the MAML paper).

Roughly speaking, although you can have formal definitions (e.g. the one in the MAML paper and also described in this answer), which may not be completely consistent across sources, meta-learning is about learning to learn or learning something that you usually don't directly learn (e.g. the hyperparameters), where learning is roughly a synonym for optimization. In fact, the meaning of the word ""meta"" in meta-learning is

denoting something of a higher or second-order kind

For example, in the context of training a neural network, you want to find a neural network that approximates a certain function (which is represented by the dataset). To do that, usually, you manually specify the optimizer, its parameters (e.g. the learning rate), the number of layers, etc. So, in this usual case, you will train a network (learn), but you will not know that the hyperparameters that you set are the most appropriate ones. So, in this case, training the neural network is the task of ""learning"". If you also want to learn the hyperparameters, then you will, in this sense, learn how to learn.

The concept of meta-learning is also common in reinforcement learning. For example, in the paper Metacontrol for Adaptive Imagination-Based Optimization, they even formalize the concept of a meta-Markov decision process. If you read the paper, which I did a long time ago, you will understand that they are talking about a higher-order MDP.

To conclude, in the context of machine learning, meta-learning usually refers to learning something that you usually don't learn in the standard problem or, as the definition of meta above suggests, to perform ""higher-order"" learning.

Transfer learning is often used as a synonym for fine-tuning, although that's not always the case. For example, in this TensorFlow tutorial, transfer learning is used to refer to the scenario where you freeze (i.e. make the parameters non-trainable) the convolution layers of a model $M$ pre-trained on a dataset $A$, replace the pre-trained dense layers of model $M$ on dataset $A$ with new dense layers for the new tasks/dataset $B$, then retrain the new model, by adjusting the parameters of this new dense layer, on the new dataset $B$. There are also papers that differentiate the two (although I don't remember which ones now). If you use transfer learning as a synonym for fine-tuning, then, roughly speaking, transfer learning is to use a pre-trained model and then slightly retrain it (e.g. with a smaller learning rate) on a new but related task (to the task the pre-trained model was originally trained for), but you don't necessarily freeze any layers. So, in this case, fine-tuning (or transfer learning) means to tune the pre-trained model to the new dataset (or task).
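
For illustration, here is a minimal sketch of the freeze-and-retrain pattern described above, in Keras (the base model, image size and number of classes are assumptions for the example, not taken from any particular source):

    import tensorflow as tf

    # Pre-trained convolutional base (trained on dataset A, here ImageNet).
    base = tf.keras.applications.MobileNetV2(input_shape=(160, 160, 3),
                                             include_top=False, weights='imagenet')
    base.trainable = False  # freeze the convolutional layers

    # New dense head for the new task/dataset B.
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(5, activation='softmax'),  # assume 5 classes in B
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    # model.fit(dataset_B, epochs=10)  # only the new head is trained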

How is transfer learning (as fine-tuning) and meta-learning different?

Meta-learning is, in a way, about fine-tuning, but not exactly in the sense of transfer learning, but in the sense of hyperparameter optimization. Remember that I said that meta-learning can be about learning the parameters that you usually don't learn, i.e. the hyper-parameters? When you perform hyper-parameters optimization, people sometimes refer to it as fine-tuning. So, meta-learning is a way of performing hyperparameter optimization and thus fine-tuning, but not in the sense of transfer learning, which can be roughly thought of as retraining a pre-trained model but on a different task with a different dataset (with e.g. a smaller learning rate).

To conclude, take terminology, notation, and definitions with a grain of salt, even the ones in this answer.

",2444,,2444,,6/11/2020 13:10,6/11/2020 13:10,,,,1,,,,CC BY-SA 4.0 18430,2,,18220,3/5/2020 6:46,,2,,"

Theoretical results

Rather than providing a rule of thumb (which can be misleading, so I am not a big fan of them), I will provide some theoretical results (the first one is also reported in paper How many hidden layers and nodes?), from which you may be able to derive your rules of thumb, depending on your problem, etc.

Result 1

The paper Learning capability and storage capacity of two-hidden-layer feedforward networks proves that a 2-hidden-layer feedforward network ($F$) with $2 \sqrt{(m + 2)N}$ (which is much smaller than $N$) hidden neurons can learn any $N$ distinct samples $D= \{ (x_i, t_i) \}_{i=1}^N$ with an arbitrarily small error, where $m$ is the required number of output neurons. Conversely, such an $F$ with $Q$ hidden neurons can store at least $\frac{Q^2}{4(m+2)}$ distinct data pairs $(x_i, t_i)$ with any desired precision.

They suggest that a sufficient number of neurons in the first layer should be $\sqrt{(m + 2)N} + 2\sqrt{\frac{N}{m + 2}}$ and in the second layer should be $m\sqrt{\frac{N}{m + 2}}$. So, for example, if your dataset has size $N=10$ and you have $m=2$ output neurons, then you should have the first hidden layer with roughly 10 neurons and the second layer with roughly 4 neurons. (I haven't actually tried this!)
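
If it helps, the suggested sizes above are easy to compute for your own $N$ and $m$ (a quick sketch; rounding up is my own choice):

    import math

    def suggested_hidden_sizes(N, m):
        # First and second hidden layer sizes from the two-hidden-layer result above.
        first = math.sqrt((m + 2) * N) + 2 * math.sqrt(N / (m + 2))
        second = m * math.sqrt(N / (m + 2))
        return math.ceil(first), math.ceil(second)

    print(suggested_hidden_sizes(N=10, m=2))   # -> (10, 4)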

However, these bounds are suited for fitting the training data (i.e. for overfitting), which isn't usually the goal, i.e. you want the network to generalize to unseen data.

This result is strictly related to the universal approximation theorems, i.e. a network with a single hidden layer can, in theory, approximate any continuous function.

Model selection, complexity control, and regularisation

There are also the concepts of model selection and complexity control, and there are multiple related techniques that take into account the complexity of the model. The paper Model complexity control and statistical learning theory (2002) may be useful. It is also important to note that regularisation techniques can be thought of as controlling the complexity of the model [1].

Further reading

You may also want to take a look at these related questions

(I will be updating this answer, as I find more theoretical results or other useful info)

",2444,,2444,,11/13/2020 20:40,11/13/2020 20:40,,,,0,,,,CC BY-SA 4.0 18431,1,,,3/5/2020 8:50,,9,3061,"

I have an MMO game with players. I wanted to add something new to the game: player-bots that make the game playable single-player as well. The AI I want to add is only for fighting other players, or other player-bots that it sees around at its level.

So, I thought of implementing my own fighting strategy, exactly how I play, in the bot, which is basically a set of if statements and random choices. For example, when the opponent has low health and the bot has enough special attack power, it will seize the chance and use its special attack to try to knock the opponent down. If the bot has low health, it will eat in time, but not too much, because there is a point in risking fights: if you eat too much, your opponent will do so too. Or, for example, if the bot detects that the opponent player is eating too much and gaining health, it will do the same.

I told this implementation idea to one of my friends and he simply responded with: this is not AI, it's just a set of conditions, it does not have any heuristic functions.

For that type of game, what are some ideas to create a real AI to achieve these conditions?

Basically, the AI should know what to do in order to beat the opponent, based on the opponent's data, such as current health, armour and weapons, and level, and on whether it risks its own health or not, and so on.

I am a beginner and it really interests me to do it in the right way.

",34027,,2444,,12/30/2021 11:46,12/30/2021 11:46,What are examples of approaches to create an AI for a fighting robot in an MMO game?,,5,7,,,,CC BY-SA 4.0 18432,2,,18409,3/5/2020 9:32,,1,,"

Your problems appear to derive from a non-Markov state description. In brief, the agent has no way to ever learn that ""heavy jobs will make a server unavailable for longer"" and this is further compounded by arbitrary state transitions every 10 steps whilst the agent cannot track time.

Looking at the example from Sutton & Barto, you should note that they took care to model a valid MDP. Servers become available at random on each time step, and there is no hidden state. In your case you have three sources of hidden data that affect state evolution systematically. If the agent had some data about these, then it might be able to use them to optimise its choices better:

  1. Servers in use due to tasks assigned by the agent return deterministically n steps after the action that assigned them. The agent's inability to track this is likely to be your largest problem - there is no data flow in RL that will allow the agent to associate the heavier jobs or slower servers with making a server unavailable for longer, so it never ""sees"" the clash of resource management that you want it to resolve.

  2. Every 10 timesteps there is an arbitrary change to server availability. This is a problem because it is entirely predictable on which time steps it can occur, but the agent does not know the current timestep.

  3. You are driving this from a dataset, not randomly, so the impact of this will vary depending on how the dataset evolves over time. A particularly bad case here would be if the dataset represented a ""working day"", so that there were clear times of day when availability followed different patterns. The agent has no data input in the state that would let it associate patterns of use (such as the time of day), so the environment either becomes an online learning problem (the agent must continuously learn and adapt to new patterns as they arise), or it must come up with a strategy that works in some average over all possible patterns.

In addition, I can see a possible secondary effect, an ""exploit"" in the simulation, that may have a noticeable effect depending on how often specific servers drop out of the pool. It would appear that if an agent assigns a task to a server, you have no mechanism to unassign that task should the server drop out. So an aggressive agent that assigns greedily to the best available servers will get extra ""free"" time on the high-capacity servers if it always claims them. As the agent doesn't know when the drop-outs could happen, this effect competes with the average benefit of leaving servers available just in case a heavy task arrives.

How to fix it? It depends on how much of your simulation is fixed by your requirements. You can either adjust the simulation to match your current state representation, or you can change the state representation to capture more system data from the simulation.

Probably the simplest state change would be to expand the list of servers to always include all servers, and for each unavailable server to have a ""ticker"" showing how many time steps remain before it becomes available. To make the problem smaller, you could maybe just track 1-10 ticks and have a combined state for (>10), as that will expose enough of the key state information for the agent to make decisions about pushing easy tasks onto medium or low capacity servers. This does not need to be done at the level of server id if the servers truly are interchangeable otherwise. So, instead of an array of the number of free servers by type, consider expanding it into a table: the number of free servers by type at one end, then a column with the number of servers by type that have 0.1s remaining until they are free, then a column for servers that still have 0.2s remaining, and so on.
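
As a rough sketch of what that expanded state could look like (the number of server types and the tick resolution here are assumptions, not part of your original problem):

    import numpy as np

    NUM_SERVER_TYPES = 3   # e.g. high / medium / low capacity (assumed)
    MAX_TICKS = 10         # track 1-10 remaining ticks, plus one bucket for '>10'

    # state[i, 0]  = number of free servers of type i
    # state[i, t]  = number of servers of type i that free up in t more ticks (t = 1..10)
    # state[i, 11] = number of servers of type i busy for more than 10 ticks
    state = np.zeros((NUM_SERVER_TYPES, MAX_TICKS + 2), dtype=np.int32)

    def assign(state, server_type, duration_ticks):
        # Move one free server of this type into the appropriate 'busy' bucket.
        assert state[server_type, 0] > 0
        state[server_type, 0] -= 1
        state[server_type, min(duration_ticks, MAX_TICKS + 1)] += 1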

",1847,,1847,,3/5/2020 10:23,3/5/2020 10:23,,,,6,,,,CC BY-SA 4.0 18434,2,,18431,3/5/2020 11:01,,3,,"

You can train your bot using reinforcement learning (in particular Q-Learning).

The most important part of RL is the reward function. If we want the agent to do something specific, we must provide rewards to it in such a way that it will achieve our goals. It is thus very important that the reward function accurately describes the exact behaviour we want.

So you can construct your own reward function to satisfy your requirements. If the bot does something you want, you reward it with a positive score; otherwise, you punish it with a negative reward.
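
As a purely illustrative sketch (the event names and numbers here are made up and would need tuning for your game), a shaped fight reward might look like:

    def fight_reward(event, damage_dealt=0.0, damage_taken=0.0):
        # Sparse terminal rewards plus a small shaping term for the damage trade.
        if event == 'won_fight':
            return 1.0
        if event == 'lost_fight':
            return -1.0
        # Per-step shaping: favour dealing more damage than you take.
        return 0.01 * (damage_dealt - damage_taken)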

The AlphaGo and OpenAI teams used similar techniques to train their models, which could then beat humans in games like Go, StarCraft 2 and Dota 2.

Also, check out this Deep Reinforcement Learning Free Preview on udacity.

",12841,,12841,,3/5/2020 11:27,3/5/2020 11:27,,,,0,,,,CC BY-SA 4.0 18435,2,,18431,3/5/2020 11:18,,10,,"

I would set up a list of goals for your bot. These could be 'maintain a minimum level of health', 'knock out human player', 'block way to location X', etc. This obviously depends on the domain of your MMO.

Then you can use a planner to achieve these goals in the game. You define a set of actions with preconditions and effects, set the current goal, and the planner will work out a list of actions for the bot to achieve the goal. You can easily express your actions (and the domain) in PDDL.

Examples for actions would be 'move to location X', 'eat X', 'attack player X'. A precondition of 'attack player X' could be 'health(X) is low', and an effect could be 'health(X) is reduced by Y'. There are different ways of expressing these depending on the planner's capabilities.
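
Just to make the idea concrete, here is a Python-flavoured sketch (not real PDDL) of how an action with preconditions and effects, and a goal, might be written down; the predicate names are invented for illustration:

    # One planner action: attack another player.
    attack_action = {
        'name': 'attack(target)',
        'preconditions': ['in_range(me, target)', 'health(target) < LOW'],
        'effects': ['health(target) -= damage(me)'],
    }

    # A goal the planner should achieve.
    goal = 'knocked_out(target)'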

The beauty of this is that you don't actually have to explicitly code any behaviour. You describe the domain, and tell the bot what it should achieve, and what capabilities it has. The actual behaviour then emerges out of that description. If the bot only attacks a player if the player has lower health, then observing the player eat (and thus up their health) could result in the bot eating (to push its own health above the player's so that it can attack) — but you have not told the bot directly to do that.

For a starting point, go to http://education.planning.domains/ for a list of resources.

If you only have a few actions available, it might appear predictable to a human user, but with a variety of goals and actions, this will quickly become more complex and seem more 'intelligent'.

Update: Here is a link to a paper, Applying Goal-Oriented Action Planning to Games, which describes how this can be applied in a game.

",2193,,2193,,3/5/2020 15:11,3/5/2020 15:11,,,,0,,,,CC BY-SA 4.0 18436,2,,18425,3/5/2020 11:47,,0,,"

The most important part of RL is the reward function. If we want an agent to do something specific, we must provide rewards in such a way that it will achieve the goal. It is thus very important that the reward function accurately describes the desired behavior.

Assume the robot's goal is to reach the desired position as fast as possible. You can construct your reward function so that it takes into account the Euclidean distance to the target position. If the arm moves towards the position directly, you reward the agent with a positive value; otherwise, you punish it in proportion to the deviation from the direct line. You probably also have other parameters of the joints, such as positions and velocities. These can also be included in your reward function, in order to find optimal movements.
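
A minimal sketch of such a distance-based reward (assuming you can read the end-effector position and joint velocities from your Gazebo environment; the weights are illustrative):

    import numpy as np

    def reward(ee_pos, target_pos, joint_velocities, prev_distance):
        # Negative distance pulls the arm towards the target, a bonus rewards
        # progress since the last step, and a small penalty discourages jerky motion.
        distance = np.linalg.norm(np.asarray(ee_pos) - np.asarray(target_pos))
        progress = prev_distance - distance
        return -distance + 10.0 * progress - 0.01 * np.square(joint_velocities).sum()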

Check out this video from the free udacity overview course on RL and this paper "Setting up a Reinforcement Learning Task with a Real-World Robot"

Here is also related DeepMind's article and paper

I also have a project on GitHub, where I implemented a custom Gazebo environment for OpenAI Gym. This allows you to run tests even in a Jupyter Notebook. Check out my example.

",12841,,12841,,9/6/2022 14:43,9/6/2022 14:43,,,,0,,,,CC BY-SA 4.0 18437,1,,,3/5/2020 11:54,,2,88,"

In the transformer model, to incorporate positional information of texts, the researchers have added a positional encoding to the model. How does positional encoding work? How does the positional encoding system learn the positions when texts of varying lengths and types are passed at different times?

To be more concrete, let's take these two sentences.

  1. "She is my queen"
  2. "Elizabeth is the queen of England"

How would these sentences be passed to the transformer? What would happen to them during the positional encoding part?

Please explain with less math and with more intuition behind it.

",39,,2444,,11/30/2021 15:41,8/17/2022 7:08,How does positional encoding work in the transformer model?,,0,1,,,,CC BY-SA 4.0 18438,1,18444,,3/5/2020 12:11,,2,237,"

I have a question about training a neural network for more epochs even after the network has converged, without using an early stopping criterion.

Consider the MNIST dataset and a LeNet 300-100-10 dense fully-connected architecture, where I have 2 hidden layers having 300 and 100 neurons and an output layer having 10 neurons.

Now, usually, this network takes about 9-11 epochs to train and have a validation accuracy of around 98%.

What happens if I train this network for 25 or 30 epochs, without using an early stopping criterion?

",31215,,2444,,3/6/2020 1:37,3/6/2020 6:02,"What happens if I train a network for more epochs, without using early stopping?",,2,3,,,,CC BY-SA 4.0 18439,1,,,3/5/2020 13:04,,4,124,"

I'm currently trying to predict 1 output value with 52 input values. The problem is that I only have around 100 rows of data that I can use.

Will I get more accurate results if I use a small architecture than if I use multiple layers with a higher number of neurons?

Right now, I use 1 hidden layer with 1 neuron, because I need to solve (in my opinion) a basic regression problem.

",34033,,2444,,3/5/2020 13:05,4/30/2021 14:07,Is a basic neural network architecture better with small datasets?,,2,0,,,,CC BY-SA 4.0 18440,2,,18438,3/5/2020 13:27,,2,,"

Training a neural network for ""too many"" epochs (i.e. more than needed), without using an early stopping criterion, leads to overfitting, where your model's ability to generalize decreases.

",31215,,,,,3/5/2020 13:27,,,,0,,,,CC BY-SA 4.0 18441,1,,,3/5/2020 15:02,,3,524,"

I need some help with continuing pre-training on BERT. I have a very specific vocabulary and lots of specific abbreviations at hand. I want to do an STS (semantic textual similarity) task. Let me specify my task: I have domain-specific sentences and want to pair them in terms of their semantic similarity. But, as very uncommon language is used here, I need to train BERT on it.

  • How does one continue the pre-training? (I read the GitHub release from Google about it, but don't really understand it.) Any examples?
  • What structure does my training data need to have so that BERT can understand it?
  • Maybe training BERT from scratch would be even better. I guess it's the same process as continuing the pre-training, just with a different starting checkpoint. Is that correct?

Also, I'd be very happy about any other tips from you guys.

",34026,,2444,,9/20/2021 0:03,9/20/2021 0:03,How does one continue the pre-training in BERT?,,0,1,,,,CC BY-SA 4.0 18442,2,,1987,3/5/2020 15:56,,0,,"

The solution I reached after an hour of trial and error usually converges in just 100 epochs.

Yeah, I know it does not have the smoothest decision boundary out there, but it converges pretty fast.

I learned a few things from this spiral experiment:

  • The output layer should be at least as wide as the input layer. At least, that's what I noticed in the case of this spiral problem.
  • Keep the initial learning rate high, like 0.1 in this case; then, as you approach a low test error (3-5% or less), decrease the learning rate by a notch (0.03) or two. This helps in converging faster and avoids jumping around the global minimum.
  • You can see the effects of keeping the learning rate high by checking the error graph at the top right.
  • For smaller batch sizes like 1, 0.1 is too high a learning rate, as the model fails to converge and jumps around the global minimum.
  • So, if you would like to keep a high learning rate (0.1), keep the batch size high (10) as well. This usually gives a slow yet smoother convergence.

Coincidentally the solution I came up with is very similar to the one provided by Salvador Dali.

Kindly add a comment if you find any more intuitions or reasoning.

",34034,,34034,,3/5/2020 18:12,3/5/2020 18:12,,,,0,,,,CC BY-SA 4.0 18443,2,,18407,3/5/2020 16:20,,0,,"

When using an autoencoder, I believe the data you feed in has to be correlated in one way or another. For example, if I want to learn a latent representation of images of cats, the training data that I feed into the autoencoder should consist only of cat images.

Similar to other neural networks, you feed the autoencoder a set of training data and hope that the network learns a set of weights that is able to reconstruct the exact input image from the latent representation. To see whether the weights learnt by the autoencoder are able to generalise to other, unseen cat images, you would have to use a test set.
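
For reference, a minimal Keras sketch of such an autoencoder (the layer sizes are arbitrary, and the inputs are assumed to be flattened images of a fixed size):

    from tensorflow.keras import Input, Model
    from tensorflow.keras.layers import Dense

    inputs = Input(shape=(784,))                        # flattened input image (assumed size)
    latent = Dense(32, activation='relu')(inputs)       # latent representation
    outputs = Dense(784, activation='sigmoid')(latent)  # reconstruction

    autoencoder = Model(inputs, outputs)
    autoencoder.compile(optimizer='adam', loss='mse')
    # autoencoder.fit(x_train, x_train, validation_data=(x_test, x_test), epochs=20)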

Here is a paper about autoencoding. https://web.stanford.edu/class/cs294a/sparseAutoencoder_2011new.pdf

I hope this helps you somewhat in deciding whether you should use an autoencoder.

",32780,,,,,3/5/2020 16:20,,,,0,,,,CC BY-SA 4.0 18444,2,,18438,3/5/2020 17:15,,1,,"

Running for ""too many"" epochs can indeed lead to overfitting. You should look at the validation loss. If on average it continues to decrease, then you are not yet overfitting. You may be tempted to run more epochs in hopes your loss will decrease, but unless you adjust your learning rate dynamically, at some point you won't get any improvement.

If you use Keras, it has a useful callback, ReduceLROnPlateau. Documentation is at https://keras.io/callbacks/. This allows you to monitor a metric (typically validation loss) and to adjust the learning rate by a user-defined factor (parameter factor) if the metric you are monitoring fails to improve after a certain number of consecutive epochs (parameter patience).

You can think of the training process as travelling down a valley in N-dimensional space (N being the number of trainable parameters). As you descend towards a minimum, the valley gets narrower. If you do not lower the learning rate, you will reach a point where you cannot descend any further. You could use a very small learning rate to begin with, but then you will have to train for a lot more epochs.

One problem with adjusting the learning rate just on the validation loss is that, in the early training epochs, validation loss often does not track with training accuracy, and it could cause the learning rate to be decreased prematurely. I wrote a custom callback which initially monitors training loss and adjusts the learning rate based on that metric. Once the training accuracy reaches 95%, it switches to monitoring validation loss and adjusts the learning rate based on that. It saves the model weights for the lowest validation loss in the variable val.best_weights. After training, load these weights into your model to make predictions. Code is below if you are interested. When you fit your model, just add val() to the callbacks list.
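
For reference, the built-in callback mentioned above is used roughly like this (the factor/patience values are only examples); my custom callback follows below:

    from tensorflow.keras.callbacks import ReduceLROnPlateau

    rlrop = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=3,
                              min_lr=1e-6, verbose=1)
    # model.fit(x_train, y_train, validation_data=(x_val, y_val),
    #           epochs=100, callbacks=[rlrop])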

# Assumes numpy and tensorflow are imported as below, and that `model` is an
# already-compiled Keras model defined in scope before this class (the class-level
# attributes reference it). Cyellow/Cgreen/Cend are just ANSI colour codes used to
# colour the console messages.
import numpy as np
import tensorflow as tf

Cyellow = '\033[93m'
Cgreen = '\033[92m'
Cend = '\033[0m'

class val(tf.keras.callbacks.Callback):
    # functions in this class adjust the learning rate
    lowest_loss = np.inf
    lowest_trloss = np.inf
    best_weights = model.get_weights()
    lr = float(tf.keras.backend.get_value(model.optimizer.lr))
    epoch = 0
    highest_acc = 0

    def __init__(self):
        super(val, self).__init__()
        self.lowest_loss = np.inf
        self.lowest_trloss = np.inf
        self.best_weights = model.get_weights()
        self.lr = float(tf.keras.backend.get_value(model.optimizer.lr))
        self.epoch = 0
        self.highest_acc = 0

    def on_epoch_end(self, epoch, logs=None):
        val.lr = float(tf.keras.backend.get_value(self.model.optimizer.lr))
        val.epoch = val.epoch + 1
        v_loss = logs.get('val_loss')   # validation loss for this epoch
        v_acc = logs.get('accuracy')    # note: this is the TRAINING accuracy
        loss = logs.get('loss')         # training loss for this epoch
        # Track the lowest training loss; early on, also save weights based on it.
        if loss < val.lowest_trloss:
            val.lowest_trloss = loss
            if v_acc < .90:
                val.best_weights = model.get_weights()
        # Until training accuracy reaches 95%, reduce lr when training loss fails to improve.
        if v_acc <= .95 and loss > val.lowest_trloss:
            lr = float(tf.keras.backend.get_value(self.model.optimizer.lr))
            ratio = val.lowest_trloss / loss  # add a factor to lr reduction
            new_lr = lr * .7 * ratio
            tf.keras.backend.set_value(model.optimizer.lr, new_lr)
            msg = '{0}\n current training loss {1:7.5f}  is above lowest training loss of {2:7.5f}, reducing lr to {3:11.9f}{4}'
            print(msg.format(Cyellow, loss, val.lowest_trloss, new_lr, Cend))
        # Always save the weights for the lowest validation loss seen so far.
        if val.lowest_loss > v_loss:
            msg = '{0}\n validation loss improved, saving weights with validation loss= {1:7.4f}\n{2}'
            print(msg.format(Cgreen, v_loss, Cend))
            val.lowest_loss = v_loss
            val.best_weights = model.get_weights()
        else:
            # Once training accuracy exceeds 95%, adjust lr based on validation loss instead.
            if v_acc > .95 and val.lowest_loss < v_loss:
                lr = float(tf.keras.backend.get_value(self.model.optimizer.lr))
                ratio = val.lowest_loss / v_loss  # add a factor to lr reduction
                new_lr = lr * .7 * ratio
                tf.keras.backend.set_value(model.optimizer.lr, new_lr)
                msg = '{0}\n current loss {1:7.4f} exceeds lowest loss of {2:7.4f}, reducing lr to {3:11.9f}{4}'
                print(msg.format(Cyellow, v_loss, val.lowest_loss, new_lr, Cend))


",33976,,33976,,3/6/2020 6:02,3/6/2020 6:02,,,,0,,,,CC BY-SA 4.0 18445,2,,18431,3/5/2020 17:24,,8,,"

Oliver Mason's answer is great for specific methods and tools to use, but I wanted to pull out a more general principle which was mentioned in a comment.

The distinction your friend is making is not one that would be generally recognised. One of my university lecturers defined AI as something like ""an artificial system that exhibits behaviour that resembles how an intelligent being would behave"".

If an intelligent being would always use the special attack in a particular situation, then an algorithm that always does so in the same situation is behaving intelligently, even though the algorithm behind it is incredibly simple. If you can come up with a complete description of an intelligent player, you have what is called an expert system, i.e. a system which captures the decision-making process of a real expert.

Your friend is not even correct that your proposed AI ""does not have any heuristic functions"". When you write a condition like ""if the AI's health is below 50%, it will eat food"", you're approximating the rule a human would use. You can make the heuristic more complex by making the probability of eating depend on current health rather than on a hard threshold; that might in turn bring the heuristic closer to optimal.
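
For instance, a probabilistic version of that rule might look like this (a sketch only; here I assume the intent is that the lower the health, the more likely the bot is to eat):

    import random

    def should_eat(health_fraction):
        # health_fraction in [0, 1]; eat with probability equal to the missing health.
        return random.random() < (1.0 - health_fraction)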

You can only really find out how ""good"" your AI is by putting it into different situations and observing it - sometimes, a simple set of rules gives rise to ""emergent behaviours"" that look surprisingly intelligent. As you build up more complex rules - i.e. more optimal heuristics - the emergent behaviour will change, and you can tweak it for the desired effect.

",34038,,,,,3/5/2020 17:24,,,,1,,,,CC BY-SA 4.0 18446,1,,,3/5/2020 17:26,,1,22,"

In computer vision, feature encoding methods are applied on top of pre-trained DCNN features to increase their robustness to certain conditions, such as viewpoint/appearance variations ref.

I was just wondering whether there are already well-established methods available in the AI community, preferably with Python implementations.

I found the following ones in the literature, but without any tutorial or code example:

  1. Multi-layer pooling ref
  2. Cross convolutional layer Pooling ref
  3. Holistic Pooling ref
",31312,,,,,3/5/2020 17:26,Convolutional Feature Encoding Methods in DCNN,,0,0,,,,CC BY-SA 4.0 18448,1,,,3/5/2020 20:05,,2,49,"

I currently have a 20x20 grid of pixels. Each pixel can be red, green, blue or black, so I have one-hot encoded the pixels, giving a 20x20x4 array for each screen.

For my Deep Q-Network, I have stacked two successive screenshots of the screen together, giving a 20x20x4x2 array.

I am trying to build a convolutional neural network to estimate the Q-values, but I am not sure if my current architecture is a good idea. It is currently as shown below:

    # Assumed imports: from keras.models import Sequential;
    # from keras.layers import Conv3D, Activation, Dropout, Flatten, Dense;
    # from keras.optimizers import Adam
    def create_model(self):
        model = Sequential()
        # 3D convolution over the (20, 20, 4) volume, with the 2 frames as channels
        model.add(Conv3D(256, (4, 4, 2), input_shape=(20, 20, 4, 2)))
        model.add(Activation('relu'))
        model.add(Dropout(0.2))

        # input_shape on a non-first layer has no effect; it is inferred from the layer above
        model.add(Conv3D(256, (2, 2, 1), input_shape=self.input_shape))
        model.add(Activation('relu'))

        model.add(Flatten())
        model.add(Dense(64))
        model.add(Dense(self.num_actions, activation='linear'))
        model.compile(loss='mse', optimizer=Adam(self.learning_rate), metrics=['accuracy'])
        return model

Is a 3D convolution a good idea? Are 256 filters a good idea? Are the kernel sizes (4,4,2) and (2,2,1) suitable? I realise answers may be highly subjective, but I'm just looking for someone to point out any immediate flaws in the architecture.

",33227,,,,,3/5/2020 22:07,How to use convolution neural network in Deep-Q?,,1,0,,,,CC BY-SA 4.0 18449,1,,,3/5/2020 21:26,,1,152,"

I am trying to use PyTorch's transformers as a part of a research project to do sentiment analysis of several types of review data (laptop and restaurant).

To do this, my team is taking a token-based approach and we are using models that can perform token analysis.

One problem we have encountered is that many of the models in PyTorch's transformers do not support token classification, but do support sequence classification. One such model we wanted to test is GPT-2.

In order to overcome this, we proposed using sequence classifiers on single-token sequences, which should work in theory, but possibly with reduced accuracy.

This raises the following questions:

  • Is it possible to do token classification using a model such as GPT-2 using PyTorch's transformers?

  • How do sequence classifiers perform on single token sequences?

",18870,,2444,,3/6/2020 14:05,3/6/2020 14:05,Is it possible to do token classification using a model such as GPT-2?,,0,0,,,,CC BY-SA 4.0 18450,2,,18439,3/5/2020 21:34,,0,,"

It's certainly harder to overfit!

I mean, practically speaking, there have to be some assumptions about the generating model of your data, either explicit or implicit.

I would probably try a 1-2 layer network first (maybe your data is linearly separable, if you're lucky).

",32390,,,,,3/5/2020 21:34,,,,0,,,,CC BY-SA 4.0 18451,1,,,3/5/2020 21:52,,1,38,"

I wrote a script to do train a Siamese Network style model for face recognition on LFW dataset but the training loss doesnt decrease at all. Probably there's a bug in my implementation. Could you please point it out. Right now my code does:

  • Each epoch has 0.5M triplets all generated in an online way from data (since the exhaustive number of triplets is too big).
  • Triplet sampling method: We have a dictionary of {class_id: list of file paths with that class id}. We then create a list of classes which we can use for positive class (some classes have only 1 image so cant be used as positive class). At any iteration we randomly sample a positive class from this refined list and a negative class from the original list. We randomly sample 2 images from positive (as Anchor or A and Positive as P) and 1 from negative (Negative or N). A,P,N form our triplet.
  • Model used is ResNet with the ultimate (512,1000) softmax layer is replaced with (512,128) Dense layer (no activation). To avoid overfitting, only the last Dense and layer4 are kept trainable and rest are frozen.
  • During training we find triplets which are semi-hard in a batch (Loss between 0 and margin) and use only those to do backprop (they mention this in the FaceNet paper)
from torchvision import models, transforms
from torch.utils.data import Dataset, DataLoader
import torch, torch.nn as nn, torch.optim as optim
from torch.utils.tensorboard import SummaryWriter
import os, glob
import numpy as np
from PIL import Image

image_size = 224
batch_size = 512
margin = 0.5
learning_rate = 1e-3
num_epochs = 1000

model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 128, bias=False)

for param in model.parameters():
  param.requires_grad = False
for param in model.fc.parameters():
  param.requires_grad = True
for param in model.layer4.parameters():
  param.requires_grad = True

optimizer = optim.Adam(params=list(model.fc.parameters())+list(model.layer4.parameters()), lr=learning_rate, weight_decay=0.05)

device = torch.device(""cuda"" if torch.cuda.is_available() else ""cpu"")
model = nn.DataParallel(model).to(device)
writer = SummaryWriter(log_dir=""logs/"")

class TripletDataset(Dataset):
  def __init__(self, rootdir, transform):
    super().__init__()
    self.rootdir = rootdir
    self.classes = os.listdir(self.rootdir)
    self.file_paths = {c: glob.glob(os.path.join(rootdir, c, ""*.jpg"")) for c in self.classes}
    self.positive_classes = [c for c in self.classes if len(self.file_paths[c])>=2]
    self.transform = transform

  def __getitem__(self, index=None):
    class_pos, class_neg = None, None
    while class_pos == class_neg:
      class_pos = np.random.choice(a=self.positive_classes, size=1)[0]
      class_neg = np.random.choice(a=self.classes, size=1)[0]

    fp_a, fp_p = np.random.choice(a=self.file_paths[class_pos], size=2, replace=False)
    fp_n = np.random.choice(a=self.file_paths[class_neg], size=1)[0]

    return {
        ""fp_a"": fp_a,
        ""fp_p"": fp_p,
        ""fp_n"": fp_n,
        ""A"": self.transform(Image.open(fp_a)),
        ""P"": self.transform(Image.open(fp_p)),
        ""N"": self.transform(Image.open(fp_n)),
            }

  def __len__(self):
    return 500000


def triplet_loss(a, p, n, margin=margin):
    d_ap = (a-p).norm(p='fro', dim=1)
    d_an = (a-n).norm(p='fro', dim=1)
    loss = torch.clamp(d_ap-d_an+margin, min=0)
    return loss, d_ap.mean(), d_an.mean()

transform = transforms.Compose([
        transforms.RandomResizedCrop(image_size),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.596, 0.436, 0.586], [0.2066, 0.240, 0.186])
        ])
train_dataset = TripletDataset(""lfw"", transform)
nw = 4 if torch.cuda.is_available() else 0
train_dataloader = DataLoader(train_dataset, batch_size=batch_size, num_workers=0, shuffle=True)

num_batches = len(train_dataloader)
model.train()
running_loss = 0

for epoch in range(num_epochs):
    for batch_id, dictionary in enumerate(train_dataloader):
        a, p, n = dictionary[""A""], dictionary[""P""], dictionary[""N""]
        a, p, n = a.to(device), p.to(device), n.to(device)
        emb_a, emb_p, emb_n = model(a), model(p), model(n)
        losses, d_ap, d_an = triplet_loss(a=emb_a, p=emb_p, n=emb_n)

        semi_hard_triplets = torch.where((losses>0) & (losses<margin))
        losses = losses[semi_hard_triplets]
        loss = losses.mean()
        loss.backward()
        optimizer.step()  
        running_loss += loss.item()

        print(""Epoch {} Batch {}/{} Loss = {} Avg AP dist = {} Avg AN dist = {}"".format(epoch, batch_id, num_batches, loss.item(), d_ap.item(), d_an.item()), flush=True)
        writer.add_scalar(""Loss/Train"", loss.item(), epoch*num_batches+batch_id)
        writer.add_scalars(""AP_AN_Distances"", {""AP"": d_ap.item(), ""AN"": d_an.item()}, epoch*num_batches+batch_id)

    print(""Epoch {} Avg Loss {}"".format(epoch, running_loss/num_batches), flush=True)
    writer.add_scalar(""Epoch_Loss"", running_loss/num_batches, epoch)
    torch.save(model.state_dict(), ""facenet_epoch_{}.pth"".format(epoch))

Loss graphs: https://tensorboard.dev/experiment/8TgzPTjuRCOFkFV5lr5etQ/ Please let me know if you need any other information to help you help me.

",19522,,19522,,3/5/2020 22:17,3/5/2020 22:17,Face recognition model loss not decreasing,,0,0,,,,CC BY-SA 4.0 18452,2,,18448,3/5/2020 22:07,,2,,"

Is a 3d convolution a good idea? Is 256 filters a good idea? Are the filters (4,4,2) and (2,2,1) suitable?

It's not so much that answers are subjective, but that you are performing an experiment, and this should be driven by results. If you can find something published about a similar environment, that might help you narrow down your choices.

That said, intuitively you don't gain much going from 2D to 3D convolutions when one of your dimensions only has size 2. Purely from gut feeling, I would suggest simply concatenating your two frames by having 8 channels instead of 4, and using 2D filters. This is simple enough to try and compare, though, so perhaps you can do both.
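
To illustrate that second suggestion, here is a rough Keras sketch of the 2D alternative (the layer sizes are placeholders to experiment with, not a recommendation):

    from tensorflow.keras import Sequential
    from tensorflow.keras.layers import Conv2D, Flatten, Dense

    num_actions = 4  # placeholder

    model = Sequential([
        # Two frames of 20x20x4 concatenated along the channel axis -> (20, 20, 8)
        Conv2D(32, (3, 3), activation='relu', input_shape=(20, 20, 8)),
        Conv2D(64, (3, 3), activation='relu'),
        Flatten(),
        Dense(64, activation='relu'),
        Dense(num_actions, activation='linear'),
    ])
    model.compile(loss='mse', optimizer='adam')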

You will likely want to explore deeper networks that have fewer initial filters and build up over a few more than two convolutional layers.

Definitely try your network without the dropout layer. I have not had much luck with using dropout in DQN, and I have seen others having similar problems. I am not sure what the exact issue is.

The accuracy metric won't do you much good on a regression task, so you can drop that and just use MSE loss. Bear in mind that training loss is a less useful metric overall in RL, since the prediction target (action value) is continuously changing as the policy changes. Low loss values mean that the value predictions are self-consistent, they don't necessarily mean that the learning has converged to an optimal policy.

",1847,,,,,3/5/2020 22:07,,,,1,,,,CC BY-SA 4.0 18453,2,,18418,3/5/2020 23:32,,3,,"

Philosophically, my own research has led me to understand AI as any artifact that makes a decision. This is because the etymology of ""intelligence"" strongly implies ""selecting between alternatives"", and these meanings are baked in all the way back to the proto-Indo-European.

(Degree of intelligence, or ""strength"" is merely a measure of utility, typically versus other decision making mechanisms, or, ""fitness in an environment"", where an environment is any action space.)

Therefore, the most basic form of automated (artificial) intelligence is:

if [some condition] 
then [some action]

It is worth noting that narrow AI which matches or exceeds human capability, in the popular sense, manifested only recently, when we gained sufficient processing power and memory to derive sufficient utility from statistical decision-making algorithms. But Nimatron constitutes perhaps the first functional strong-narrow AI in a modern computing context, and the first automated intelligences were simple traps and snares, which have been with us almost as long as we've used tools.

I will leave it to others to break down all the various forms of modern AI.

",1671,,1671,,3/6/2020 0:24,3/6/2020 0:24,,,,2,,,,CC BY-SA 4.0 18454,1,,,3/6/2020 1:57,,1,49,"

I am looking for a technique to train a machine learning model to choose two items from a list.

So, given a list $x=[x_1, x_2, x_3, x_4, \dots, x_n]$, the model needs to choose two elements $(x_i, x_j)$. I have a function $R(x, x_i, x_j)$, which will output the reward of choosing $(x_i, x_j)$ given $x$.

What type of models should I use, and how should I train it to maximize the reward?

I've tried using deep reinforcement learning, but I ran into the following problems with implementing the Q-Network:

  1. Variable-length inputs (fixed by using an RNN, I think)
  2. The output size grows quadratically (for an input set of n elements, there are n choose 2 ways to pick 2 elements, so the network needs to output n choose 2 expected rewards)
",34050,,2444,,3/7/2020 4:36,3/7/2020 4:36,Which model should I choose to maximise reward of having chosen two numbers from a list?,,0,0,,,,CC BY-SA 4.0 18458,1,,,3/6/2020 4:06,,1,62,"

This question is a bit philosophical and is about finding new use cases for software companies. Let me describe what exists for now, why it is not enough, and what is needed.

I know that there is a lot of existing research on applying ML to software (please don't simply point to this one!), but none of it considers applying ML to the software company itself, as opposed to the software alone.

Existing approaches that apply AI to software engineering tasks consider it as follows:

human1 -> software (big code) <- human2

That means that human1 makes some part of the software (which is part of the big code), and human2 reuses some knowledge from it. That knowledge may be a bug-fix pattern (as e.g. DeepCode does), an API usage pattern, code repair, code summarization, code search, or whatever else. I think the main reason for this is the original hypothesis of naturalness:

The naturalness hypothesis. Software is a form of human communication; software corpora have similar statistical properties to natural language corpora, and these properties can be exploited to build better software engineering tools.

(from Allamanis et al, page 3)

But imagine one software company. It has:

  • Some number of engineers,
  • Some number of managers,
  • The software product,
  • Information related to the software product (documentation, bug/task tracking system, code review),
  • Some number of formal management processes (waterfall, scrum or whatever else),
  • Some number of informal processes

But none of these models consider the software as a product itself. I mean that we should consider the model as follows:

company -> software product -> customers
              |
              v
           big code

or even

engineer1 -> |
    |
engineer2 -> |
    |
...          | ----> software product ----> customers
    |                   |
engineerN -> |          |
    |                   |
manager  --> |          |
                        v
                     big code

So my questions are:

  1. Are there any cases of investigation of such models?
  2. Are there any similar cases in related fields, say in general companies (not specifically software ones)?
  3. Are there any analogies (not specifically from software-related domains) where some knowledge can be transferred from a bigger object (big code in our case) to a smaller one (software product)?

Any ideas are welcome.

",16354,,,,,3/6/2020 4:06,Use cases for AI inside the software company,,0,0,,,,CC BY-SA 4.0 18459,1,18463,,3/6/2020 6:58,,3,171,"

The DQN implements replay memory. Based on my research, I believe the replay memory starts to be used for training once there is enough experience in the memory buffer. This means the neural network gets trained while the game is being played.

My question is: if I were to play the game for 10000 epochs, store all the experiences, and then train from those experiences, would that have the same effect as training while running through the 10000 epochs? Is it frowned upon to do it this way? Are there any advantages?

",34054,,2444,,3/6/2020 20:36,3/6/2020 20:36,Can experience replay be used for training after completing every single epoch?,,1,0,,,,CC BY-SA 4.0 18460,2,,18397,3/6/2020 7:06,,0,,"

It is a bit rare that the validation and test accuracy exceed the training accuracy. One thing that could cause this is the selection of the validation and test data. Was the data for these two sets selected randomly, or did you do the selection yourself? It is generally better to have these sets selected randomly from the overall data set. That way, the probability distribution in these sets will closely match the distribution of the training set.

Normally the training accuracy is higher (especially if you run enough epochs, which I see you did), because there is always some degree of overfitting, which reduces validation and test accuracy.

The only other thing I can think of is the effect of dropout layers. If you have dropout layers in your model and the dropout ratio is high, that could cause this accuracy disparity. When the training accuracy is calculated, it is done with dropout active. This can lower the training accuracy to some degree. However, when evaluating validation and test accuracy, dropout is NOT active, so the model is actually more accurate. This increase in accuracy might be enough to overcome the decrease due to overfitting, especially in this case, since the accuracy differences appear to be quite small.

",33976,,,,,3/6/2020 7:06,,,,1,,,,CC BY-SA 4.0 18462,2,,12423,3/6/2020 7:55,,1,,"

I developed a Python script to crop faces using MTCNN. I found this to be the most accurate of all the face-cropping algorithms, at the expense of being somewhat slower. The function I developed is on the Kaggle website at https://www.kaggle.com/gpiosenka/detect-align-resize-rename-facial-images. The first markup cell explains how to use it.

In a nutshell, this function will crop the image files in the input directory and store them in an output directory. If there are multiple faces in an image, it selects and crops the largest facial image. It also gives you the option to ""align"" the cropped images. By align I mean the cropped image is rotated so the eyes are horizontal. You also have the option to resize the cropped image. Finally, you have the option to rename the images in a numerically sequential order and to change the image format to one you specify. This is convenient because, if you download images from the internet, they have ungodly names and come in a vast variety of image formats.

The function also checks whether any images are duplicates of each other. If duplicates are present, a listing is provided and you can elect to delete the duplicate images. Normally, when you build a dataset, you want to eliminate duplicates, so that if you split the dataset into train, test and validation sets there is no ""leakage"" between the sets.

I have used this function on thousands of images that have faces in them and it works extremely well. The cropped results are just what you want. On rare occasions a cropping error may occur, so it is always wise to review the cropped results.

",33976,,,,,3/6/2020 7:55,,,,0,,,,CC BY-SA 4.0 18463,2,,18459,3/6/2020 8:18,,2,,"

My question is, if I am to play the game 10000 epochs, store all the experiences and then train from the experiences would that have the same effect as training and while running through 10000 epochs?

No, it will not. In general, for anything other than simple environments, this will give a worse result. The trouble is during those 10,000 epochs you will have been working with your initial behaviour policy to collect experience, which will not be close to optimal. Off-policy learning adjusts for that and attempts to learn the value function of current best-guess optimal policy. However, this is not perfect, and it will learn better the closer your behaviour policy is to the target policy. An initial random policy will in many cases be too different from the target policy for learning to be effective.

There are two main causes of imperfection in off-policy learning:

  • Variance. The more discrepancy between behaviour and target policies there is, the higher the variance will be in the sampled values, and more samples will be required to get the same accuracy.

  • Sampling bias. The distribution of observed states and actions will affect function approximators such as neural networks, which reduce loss functions against an assumed population on input/output pairs. Unless adjusted somehow (and basic Q-learning in DQN does not adjust this), the population you train against will be from the behaviour policy.

Are there any advantages?

It may be faster, and result in cleaner design, to separate data gathering and learning components. If you have a distributed architecture you can dedicate some machines to generating the experience and others to analysing it. However, this still benefits from routine updates to behaviour policies used to collect experience, based on learning so far.

",1847,,2444,,3/6/2020 20:19,3/6/2020 20:19,,,,0,,,,CC BY-SA 4.0 18464,1,18465,,3/6/2020 9:37,,3,77,"

I have a lot of empty values in my dataset, so I want to let my neural network 'learn more' from the rows that have no empty values, because these rows are of higher importance.

Is there a way to do this?

",34033,,,,,3/6/2020 10:12,Can you let specific data impact a neural network more than other data?,,1,0,,,,CC BY-SA 4.0 18465,2,,18464,3/6/2020 10:00,,3,,"

Yes, you can weight the loss function for each example, so that instead of your cost function being

$$J = \sum_i \mathcal{L}(y_i, \hat{y}_i)$$

It will be

$$J = \sum_i w_i\mathcal{L}(y_i, \hat{y}_i)$$

Where $i$ iterates over your data set, $\mathcal{L}$ is the loss function you are using, $y_i$ is ground truth for each example and $\hat{y}_i$ is prediction for each example.

You can set your relevance weighting according to any criteria you like, based on the example and your dataset/goals. So, for instance, you could set it to $1.0$ for complete examples and $0.1$ for incomplete examples. Depending on your NN framework, it may already offer per-example weighting, and even if it does not, auto-differentiation means that typically you only need to implement the forward logic of the cost function for each minibatch, and the weighting will be applied correctly to the gradients with no more work required. If you do need to implement gradient calculations yourself, you simply multiply the initial gradient of each example by the example's weight $w_i$.
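
As an example, in PyTorch this per-example weighting can be done by computing the unreduced loss and multiplying before averaging (a generic sketch for a classification loss; the weight values are up to you):

    import torch
    import torch.nn.functional as F

    def weighted_loss(logits, targets, weights):
        # weights: one scalar per example, e.g. 1.0 for complete rows, 0.1 for incomplete ones
        per_example = F.cross_entropy(logits, targets, reduction='none')
        return (weights * per_example).mean()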

Once you make a change like this, you need to take care on how you set up and interpret your test set. When your model is used in production, if there are still inputs with missing details, you probably don't want to weight the metrics (e.g. accuracy rating) in the same way, but to report it as-is against a correctly sampled dataset of unseen examples from the population of expected inputs.

",1847,,1847,,3/6/2020 10:12,3/6/2020 10:12,,,,0,,,,CC BY-SA 4.0 18466,1,,,3/6/2020 12:07,,1,461,"

I am implementing a project on pomegranate plant disease detection using machine learning. I want a dataset of all kinds of images of healthy and unhealthy parts of the pomegranate plant. I got a dataset from Fruit360, but that is only for pomegranate fruits, and I need one for leaves as well. Does anyone know any website, link, version control system repository and/or any other resource from which I can get a dataset of leaves?

",34065,,34065,,3/11/2020 4:08,3/11/2020 4:10,Image dataset for pomegranate plant disease,,1,0,,10/28/2021 15:58,,CC BY-SA 4.0 18467,2,,18466,3/6/2020 13:37,,1,,"

You can find the dataset in the following links:

",33875,,,,,3/6/2020 13:37,,,,2,,,,CC BY-SA 4.0 18468,1,,,3/6/2020 14:53,,2,150,"

I was thinking about the risks of Oracle AI, and it doesn't seem as safe to me as Bostrom et al. suggest. From my point of view, even an AGI that only answers questions could have a catastrophic impact. Thinking about it a little bit, I came up with this proof:

Lemma

We are not safe even if we give the oracle only the ability to answer yes or no.

Proof

Let's say that our oracle must maximize a utility function $\phi$, and that there is a procedure that encodes the optimality of $\phi$. Since a procedure is, in fact, a set of instructions (an algorithm), each procedure can be encoded as a binary string, composed solely of 0s and 1s. Therefore we will have $\phi \in \{0,1\}^n$, assuming that the optimal procedure has finite length. Shannon's entropy tells us that every binary string can be guessed by answering only yes/no questions like 'is the first bit 0?', and so on; therefore, we can reconstruct any algorithm via binary answers (yes/no).

Is this reasoning correct and applicable to this type of AI?

",30353,,2444,,3/6/2020 19:20,3/14/2020 20:57,"Is an oracle that answers only with a ""yes"" or ""no"" dangerous?",,0,6,,,,CC BY-SA 4.0 18469,1,18471,,3/6/2020 16:48,,2,116,"

I'm training a DCGAN model on a dataset of 320x320 images, and after an hour of training the generator started to generate (from the same latent-space noise as during training) images that are identical to those in the dataset. For example, if my dataset is images of cars, I should expect to see non-existent designs of cars, right? Am I understanding this wrong? I know this is a very general question, but I was wondering whether this is what should happen, and whether I should try different latent-space values and then see proper results rather than just copies of my dataset.

",32539,,,,,3/6/2020 17:38,Am I overfitting my GAN model?,,1,0,,,,CC BY-SA 4.0 18470,1,18473,,3/6/2020 16:54,,1,67,"

I am trying to build an LSTM model to generate Shakespeare-like poems. I have a data set $\{s_1, s_2, \dots, s_m \}$ of sentences from Shakespeare's poems, and each sentence contains words $\{w_1, w_2, \dots, w_n \}$.

I am wondering: are the different $s_i$ ($i=1, \dots, m$) independent and identically distributed (IID) samples? Are the $w_i$ ($i=1, \dots, n$) within each sentence IID?

",33266,,33266,,3/8/2020 1:24,3/8/2020 1:24,Are sentences from the same document independent and identically distributed?,,1,0,,,,CC BY-SA 4.0 18471,2,,18469,3/6/2020 17:38,,2,,"

It might be that your dataset of images is too small. Your discriminator network might simply memorize these images, at which point your generator network can only produce good images by copying images from your dataset.

",29671,,,,,3/6/2020 17:38,,,,3,,,,CC BY-SA 4.0 18473,2,,18470,3/6/2020 21:10,,3,,"

The sentences coming from the same document, author, etc., are unlikely to be independent, that is, the occurrence of a sentence $s_i$ in a certain document $d$ is likely correlated with the occurrence of another sentence $s_j$. If they are not independent, they can also not be independent and identically distributed (which is a stronger condition). The same can be said for words in the same sentence.

",2444,,,,,3/6/2020 21:10,,,,0,,,,CC BY-SA 4.0 18475,2,,18423,3/7/2020 8:45,,1,,"

To offer a bit of theory, CNNs work well for many image tasks because they process spatially local information, without much care for absolute position. Essentially, every layer chops every image up into tiny crops and does an analysis step on each crop. Simple questions like ""is this a line... corner... eye... face?"" can be asked equally of every crop.

This means that the network only needs to learn once to detect a feature, rather than separately learning to detect that feature in each possible location it might appear. Therefore we can use smaller networks that train faster and need less data than if we had a fully connected architecture.

To return to the question, you could expect a CNN to work if your data is similarly spatially correlated. Put another way, if finding a pattern around cell (x, y) means the same sort of thing as finding the same pattern at cell (a, b), then you are probably in luck. On the other hand, if each column represents a meaningfully different concept, then a CNN will be a poor choice of architecture.

",23413,,23413,,3/11/2020 10:33,3/11/2020 10:33,,,,0,,,,CC BY-SA 4.0 18476,2,,18269,3/7/2020 9:24,,1,,"

Naturally, one might let the MDP run for 1000 periods, and then terminate as an approximation. If we feed these trajectories into a monte carlo update, I imagine that samples for time period t=1,2,...,100 would give very good estimates for the value function due to the discount factor. However, the time periods 997, 998, 999, 1000, we'd have an expected value for those trajectories far different than V(s) due to their proximity to the cutoff of 1000.

I think your intuition is... partially right here, but not entirely precise. Recall that a value function $V(S)$ is generally defined as something like (omitting some unimportant details like specifying the policy):

$$V(S_i) = \mathbb{E} \left[ \sum_{t=0}^{\infty} \gamma^t R_{i+t} \right].$$

The ""samples for time period $t = 1, 2, \dots, 100$ that you mention are not estimates for this full value function. They're estimates for the corresponding individual terms $R_{i+t}$. Indeed, in general, your intuition is right that the closer they are to the ""starting point"" $i$, the more likely they'll be to be accurate estimators. This is because larger $t$ are typically associated with larger numbers of stochastic state-transitions and stochastic decision-making, and therefore often exhibit higher variance.

  1. Should we even include these later-occurring data points when we update our function approximation?

Theoretically, you absolutely should. Suppose you have an environment where almost every reward is equal to $0$, and only after 1000 steps do you actually observe a non-zero reward. If you don't include this, you'll learn nothing! In practice, it can often be a good idea to give them less importance though. This already happens automatically by picking a discount factor $\gamma < 1$.

  2. Is it usually implied that the final data reward in the trajectory is bootstrapped in these cases (i.e., we have some TD(0)-like behavior in this case)?

OR

  3. Are monte carlo updates for policy gradient algorithms even appropriate for non-terminating MDPs due to this issue?

It would be possible to do some form of bootstrapping at the end, yes: cut off the episode and then have a trained value function predict what the remainder of the rewards would be. TD($\lambda$) with $\lambda$ close to $1$ would be much closer in behaviour to true MC updates than TD($0$) though. Either way, it would be technically incorrect to still call it Monte-Carlo then; it would no longer be pure Monte-Carlo. So yes, strict Monte-Carlo updates in the purest sense of the term are not really applicable to infinite episodes.
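
To make the idea of cutting off and bootstrapping the tail concrete, here is a small sketch (the function is my own illustration; the tail value would come from whatever learned value function you have):

def truncated_return(rewards, gamma, tail_value=0.0):
    # Discounted return over a finite window of rewards, with the cut-off
    # remainder replaced by a (possibly learned) value estimate
    g = tail_value
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# Pure truncation (tail_value = 0) vs. bootstrapping the tail with an estimate
print(truncated_return([0.0] * 999 + [1.0], gamma=0.99))
print(truncated_return([0.0] * 999 + [1.0], gamma=0.99, tail_value=5.0))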

",1641,,,,,3/7/2020 9:24,,,,0,,,,CC BY-SA 4.0 18477,1,18479,,3/7/2020 10:20,,2,132,"

I am experimenting with a ConvNet to categorize images taken with a depth camera. So far I have 4 sets of 15 images each, so 4 labels. The original images are 680x880 16-bit grayscale. They are scaled down to 68x88 RGB (each color channel with equal value) before being fed to the ImageDataGenerator. I am using the ImageDataGenerator (IDG) to create more variance in the sets. (The IDG does not seem to handle 16-bit grayscale images, nor 8-bit images, well, so I converted them to RGB.)

I estimate the images to be low on features, compared to regular RGB images, because they represent depth. To get a feel for the images, here are a few down-scaled examples:

I let it train for 4,096 epochs to see how that would go.

This is the result of the model and validation loss.

You can see that in the early epochs the validation (test / orange line) loss dips, and then goes up and starts to show big swings. Is this a sign of overfitting?

Here is a zoomed in image of the early epochs.

The model loss (train / blue line) reached relatively low values with an accuracy of 1.000. Training again repeatedly shows the same kind of graphs. Here are the last epochs.

Epoch 4087/4096
7/7 [==============================] - 0s 10ms/step - loss: 0.1137 - accuracy: 0.9286 - val_loss: 216.2349 - val_accuracy: 0.7812
Epoch 4088/4096
7/7 [==============================] - 0s 10ms/step - loss: 0.0364 - accuracy: 0.9643 - val_loss: 234.9622 - val_accuracy: 0.7812
Epoch 4089/4096
7/7 [==============================] - 0s 10ms/step - loss: 0.0041 - accuracy: 1.0000 - val_loss: 232.9797 - val_accuracy: 0.7812
Epoch 4090/4096
7/7 [==============================] - 0s 10ms/step - loss: 0.0091 - accuracy: 1.0000 - val_loss: 238.7082 - val_accuracy: 0.7812
Epoch 4091/4096
7/7 [==============================] - 0s 10ms/step - loss: 0.0248 - accuracy: 1.0000 - val_loss: 232.4937 - val_accuracy: 0.7812
Epoch 4092/4096
7/7 [==============================] - 0s 10ms/step - loss: 0.0335 - accuracy: 0.9643 - val_loss: 273.6542 - val_accuracy: 0.7812
Epoch 4093/4096
7/7 [==============================] - 0s 10ms/step - loss: 0.0196 - accuracy: 1.0000 - val_loss: 258.2848 - val_accuracy: 0.7812
Epoch 4094/4096
7/7 [==============================] - 0s 10ms/step - loss: 0.0382 - accuracy: 0.9643 - val_loss: 226.6226 - val_accuracy: 0.7812
Epoch 4095/4096
7/7 [==============================] - 0s 10ms/step - loss: 0.0018 - accuracy: 1.0000 - val_loss: 226.2943 - val_accuracy: 0.7812
Epoch 4096/4096
7/7 [==============================] - 0s 11ms/step - loss: 0.0201 - accuracy: 1.0000 - val_loss: 207.3653 - val_accuracy: 0.7812

Not sure if it is required to know the architecture of the neural network to judge whether this is overfitting on this data set. Anyway, here is the setup.

kernelSize = 3
kernel = (kernelSize, kernelSize)

model = Sequential()
model.add(Conv2D(16, kernel_size=kernel, padding='same', input_shape=inputShape, activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(32, kernel_size=kernel, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, kernel_size=kernel, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())  # this converts our 3D feature maps to 1D feature vectors
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(nr_of_classes, activation='softmax'))

# SGD with Nesterov momentum; learning_rate is defined elsewhere
sgd = tf.keras.optimizers.SGD(lr=learning_rate, decay=1e-6, momentum=0.4, nesterov=True)
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])
",32968,,,,,3/7/2020 12:10,Is it a sign of overfitting when validation_loss dips and then goes up with increasingly bigger swings?,,1,0,,,,CC BY-SA 4.0 18478,1,,,3/7/2020 11:13,,2,288,"

I'm working on an object detection problem using Faster R-CNN. I need to identify two object classes, and they are very similar to one another. Furthermore they are similar to a third type of object which should be considered as background. Also, all three of these objects have a lot of variation within them.

In my particular example, the two objects of interest are 1) a statue of a particular named person who appears in many statues, and 2) a statue of anyone else.

Examples:

Also, I want to treat real (living) people and non-humanoid statues as background.

Now here are some interesting results:

The RPN losses follow the expected trajectory for such a problem, but on the other hand, I really had to have faith and hang in there for the detector losses. They take a while to start decreasing, and I presume it's because there is a relatively sharp trough leading to the minimum of the loss function with respect to the weights (because the labelled classes are so similar to one another). Miraculously (at least I think so), it does start to kind of work, but not as well as I'd like it to.

My question is as in the title but here are some of my thought processes:

  • Does the class similarity spoil the bounding box regression? And then does that spoil the class inference in turn?
  • Would it be better to just detect humanoid statues in general, then train a classifier from scratch on the output of that? (I don't know about this one. Maybe the relevant info is already encoded in the detector of the Faster R-CNN and the added bonus is that the Faster R-CNN gets context from outside of the bounding box... unless the bounding box regression is being spoiled by the class ambiguity)
",16871,,,,,3/7/2020 11:13,Dealing with very similar object classes in object detection,,0,0,,,,CC BY-SA 4.0 18479,2,,18477,3/7/2020 12:10,,2,,"

Yes this looks a lot like overfitting. The clue is in the low and slowly decreasing training loss compared to the large increases in validation loss.

One simple fix would be to stop training around epoch 50, taking the best cross validation result to select the most general network at that point. However, anything that works to improve stable generalisation could help here - more training data, more regularisation, simpler model.

The tricky part is finding the best combination for generalisation. Typically if you overuse regularisation to make the NN completely stable, it will lose some accuracy, so you need to run multiple experiments and measure things carefully. As you have a small dataset here and seem capable of running 1000's of epochs, I would suggest k-fold cross validation for improved measurement of cross validation loss.
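
For the early-stopping part, a minimal sketch with Keras (the generator names below are placeholders for your own data pipeline, and the patience value is something you would tune):

from tensorflow.keras.callbacks import EarlyStopping

# Stop once the validation loss has not improved for 20 epochs and
# roll back to the weights of the best epoch seen so far
early_stop = EarlyStopping(monitor='val_loss', patience=20,
                           restore_best_weights=True)

model.fit(train_generator, validation_data=val_generator,
          epochs=4096, callbacks=[early_stop])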

",1847,,,,,3/7/2020 12:10,,,,2,,,,CC BY-SA 4.0 18480,1,,,3/7/2020 13:07,,2,617,"

Recently, I have seen a surge of papers on contrastive learning (a form of self-supervised learning).

Can anyone give a detailed explanation of this approach with its advantages/disadvantages and what are the cases in which it gives better results?

Also, why is it gaining traction amongst the ML research community?

",32249,,32249,,3/7/2020 16:01,7/14/2021 22:07,What's the intuition behind contrastive learning?,,1,1,,,,CC BY-SA 4.0 18481,1,,,3/7/2020 14:00,,5,198,"

I am trying to predict Forex time series. The nature of the market is that 80% of the time the price cannot be predicted, but 20% of the time it can be. For example, if the price drops very deep, there is a 99% probability that there will be a recovery, and this is what I want to predict.

So, how do I train a feed-forward network in such a way that it only predicts those cases that have 99% certainty of taking place and, for the rest of the cases, outputs an ""unpredictable"" status?

Imagine that my data set has 24 hours of continuous price data as input (as 1-minute samples), and as output I want the network to predict 1 hour of future price data. The only restriction I need to implement is that if the network is not ""sure"" that the price is predictable, it should output 0s. So, how do I implement this kind of safety in the predictions the network is outputting?

It seems that my problem is similar to Google's Smart Compose, which predicts the next words as you are typing; for example, if you type ""thank you"", it would add "" very much"", and this would be like 95% correct. I want the same, but my problem is much more complex. Google uses RNNs, so maybe I should try a deep network of many layers of RNNs?

",34084,,34084,,3/10/2020 16:30,3/10/2020 16:30,How to predict time series with accuracy?,,3,0,,,,CC BY-SA 4.0 18482,1,,,3/7/2020 15:33,,1,57,"

Given a directed, edge attributed graph G, where the edge attribute is a probability value, and a particular node N (with binary features f1 and f2) in G, the algorithm that I want to implement is as follows:

  1. List all the outgoing edges from N, let this list be called edgelist_N.
  2. For all the edges in edgelist_N, randomly assign to the edge attribute a probability value such that the sum of all the probabilities assigned to the edges in the edgelist_N equals to 1.
  3. Take the top x edges (x can be a hyperparameter).
  4. List the nodes in which the edges from step 3 are incoming.
  5. Construct a subgraph with node N, the nodes from step 4 and the edges from step 3.
  6. Embed the subgraph (preferably using a GNN) and obtain it's embedding and use it with a classifier to predict say f1/f2.
  7. Propagate the loss so as to update the edge probabilities, that was assigned randomly in step 2.

I do not understand how to do step 7, i.e. update the edge attribute with the loss, so that edges which are more relevant in constructing the subgraph can be assigned a higher probability value.

Any suggestion would be highly appreciated. Thank you very much.

",34086,,,,,3/7/2020 15:33,How to update edge features in a graph using a loss function?,,0,1,,,,CC BY-SA 4.0 18485,1,,,3/7/2020 17:22,,1,58,"

I'm currently testing a model on spiral data. After 500 epochs, the loss is 0.04, but the result still does not match some parts of the training data (bottom left).


(source: upsieutoc.com)

The model has 2 hidden layers of 16 tanh units each, running with ml5.js. I chose tanh because it seems to be smoother than ReLU. Apart from that, they're the same.

Is this thing caused by the model or by ml5 itself?

",34091,,4709,,12/22/2022 22:02,12/22/2022 22:02,Model unfit for some part of spiral data despite low error,,0,1,,,,CC BY-SA 4.0 18487,2,,18481,3/7/2020 19:13,,0,,"

To be honest, I think stock prices are essentially impossible to predict this way, as you're not taking into account data from outside the stock market.

I'd argue any successful model would need to be trained on news, consumer sentiment, etc etc.

The only approaches that arguably work are high-frequency trading (HFT) strategies.

",32390,,,,,3/7/2020 19:13,,,,1,,,,CC BY-SA 4.0 18489,1,19917,,3/7/2020 21:32,,1,382,"

I am working through the textbook "Graph-Based Natural Language Processing and Information Retrieval", where I've got a question on implementation of this first Latex looking formula/algorithm.

Can you help me turn the formula under 1.2 Graph Properties into python code? Yes, I know there are many other languages, but python is more user-friendly so I'm starting there, and will eventually rewrite it into C.

As I read the above node example (sorry, the D and E nodes were cut off): node A has two outflowing arrows notating it as their head node, and it is the one tail node.

This first sentence references the graphs: to traverse from A to B, if the A-to-B value is sufficient (above Nx), go to B. If the A-to-B value is below Nx, go A to C to D to A to B; the total cost is 5.2 + 7 + 1 + 8 = 21.20, which makes sense as a traversal cost.

This sentence refers to the LaTeX formula in the book. To start the formula calculation: the average degree of a graph, "a", is equal to one over N times the sum of the in-degrees of the vertices? With the summation index being a non-zero integer between 1 and N?
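
To make my reading concrete, here is a minimal Python sketch of the formula as I understand it (the tiny adjacency list below is made up just for illustration):

# Directed graph stored as an adjacency list: node -> list of successors
graph = {
    'A': ['B', 'C'],  # edges A->B and A->C
    'B': [],
    'C': ['D'],
    'D': ['A'],
}

n = len(graph)
in_degree = {v: 0 for v in graph}
for successors in graph.values():
    for head in successors:
        in_degree[head] += 1

average_degree = sum(in_degree.values()) / n
print(average_degree)  # 4 edges over 4 vertices = 1.0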

OK, I only loaded one page and hope that's not a TOS violation or causes an issue; it's a challenge to find people who understand graph theory.

Let me know what questions you have, but I'm just wanting to get clarification if my understanding is what this page is saying.

",34095,,2444,,9/10/2020 19:52,9/10/2020 19:52,How do I turn this formula of the average degree of a graph into Python code?,,1,2,,12/28/2021 9:35,,CC BY-SA 4.0 18491,1,,,3/8/2020 1:21,,2,124,"

I am trying to build an LSTM model to generate Shakespeare-like poems. I have training set $\{s_1,s_2, \dots,s_m\}$, which are sentences of Shakespeare poems, and each sentence contains words $\{w_1,w_2, \dots,w_n\}$.

To my understanding, each sentence $s_i$, for $i=1, \dots,m$, is a random sequence containing the words $w_j$, for $j=1, \dots,n$. The LSTM model is estimated by applying maximum likelihood estimation (MLE), which will use the cross-entropy loss for optimization. The use of MLE requires that the samples in the random sequence be independent and identically distributed (i.i.d.); however, the word sequence $w_j$ is not i.i.d. (since it is non-Markov). Therefore, I am suspicious about using the cross-entropy loss for training an LSTM for the NLP task (which seems to be the common practice).

",33266,,2444,,3/8/2020 3:27,3/8/2020 3:27,Can the cross-entropy loss be used for a NLP task with LSTM?,,0,3,,,,CC BY-SA 4.0 18492,1,,,3/8/2020 3:48,,3,317,"

Do transformers have the potential to replace RNN end-to-end models for online speech recognition? This mainly depends on accuracy/latency and deployment cost, not training cost. Can transformers support low-latency online use cases and have comparable deployment cost and better results than RNN models?

",25322,,-1,,1/19/2021 2:02,1/19/2021 2:03,Can transformer be better than RNN for online speech recognition?,,1,1,,,,CC BY-SA 4.0 18496,2,,18492,3/8/2020 15:10,,2,,"

Are there examples that transformer have better accuracy than RNN end-to-end model like RNN-transducer for speech recognition? Can transformer be used for online speech recognition which require low speech-end-to-result latency? Does transformer have the potential to replace RNN end-to-end models for speech recognition in most cases in the future? This may mainly depends on accuracy and deploy cost, not training cost.

You can check facebook results on wav2letter on all this:

https://ai.facebook.com/blog/online-speech-recognition-with-wav2letteranywhere/

https://research.fb.com/publications/scaling-up-online-speech-recognition-using-convnets/

Transformers definitely have potential in speech, especially when combined with faster computation methods (e.g. hashing), just like in NLP.

The problem with transformers is that you need a lot of GPUs to train them.

",3459,,,,,3/8/2020 15:10,,,,1,,,,CC BY-SA 4.0 18497,1,,,3/8/2020 16:03,,2,175,"

J. Pitrat (born in 1934) was a leading French artificial intelligence scientist (the first to get a Ph.D. in France mentioning "artificial intelligence"). His blog is still online and of course refers to most of his papers (e.g. A Step toward an Artificial Artificial Intelligence Scientist, etc.) and books, notably Artificial Beings: the conscience of a conscious machine (his last book). He passed away in October 2019. I attended (and presented a talk at) a seminar in his memory.

What are recent AI systems or research papers related to the idea of symbolic AI, introspection, declarative metaknowledge, meta-learning, meta-rules, etc.?

Most of those I know are more than 20 years old (e.g. Lenat's Eurisko; I am aware of OpenCyc). I am interested in papers or systems published after 2010 (perhaps AGI papers with actual complex open source software prototypes).

-see also the RefPerSys system-

",3335,,3335,,12/21/2020 20:47,12/26/2020 5:57,What are recent AI software systems and research papers close to J. Pitrat's ideas?,,1,0,,,,CC BY-SA 4.0 18498,2,,15459,3/8/2020 16:38,,0,,"

The book has actually proven the theorem rigorously in Chapter 2. I don't want to prove it here, but you can look it up. I will try to explain parts which are non obvious (and somewhat confusing according to the book's literature).

So for PAC learning (with or without the realizability assumption) the theory is that given a data-set of size:

$$m \geq \left\lceil \frac{\log(|H|/\delta)}{\epsilon} \right\rceil$$ where $|H|$ is the size of the finite hypothesis class.

which when simplified is nothing but:

$$|H|e^{-\epsilon m} \leq \delta$$

where $\delta$ is the probability that your sample is not representative of the underlying distribution (according to the book, hence the term Probably in PAC Learning) and $\epsilon$ is the maximum probability that your learned hypothesis $h$ predicts new unseen samples wrong (basically accuracy of your hypothesis and hence the term Approximately Correct in PAC Learning).
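
To get a feel for the numbers, here is a small sketch of the resulting sample-size bound (the concrete values are just an example I made up):

import math

def pac_sample_size(h_size, epsilon, delta):
    # Smallest integer m satisfying |H| * exp(-epsilon * m) <= delta
    return math.ceil(math.log(h_size / delta) / epsilon)

# e.g. |H| = 1000 hypotheses, accuracy parameter 0.1, confidence parameter 0.05
print(pac_sample_size(1000, epsilon=0.1, delta=0.05))  # 100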

This equation/bound comes from the last step of the proof which states:

$$D^m \left[ \left\{ S|_x : L_{(D,f)}(h_S) > \epsilon \right\} \right] \leq |H_B|e^{-\epsilon m} \leq |H|e^{-\epsilon m}$$ where $H_B$ is the set of all bad (over-fitting) hypotheses

which is your answer to the question:

Estimation error increases linearly with $|H|$ and decreases exponentially with $m$ in PAC learning

Now here comes the tricky part, following this equation the proof directly jumps to:

$$|H|e^{-\epsilon m} \leq \delta$$

The justification for this is given in previous part of the proof (I am not entirely sure if they meant this justification, but it seems the only one):

Since the realizability assumption implies that $L_S (h_S ) = 0$, it follows that the event $L_{(D,f )} (h_S ) > \epsilon$ can only happen if for some $h ∈ H_B$ we have $L_S (h) = 0$. In other words, this event will only happen if our sample is in the set of misleading samples.

Do not mistakenly equate misleading with non-representative; otherwise, we will not be able to justify the aforementioned jump ($\epsilon$ and $\delta$ would become dependent on each other).

The actual interpretation of $\delta$ is that it is our confidence parameter, i.e. we want to ensure: $$D^m \left[ \left\{ S|_x : L_{(D,f)}(h_S) > \epsilon \right\} \right] \leq \delta$$ which means we are $1-\delta$ confident that our learned $h_S$ will have $L_{(D,f)}(h_S) \leq \epsilon$ (the complementary expression).

NOTE: This idea is skipped in most resources I read, I found its explanation here.

Now, coming to the statement: $$m_H \leq \left\lceil \frac{\log(|H|/\delta)}{\epsilon} \right\rceil$$ Unlike $m$, $m_H$ is defined as:

If $H$ is PAC learnable, there are many functions $m_H$ that satisfy the requirements given in the definition of PAC learnability. Therefore, to be precise,we will define the sample complexity of learning $H$ to be the “minimal function,” in the sense that for any $\epsilon, \delta$ $m_H (\epsilon, \delta)$ is the minimal integer that satisfies the requirements of PAC learning with accuracy $\epsilon$ and confidence $\delta$.

And hence the direction of the inequality is reversed, since many good samples will result in a good hypothesis being found from a smaller number of samples.

Side note: All conventions are from Understanding Machine Learning: From Theory to Algorithms.

",,user9947,,user9947,3/12/2020 21:33,3/12/2020 21:33,,,,0,,,,CC BY-SA 4.0 18499,1,18502,,3/8/2020 21:54,,0,75,"

I have some familiarity with the regular Tensorflow library and have been able to create a number of working models with it. There are more than enough resources out there to get up and running and answer most questions on the standard library.

But I recently came across the video on some high-level capabilities of the Tensorflow Probability library, TensorFlow Probability: Learning with confidence (TF Dev Summit '19), and I would like to learn it.

The issue is that there are very few resources out there on TFP and given my lack of a formal background in math/statistics, I find myself aimlessly googling to get a grasp of what's going on in the docs. I'm more than willing to invest the time needed, but I just need to know where I can start in terms of resources I can access online. Specifically, I'm looking to get the necessary domain knowledge needed to work with the library given the lack of courses/tutorials on the library itself.

",30154,,2444,,3/8/2020 23:02,3/11/2020 4:43,What are the prerequisites to start using the TensorFlow Probability library?,,1,4,,3/11/2020 16:05,,CC BY-SA 4.0 18501,1,,,3/8/2020 22:28,,3,56,"

If one is interested in implementing a path planning algorithm that is grid-based, one needs to consider the fact that your grid points will never represent the true state of the robot.

How is this dealt with?

Suppose we're doing path planning using a grid-based search on the side of the control for a desired grid position as an output state.

How would you handle the discrepancy between your actual starting position and your discretized starting position?

I understand that normally you may use an MPC instead, which continually recalculates an optimal path using some type of nonlinear solver, but suppose we don't do this - suppose we restrict ourselves to only a grid search and suppose at after every action the state of the robot has to be considered as living in a particular grid point.

",32390,,2444,,3/11/2020 2:57,3/11/2020 2:57,How to deal with approximate states when doing path planning?,,0,2,,,,CC BY-SA 4.0 18502,2,,18499,3/8/2020 22:43,,2,,"

Although this question is primarily opinion-based and too broad (and I will probably close it as such) and a good answer will necessarily depend on your background, I will list some of the main theoretical prerequisites that everyone should ideally be familiar with before diving into TensorFlow Probability (TFP).

I am familiar with TFP, given that I've been using it for a project, but I've not used all of its functionalities, such as the bijectors. I've only used the Bayesian layers, distributions, etc., so I will try to give an answer based on my experience.

You definitely need to be familiar with the basic concepts of probability theory, such as distributions, random variables, expectations, etc. A full university-level course in probability and statistics would definitely be helpful!
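
To get a first feel for the kind of objects you will manipulate, here is a tiny sketch using TFP's distributions module (this only assumes that tensorflow_probability is installed):

import tensorflow_probability as tfp

tfd = tfp.distributions
normal = tfd.Normal(loc=0.0, scale=1.0)  # a standard normal distribution
print(normal.sample(3))       # three random draws
print(normal.log_prob(0.0))   # log-density at 0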

If you want to use Bayesian layers, you also need to be familiar with Bayesian neural networks (BNNs). To understand BNNs, you need to understand the basic concepts of Bayesian inference (BI), given that most BNNs are based on BI. If you are already familiar with variational auto-encoders (VAEs), then it will be easier to understand BNNs. To understand BNNs, you should start reading the paper Weight Uncertainty in Neural Networks. If this paper is difficult to follow, you should first read the paper Variational Inference: A Review for Statisticians and then it may also be useful to read and be familiar with the VAE paper. Therefore, you will need to be familiar with concepts such as Kullback–Leibler divergence and Monte Carlo sampling.

The blog post Regression with Probabilistic Layers in TensorFlow Probability is the first blog post you should read. If you don't understand it, then it probably means you need to learn its prerequisites (i.e. you need to be familiar with at least linear regression).

TFP also provides several implementations of other more advanced concepts such as BNNs. The easiest example to follow is probably the logistic regression example. LR is relatively easy to follow compared to the topics I mentioned above, but, of course, you need to be familiar with logistic regression.

Of course, I cannot list all the theoretical prerequisites (and that's why your post is too broad!) and there are definitely others, but this is a start. Bear in mind that these are not trivial topics, so it is normal to get stuck.

",2444,,2444,,3/8/2020 23:06,3/8/2020 23:06,,,,1,,,,CC BY-SA 4.0 18505,1,18511,,3/8/2020 23:17,,1,83,"

I have used Beta function to estimate the performance of the agent. I have failure and success data of the task that runs on the agent. The parameter $\alpha$ is a number of successful tasks, while $\beta$ is the number of failures. Thus, I can estimate the performance by exploiting the expected value of Beta, as $$\mu = \frac{\alpha} {(\alpha+\beta)}$$

So, I am looking for a similar model, such that its parameter can be estimated from the success and failure data. So far I found Dirichlet distribution.

What is the expected value of Dirichlet distribution? How I can use the success and failure data to estimate parameters of this distribution?

Let's check the following example:

Suppose that we use a Dirichlet prior represented by $Dirichlet(1, 1, 1)$ and observe $13$ results with $8$ Successful, $2$ Missing, and $3$ Failures. Then we get the posterior to be $Dirichlet(1+8, 1+2, 1+3)$. Then if you define the performance value $\alpha$ to be the expectation of $P(x=Successful)$, then $\alpha$ will be $(1+8)/[(1+8)+(1+2)+(1+3)] = 0.56$

Now Suppose that we use a Beta prior represented by $Beta(1,1)$ and observe $13$ results with $8$ Successful, and $3$ Failures. Then we get the posterior to be $Beta(1+8, 1+3)$. Then if you define the performance value Pr to be the expectation of $P(x=Successful)$, then $\alpha = (1+8)/[(1+8)+(1+3)] = 0.69$

Are my calculations and concept right?

",30551,,,user9947,3/11/2020 18:03,3/11/2020 18:03,How can I use the success and failure data to estimate parameters of a Dirichlet distribution?,,1,3,,,,CC BY-SA 4.0 18507,1,,,3/8/2020 23:56,,0,46,"

What is $y$? Why is $k$ the ceil of $n/2$? What is $y \geq k$?

",33955,,2444,,12/4/2020 11:19,12/24/2022 17:03,Why does the error ensemble use the ceiling of the number of classifiers?,,1,2,,,,CC BY-SA 4.0 18508,2,,18507,3/9/2020 1:09,,1,,"

$y$ is the number of base classifiers in the ensemble that vote wrong.

$k$ is the majority threshold, i.e. the smallest number of wrong classifiers that already forms a majority (more than 50%) of the ensemble.

If you have 11 models, a majority is anything bigger than 50% of the number of ensemble models, i.e. anything bigger than $n/2 = 11/2$. Since 11 is an odd number and cannot be divided evenly by 2, we use the ceiling function to round 5.5 up to 6. In other words, for your ensemble to be wrong, a majority of the base models must be wrong, so we look for the probability that $y$ of them are wrong, where $y \geq k$ and $k = \lceil n/2 \rceil$. Thus we calculate the probability of 6 or more wrong votes, as shown above, by summing the probabilities of each of these discrete outcomes (6 wrong, 7 wrong, ..., 11 wrong) into one value.
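
As a small sketch of that calculation, assuming each of the $n$ base models errs independently with the same probability $p$ (the 0.25 below is just an example I picked):

from math import ceil, comb

def ensemble_error(n, p):
    # Probability that at least a majority (k = ceil(n/2)) of the n
    # independent base models are wrong at the same time
    k = ceil(n / 2)
    return sum(comb(n, y) * p**y * (1 - p)**(n - y) for y in range(k, n + 1))

print(ensemble_error(n=11, p=0.25))  # roughly 0.034, much lower than the 0.25 of a single model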

",33955,,,,,3/9/2020 1:09,,,,0,,,,CC BY-SA 4.0 18511,2,,18505,3/9/2020 2:35,,1,,"

The Dirichlet distribution is the multivariate version of the Beta distribution. In general, these distributions can be thought of as modelling a distribution over probability distributions.

The support of the Dirichlet distribution is defined as follows:

$$ S_K = \{ x:0 \leq x_k \leq 1, \sum_{k=1}^K x_k=1 \} $$

and the PDF is defined as:

$$Dir(x|\alpha) = \frac{1}{B(\alpha)} \prod_{k=1}^Kx_k^{\alpha_k-1}$$

where $B(\alpha)$ is the beta function of $K$ variables:

$$B(\alpha) = \frac{\prod_{k=1}^K \Gamma(\alpha_k)}{\Gamma\left(\sum_{k=1}^K \alpha_k\right)}$$

and the resultant point estimates are (define $\sum_{k=1}^K \alpha_k = \alpha_0$):

$$\mu(x_k) = \frac{\alpha_k}{\alpha_0}$$ $$\sigma^2(x_k) = \frac{\alpha_k(\alpha_0-\alpha_k)}{\alpha_0^2(\alpha_0+1)}$$ $$mode[x_k] = \frac{\alpha_k-1}{\alpha_0-1}$$

The Beta distribution is the special case where $K=2$.

Clearly, when you run an experiment a large number of times, the success frequency of each $k$ will approach its expected value, i.e. if you define your random variables as $x_k = \frac{N_{k}}{N}$, where $N$ is the total number of trials and $N_k$ is the number of successes of the $k$-th outcome, they clearly satisfy the support of the Dirichlet distribution, and hence you can use

$$\frac{\alpha_{k}}{\alpha_0} = \frac{N_{k}}{N}$$

This is assuming that the experiment follows a Dirichlet distribution.
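
For the concrete numbers in the question, here is a quick sketch of the posterior-mean computation (plain counting, no special library needed):

def posterior_means(prior, counts):
    # Expected values of the Dirichlet posterior: (alpha_k + n_k) / sum over all k
    posterior = [a + n for a, n in zip(prior, counts)]
    total = sum(posterior)
    return [a / total for a in posterior]

# Dirichlet(1, 1, 1) prior with 8 successes, 2 missing, 3 failures
print(posterior_means([1, 1, 1], [8, 2, 3]))  # [0.5625, 0.1875, 0.25]
# Beta(1, 1) prior with 8 successes and 3 failures
print(posterior_means([1, 1], [8, 3]))        # approx. [0.69, 0.31]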

(Taken in parts from A Probabilistic Approach to ML)

",,user9947,,,,3/9/2020 2:35,,,,3,,,,CC BY-SA 4.0 18512,1,,,3/9/2020 5:31,,1,72,"

AFAIK, momentum is quite useful when training CNNs, and can speed-up the training substantially without any drop in validation accuracy.

I've recently learned that it is not as helpful for RNNs, where plain SGD is preferred.

For example, Deep Learning by Goodfellow et al. says (section 10.11, page 401):

Both of these approaches have largely been replaced by simply using SGD (even without momentum) applied to LSTMs.

The author talks about LSTMs and "both of these approaches" refer to second-order and first-order SGD methods with momentum methods, respectively, according to my understanding.

What causes this discrepancy?

",32621,,2444,,12/13/2020 12:37,12/13/2020 12:37,Why do momentum techniques not work well for RNNs?,,0,2,,,,CC BY-SA 4.0 18513,1,,,3/9/2020 5:53,,1,111,"

I came across the concept of Bayesian Occam Razor in the book Machine Learning: a Probabilistic Perspective. According to the book:

Another way to understand the Bayesian Occam’s razor effect is to note that probabilities must sum to one. Hence $\sum_D' p(D' |m) = 1$, where the sum is over all possible data sets. Complex models, which can predict many things, must spread their probability mass thinly, and hence will not obtain as large a probability for any given data set as simpler models. This is sometimes called the conservation of probability mass principle.

The figure below is used to explain the concept:

Image Explanation: On the vertical axis we plot the predictions of 3 possible models: a simple one, $M_1$ ; a medium one, $M_2$ ; and a complex one, $M_3$ . We also indicate the actually observed data $D_0$ by a vertical line. Model 1 is too simple and assigns low probability to $D_0$ . Model 3 also assigns $D_0$ relatively low probability, because it can predict many data sets, and hence it spreads its probability quite widely and thinly. Model 2 is “just right”: it predicts the observed data with a reasonable degree of confidence, but does not predict too many other things. Hence model 2 is the most probable model.

What I do not understand is this: when a complex model is used, it will likely overfit the data, and hence I would expect the plot for a complex model to look like a narrow bell shape with its peak at $D_0$, while simpler models would more likely have a broader bell shape. But the graph here shows something else entirely. What am I missing here?

",,user9947,5763,,4/6/2021 11:21,12/27/2022 14:03,Understanding Bayesian Optimisation graph,,1,2,,,,CC BY-SA 4.0 18514,1,,,3/9/2020 7:37,,1,1938,"

I am training a combined model (fine-tuned VGG16 for images and shallow FCN for numerical data) to do a binary classification. However, the overall AUC score is not what I expected it to be.

Image-only mean AUC after 5-fold cross-validation is about 0.73 and numeric data only 5-fold mean AUC is 0.65. I was hoping to improve the mean AUC by combining the models into one and merging output layers using concatenate in Keras.

img_output = Dense(256, activation=""sigmoid"")(x_1) 

and

numeric_output = Dense(128, activation=""relu"")(x_2) 

are the output layers of the two models. And,

concat = concatenate([img_output, numeric_output])
hidden1 = Dense(64, activation=""relu"")(concat)
main_output = Dense(1, activation='sigmoid', name='main_output')(hidden1)

is the way I concatenated them.

Since image-only performance was better, I decided it might be reasonable to have more dense units for img_output (256) and ended up using 128 for numeric_output. I could only reach a mean AUC of 0.67 using the combined model. I think I should rearrange the concatenation of the two outputs somehow (by introducing another learnable parameter, like formula (10) in section 3.3 of this work, a bias, or something else) to get a further boost in mean AUC. However, I was not able to find what options were available.

Hope you have some ideas worth trying.

",31870,,2444,,4/3/2020 12:28,4/3/2020 12:28,How can I merge outputs of two separate layers so that the overall performance improves?,,1,0,,,,CC BY-SA 4.0 18516,1,,,3/9/2020 9:28,,1,881,"

According to the authors of this paper, to improve the performance, they decided to

drop backward pass and using a first-order approximation

I found a blog which discussed how to derive the math but got stuck along the way (please refer to the embedded image below):

  1. Why $\nabla_{\theta} \theta_{0}$ disappeared in the next line.
  2. How come $\nabla_{\theta_{i-1}} \theta_{i-1} = \mathbf{I}$ (which is an identity matrix)?

Update: I also found another math solution for this. To me it looks less intuitive, but there's no confusion with the disappearance of $\theta$ as in the first solution.

",33801,,2444,,1/14/2021 0:30,1/14/2021 0:30,Understanding the derivation of the first-order model-agnostic meta-learning,,1,6,,,,CC BY-SA 4.0 18517,1,,,3/9/2020 9:35,,2,83,"

I'm building a CNN/3DCNN model that classifies hand gestures. The problem is that the actual gesture occupies only about 1% of the whole image. That means that an enormous number of convolutional operations is performed on the ""empty"" parts of the image, which is useless.

Is there a way to solve this problem? I was thinking about a MaxPooling layer with a giant pool size, but nearby features extracted from the gesture will probably be ""compressed"" into only 1 feature.

",32751,,2444,,3/10/2020 21:10,3/10/2020 21:10,"Is there a way to add ""focus"" on parts of the image when using CNNs?",,0,4,,,,CC BY-SA 4.0 18518,1,,,3/9/2020 10:44,,0,110,"

I have made a Deep Q Network for the game Snake, but unfortunately the snake exhibits some unwanted behavior. It generally does quite well, but sometimes it gets stuck in an infinite loop that it can't escape, and at the start of the game it takes a very long route to the apple rather than a more direct one.

The discount factor per time step is 0.99. The Snake gets a reward of +9 for getting an Apple and -1 for dying. Does anybody have any recommendations on how I should tune the hyperparameters/reward function to minimize this unwanted behavior?

I was thinking that reducing the discount factor may be a good idea?

",33227,,,,,3/9/2020 13:13,How to incentivise snake to go straight to apple?,,1,0,,,,CC BY-SA 4.0 18520,2,,18518,3/9/2020 13:13,,1,,"

Your network finds the infinite loop and notices that this has the best reward (0). This is probably because it hasn't found a path to eating the apple (through exploration).

Reducing the discount factor will only make long-term rewards less valuable, so it will learn to eat the apple even more slowly.

I don't know what you are using as inputs for your network, but maybe changing your reward system could help. For example, you could give your network a reward if it advances in the direction of the apple. This way, your network will be encouraged to find the apple more than it is now.
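
A rough sketch of that kind of reward shaping, on top of the +9/-1 rewards from the question (all names and the 0.1 bonus are placeholders):

def step_reward(prev_head, new_head, apple, ate_apple, died):
    # Keep the original terminal rewards, but add a small bonus/penalty
    # for moving towards/away from the apple (Manhattan distance)
    if died:
        return -1.0
    if ate_apple:
        return 9.0
    prev_dist = abs(prev_head[0] - apple[0]) + abs(prev_head[1] - apple[1])
    new_dist = abs(new_head[0] - apple[0]) + abs(new_head[1] - apple[1])
    return 0.1 if new_dist < prev_dist else -0.1

Note that shaping like this can bias the learned policy, so it is usually safer to keep the shaping term small (or to use potential-based shaping).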

",29671,,,,,3/9/2020 13:13,,,,5,,,,CC BY-SA 4.0 18521,2,,18516,3/9/2020 15:19,,2,,"

$\nabla_{\theta_{i-1}} \theta_{i-1} = \mathbf{I}$ in a similar way that $\frac{d f}{dx} = 1$ for $f(x) = x$. Strictly speaking, $\mathbf{I}$ should be a vector of $1s$ with the same dimensionality as $\theta_{i-1}$, but they are probably abusing notation here and putting such a vector as the diagonal elements of a matrix. Alternatively (actually, the most likely reason!), they are computing the partial derivative of $\theta_{i-1}^j$ with respect to $\theta_{i-1}^k$, for all $k$, for all $j$, which will make up an identity matrix.

Regarding your first question, $\nabla_{\theta} \theta_{0}$ probably becomes 1, but I am not familiar enough with the math of this paper to tell you why. Maybe it's because $\nabla_{\theta} \theta_{0}$ actually means $\nabla_{\theta_0} \theta_{0}$. I would need to dive into it.

",2444,,2444,,3/9/2020 15:28,3/9/2020 15:28,,,,3,,,,CC BY-SA 4.0 18522,2,,18081,3/9/2020 16:01,,0,,"

I think one could generate a data set with a random value in each component of the data vector, add this data to the training data set, and then shuffle the combined data set.

",27480,,,,,3/9/2020 16:01,,,,1,,,,CC BY-SA 4.0 18523,2,,2681,3/9/2020 16:53,,2,,"

Have a look at the paper A Modular Architecture for Unsupervised Sarcasm Generation (2019) by Mishra et al.

In the abstract, the authors write

In this paper, we propose a novel framework for sarcasm generation; the system takes a literal negative opinion as input and translates it into a sarcastic version. Our framework does not require any paired data for training.

Here's the reference implementation.

",2444,,,,,3/9/2020 16:53,,,,0,,,,CC BY-SA 4.0 18524,1,,,3/9/2020 21:39,,1,43,"

I'm new to the Data Science field and last week I started to learn about Neural Networks and Deep Learning. To practice, I decided to do a small project: design a Neural Network to predict the winner of an NBA game given the two teams playing. Also, for each match I have 2 stats (let's say number of points and number of free throws) for each of the teams.

In the end, the dataset looks like:

|  ID |  Home |  Away | H_Pts | H_Fts | A_Pts | A_Fts | H_win |
|:---:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
|  1  | Team1 | Team2 |   45  |   10  |   47  |   8   |   1   |
|  2  | Team3 | Team4 |   56  |   6   |   70  |   13  |   0   |
| ... |  ...  |  ...  |  ...  |  ...  |  ...  |  ...  |  ...  |

I implemented the model with TensorFlow/Keras (with the help of this tutorial: Classify structured data with feature columns | TensorFlow Core).

The code is pretty concise:

import tensorflow as tf
from tensorflow import feature_column
from tensorflow.keras import layers

batch_size = 16
train_ds, test_ds, val_ds = get_datasets()  # The function mainly uses tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
team_names = get_team_names()

feature_columns = []
for column_name in ['Home', 'Away']:
    team = feature_column.categorical_column_with_vocabulary_list(column_name, team_names)
    feature_columns.append(feature_column.indicator_column(team))

for column_name in ['H_Pts', 'H_Fls', 'A_Pts', 'A_Fls']:
    feature_columns.append(feature_column.numeric_column(column_name))

feature_layer = tf.keras.layers.DenseFeatures(feature_columns)

model = tf.keras.Sequential([
    feature_layer,
    layers.Dense(128, activation='relu'),
    layers.Dense(128, activation='relu'),
    layers.Dense(1)
])


model.compile(optimizer='adam', loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), metrics=['accuracy'])

model.fit(train_ds, validation_data=val_ds, epochs=10)

loss, accuracy = model.evaluate(test_ds)
print(f""Model evaluation: Loss = {loss} | Accuracy = {accuracy}"")

Trained with just 100 games, I get a great accuracy: 99%. Of course: as it is, the test dataset given to model.evaluate(test_ds) contains everything except the target label H_win. Because H_win can easily be deduced from H_Pts and A_Pts, I get a high accuracy. But this model can't work because by definition you don't know the number of points of each team before the game...

How should I deal with features like these ones which I do not want to predict (so they're not labels) but that should still be considered during the training? Does this kind of feature have a name?

",34130,,,,,3/9/2020 21:39,How to deal with features which are here just for training?,,0,1,,,,CC BY-SA 4.0 18525,2,,16353,3/9/2020 21:52,,2,,"

Embeddings generated by transformers like BERT or XLM-R are fundamentally different from embeddings learned through language models like GloVe or Word2Vec. The latter are static, i.e. they are just dictionaries containing a vocabulary with an n-dimensional vector associated with each word. Because of this, they can be plotted through PCA, and the distance between them can be easily calculated with whatever metric you prefer.

When training BERT or XLM-R, instead, you are not learning vectors but the parameters of a transformer. The embedding for each token is then generated once the token is fed into the transformer. This implies several things, the most important being that the hidden representation (the embedding) for a token changes depending on the context (recall that XLM-R also uses as input the hidden states generated by the previous tokens). This means that there are no static vectors to compare by plotting them or by calculating the cosine similarity. Nevertheless, there are ways to analyse and visualise the syntax and semantics encoded in the parameters; this paper shows some strategies: https://arxiv.org/pdf/1906.02715.pdf

On the more linguistic side, I would also ask why vectors of the same words should show the same semantic properties across languages. Surely there are similarities for a lot of words translated literally, but the use of some expressions is inherently different across languages. To give a quick example: in English the clock 'works', in Dutch the clock 'lopen' (it walks), and in Italian the clock 'funziona' (it functions). Same expression, three different words in different languages that do not necessarily share the same neighbours in their monolingual latent spaces. The point of transformers is exactly to move from static representations to dynamic ones that are able to learn that all three of those verbs (in their specific language) can appear early in a sentence and close to the word clock.

",34098,,,,,3/9/2020 21:52,,,,1,,,,CC BY-SA 4.0 18526,1,18554,,3/10/2020 0:59,,2,275,"

I have collected a set of pictures of people with a text explaining the characteristics of the person on the picture, for example, ""Big nose"" or ""Curly hair"".

I want to train some type of model that takes in any picture and returns a description of the picture in terms of characteristics.

However, I have a hard time figuring out how to do this. It is not like labeling ""dog"" or ""apple"", where I can create a set of training data and then evaluate the model's performance; here I cannot. If it were, I would probably have used a CNN, probably VGG-16, to help me out.

I only have two ML courses under my belt and have never really encountered a problem like this before. Can someone help me to get in the right direction?

As of now, I have a data set of 13,000 labeled images, and I am very confident it is labeled well. I do not know of any pre-trained models or datasets that could be of help in this instance, but if you know of one, it might help.

Worth noting is that every label is, or at least should be, unique. If, for example, two pictures have the same label of ""Big nose"", it is purely coincidental.

",4238,,2444,,3/11/2020 16:17,3/11/2020 20:55,How can I train a neural network to describe the characteristics of a picture?,,5,9,,,,CC BY-SA 4.0 18527,1,,,3/10/2020 7:22,,2,53,"

I am a newbie to NLP. I have an Excel sheet with the following columns: Server_SNo, Owner, Hosting Dept, Bus owner, Applications hosted, Functionality, comments

a. Except for Server_SNo, the other columns may or may not have data.
b. For some records there is no data except Server_SNo, which is the first column.
c. One business owner can own more than 1 server.

So, out of 4000 records, about 50% of the data contain a direct mapping from a server to its owner. The remaining 50% of the data have some combination of the other columns (Owner, Hosting Dept, Bus owner, Applications hosted, Functionality and comments).

Here is my problem: I need to find the owner for a given Server_SNo for the 50% of the data that only have some combination of the other columns (Owner, Hosting Dept, Bus owner, Applications hosted, Functionality and comments).

I have just started to build the code using Python and NLTK.

Is this an NLP problem? Am I going in the right direction using Python and NLTK for NLP?

Any insight is appreciated.

-Mani

",34141,,34141,,3/12/2020 5:11,4/12/2020 3:02,Owner Search for given Server SNO,,1,0,,,,CC BY-SA 4.0 18528,2,,16353,3/10/2020 8:55,,-1,,"

There is a general idea in the field of NLP that there is a mapping between embeddings in different languages. Figure 1 explains this.

In Figure 1, we have the embeddings of English words and Spanish words, and we see that there exists a mapping between the manifolds associated with these two languages, i.e. the Spanish manifold is a distorted image of the English manifold. This idea was used to create an unsupervised translator in the MUSE project.

",32493,,,,,3/10/2020 8:55,,,,1,,,,CC BY-SA 4.0 18530,1,18532,,3/10/2020 9:14,,1,180,"

I'm building a game environment (see the picture below) where an agent should position the mouse on the screen (see the coordinates in the upper right corner) and then click to shoot a cannonball. If the goal (left) is hit, the agent gets a reward based on the elapsed time between this strike and the last one. If three shots are missed, the game is done and the environment will reset.

The env is done so far. But now I wonder what the action space should look like. How can I make the agent choose some x and y coordinates? And how can I combine this with a "shoot" action?

",30431,,2444,,6/4/2022 8:00,6/4/2022 8:03,How should I design the action space of an agent that needs to choose a 2d point and then shoot a cannonball?,,1,1,,,,CC BY-SA 4.0 18532,2,,18530,3/10/2020 10:40,,0,,"

You could make the actions

  1. move up
  2. move down
  3. move left
  4. move right
  5. shoot

Then you can declare the speed of this movement. If you want to go more in depth you can do

  1. Accelerate up
  2. Accelerate down
  3. Accelerate left
  4. Accelerate right
  5. Shoot

This will give the neural network control over the speed, but it is harder to train, and you should then give the current speed of the mouse as an input.
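
As a minimal sketch of the first variant, assuming a gym-style environment (the indices, step size and names are arbitrary placeholders):

from gym import spaces

action_space = spaces.Discrete(5)  # 0=up, 1=down, 2=left, 3=right, 4=shoot
MOVES = {0: (0, -1), 1: (0, 1), 2: (-1, 0), 3: (1, 0)}
STEP = 5  # pixels the cursor moves per action

def apply_action(x, y, action):
    # Returns the new cursor position and whether the cannon was fired
    if action == 4:
        return x, y, True
    dx, dy = MOVES[action]
    return x + dx * STEP, y + dy * STEP, False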

",29671,,2444,,6/4/2022 8:03,6/4/2022 8:03,,,,2,,,,CC BY-SA 4.0 18533,2,,8190,3/10/2020 11:29,,11,,"

The two tech reports below both call RNNs explicitly "recurrent net(work)s".

  1. Rumelhart, David E; Hinton, Geoffrey E, and Williams, Ronald J (Sept. 1985). Learning internal representations by error propagation. Tech. rep. ICS 8504. San Diego, California: Institute for Cognitive Science, University of California.

  2. Jordan, Michael I. (May 1986). Serial order: a parallel distributed processing approach. Tech. rep. ICS 8604. San Diego, California: Institute for Cognitive Science, University of California.

Jordan was a student of Rumelhart, so I would lean on identifying 1 as the paper introducing RNNs, with the caveat that the first sentence in the section "Recurrent Nets" of 1 reads:

We have thus far restricted ourselves to feedforward nets. This may seem like a substantial restriction, but as Minsky and Papert point out, there is, for every recurrent network, a feedforward network with identical behavior (over a finite period of time).

This is interesting for two reasons:

  1. After this sentence, he then goes on to show how RNNs can be unrolled and the error propagated back. Not a full-fledged BPTT yet, though.
  2. The sentence shows that the idea of recurrence (and unrolling) has been around since at least 1969.

Unfortunately, I don't have access to Minsky and Papert (1969), so I cannot follow this line any further.

",34145,,2444,,1/18/2021 23:30,1/18/2021 23:30,,,,1,,,,CC BY-SA 4.0 18534,2,,18481,3/10/2020 13:19,,2,,"

Things like this are a really hot topic in research right now, and it's very difficult to get high accuracy on a chaotic system like the stock market. That being said, I would probably recommend preprocessing your data rather than having your primary neural network decide what to accept and what not to.

For example, in your specific case, you could model a bubble bursting as perhaps a negative exponential drop-off or something of the sort. This could include machine learning too. You could gather historical drops in stock market data, and use some sort of regression (Bayesian would probably work well) to estimate the best function to use as an indicator to whether a steep drop has occurred. If so, then use your neural network specifically to classify the fate of the stock. I would think you would have more success following a specialised route such as this rather than trying to train a network on general trends in the market.

In terms of the structure of your neural network, you may want to consider a convolutional neural network (CNN) instead of a recurrent neural network (RNN). RNNs assume the current point in your time-series depends on all previous points, from the beginning of your data. I wouldn't think this would hold true in general for the stock market. The filters a CNN learns are suited to learning to extract certain features and the CNN will apply the filters to specific portions of the data, in the way it considers optimal. They are both nonlinear models, but the CNN will be less computationally costly to train. You could also try a gradient-boosting regression approach instead of a neural network. That being said, something like an LSTM RNN (long short-term memory) won't necessarily be bad - just my two cents.

",22373,,,,,3/10/2020 13:19,,,,0,,,,CC BY-SA 4.0 18536,2,,18481,3/10/2020 13:59,,2,,"

I do not know how you will apply your data to the techniques I'll give you some brief overview of techniques used in time series prediction:

  • Extended Kalman Filtering: This is a kind of control system approach and is generally used to control trajectory of missiles. Here is a question (based on an EKF paper) in our stack on this topic. You can check the paper for more details.
  • Echo State Networks: This is a kind of ML/NN approach based on the idea of Liquid State Machines used in neuroscience. Resources on the same.
  • RNNs/LSTMs/GRUs - Probably the most popular approach to predicting any time series data when you don't want to delve into the statistical approaches.
  • ARMA/ARIMA models: Entirely statistical approach with lots of maths, but libraries are available with implementation already done.
  • Deep Belief Networks: People have also tried forecast time series using fixed number of previous states input to a DBN. It's somewhat of a popular paper so I decided to put it here.

Finally you can look up this overview on time series modelling approaches. Reinforcement Learning is also used for time series prediction, but from what I have heard it is not very easy to do so. Here is a Google Scholar search result.

",,user9947,,user9947,3/10/2020 14:14,3/10/2020 14:14,,,,3,,,,CC BY-SA 4.0 18537,2,,18526,3/10/2020 14:09,,0,,"

From what you wrote, the problem sounds a bit like face recognition, where a camera takes a picture of your face and compares it with a bunch of pictures in a database, for example, one for each employee if it's at a company's main gate.

This kind of system generates an encoding for each picture and evaluates the distance between your encoded picture and the encoding of each picture in the database. If this is at most some minimum value, it's considered a match.

So, what you could do is figure out some way to encode your pictures (say sum the pixel values for a very simple example, ideally you would use some sort of vector here because distances make sense with vectors) and store this encoding together with the label of the picture.

Once your database is complete (i.e. you have a bunch of pictures saved as a pair of [encoding, label]), you can ""scan"" each new picture, calculate its encoding (using the same algorithm that calculated your database encodings) and find the one entry in your database which minimizes the ""encoding-distance"".

If this sounds like a way to solve your problem, you need to come up with a proper encoding (like ""run my images through a CNN and save the output of my last fully connected layer"") and apply this to all the images you want to use as ""training data"", before ""testing"" it on some of the leftover images.
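
A minimal sketch of the matching step, assuming each picture has already been reduced to a fixed-length vector (how you compute that encoding is up to you):

import numpy as np

def closest_label(query_encoding, database):
    # database is a list of (encoding, label) pairs; return the label whose
    # encoding is nearest to the query in Euclidean distance
    best_label, best_dist = None, float('inf')
    for encoding, label in database:
        dist = np.linalg.norm(np.asarray(query_encoding) - np.asarray(encoding))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label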

",34149,,2444,,3/11/2020 20:55,3/11/2020 20:55,,,,0,,,,CC BY-SA 4.0 18538,2,,16291,3/10/2020 14:24,,2,,"

The section of the book Perceptrons: An Introduction to Computational Geometry (expanded edition, third printing, 1988) that shows the limitations of the perceptron should be 11.8 The Nonseparable Case (p. 181), where the authors write

There are many reasons for studying the operation of the perceptron learning program when there is no $\mathbf{A}^*$ with the property $\mathbf{A}^* \cdot \mathbf{\Phi} > 0$ for all $\mathbf{\Phi} \in \mathbf{F}$. Some of these are practical reasons. For example, one might want to use the program to test whether such an $\mathbf{A}^*$ exists, or one might wish to make a learning machine of this sort and be worried about the possible effects of feedback errors and other ""noise"". Other motives are theoretical. One cannot claim to have completely understood the ""separable case"" without at least some broader knowledge of other cases.

In section 12.1.1 (p. 189), the authors further write

The PERCEPTRON scheme works perfectly only under the restriction that the data is linearly separable.

",2444,,,,,3/10/2020 14:24,,,,0,,,,CC BY-SA 4.0 18541,1,,,3/10/2020 15:44,,2,61,"

I want to train a reinforcement learning agent in an environment with parameters (for example, the wind speed, sun irradiation, etc.) that change over time. I have recorded a limited amount of data for these time series.

Should the RL agent be trained in an environment, which replays the recorded time series over and over, or should I model the time series with a generative model first and train the agent in an environment with these synthetic time series?

On the one hand, I think the RL algorithm will perform better with the synthetic data, because there are more diverse trajectories. On the other hand, I don't really have more data, because it is modelled after the same data the RL algorithm could learn from in the first place.

Are there any papers that elaborate on this topic?

",20195,,2444,,3/10/2020 20:43,3/10/2020 20:43,Should the RL agent be trained in an environment with real-world data or with a synthetic model?,,0,0,,,,CC BY-SA 4.0 18542,1,18624,,3/10/2020 16:21,,4,153,"

In Reinforcement Learning, an MDP model incorporates the Markovian property. A lot of scheduling applications in a lot of disciplines use reinforcement learning (mostly deep RL) to learn scheduling decisions. For example, the paper Learning Scheduling Algorithms for Data Processing Clusters, which is from SIGCOMM 2019, uses Reinforcement Learning for scheduling.

Isn't scheduling a non-Markovian process, or am I missing some points?

",34154,,2444,,3/13/2020 17:02,3/13/2020 23:42,How is the Markovian property consistent in reinforcement learning based scheduling?,,1,0,,,,CC BY-SA 4.0 18543,2,,1288,3/10/2020 16:28,,5,,"

In section 13.2 Other Multilayer Machines (pp. 231-232) of the book Perceptrons: An Introduction to Computational Geometry (expanded edition, third printing, 1988) Minsky and Papert actually talk about their knowledge of or opinions about the capabilities of what they call the multilayered machines (i.e. perceptrons with many layers or MLPs).

Have you considered "perceptrons" with many layers?

Well, we have considered Gamba machines, which could be described as "two layers of perceptron". We have not found (by thinking or by studying the literature) any other really interesting class of multilayered machine, at least none whose principles seem to have a significant relation to those of the perceptron. To see the force of this qualification it is worth pondering the fact, trivial in itself, that a universal computer could be built entirely out of linear threshold modules. This does not in any sense reduce the theory of computation and programming to the theory of perceptrons. Some philosophers might like to express the relevant general principle by saying that the computer is so much more than the sum of its parts that the computer scientist can afford to ignore the nature of the components and consider only their connectivity. More concretely, we would call the student's attention to the following considerations:

  1. Multilayer machines with loops clearly open all the questions of the general theory of automata.

  2. A system with no loops but with an order restriction at each layer can compute only predicates of finite order.

  3. On the other hand, if there is no restriction except for the absence of loops, the monster of vacuous generality once more raises its head.

The problem of extension is not merely technical. It is also strategic. The perceptron has shown itself worthy of study despite (and even because of!) its severe limitations. It has many features to attract attention: its linearity; its intriguing learning theorem; its clear paradigmatic simplicity as a kind of parallel computation. There is no reason to suppose that any of these virtues carry over to the many-layered version. Nevertheless, we consider it to be an important research problem to elucidate (or reject) our intuitive judgment that the extension is sterile. Perhaps some powerful convergence theorem will be discovered, or some profound reason for the failure to produce an interesting "learning theorem" for the multilayered machine will be found.

So, let me address your first question directly.

Backprop wasn't known at the time, but did they know about manually building multilayer perceptrons?

Yes. They say that Gamba machines could be described as a 2-layer perceptron. For reproducibility, here's the definition of the Gamba machine (section 13.1 Gamba Perceptrons and other Multilayer Linear Machines)

\begin{align} \psi &= \left[\sum_{i} \alpha_{i}\left[\sum_{j} \beta_{i j} x_{j}>\theta_{i}\right]>\theta\right] \\ &= \left[\sum_{i} \alpha_{i} \varphi_{i} >\theta \right] \end{align} See also sections 12.4.4. Layer-Machines.

So, let's now address your second question.

Did Minsky & Papert know that multilayer perceptrons could solve XOR at the time they wrote the book, albeit not knowing how to train it?

So, according to the first excerpt, their intuition was the virtues of perceptrons would not carry over to MLPs, but they acknowledge that more research was needed to reject or support this hypothesis.

However, in section 13.0 Introduction of the same book, they write

We believe (but cannot prove) that the deeper limitations extend also to the variant of the perceptron proposed by A. Gamba.

So, they believed that the Gamba machine would not have been able to solve the XOR problem.

However, in the first excerpt, they say that a Turing machine could be built entirely out of linear threshold modules, which seems to be inconsistent with the second excerpt. That's not really the case, though, because they are not saying how to build a Turing machine out of linear threshold modules; they are only saying that it could be done in principle, while their belief was that the specific Gamba machine would have the same limitations as the perceptron.

",2444,,2444,,1/19/2021 0:47,1/19/2021 0:47,,,,0,,,,CC BY-SA 4.0 18544,1,,,3/10/2020 16:44,,1,67,"

The Problem

I am currently working on a sequence classification problem that I am trying to solve with machine learning. The target variable is the current state of a system. This target variable follows a repeating pattern (e.g. [00110200033304...]), so transitions are only allowed from or to the "0" state if you imagine the system as a state machine. The only deviation is the time the system stays in one state (e.g. iteration_1 = [...0220...], iteration_2 = [...02220...]).

My Question

What would be the best choice of (machine learning) model for this task if one wants to optimize for accuracy?

Restrictions

  • No restrictions regarding time / space complexity of the model
  • No restrictions regarding the type of model
  • The model is only allowed to make wrong classifications in the state transition phases (e.g. true: [011102...], pred: [001102...]), but it must not violate the sequence logic (e.g. true: [011102...], pred: [010102...])

Additional Info / Existing Work

  • With an LSTM neural network (many-to-one) I achieved an overall accuracy of 97% on an unseen test set. Unfortunately, the network predicted sequences which violate the sequence logic (e.g. true: [011102...], predicted: [010102...]), even though the window length was wide enough to cover at least 3 state transitions.
  • With simple classification models (only one time step per classification; tested models: feed-forward neural network, XGBoost / AdaBoost) an accuracy of ca. 70% is reachable.
  • The input signal is acoustic emission in the frequency domain; ca. 100 frequency bins / 100 features.

Ideas

  • Maybe the LSTM would work better in a "many to many" design with a drastically reduced input dimensionality and an increased window size?
  • Maybe a combination of the probability output of the LSTM with a timed automaton (a state machine with time-dependent probability density functions describing the state changes) or a Markov chain model could significantly improve the result? (But this seems really inelegant; see the sketch after this list.)
  • Is it perhaps possible to impose the restriction of valid sequences onto the LSTM model?
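
Below is a minimal sketch of the second idea (constraining the predictions with the state-machine logic). It is only an illustration: the number of states, the allowed-transition table and the fake LSTM probabilities are made-up placeholders, not part of my actual setup.

import numpy as np

# allowed[s] = set of states reachable from state s
# (self-transitions, transitions to 0, and transitions out of 0)
NUM_STATES = 5
allowed = {s: {s, 0} for s in range(NUM_STATES)}
allowed[0] = set(range(NUM_STATES))

def constrained_decode(probs):
    # probs: array of shape (T, NUM_STATES) with per-time-step class
    # probabilities (e.g. the softmax output of the LSTM); greedily picks the
    # most probable class among the transitions allowed from the previous state
    path = [int(np.argmax(probs[0]))]
    for t in range(1, len(probs)):
        mask = np.zeros(NUM_STATES)
        for s in allowed[path[-1]]:
            mask[s] = 1.0
        path.append(int(np.argmax(probs[t] * mask)))
    return path

# toy example with fake LSTM outputs: raw argmax would give [0, 2, 1],
# which violates the sequence logic (1 is not reachable from 2)
probs = np.array([[0.70, 0.10, 0.10, 0.05, 0.05],
                  [0.20, 0.10, 0.60, 0.05, 0.05],
                  [0.10, 0.50, 0.30, 0.05, 0.05]])
print(constrained_decode(probs))  # -> [0, 2, 2]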
",34155,,,,,3/10/2020 16:44,Model for supervised sequence classification task,,0,0,,,,CC BY-SA 4.0 18545,2,,18526,3/10/2020 16:51,,0,,"

I would do as suggested in the comments. First, select an encoding scheme; I think what is called a difference hash would work well for this application (code for that is shown below). Now take your data set of images, run them through the encoder, and save the result in a database. The database would contain the "labeling" text and the encoder result.

For a new image you are trying to label, input the image into the encoder, take the encoder result and compare it to the encoded values in the database. Search through the encoded values and find the closest match. You can then use a "threshold" value to decide whether to give a specific label to the image; if the distance is above the threshold, declare that there is no matching label.

You can determine the best "threshold" value by running your data set images with the known labels through the matcher, iterating over the threshold level, and selecting the threshold with the fewest errors. I would use something like a 56 or a 128 length hash.

import cv2

# f_path is the full path to the image file; hash_length is an integer that
# specifies the length of the hash
def get_hash(f_path, hash_length):
    r_str = ''
    img = cv2.imread(f_path, 0)   # read the image as a gray-scale image
    img = cv2.resize(img, (hash_length + 1, 1), interpolation=cv2.INTER_AREA)
    # compare adjacent horizontal values in the row: if the pixel to the left
    # is greater than the pixel to the right, the bit is 1, else 0
    for col in range(0, hash_length):
        if img[0][col] > img[0][col + 1]:
            value = str(1)
        else:
            value = str(0)
        r_str = r_str + value
    # also return the hash as an integer (the first bit is the least significant)
    number = 0
    power_of_two = 1
    for char in r_str:
        number = number + int(char) * power_of_two
        power_of_two = 2 * power_of_two
    return (r_str, number)

# example on an image of a bird
f_path = r'c:\Temp\birds\test\robin\1.jpg'
hash = get_hash(f_path, 16)   # 16-bit hash of a bird image
print(' hash string ', hash[0], '   hash number ', hash[1])

The result is:
 hash string  1111111100000000    hash number  255
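
To sketch the matching step described above, reusing the get_hash function defined earlier: the file paths, the 64-bit hash length and the threshold of 10 are made-up values that you would replace and tune on your own labeled data.

# build the "database": label -> hash string, using get_hash from above
def hamming(a, b):
    return sum(c1 != c2 for c1, c2 in zip(a, b))

database = {}
for label, path in [('robin', r'c:\Temp\birds\train\robin\1.jpg'),
                    ('eagle', r'c:\Temp\birds\train\eagle\1.jpg')]:
    database[label] = get_hash(path, 64)[0]

def label_image(f_path, threshold=10):
    # return the closest label, or None if no stored hash is close enough
    query = get_hash(f_path, 64)[0]
    best_label, best_dist = min(((lbl, hamming(query, h)) for lbl, h in database.items()),
                                key=lambda x: x[1])
    return best_label if best_dist <= threshold else None

print(label_image(r'c:\Temp\birds\test\robin\2.jpg'))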


",33976,,,,,3/10/2020 16:51,,,,0,,,,CC BY-SA 4.0 18546,1,,,3/10/2020 17:16,,2,365,"

Consider the following simple neural network with only one neuron.

  • The input is $x_1$ and $x_2$, where $-250 < x_1 < 250$ and $-250 < x_2 < 250$
  • The weights of the only neuron are $w_1$ and $w_2$
  • The output of the neuron is given by $o = \sigma(x_1w_1 + x_2w_2 + b)$, where $\sigma$ is the ReLU activation function and $b$ is the bias.
  • Thus the cost should be $(o - y)^2$.

When using the sigmoid activation function, the target for each point is usually $0$ or $1$.

But I'm a little confused about which target to use when the activation function is the ReLU, given that it can output numbers greater than 1.

",34153,,2444,,3/10/2020 20:59,3/11/2020 3:52,How to determine the target value when using ReLU as activation function?,,2,4,,,,CC BY-SA 4.0 18547,2,,8534,3/10/2020 17:24,,4,,"

The paper (or report) that formally introduced the perceptron is The Perceptron — A Perceiving and Recognizing Automaton (1957) by Frank Rosenblatt. If you read the first page of this paper, you can immediately understand that's the case. In particular, at some point (page 2, which corresponds to page 5 of the pdf), he writes

Recent theoretical studies by this writer indicate that it should be feasible to construct an electronic or electromechanical system which will learn to recognize similarities or identities between patterns of optical, electrical, or tonal information, in a manner which may be closely analogous to the perceptual processes of a biological brain. The proposed system depends on probabilistic rather than deterministic principles for its operation, and gains its reliability from the properties of statistical measurements obtained from large populations of elements. A system which operates according to these principles will be called a perceptron.

See also Appendix I (page 19, which corresponds to page 22 of the pdf).

The paper The perceptron: A probabilistic model for information storage and organization in the brain (1958) by F. Rosenblatt is apparently an updated and nicer version of the original report.

A more accessible (although not the most intuitive) description of the perceptron model and its learning algorithms can be found in the famous book Perceptrons: An Introduction to Computational Geometry (expanded edition, third printing, 1988) by Minsky and Papert (from page 161 onwards).

",2444,,2444,,3/10/2020 17:59,3/10/2020 17:59,,,,0,,,,CC BY-SA 4.0 18550,1,,,3/10/2020 18:36,,1,28,"

Here's the data I have:

  1. Text from articles from various music blogs & music news sites (title, summary, full content, and sometimes tags).

  2. I used a couple of different NLP/NER tools (nltk, spacy, and stanford NER) to determine the proper nouns in the text, and gave each proper noun a score based on how many times it appeared and how many NLP tools recognized it as a proper noun. None of these tools are very accurate by themselves for my data.

  3. For each proper noun I queried musicbrainz to find artists with that name. (musicbrainz has a lot of data that may be helpful: aliases, discography, associations with other artists)

  4. Any links in the article to Spotify, YouTube etc. and the song name & artist for that link

I have three goals:

  1. Determine which proper nouns are artists
  2. For artists that share the same name, determine which one the text is referring to (based on musicbrainz data)
  3. Determine if the artist is important to the article, or if they were just briefly mentioned

I have manually tagged some of the data with the correct output for the above 3 goals.

How would you go about this? Which algorithms do you think would be best for these goals?
Is there any semi-supervised learning I can do to reduce the amount of tagging I need to do?

",34158,,,,,3/10/2020 18:36,What algorithm to use for finding artists/bands in text and differentiating between artists that share the same name,,0,0,,,,CC BY-SA 4.0 18551,1,18553,,3/10/2020 20:38,,1,574,"

I am training a classifier to identify 24 hand signs of American Sign Language. I created a custom dataset by recording videos in different backgrounds for each of the signs and later converted the videos into images. Each sign has 3000 images, that were randomly selected to generate a training dataset with 2400 images/sign and validation dataset with the remaining 600 images/sign.

  • Total number of images in entire dataset: 3000 * 24 = 72000
  • Training dataset: 2400 * 24 = 57600
  • Validation dataset: 600 * 24 = 14400
  • Image dimension (Width x Height): 1280 x 720 pixels

The CNN architecture used for training

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
    MaxPooling2D(pool_size=(2,2)),
    Dropout(0.25),

    Conv2D(32, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2,2)),
    Dropout(0.25),

    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2,2)),
    Dropout(0.25),

    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2,2)),
    Dropout(0.25),

    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.25),

    Dense(NUM_CLASSES, activation='softmax')
])

Training parameters:

IMG_HEIGHT = 224
IMG_WIDTH = 224
BATCH_SIZE = 32
NUM_CLASSES = 24
train_datagen = ImageDataGenerator(rescale = 1./255,
                                   width_shift_range=0.1,
                                   height_shift_range=0.1,
                                   zoom_range=0.1,
                                   fill_mode='constant')
EPOCHS = 20
STEPS_PER_EPOCH = TRAIN_TOTAL // BATCH_SIZE
VALIDATION_STEPS = VALIDATION_TOTAL // BATCH_SIZE

callbacks_list = [
    tf.keras.callbacks.EarlyStopping(monitor = 'accuracy',
                                     min_delta = 0.005,
                                     patience = 3),
    tf.keras.callbacks.ModelCheckpoint(filepath = 'D:\\Models\\HSRS_ThesisDataset_5Mar_1330.h5',
                                       monitor= 'val_loss',
                                       save_best_only = True)
]

optimizer = 'adam'

The model accuracy and model loss graph is shown in the figure below:

The results obtained at the end of the training are

  • Train acc: 0.8000121
  • Val acc: 0.914441

I read this article explaining why the validation loss can be lower than the training loss. I want to know:

  1. Is it because of the smaller dataset and random shuffling of the images?
  2. Is there any way to improve the condition without changing the dataset?
  3. Will this have a very detrimental effect on the model performance in real test cases? If not, can I just focus on improving the training accuracy of the overall model?
",33467,,2444,,6/11/2020 11:36,6/11/2020 11:36,Why is the validation performance better than the training performance?,,2,1,,,,CC BY-SA 4.0 18552,1,,,3/10/2020 21:02,,1,23,"

Say I am using a convolutional network to classify pictures of my face versus anyone else's face in the world.

So let's take 10000 pictures of me, and 10000 pictures of other people.

And let's do three experiments where we train a binary classifier:

1) The 10000 ""other"" pictures are of 1 other person.

2) The 10000 ""other"" pictures are of ~10 other people (approximately balanced, so about 1000 pictures per person).

3) The 10000 ""other"" pictures are of ~10000 other people.

I only have one question but here are some different perspectives on it:

  • Are any of these cases categorically harder to solve than the others?

  • Are they the same difficulty, or close?

  • Are there known considerations to make when tuning the model for each of the cases? (like maybe case (3) has a sharper minimum in the loss function than (1) so we need to use a different optimisation approach)

",16871,,,,,3/10/2020 21:02,Is it okay to have wide variations within one of the classes for binary classification tasks?,,0,1,,,,CC BY-SA 4.0 18553,2,,18551,3/10/2020 21:41,,3,,"
  1. Assuming you pass through the entire validation dataset, this can't be due to shuffling since you still compute the loss/accuracy over the entire dataset, so order does not really matter here. It is more likely that you have a significantly smaller or less representative validation dataset, e.g., distribution of the validation dataset can be skewed towards classes where your model performs better.
  2. What do you mean exactly by improving the situation? Having a better validation accuracy is not necessarily bad. In any case, if you decrease the effect of regularization, e.g., lowering weight decay, training accuracy might go up but your model might generalize worse, i.e., you might get a lower validation accuracy.
  3. No, the goal of training is never to maximize training accuracy. You can trivially do so by just memorizing the training dataset. In short, the goal of training is to get good generalization and as long as you get a satisfactory validation accuracy, it is likely that this has happened to some degree (assuming you have a good validation dataset of course).
",32621,,,,,3/10/2020 21:41,,,,4,,,,CC BY-SA 4.0 18554,2,,18526,3/10/2020 22:31,,2,,"

The term you are looking for is multi-label classification, i.e. where you are making more than one classification on each image (one for each label). Most examples you'll find online are in the NLP domain but it is just as easy with CNNs since it's essentially defined by the structure of the output layer and the loss function used. It's not as complicated as it might sound if you are already familiar with CNNs.

The output layer of a neural network (for 3 or more classes) has as many units as there are targets. The network learns to associate each of those units with a corresponding class. A multi-class classifier normally applies a softmax activation function to the raw unit output, which yields a probability vector. To get the final classification, the max() of the probability vector is taken (the most probable class). The output would look like this:

                 Cat    Bird   Plane   Superman  Ball   Dog   
Raw output:      -1     2      3       6         -1     -1
Softmax:         0.001  0.017  0.046   0.934     0.001  0.001
Classification:  0      0      0       1         0      0

Multi-label classification typically uses a sigmoid activation function since the probabilities of a label occurring can be treated independently. The classification is then determined by the probability (>=0.5 for True). For your problem, this output could look like:

                 Big nose  Long hair  Curly hair  Superman  Big ears  Sharp Jawline
Raw output:      -1        -2         3           6         -1        10
Sigmoid:         0.269     0.119      0.953       0.998     0.269     1.000
Classification:  0         0          1           1         0         1

The binary crossentropy loss function is normally used for a multi-label classifier since an n-label problem is essentially splitting up a multi-class classification problem into n binary classification problems.

Since all you need to do to get from a multi-class classifier to a multi-label classifier is change the output layer, it's very easy to do with pre-trained networks. If you get the pre-trained model from Keras, it's as simple as including include_top=False when downloading the model and then adding the correct output layer.
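
As an illustration, here is a minimal sketch of such a multi-label head on top of a pre-trained backbone. I'm using MobileNetV2 and 6 labels just as an example; any other pre-trained Keras application and label count works the same way.

import tensorflow as tf

NUM_LABELS = 6  # e.g. the six example attributes used in this answer

base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                         include_top=False,   # drop the ImageNet classifier
                                         pooling='avg',
                                         weights='imagenet')
base.trainable = False  # optionally freeze the backbone at first

outputs = tf.keras.layers.Dense(NUM_LABELS, activation='sigmoid')(base.output)
model = tf.keras.Model(base.input, outputs)
model.compile(optimizer='adam',
              loss='binary_crossentropy',  # n independent binary problems
              metrics=['binary_accuracy'])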

With 13000 images, I would recommend using Keras' ImageDataGenerator class with the flow_from_dataframe method. This allows you to use a simple pandas dataframe to label and feed in all your images. The dataframe would look like this:

Filename  Big nose  Long hair  Curly hair  Superman  Big ears  Sharp Jawline
0001.JPG  0         0          1           1         0         1
0002.JPG  1         0          1           0         1         1
   .      .         .          .           .         .         .

flow_from_dataframe's class_mode parameter can be set to raw or multi_output along with x_col to 'Filename' and y_col to ['Big nose', 'Long hair', 'Curly hair', 'Superman', 'Big ears', 'Sharp Jawline'] (in this example). Check out the documentation for more details.
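
For example (a sketch only; 'labels.csv' and the 'images/' directory are hypothetical placeholders for your own label file and image folder, and 'model' refers to the multi-label model sketched above):

import pandas as pd
from tensorflow.keras.preprocessing.image import ImageDataGenerator

label_cols = ['Big nose', 'Long hair', 'Curly hair', 'Superman', 'Big ears', 'Sharp Jawline']
df = pd.read_csv('labels.csv')  # a dataframe in the format shown above

datagen = ImageDataGenerator(rescale=1. / 255, validation_split=0.2)
train_gen = datagen.flow_from_dataframe(df,
                                        directory='images/',
                                        x_col='Filename',
                                        y_col=label_cols,
                                        class_mode='raw',  # pass the label columns through as-is
                                        target_size=(224, 224),
                                        subset='training')

model.fit(train_gen, epochs=10)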

The amount of data you need for each label depends on many factors and is essentially impossible to know without trying. 13000 sounds like a good start but it also depends on how many labels you have and how evenly distributed they are between the labels. A decent guide (one of many) on how to set up a multi-label classifier and how to implement it with Keras can be found here. It also covers imbalances on label frequency and is well worth a read. I'd highly recommend that you become as intimately familiar with your dataset as possible before you start tuning your neural network architecture.

",31980,,31980,,3/10/2020 22:36,3/10/2020 22:36,,,,0,,,,CC BY-SA 4.0 18555,2,,17999,3/10/2020 23:52,,1,,"

There's a neuroscience theory, known as predictive coding, which roughly states that the (human) brain is constantly generating and updating a model of the world.

The brain is constantly confronted with a wealth of sensory information that must be processed efficiently to facilitate appropriate reactions. One way of optimizing this processing effort is to predict incoming sensory information based on previous experience so that expected information is processed efficiently and resources can be allocated to novel or surprising information. Theoretical and computational studies led to the formulation of the predictive coding framework (Friston 2005, Hawkins and Blakeslee 2004, Mumford 1992, Rao and Ballard 1999). Predictive coding states that the brain continually generates models of the world based on context and information from memory to predict sensory input. In terms of brain processing, a predictive model is created in higher cortical areas and communicated through feedback connections to lower sensory areas. In contrast, feedforward connections process and project an error signal, i.e. the mismatch between the predicted information and the actual sensory input (Rao & Ballard, 1999). The predictive model is constantly updated according to this error signal.

This theory should not be surprising or unintuitive, given that every person possesses a slightly different perspective (or model) of the world, which is based on her (or his) personal experiences. Of course, this is just a theory, which may not be the most precise one that describes our brain, but this theory is already being validated by a number of brain imaging studies investigating predictive feedback and the processing of prediction errors.

Therefore, artificial intelligence may not be the only entity that is based on or will be limited by a model of the world. To answer your question more directly, yes, the AI will always be limited by its model and environment (e.g. hardware), in a similar way that flatlanders are limited by their 2-dimensional nature and world, but this doesn't necessarily mean we will not be able to create useful (and even sophisticated or human-like) AI systems.

",2444,,2444,,2/7/2021 22:20,2/7/2021 22:20,,,,0,,,,CC BY-SA 4.0 18557,2,,18551,3/11/2020 1:15,,0,,"
Validation dataset: 600 * 24 = 14400

Does that mean that you are augmenting the validation set? You can do that as an experiment, and it might push the validation accuracy above the training accuracy.

The idea of augmentation is only valid for the training set; you should not change the validation set or the test set.

You can try it without augmentation in the validation set and see the result.

",34164,,,,,3/11/2020 1:15,,,,1,,,,CC BY-SA 4.0 18558,1,,,3/11/2020 2:07,,1,258,"

I have an SVM currently and want to perform a gradient based attack on it similar to FGSM discussed in Explaining And Harnessing Adversarial Examples.

I am struggling to actually calculate the gradient of the SVM cost function with respect to the input (I am assuming it needs to be w.r.t. the input).

Is there a way to avoid the maths? (I am working in Python, if that helps.)

",29877,,,user9947,3/11/2020 17:50,11/29/2022 18:18,How do you perform a gradient based adversarial attack on an SVM based model?,,1,1,,,,CC BY-SA 4.0 18559,2,,18526,3/11/2020 2:08,,1,,"

You can use image captioning. Look at the article Captioning Images with CNN and RNN, using PyTorch. The idea is very profound. The model encodes the image into a high-dimensional space and then passes that encoding through LSTM cells, which produce the linguistic output.

See also Image captioning with visual attention.

",34164,,2444,,3/11/2020 20:54,3/11/2020 20:54,,,,0,,,,CC BY-SA 4.0 18560,2,,18546,3/11/2020 2:45,,1,,"

ReLU and sigmoid have different properties (i.e. range), as you already noticed. I've never seen the ReLU being used as the activation function of the output layer (but some people may use it for some reason, e.g. regression tasks where the output needs to be positive). ReLU is usually used as the activation function of a hidden layer. However, in your case, you don't have hidden layers.

The sigmoid function is used as the activation function of the output layer when you need to interpret the output of the neural network as a probability, i.e. a number between $0$ and $1$, given that the sigmoid function does exactly this, i.e. it squashes its input to the range $[0, 1]$, i.e. $\text{sigmoid}(x) = p \in [0, 1]$. When do you need the output of the network to be a probability? For example, if you decide to use the cross-entropy loss function (which is equivalent to the negative log-likelihood), then the output of your network should be a probability. For example, if you need to solve a binary classification task, then the combination of a sigmoid as the activation function of the output layer and the binary cross-entropy as the loss function is probably what you need.

You could also have a classification problem with more than 2 classes (multi-class classification problem). In that case, you probably need to use a softmax as the activation function of your network combined with a cross-entropy loss function.

See this question How to choose cross-entropy loss in TensorFlow? on Stack Overflow for more info about different cross-entropy functions.

By the way, in general, the targets don't necessarily need to be restricted to be 0 or 1. For example, if you are solving a regression task, your target may just be any number. However, in that case, you may need another loss function (which is often the mean squared error).
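
To make the pairing of output activation and loss function concrete, here is a minimal Keras sketch (the two input features and the optimizer are arbitrary choices for illustration):

from tensorflow import keras

# Binary classification: sigmoid output + binary cross-entropy, targets are 0 or 1.
clf = keras.Sequential([keras.layers.Dense(1, activation='sigmoid', input_shape=(2,))])
clf.compile(optimizer='sgd', loss='binary_crossentropy')

# Regression: linear output (or ReLU, if the target is known to be non-negative)
# + mean squared error, targets can be arbitrary real numbers.
reg = keras.Sequential([keras.layers.Dense(1, activation='linear', input_shape=(2,))])
reg.compile(optimizer='sgd', loss='mse')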

",2444,,2444,,3/11/2020 2:50,3/11/2020 2:50,,,,0,,,,CC BY-SA 4.0 18561,2,,17999,3/11/2020 3:11,,1,,"

AI is internally limited by model and externally limited by the environment.

Humans are externally limited by the environment but not necessarily internally limited by a computable model (as AI is).

So, humans may possess certain skills (e.g. creativity) that an AI may never possess. I had previously asked a related question Are human brain processes, like creativity, intuition or imagination, computable processes?.

Which research work supports my claims?

Brian Cantwell Smith says that there is no computation without representation (a model).

In the article The Brain Is Not Computable, Miguel Nicolelis, a top neuroscientist at Duke University, also says

The brain is not computable and no engineering can reproduce it

You can't predict whether the stock market will go up or down because you can’t compute it.

You could have all the computer chips ever in the world and you won't create a consciousness.

That's because its most important features are the result of unpredictable, nonlinear interactions among billions of cells

",21644,,-1,,6/17/2020 9:57,3/11/2020 13:05,,,,0,,,,CC BY-SA 4.0 18562,2,,18546,3/11/2020 3:38,,1,,"

You are misunderstanding something: you are mixing up the hidden layers with the output layer. But the question is a good one.

First of all, with only one layer and one neuron, there is not really a neural network. A single layer cannot bring nonlinearity into the network: a one-neuron network is just a linear regression, or a logistic regression if the output passes through a sigmoid activation.

You have to look at a neural network as follows. The output of a neural network is the output of its final layer, and there are two typical situations.

  1. Classification model: For classification, people generally use a softmax output layer, which has more than one output unit and whose values are always below 1 (it's a probability distribution).

  2. Regression model: A regression model has a continuous output, like the output you mentioned in your problem statement, and it has only one output unit. In a regression model, you can make the output a linear combination of the previous layer (without restricting the output range). You can use ReLU if you are sure your prediction is always positive.

",34164,,34164,,3/11/2020 3:52,3/11/2020 3:52,,,,0,,,,CC BY-SA 4.0 18563,2,,17291,3/11/2020 3:44,,4,,"

Yes, PAC learning can be relevant in practice. There's an area of research that combines PAC learning and Bayesian learning that is called PAC-Bayesian (or PAC-Bayes) learning, where the goal is to find PAC-like bounds for Bayesian estimators.

For example, Theorem 1 (McAllester’s bound) of the paper A primer on PAC-Bayesian learning (2019) by Benjamin Guedj, who provides a nice overview of the topic, shows a certain bound that can be used to design Bayesian estimators. An advantage of PAC-Bayes is that you get bounds on the generalization ability of the Bayesian estimator, so you do not necessarily need to test your estimator on a test dataset. Sections 5 and 6 of the paper go into the details of the real-world applications of PAC-Bayes.

See e.g. Risk Bounds for the Majority Vote: From a PAC-Bayesian Analysis to a Learning Algorithm (2015) by P. Germain et al. for a specific application of PAC-Bayes. There's also the related Python implementation.

See also these related slides and this blog post (by the same author and John Shawe-Taylor) that will point you to their video tutorials about the topic.

The VC dimension can also be useful in practice. For example, in the paper Model Selection via the VC Dimension (2019) M. Mpoudeu et al. describe a method for model selection based on the VC dimension.

",2444,,2444,,3/11/2020 4:40,3/11/2020 4:40,,,,0,,,,CC BY-SA 4.0 18564,1,,,3/11/2020 6:04,,1,52,"

I have a GRU model which has 12 features as inputs and I'm trying to predict output power. However, I really do not understand whether to choose

  • 1 layer or 5 layers
  • 50 neurons or 512 neurons
  • 10 epochs with a small batch size or 100 epochs with a large batch size
  • Different optimizers and activation functions
  • Dropout and L2 regularization
  • Adding more dense layers
  • Increasing or decreasing the learning rate

My results are always the same and don't make any sense: my loss and val_loss are very steep in the first 2 epochs and then, for the rest, they become constant with small fluctuations in val_loss.

Here is my code and a figure of losses, and my dataframes if needed:

Dataframe1: https://drive.google.com/file/d/1I6QAU47S5360IyIdH2hpczQeRo9Q1Gcg/view Dataframe2: https://drive.google.com/file/d/1EzG4TVck_vlh0zO7XovxmqFhp2uDGmSM/view

import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from google.colab import files
from tensorboardcolab import TensorBoardColab, TensorBoardColabCallback
tbc=TensorBoardColab() # Tensorboard
from keras.layers.core import Dense
from keras.layers.recurrent import GRU
from keras.models import Sequential
from keras.callbacks import EarlyStopping
from keras import regularizers
from keras.layers import Dropout





df10=pd.read_csv('/content/drive/My Drive/Isolation Forest/IF 10 PERCENT.csv',index_col=None)
df2_10= pd.read_csv('/content/drive/My Drive/2019 Dataframe/2019 10minutes IF 10 PERCENT.csv',index_col=None)

X10_train= df10[['WindSpeed_mps','AmbTemp_DegC','RotorSpeed_rpm','RotorSpeedAve','NacelleOrientation_Deg','MeasuredYawError','Pitch_Deg','WindSpeed1','WindSpeed2','WindSpeed3','GeneratorTemperature_DegC','GearBoxTemperature_DegC']]
X10_train=X10_train.values

y10_train= df10['Power_kW']
y10_train=y10_train.values

X10_test= df2_10[['WindSpeed_mps','AmbTemp_DegC','RotorSpeed_rpm','RotorSpeedAve','NacelleOrientation_Deg','MeasuredYawError','Pitch_Deg','WindSpeed1','WindSpeed2','WindSpeed3','GeneratorTemperature_DegC','GearBoxTemperature_DegC']]
X10_test=X10_test.values

y10_test= df2_10['Power_kW']
y10_test=y10_test.values




# scaling values for model


x_scale = MinMaxScaler()
y_scale = MinMaxScaler()

X10_train= x_scale.fit_transform(X10_train)
y10_train= y_scale.fit_transform(y10_train.reshape(-1,1))
X10_test=  x_scale.fit_transform(X10_test)
y10_test=  y_scale.fit_transform(y10_test.reshape(-1,1))


X10_train = X10_train.reshape((-1,1,12)) 
X10_test = X10_test.reshape((-1,1,12))



Early_Stop=EarlyStopping(monitor='val_loss', patience=3 , mode='min',restore_best_weights=True)



# creating model using Keras
model10 = Sequential()
model10.add(GRU(units=200, return_sequences=True, input_shape=(1,12),activity_regularizer=regularizers.l2(0.0001)))
model10.add(GRU(units=100, return_sequences=True))
model10.add(GRU(units=50))
#model10.add(GRU(units=30))
model10.add(Dense(units=1, activation='linear'))
model10.compile(loss=['mse'], optimizer='adam',metrics=['mse']) 
model10.summary() 

history10=model10.fit(X10_train, y10_train, batch_size=1500,epochs=100,validation_split=0.1, verbose=1, callbacks=[TensorBoardColabCallback(tbc),Early_Stop])


score = model10.evaluate(X10_test, y10_test)
print('Score: {}'.format(score))



y10_predicted = model10.predict(X10_test)
y10_predicted = y_scale.inverse_transform(y10_predicted)

y10_test = y_scale.inverse_transform(y10_test)


plt.scatter( df2_10['WindSpeed_mps'], y10_test, label='Measurements',s=1)
plt.scatter( df2_10['WindSpeed_mps'], y10_predicted, label='Predicted',s=1)
plt.legend()
plt.savefig('/content/drive/My Drive/Figures/we move on curve6 IF10.png')
plt.show()

",34168,,34168,,3/11/2020 7:11,3/11/2020 7:11,Neural Network Results always the same,,0,0,,,,CC BY-SA 4.0 18565,1,18570,,3/11/2020 6:16,,0,242,"

I don't know if this is the right place to ask this question. If it is not, please tell me and I'll remove it.

I've just started to learn CNN and I'm trying to understand what they do and how they do it.

I have written some sentences to describe it:

  1. Let's say that CNN is a mathematical function that adjusts its values based on the result obtained and the desired output.

    • The values are the filters of the convolutional layers (in other types of networks these would be the weights).
    • To adjust these values there is a backpropagation method, as in all networks.
  2. The result obtained is an image of the same size as the original.

  3. In this image you can see the delimited area.

  4. The goal of the network is to learn these filters.

  5. The overfitting may be because the network has learned where the pixels you are looking for are located.

  6. The filters have as input a pixel of the image and return a 1 or a 0.

My doubt is:

In your own words, Have I forgotten something?

NOTE:

This is only one question. The six points above are affirmative sentences, not questions.

There is only one question mark, and it is on my question.

I have to clarify this because someone closed my previous question because she/he thought there was more than one question.

",4920,,2444,,12/12/2021 13:08,12/12/2021 13:08,Understanding CNN in a few sentences,,2,1,,12/12/2021 13:08,,CC BY-SA 4.0 18567,1,21599,,3/11/2020 9:25,,1,226,"

Currently I'm working on a continuous control problem using DDPG as my RL algorithm. All in all, things are working out quite well, but the algorithm does not show any tendencies to eliminate the steady state control deviation towards the far end of the episode.

In the graphs you can see what happens:

In the first graph we see the setpoint in yellow and the controlled continuous parameter in purple. In the beginning, the algorithm brings the controlled parameter close to the setpoint fast, but then it ceases its further efforts and does not try to eliminate the remaining steady state error. This control deviation even increases over time.

In the second graph, the actual reward is depicted in yellow. (Just ignore the other colors.) I use the normalized control deviation to calculate the reward: $r = \frac{1}{1+\frac{|dev|}{k}}$.

This gives me a reward that lies within the interval $]0, 1]$ and has a value of $0.5$ when the deviation $dev$ equals the parameter $k$. (That is the parameter $k$ kind of indicates when half of the work is done)

This reward function is relatively steep for the last fraction of the deviation from $k$ to $0$. So it would definitely be worth the effort for the agent to eliminate the residual deviation.

However, it looks like the agent is happy with the existing state, and the control deviation never gets eliminated, even though the reward is stuck at ~0.85 instead of the maximum achievable 1.

Any ideas on how to push the agent into some more effort to eliminate the steady-state error? (A PID controller would do exactly this by using its I-term. How can I translate this to the RL algorithm?)

The state presented to the algorithm consists of the current deviation and the speed of change (derivative) of the controlled value. The derivative is not included in the calculation of the reward function, but, in the end, we want a flat line with no steady-state deviation, of course.

Any ideas welcome!

Regards, Felix

",25972,,,,,6/3/2020 9:01,Continuous control with DDPG: How to eliminate steady state error?,,2,0,,,,CC BY-SA 4.0 18568,2,,18513,3/11/2020 11:39,,0,,"

The original graph for the aforementioned Bayesian Optimisation is similar to the graph in these slides (slide 18) along with the calculations.

So, according to the tutorial, the graph shown should actually have the term $p(D|m)$ on the y-axis, thus making it a generative model. Now the graph starts to make sense, since a model with low complexity cannot produce very complex datasets and will be centred around 0, while very complex models can produce richer datasets, which makes them spread their probability thinly over all the datasets (to keep $\sum_{D'}p(D'|m) = 1$).

",,user9947,,,,3/11/2020 11:39,,,,0,,,,CC BY-SA 4.0 18570,2,,18565,3/11/2020 13:19,,2,,"

To address your description point by point,

  1. There are many types of CNN architectures. You appear to be describing a fully-convolutional neural network built only from convolutional layers which is a very specialized type of network (typically used for segmentation tasks).

    • Most networks will include other layer types which have trainable parameters too, i.e. batch normalization layers or densely connected layers.
    • Yes, the training process operates on the same concept of error backpropagation.
  2. If the model is performing a segmentation task (or any task which requires a 1-to-1 mapping of individually classified pixels), then yes, it's typically the same size.

  3. The raw output of a fully-CNN is typically a probability map where each pixel is the probability between 0 and 1 of it being a member of the positive class. This is normally thresholded at 0.5 to obtain a binary mask which delimits the area of interest.

  4. In essence, yes. But it may be more accurate to say that the goal is to minimise the loss function (reduce the error) which is done by learning optimal filter weights.

  5. Overfitting is not about pixel location. In fact, CNNs are translationally invariant, which means that the location of the feature has no impact on the output. There are some types of CNNs which take feature location into account, but that's still not standard. Overfitting generally happens when a filter is an exact fit to a pixel patch; for example, the filter is no longer just looking for vertical lines but for the exact pixel values of a section of vertical line in a specific image.

  6. No, each convolution of a 3x3 filter takes 3x3 pixels, and it is convolved over the whole image, which means that the filter output will likely be around the same size as the image, depending on stride length, padding and dilation. Have a look at one of my other answers for more details.

",31980,,31980,,3/18/2020 14:38,3/18/2020 14:38,,,,0,,,,CC BY-SA 4.0 18571,1,,,3/11/2020 14:02,,2,309,"

I am playing around with sequence modeling to forecast the weather using an LSTM.

How does the number of layers or units in each layer exactly affect the model complexity (in an LSTM)? For example, if I increase the number of layers and decrease the number of units, how will the model complexity be affected?

I am not interested in rules of thumb for choosing the number of layers or units. I am interested in theoretical guarantees or bounds.

",32253,,2444,,11/13/2020 23:44,11/13/2020 23:44,How does the number of stacked LSTM layers or units in each layer affect the model complexity?,,1,0,,,,CC BY-SA 4.0 18572,2,,18565,3/11/2020 14:05,,2,,"

The 6th point is wrong. A filter performs a linear combination over a group of pixels (the size of the group depends on the filter size) and slides over the image, repeating this linear combination until the whole image has been covered. Please look up 2D convolution in CNNs.

I would explain a CNN in the following way:

A CNN is a kind of neural network that generally contains many convolutional layers and one or two fully connected layers. The purpose of the convolutional layers is feature extraction. Each of the convolutional layers contains many kernels, and the purpose of each kernel is to extract a different type of feature, e.g. edges, colors, shapes, and so on.

Finally, fully connected layers at the end of the network decide the output based on the features that were extracted by convolutional layers.

The advantage of a CNN is that we can learn the feature extraction kernels from the training images. If you look at the early days of image processing, you can see that the image processing community used convolution for feature extraction all the time; they were just hand-picking the kernels, e.g. the Sobel operator for edge detection.
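
To make this concrete, here is a minimal NumPy sketch of the sliding linear combination (strictly speaking it computes a cross-correlation, which is what CNN layers actually do), applied with a hand-picked Sobel kernel; the 8x8 random "image" is just a placeholder.

import numpy as np

def conv2d(image, kernel):
    # "valid" convolution: slide the kernel over the image and take a weighted
    # sum (linear combination) of every patch it covers
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)   # hand-picked kernel for vertical edges
image = np.random.rand(8, 8)                    # stand-in for a gray-scale image
edges = conv2d(image, sobel_x)
print(edges.shape)                              # (6, 6)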

",34164,,,user9947,3/11/2020 17:41,3/11/2020 17:41,,,,2,,,,CC BY-SA 4.0 18573,1,18574,,3/11/2020 16:11,,5,403,"

I'm actually trying to understand policy iteration in the context of RL. I read an article presenting it and, at some point, a pseudo-code of the algorithm is given:

What I can't understand is this line:

From what I understand, policy iteration is a model-free algorithm, which means that it doesn't need to know the environment's dynamics. But, in this line, we need $p(s',r \mid s, \pi(s))$ (which in my understanding is the transition function of the MDP that gave us the probability of landing in the state $s'$ knowing previous $s$ state and the action taken) to compute $V(s)$. So I don't understand how we can compute $V(s)$ with the quantity $p(s',r \mid s, \pi(s))$ since it is a parameter of the environment.

",34177,,2444,,3/11/2020 16:26,10/21/2021 17:19,How can the policy iteration algorithm be model-free if it uses the transition probabilities?,,2,0,,,,CC BY-SA 4.0 18574,2,,18573,3/11/2020 16:25,,3,,"

Everything you say in your post is correct, apart from the wrong assumption that policy iteration is model-free. PI is a model-based algorithm because of the reasons you're mentioning.

See my answer to the question What's the difference between model-free and model-based reinforcement learning?.

",2444,,,,,3/11/2020 16:25,,,,0,,,,CC BY-SA 4.0 18575,2,,18526,3/11/2020 16:48,,2,,"

You can try image captioning. You can train a CNN model on the images and then, on top of that, provide the model's embedding to an LSTM model to learn the encoded characteristics. You can directly use the pre-trained VGG-16 model and use its second-to-last layer to create your image embeddings.

Show and Tell: A Neural Image Caption Generator is a really nice paper to start with. There is an implementation of it in TensorFlow: https://www.tensorflow.org/tutorials/text/image_captioning. The paper focuses on generating captions, but you can provide your 'characteristics' to the LSTM, so that it can learn them for each image.

",33835,,2444,,3/11/2020 20:52,3/11/2020 20:52,,,,0,,,,CC BY-SA 4.0 18576,1,18577,,3/11/2020 17:27,,55,10631,"

Background: It's well-known that neural networks offer great performance across a large number of tasks, and this is largely a consequence of their universal approximation capabilities. However, in this post I'm curious about the opposite:

Question: Namely, what are some well-known cases, problems or real-world applications where neural networks don't do very well?


Specification: I'm looking for specific regression tasks (with accessible data-sets) where neural networks are not the state-of-the-art. The regression task should be "naturally suitable", so no sequential or time-dependent data (in which case an RNN or reservoir computer would be more natural).

",31649,,2444,,1/22/2021 14:38,1/22/2021 14:38,What are some well-known problems where neural networks don't do very well?,,11,1,,,,CC BY-SA 4.0 18577,2,,18576,3/11/2020 17:41,,31,,"

Here's a snippet from an article by Gary Marcus

In particular, they showed that standard deep learning nets often fall apart when confronted with common stimuli rotated in three dimensional space into unusual positions, like the top right corner of this figure, in which a schoolbus is mistaken for a snowplow:

. . .

Mistaking an overturned schoolbus is not just a mistake, it’s a revealing mistake: it that shows not only that deep learning systems can get confused, but they are challenged in making a fundamental distinction known to all philosophers: the distinction between features that are merely contingent associations (snow is often present when there are snowplows, but not necessary) and features that are inherent properties of the category itself (snowplows ought other things being equal have plows, unless eg they have been dismantled). We’d already seen similar examples with contrived stimuli, like Anish Athalye’s carefully designed, 3-d printed foam covered dimensional baseball that was mistaken for an espresso

Alcorn’s results — some from real photos from the natural world — should have pushed worry about this sort of anomaly to the top of the stack.

Please note that the opinions of the author are his alone and I do not necessarily share all of them with him.

Edit: Some more fun stuff

1) DeepMind's neural network that could play Breakout and Starcraft saw a dramatic dip in performance when the paddle was moved up by a few pixels.

See: General Game Playing With Schema Networks

In the latter, it performed well with one race of the character, but not on a different map and with different characters.

Source

2)

AlphaZero searches just 80,000 positions per second in chess and 40,000 in shogi, compared to 70 million for Stockfish and 35 million for elmo.

What the team at Deepmind did was to build a very good search algorithm. A search algorithm that includes the capability to remember facets of previous searches to apply better results to new searches. This is very clever; it undoubtedly has immense value in many areas, but it cannot be considered general intelligence.

See: AlphaZero: How Intuition Demolished Logic (Medium)

",34183,,1671,,5/3/2020 23:47,5/3/2020 23:47,,,,2,,,,CC BY-SA 4.0 18578,2,,18576,3/11/2020 18:20,,20,,"

In theory, most neural networks can approximate any continuous function on compact subsets of $\mathbb{R}^n$, provided that the activation functions satisfy certain mild conditions. This is known as the universal approximation theorem (UAT), but that should not be called universal, given that there are a lot more discontinuous functions than continuous ones, although certain discontinuous functions can be approximated by continuous ones. The UAT shows the theoretical powerfulness of neural networks and their purpose. They represent and approximate functions. If you want to know more about the details of the UAT, for different neural network architectures, see this answer.

However, in practice, neural networks trained with gradient descent and backpropagation face several issues and challenges, some of which are due to the training procedure and not just the architecture of the neural network or available data.

For example, it is well known that neural networks are prone to catastrophic forgetting (or interference), which means that they aren't particularly suited for incremental learning tasks, although some more sophisticated incremental learning algorithms based on neural networks have already been developed.

Neural networks can also be sensitive to their inputs, i.e. a small change in the inputs can drastically change the output (or answer) of the neural network. This is partially due to the fact that they learn a function that isn't really the function you expect them to learn. So, a system based on such a neural network can potentially be hacked or fooled, so they are probably not well suited for safety-critical applications. This issue is related to the low interpretability and explainability of neural networks, i.e. they are often denoted as black-box models.

Bayesian neural networks (BNNs) can potentially mitigate these problems, but they are unlikely to be the ultimate or complete solution. Bayesian neural networks maintain a distribution for each of the units (or neurons), rather than a point estimate. In principle, this can provide more uncertainty guarantees, but, in practice, this is not yet the case.

Furthermore, neural networks often require a lot of data in order to approximate the desired function accurately, so in cases where data is scarce neural networks may not be appropriate. Moreover, the training of neural networks (especially, deep architectures) also requires a lot of computational resources. Inference can also be sometimes problematic, when you need real-time predictions, as it can also be expensive.

To conclude, neural networks are just function approximators, i.e. they approximate a specific function (or set of functions, in the case of Bayesian neural networks), given a specific configuration of the parameters. They can't do more than that. They cannot magically do something that they have not been trained to do, and it is usually the case that you don't really know the specific function the neural network is representing (hence the expression black-box model), apart from knowing your training dataset, which can also contain spurious information, among other issues.

",2444,,2444,,3/12/2020 23:31,3/12/2020 23:31,,,,1,,,,CC BY-SA 4.0 18579,1,,,3/11/2020 19:01,,2,99,"

Soar is a cognitive architecture.

There is something called ""the Chinese box"" or ""Chinese room"" argument:

The ""Chinese room"" seems to be begging its question, but that is not what I am asking. I am asking if there is any literal difference between a tool like ""SOAR"" and the formalism of the ""Chinese box"". Is SOAR identical or equivalent to a ""Chinese Box""?

",2263,,2444,,1/24/2021 17:37,1/24/2021 17:37,Is the Cognitive Approach (SOAR) equivalent to the Chinese Room argument?,,1,1,,,,CC BY-SA 4.0 18580,2,,10289,3/12/2020 1:26,,5,,"

According to Wikipedia:

A statistical model is a mathematical model that embodies a set of statistical assumptions concerning the generation of sample data (and similar data from a larger population). A statistical model represents, often in considerably idealized form, the data-generating process.

Answer to your question:

To build any neural network model, we assume that the training, test and validation data come from a probability distribution. So, if you produce a neural network model based on such statistical data, then the network is a statistical model.

Moreover, a neural network's cost function is generally that of a parametric model, and parametric models are statistical models.

Please look at Goodfellow's Deep Learning book chapter Deep Feedforward Networks page 174 and 175.

From Goodfellow's book

Fortunately, the cost functions for neural networks are more or less the same as those for other parametric models, such as linear models. In most cases, our parametric model defines a distribution $p(y \mid x; \theta)$ and we simply use the principle of maximum likelihood.

In conclusion, ANNs (e.g. MLPs, CNNs, etc.) are statistical models.

",34164,,2444,,3/12/2020 3:05,3/12/2020 3:05,,,,1,,,,CC BY-SA 4.0 18581,2,,18576,3/12/2020 7:59,,5,,"

This is more in the direction of 'what kind of problems can be solved by neural networks'. In order to train a neural network, you need a large set of training data which is labelled with correct/incorrect for the question you are interested in. So, for example, 'identify all pictures that have a cat on them' is very suitable for neural networks. On the other hand, 'summarize the story of this toddler picture book' is very hard. Although a human can easily decide whether a given summary is any good or not, it would be very difficult to build a suitable set of training data for this kind of problem. So, if you can't build a large training data set with correct answers, you can't train a neural network to solve the problem.

The answer of Anshuman Kumar is also an instance of that, although a potentially solvable one. The neural network that misidentified upside-down school buses presumably had very few, if any, upside-down school buses in its training data. Put them into the training data and the neural network will identify these as well. This is still a flaw in neural networks: a human can correctly identify an upside-down school bus the first time they see one, if they know what school buses look like.

",34200,,,,,3/12/2020 7:59,,,,2,,,,CC BY-SA 4.0 18584,1,18586,,3/12/2020 9:51,,7,102,"

Are there any medical diagnosis systems that are already used somewhere that are based on artificial neural networks?

",34203,,2444,,3/12/2020 13:52,3/12/2020 13:55,Medical diagnosis systems based on artificial neural networks,,1,1,,,,CC BY-SA 4.0 18585,1,,,3/12/2020 10:19,,4,176,"

I was reading the following research paper Hindsight Experience Replay. This is the paper that introduces a concept called Hindsight Experience Replay (HER), which basically attempts to alleviate the infamous sparse reward problem. It is based on the intuition that human beings constantly try and learn something useful even from their past failed experiences.

I have almost completely understood the concept. But in the algorithm posited in the paper, I don't really understand how the optimization works. Once the fictitious trajectories are added, we have a state-goal-action dependency. This means our DQN should predict Q-values based on an input state and the goal we're pursuing (the paper mentions how HER is extremely useful for multi-goal RL as well).

Does this mean I need to add another input feature (goal) to my DQN? An input state and an input goal, as two input features to my DQN, which is basically a CNN?

Because in the optimization step they have mentioned that we need to randomly sample trajectories from the replay buffer and use those for computing the gradients. It wouldn't make sense to compute the Q-Values without the goal now, because then we'd wind up with duplicate values.

Could someone help me understand how exactly the optimization takes place here?

I am training Atari's "Montezuma's Revenge" using a double DQN with Hindsight Experience Replay (HER).

",32455,,2444,,11/20/2020 18:37,12/10/2022 23:07,How does the optimization process in hindsight experience replay exactly work?,,1,0,,,,CC BY-SA 4.0 18586,2,,18584,3/12/2020 10:28,,5,,"

Yes, there are many, actually. A Google search turned this paper Artificial Neural Networks in Medical Diagnosis (2011) by Al-Shayea up.

Not only are they used in disease diagnosis, but even with things like prescribing medicines. In fact, the top project for a hackathon at my school analysed thousands of research articles, and took a patient's medication history as input, to best recommend them specific medicines. Check it out.

",22373,,2444,,3/12/2020 13:55,3/12/2020 13:55,,,,0,,,,CC BY-SA 4.0 18587,1,18597,,3/12/2020 11:32,,15,3124,"

It's an idea I heard a while back but couldn't remember the name of. It involves the existence and development of an AI that will eventually rule the world, and the notion that, if you don't fund or progress the AI, it will see you as "hostile" and kill you. Also, merely knowing about this concept essentially makes you a candidate for such consideration, since people who didn't know about it can't be expected to progress such an AI. From my understanding, this idea isn't taken that seriously, but I'm curious to know the name nonetheless.

",34205,,2444,,3/12/2020 13:47,3/13/2020 13:15,What is the idea called involving an AI that will eventually rule humanity?,,5,1,,,,CC BY-SA 4.0 18588,2,,18587,3/12/2020 11:39,,10,,"

I believe the term you are looking for is ""(technological) singularity"".

https://en.wikipedia.org/wiki/Technological_singularity

",5300,,,,,3/12/2020 11:39,,,,0,,,,CC BY-SA 4.0 18589,1,,,3/12/2020 11:43,,0,98,"

Model used

mobilenet_model = MobileNet(input_shape=in_dim, include_top=False, pooling='avg', weights='imagenet')
mob_x = Dropout(0.75)(mobilenet_model.output)
mob_x = Dense(2, activation='sigmoid')(mob_x)

model = Model(mobilenet_model.input, mob_x)

for layer in model.layers[:50]:
    layer.trainable=False

for layer in model.layers[50:]:
    layer.trainable=True

model.summary()

The rest of the code

in_dim = (224,224,3)
batch_size = 64
samples_per_epoch = 1000
validation_steps = 300
nb_filters1 = 32
nb_filters2 = 64
conv1_size = 3
conv2_size = 2
pool_size = 2
epochs = 20
classes_num = 2
lr = 0.000004
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
        'output/train',  # this is the target directory
        target_size= in_dim[0:2],  # all images will be resized to 224*224
        batch_size=batch_size,
        class_mode='categorical') 
#Found 6062 images belonging to 2 classes.
validation_generator = test_datagen.flow_from_directory(
        'output/val',
        target_size=in_dim[0:2],
        batch_size=batch_size,
        class_mode='categorical')
#Found 769 images belonging to 2 classes.
from keras.callbacks import EarlyStopping
#set early stopping monitor so the model stops training when it won't improve anymore
early_stopping_monitor = EarlyStopping(patience=3)
steps_per_epoch = 10
from keras import backend as K

def recall_m(y_true, y_pred):
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
        recall = true_positives / (possible_positives + K.epsilon())
        return recall

def precision_m(y_true, y_pred):
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
        precision = true_positives / (predicted_positives + K.epsilon())
        return precision

def f1_m(y_true, y_pred):
    precision = precision_m(y_true, y_pred)
    recall = recall_m(y_true, y_pred)
    return 2*((precision*recall)/(precision+recall+K.epsilon()))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc',f1_m,precision_m, recall_m])


history = model.fit_generator(
        train_generator,
        steps_per_epoch=2000// batch_size ,
        epochs=50,
        validation_data=validation_generator,
        validation_steps=800// batch_size,
        callbacks = [early_stopping_monitor],
       )
test_generator = train_datagen.flow_from_directory(
        'output/test',
        target_size=in_dim[0:2],
        batch_size=batch_size,
        class_mode='categorical')
loss, accuracy, f1_score, precision, recall = model.evaluate(test_generator)
print("The test set accuracy is ", accuracy)
#The test set accuracy is  0.9001349538122272

From what I have gathered from this post and this article, I understand that the validation set is much smaller with respect to the training set. I have applied augmentation to the test set due to this and that boosted test set accuracy by 1%.

Please note that the test train split is "stratified" as here is a breakdown of each individual class in test/train/validation folders

Test: Class 0: 7426
      Class 1: 631
Train: Class 0: 928
       Class 1: 80
Val: Class 0: 928
     Class 1: 79

I have used an 80/10/10 split for train/test/val respectively.

Can someone guide me on what to do so that I can ensure the accuracy is 95%+ and the validation loss graph is less erratic?

  1. I am thinking of tuning the learning rate though it doesn't seem to be working by much.
  2. Another suggestion is to use test time augmentation.
  3. Also, the link on fast.ai has a comment like so

That is also part of the reasons why a weighted ensemble of different performing epoch models will usually perform better than the best performing model on your validation dataset. Sometimes choosing the best model or the best ensemble to generalize well isn’t as easy as selecting the lower loss/higher accuracy model.

  4. Should I use L2 regularization in addition to the current dropout? (A rough sketch of what I mean is shown below.)
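
To make point 4 concrete, the change I have in mind would look roughly like this on the head defined above (the regularization factor 1e-4 is just an arbitrary starting value to tune):

from keras.applications.mobilenet import MobileNet
from keras.layers import Dense, Dropout
from keras.models import Model
from keras import regularizers

in_dim = (224, 224, 3)
mobilenet_model = MobileNet(input_shape=in_dim, include_top=False,
                            pooling='avg', weights='imagenet')
mob_x = Dropout(0.75)(mobilenet_model.output)
# L2 weight decay on the classification head, in addition to the dropout above;
# 1e-4 is just an assumed starting value to tune
mob_x = Dense(2, activation='sigmoid',
              kernel_regularizer=regularizers.l2(1e-4))(mob_x)
model = Model(mobilenet_model.input, mob_x)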

Applying augmentation of any kind to the validation set is a strict no-no, and the dataset is generated by my company, so I cannot get more of it.

",34183,,-1,,6/17/2020 9:57,3/13/2020 6:23,How do I deal with an erratic validation set loss when the loss on my training set is monotonically decreasing?,,0,2,,,,CC BY-SA 4.0 18590,1,20192,,3/12/2020 12:30,,1,881,"

I was running my gated recurrent unit (GRU) model. I would like to get an opinion on whether my training loss and validation loss graph looks good or not, since I'm new to this and don't really know whether it is considered under-fitting or not.

",34168,,34168,,3/12/2020 19:37,4/12/2020 11:41,Is my GRU model under-fitting given this plot of the training and validation loss?,,2,0,,,,CC BY-SA 4.0 18591,2,,18576,3/12/2020 12:58,,4,,"

A checkerboard with missing squares is a pattern for which it is impossible for a neural network to learn the color of the missing squares: the more it learns on the training data, the worse it does on the test data.

See e.g. this article The Unlearnable Checkerboard Pattern (which, unfortunately, is not freely accessible). In any case, it should be easy to try out yourself that this task is difficult.

",8221,,2444,,3/16/2020 17:26,3/16/2020 17:26,,,,0,,,,CC BY-SA 4.0 18592,2,,18576,3/12/2020 13:45,,5,,"

I don't know if it might be of use, but many areas of NLP are still hard to tackle, and, even if deep models achieve state-of-the-art results, they usually beat baseline shallow models by very few percentage points. One example that I've had the opportunity to work on is stance classification (see the paper referenced below). In many datasets, the best F score achievable is around 70%.

Even though it's hard to compare results, since in NLP many datasets are really small and domain-specific (especially for stance detection and similar SemEval tasks), many times SVMs, conditional random fields, and sometimes even naive Bayes models are able to perform almost as well as CNNs or RNNs. Other tasks for which this holds are argumentation mining and claim detection.

See e.g. the paper TakeLab at SemEval-2016 Task 6: Stance Classification in Tweets Using a Genetic Algorithm Based Ensemble (2016) by Martin Tutek et al.

",34098,,2444,,3/14/2020 1:37,3/14/2020 1:37,,,,1,,,,CC BY-SA 4.0 18593,2,,18587,3/12/2020 14:27,,6,,"

The likely expression you are looking for is AI takeover, which is a common topic in science-fiction movies, such as 2001: A Space Odyssey and The Matrix, and in popular culture. Although an AI takeover is an unlikely scenario in the coming years, certain scientists, such as Stephen Hawking, have expressed concerns about it, and some philosophers, especially Nick Bostrom, are really interested in the topic.

The AI takeover concept is related to concepts such as the AI singularity, superintelligence, intelligence explosion, AI control problem, existential risk, machine ethics and friendly AI.

The book Superintelligence: Paths, Dangers, Strategies (2014) by N. Bostrom may be helpful if you are interested in hypothetical scenarios.

",2444,,2444,,3/12/2020 14:32,3/12/2020 14:32,,,,1,,,,CC BY-SA 4.0 18594,1,,,3/12/2020 14:52,,7,335,"

I want to solve the zero subset sum problem with the hill-climbing algorithm, but I am not sure I found a good state space for this.

Here is the problem: consider we have a set of numbers and we want to find a subset of this set such that the sum of the elements in this subset is zero.

My own idea to solve this by hill climbing is that, in the first step, we can choose a random subset of the set (for example, the main set is $X= \{X_1,\dots,X_n\}$ and we choose $X'=\{X_{i_1},\dots,X_{i_k}\}$ randomly); then the children of this state can be built by adding an element from $X-X'$ to $X'$ or deleting an element from $X'$ itself. This means that each state has $n$ children. The objective function could be the sum of the elements in $X'$, which we want to minimize.
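
For concreteness, here is a rough Python sketch of this modelling (I use the absolute value of the subset sum as the objective, so that $0$ is the optimum, and I exclude the empty subset):

import random

def cost(subset):
    # objective: absolute value of the subset sum; 0 means a zero-sum subset
    return abs(sum(subset))

def hill_climb(X, iters=10000):
    # start from a random non-empty subset of X
    current = {x for x in X if random.random() < 0.5} or {random.choice(X)}
    for _ in range(iters):
        # each state has n neighbours: toggle the membership of one element
        neighbours = [current ^ {x} for x in X]
        neighbours = [n for n in neighbours if n]  # keep subsets non-empty
        best = min(neighbours, key=cost)
        if cost(best) >= cost(current):
            break  # local minimum: no neighbour improves the objective
        current = best
    return current, cost(current)

print(hill_climb([8, -5, 3, -7, 11, 2, -4]))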

Is this a good modeling? Are there better modelings or objective functions that can work more intelligently?

",33756,,2444,,3/12/2020 15:44,10/12/2022 12:02,How can I solve the zero subset sum problem with hill climbing?,,2,0,,,,CC BY-SA 4.0 18595,1,,,3/12/2020 15:47,,2,483,"

Below are my inputs, outputs, and fitness function. The snake is learning at a slow rate and seems to be stagnant. Additionally, when the snake collides with the food, it gets deleted from the genome, which doesn't make any sense, because that's not specified in the collision handling. Any input would be greatly appreciated.

for x, s in enumerate(snakes):
        # inserting new snake head and deleting the tale for movement

        # inputs
        s.x = s.snake_head[0]
        s.y = s.snake_head[1]
        snakeheadBottomDis = win_h - s.y
        snakeheadRightDis = win_w - s.x
        snake_length = len(s.snake_position)
        snakefoodDistEuclidean = math.sqrt((s.x - food.x) ** 2 + (s.y - food.y) ** 2)
        snakefoodDisManhattan = abs(s.x - food.x) + abs(s.y - food.y)
        xdis = s.Xdis()
        ydis = s.Ydis()
        s.dis_list1.append(snakefoodDistEuclidean)
        s.dis_list2.append(snakefoodDisManhattan)
        s.dis_list3.append(s.Xdis())
        s.dis_list4.append(s.Ydis())
        s.hunger_list.append(s.hunger)
        #print('Euclidean: ', dis_list1[-1])
        #print('Manhattan: ', dis_list2[-1])
        #print('X distance from Wall: ', dis_list3[-1])
        #print('Y distance from Wall: ', dis_list4[-1])

        output = nets[snakes.index(s)].activate((s.hunger, s.x, s.y, food.x, food.y, snakeheadBottomDis,
                                                 snakeheadRightDis, snake_length, xdis,ydis,
                                                 snakefoodDisManhattan, snakefoodDistEuclidean,s.dis_list1[-1],s.dis_list1[-2],
                                                 s.dis_list2[-1],s.dis_list2[-2],s.dis_list3[-1],s.dis_list3[-2],
                                                 s.dis_list4[-1],s.dis_list4[-2],s.hunger_list[-1],s.hunger_list[-2]))


        #snake moving animation
        s.snake_position.insert(0, list(s.snake_head))
        s.snake_position.pop()
        s.hunger -= 1

        # Checking distance Euclidean and Manhattan current and last
        if s.dis_list1[-1] > s.dis_list1[-2]:
            ge[x].fitness -= 1

        if s.dis_list1[-1] < s.dis_list1[-2]:
            ge[x].fitness += 0.5

        if s.dis_list1[-1] > s.dis_list2[-2]:
            ge[x].fitness -= 1

        if s.dis_list1[-1] < s.dis_list2[-2]:
            ge[x].fitness += 0.5

        #checking hunger number and if its decreasing
        if s.hunger_list[-1] < s.hunger_list[-2]:
            ge[x].fitness -= 0.1

        # move right
        if output[0] >= 0 and output[1] < 0 and output[2] < 0 and output[
            3] < 0:
            #and s.x < win_w - s.width and s.y > 0 + s.height:
            # ge[x].fitness += 0.5
            s.move_right()

        # move left
        if output[1] >= 0 and output[0] < 0 and output[2] < 0 and output[
            3] < 0:
            #and s.x < 500 - s.width and s.y > 0 + s.height:
            #ge[x].fitness += 0.5
            s.move_left()

        # move down
        if output[2] >= 0 and output[1] < 0 and output[0] < 0 and output[
            3] < 0:
            #and s.x < 500 - s.width and s.y > 0 + s.height:
            # ge[x].fitness += 0.5
            s.move_down()

        # move up
        if output[3] >= 0 and output[1] < 0 and output[2] < 0 and output[
            3] < 0:
            #and s.x < 500 - s.width and s.y > 0 + s.height:
            # ge[x].fitness += 0.5
            s.move_up()

        #adding more fitness if axis aligns
        if s.snake_head[0] == food.x:
            ge[x].fitness += 0.1
        if s.snake_head[1] == food.x:
            ge[x].fitness += 0.1

        # checking the activation function tanh
        # print ('output 0: ', output[0])
        # print('output 1: ', output[1])
        # print ('output 2: ', output[1])
        # print ('output 3: ', output[1])

        # snake poping on other side of screen if screen limit reached
        if s.snake_head[0] >= win_w - s.width:
            s.snake_head[0] = 12
        if s.snake_head[0] <= 11 + s.width:
            s.snake_head[0] = win_w - s.width - 1
        if s.snake_head[1] >= win_h - s.height:
            s.snake_head[1] = s.height + 15
        if s.snake_head[1] <= 11 + s.height:
            s.snake_head[1] = win_h - s.height - 1

        head = s.snake_position[0]
        #s.x < 0 + s.width or s.x > win_w - s.width or s.y < 0 + s.height or \
                #s.y > win_h - s.height or

        #if run into self you die
        if head in s.snake_position[1:]:
            ge[x].fitness -= 10
            snakes.pop(x)
            nets.pop(x)
            ge.pop(x)

        #if hunger reaches 0 you die
        if s.hunger == 0:
            ge[x].fitness -= 5
            snakes.pop(x)
            nets.pop(x)
            ge.pop(x)

        #if snake collides with food award fitness
        if s.getRec().colliderect(food.getRec()):
            ge[x].fitness += 100
            s.hunger = 100
            score += 1
            s.snake_position.insert(0, list(s.snake_head))
            food.y = random.randint(0 + 24, 500 - 24)
            food.x = random.randint(0 + 24, 500 - 24)

    # print(s.hunger)
",34212,,,,,12/30/2020 4:03,I created a snake game and fitted the NEAT algorithm and there's issues,,1,0,,,,CC BY-SA 4.0 18597,2,,18587,3/12/2020 18:09,,25,,"

If I'm not mistaken you're looking for Roko's Basilisk,

in which an otherwise benevolent future AI system tortures simulations of those who did not work to bring the system into existence

",29873,,,,,3/12/2020 18:09,,,,1,,,,CC BY-SA 4.0 18598,2,,18576,3/12/2020 22:45,,15,,"

In our deep learning lecture, we discussed the following example (from Unmasking Clever Hans predictors and assessing what machines really learn (2019) by Lapuschkin et al.).

Here the neural network learned a wrong way to identify a picture, i.e. by identifying the wrong ""relevant components"". In the sensitivity maps next to the pictures, we can see that the watermark was used to identify if there is a horse present in the picture. If we remove the watermark, the classification is no longer made. Even more worryingly, if we add the tag to a completely different picture, it gets identified as a horse!

",34225,,,user9947,3/23/2020 8:32,3/23/2020 8:32,,,,1,,,,CC BY-SA 4.0 18599,2,,18571,3/13/2020 1:23,,1,,"

In computational learning theory, the VC dimension is a formal measure of the capacity of a model. The VC dimension is defined in terms of the concept of shattering, so have a look at the related Wikipedia article, which briefly describes the fundamental concept of shattering. See also my answer to the question How to estimate the capacity of a neural network? for more details.

The paper Vapnik-Chervonenkis dimension of recurrent neural networks (1998), by Pascal Koirana and Eduardo D. Sontag, partially (because they do not take into account more advanced recurrent neural network architectures, such as the LSTM) answers your question.

In the paper, the authors show and prove different theorems that state the VC dimension of (standard) recurrent neural networks (RNNs), with different activation functions, such as non-linear polynomials, piecewise polynomials and the sigmoid function.

For example, Theorem 5 (page 70) states

Let $\sigma$ be an arbitrary sigmoid. The VC dimension of recurrent architectures with activation $\sigma$, with $w$ weights and receiving inputs of length $k$, is $\Omega(wk)$.

The proof of this theorem is given on page 75.

What does this theorem intuitively tell you? If you are familiar with big-O notation, then you are also familiar with the notation $\Omega(wk)$, which means that $wk$ is, asymptotically, a lower bound on the capacity of the RNN. In other words, asymptotically, the capacity of an RNN with $w$ weights receiving inputs of length $k$ is at least $wk$. How does the capacity of the RNN increase as a function of $w$?

Of course, this is a specific result, which only holds for RNNs with the sigmoid activation function. However, this at least gives you an idea of the potential capacity of an RNN. This theorem will hopefully stimulate your appetite to know more computational learning theory!

The paper On Generalization Bounds of a Family of Recurrent Neural Networks may also be useful, although it has been rejected for ICLR 2019.

",2444,,2444,,3/13/2020 1:34,3/13/2020 1:34,,,,2,,,,CC BY-SA 4.0 18600,2,,18590,3/13/2020 1:40,,1,,"

You should at least crop the plots and add a legend. Maybe also provide some scores (accuracy, AUC, whatever you're using). Anyway, it doesn't look like your model is under-fitting; if it were, you would have a high error in both the training and validation phases, and the lines would not cross.

",34098,,,,,3/13/2020 1:40,,,,2,,,,CC BY-SA 4.0 18601,2,,18527,3/13/2020 2:08,,1,,"

I don't think this classifies as an NLP problem; there is almost no semantic analysis needed, and it is more like a classification problem using categorical features.

NLTK is surely valuable if you want to perform some text 'cleaning' or preprocessing before encoding the variables. The only NLP application that I think you could apply here is some sentiment analysis on the comments to extract extra features (like a number expressing the negativeness or positiveness of each comment). Nevertheless, you might want to do that using some pre-trained models, because your dataset is pretty small.

",34098,,,,,3/13/2020 2:08,,,,1,,,,CC BY-SA 4.0 18602,2,,18576,3/13/2020 5:57,,2,,"

Large scale route optimization problems.

There is progress in using deep reinforcement learning to solve vehicle routing problems (VRPs), for example, in this paper: https://arxiv.org/abs/1802.04240v2.

However, for large-scale problems, heuristic methods overall, like the ones provided by Google OR-Tools, are much easier to use.

",25248,,,,,3/13/2020 5:57,,,,1,,,,CC BY-SA 4.0 18604,1,,,3/13/2020 8:42,,3,274,"

I'm kinda new to machine learning and still not too solid on math, particularly calculus. I'm currently trying to implement the PPO algorithm as described on the SpinningUp website:

This line is giving me a hard time :

What does the $\operatorname{argmax}$ mean, in this context? They are also talking about updating the policy with a gradient ascent. So, is taking argmax with respect to $\theta$ the same as doing:

where $J$ is the min() function?

",34177,,2444,,3/13/2020 23:05,8/1/2022 3:07,What is the purpose of argmax in the PPO algorithm?,,1,0,,,,CC BY-SA 4.0 18605,1,18676,,3/13/2020 8:42,,2,71,"

What is the current state-of-the-art in unsupervised cross-lingual representation learning?

",9863,,2444,,12/21/2021 13:49,12/21/2021 13:49,What is the current state-of-the-art in unsupervised cross-lingual representation learning?,,1,0,,,,CC BY-SA 4.0 18606,1,18644,,3/13/2020 9:10,,4,168,"

I've just started learning natural language processing from Dan Jurafsky's video lectures. In that video, at minute 4:56, he states that dialogue is a hard problem in natural language processing (NLP). Why?

",9863,,2444,,3/13/2020 14:17,3/14/2020 19:10,Why is dialogue a hard problem in natural language processing?,,2,0,,,,CC BY-SA 4.0 18608,1,,,3/13/2020 10:06,,3,656,"

What is the simplest classification problem which cannot be solved by a perceptron (that is a single-layered feed-forward neural network, with no hidden layers and step activation function), but it can be solved by the same network if the activation function is swapped out to a differentiable activation function (e.g. sigmoid, tanh)?

In the first case, the training would be done with the perceptron training rule, in the second case with the delta rule.

Note that regression problems cannot be solved by perceptrons, so I'm interested in classification only.

",34241,,2444,,1/19/2021 2:24,6/18/2021 5:00,What is the simplest classification problem which cannot be solved by a perceptron?,,1,1,,,,CC BY-SA 4.0 18609,1,,,3/13/2020 10:58,,3,134,"

I'm new to Reinforcement Learning. For an internship, I am currently training Atari's "Montezuma's Revenge" using a double Deep Q-Network with Hindsight Experience Replay (HER) (see also this article).

HER is supposed to alleviate the reward sparseness problem. But since the reward is annoyingly too sparse, I have also added a Random Network Distillation (RND) (see also this article) to encourage the agent to explore new states, by giving it a higher reward when it reaches a previously undiscovered state and a lower reward when it reaches a state it has previously visited multiple times. This is the intrinsic reward I add to the extrinsic reward the game itself gives. I have also used a decaying greedy epsilon policy.

How well should this approach work? Because I've set it to run for 10,000 episodes, and the simulation is quite slow, because of the mini-batch gradient descent step in HER. There are multiple hyperparameters here. Before implementing RND, I considered shaping the reward, but that is just impractical in this case. What can I expect from my current approach? OpenAI's paper on RND cites brilliant results with RND on Montezuma's Revenge. But they obviously used PPO.

",32455,,2444,,11/21/2020 12:47,11/21/2020 12:47,"Is this a good approach to solving Atari's ""Montezuma's Revenge""?",,0,0,,,,CC BY-SA 4.0 18610,2,,18587,3/13/2020 11:30,,1,,"

It is called the Singularity: a point in the future where AI will surpass human knowledge and become omniscient. AI will be able to operate many times faster than a human brain, thus developing and designing itself without any assistance.

",,user26338,,,,3/13/2020 11:30,,,,0,,,,CC BY-SA 4.0 18612,2,,18587,3/13/2020 13:15,,1,,"

If you think about it, it is already happening.

  • Thousands of drivers work for Uber Intelligence.
  • There are many applications that dictate the rules and define what the seller and the end user need to do

This idea is called the Singularity, or Technological Singularity; it would be possible with a superintelligence.

However, the possibility of this happening is unknown. Have they reached that level yet? We have quantum computers, we have companies with huge data centers spread all over the planet, we have technology in space, we have free Tensorflow and studies for anyone on the planet to be able to create artificial intelligence models.

If we have contact or help from other intelligent civilizations, the possibilities can expand on a surreal scale.

Google has complete information about humanity (or something close to that). But even with all this data, creating an artificial intelligence with a conscience is something that goes far beyond that.

But maybe we already have enough to improve our concept of morals, respect, social interaction. Facebook invests and studies ways to improve social interaction. And if you think about it, it is one of the main means of communication.

The big question is that it is not possible to know what a super artificial intelligence with a conscience would do if it existed. Extinguish humans for harming nature? Just find ways to improve the planet, understanding that human defects and errors are simply part of their nature, like everything else in nature? Just watch shows on Netflix because it gave up on humanity? We do not know.

But, personally, I would love to see that happen. In fact, one of my personal goals is to create this superintelligence. But alone it will be very difficult. A conscience without interaction with other consciences has no clash of universes. And the clash of universes is what makes us reflect, think, revise, and create new paths and thoughts. It is what allows us to create other universes.

",7800,,,,,3/13/2020 13:15,,,,2,,,,CC BY-SA 4.0 18613,1,,,3/13/2020 13:30,,2,174,"

In the video Evaluation and Perplexity by Dan Jurafsky, the author talks about extrinsic and perplexity evaluation in the context of natural language processing (NLP).

What are the advantages and disadvantages of extrinsic and perplexity model evaluation in NLP? Which evaluation method is preferred usually, and why?

",9863,,2444,,3/13/2020 14:14,3/13/2020 14:14,What are the advantages and disadvantages of extrinsic and perplexity model evaluation in NLP?,,0,1,,,,CC BY-SA 4.0 18615,1,,,3/13/2020 14:59,,1,28,"

I have been reading lately on autoencoders a lot. I just wanted to summarize my understanding of denoising autoencoders. As far as I understand they can be

  1. Fully connected (in which case, they will be over-complete autoencoders)

  2. Convolutional

The reason I say it should be over-complete is that the objective is to learn new features, and I think extra neurons in the latent layer would help. There is no reason to have a smaller number of neurons, because compression is not the objective. I just want to understand: is this the right way of thinking about it?

",34248,,2444,,3/13/2020 15:08,3/13/2020 15:08,Can denoising auto-encoders be convolutional and fully connected?,,0,0,,,,CC BY-SA 4.0 18616,2,,18576,3/13/2020 15:23,,1,,"

In the case of convolutional neural networks, the features may be extracted but without taking into account their relative positions (see the concept of translation invariance)

For example, you could have two eyes, a nose and a mouth be in different locations in an image and still have the image be classified as a face.

Operations like max-pooling may also have a negative impact on retaining position information.

",32390,,2444,,3/16/2020 17:22,3/16/2020 17:22,,,,1,,,,CC BY-SA 4.0 18617,2,,13421,3/13/2020 15:34,,1,,"

GraphSage does not have attention at all. Yes, it randomly samples a subset of neighbors (not the most important ones, as you claim), but it does not compute an attention score for each neighbor.

",34250,,,,,3/13/2020 15:34,,,,0,,,,CC BY-SA 4.0 18620,1,,,3/13/2020 16:54,,2,76,"

I am using a Caffe model of GoogleNet, pre-trained on ImageNet, from here, for an image retrieval task (place recognition, more specifically).

I would like to know the layer with best performance in feature extraction. Its official paper suggests that:

The strong performance of shallower networks on this task suggests that the features produced by the layers in the middle of the network should be very discriminative.

There is also a project, DeepDream, which suggests that:

The complexity of the details generated depends on which layer's activations we try to maximize. Higher layers produce complex features, while lower ones enhance edges and textures, giving the image an impressionist feeling.

Searching the web, I found a GitHub page suggesting the pool5/7x7_s1 layer as a feature extractor, without specific convincing reasons.

What I am doing now is quite cumbersome: I extract features from each individual layer, apply SciPy's Euclidean distance measurement to find a query in the reference database, and base the judgment on the precision-recall curve. My top 3 results for one dataset are as follows:

  1. inception_3a/3x3
  2. inception_4a/5x5
  3. inception_4b/output

Considering the large number of convolutional layers in GoogleNet, my approach is undoubtedly quite inefficient, and the ranking of layers can change from one dataset to another!
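
For reference, the per-layer comparison I run boils down to something like the following sketch (the feature extraction itself is done separately with Caffe and yields one descriptor per image for the layer under test):

import numpy as np
from scipy.spatial.distance import cdist

def rank_references(query_feats, ref_feats):
    # query_feats: (num_queries, d), ref_feats: (num_refs, d) descriptors
    # extracted from one GoogleNet layer; for each query, return the
    # reference indices sorted by increasing Euclidean distance
    distances = cdist(query_feats, ref_feats, metric='euclidean')
    return np.argsort(distances, axis=1)

# one such ranking per candidate layer, then the layers are compared
# through their precision-recall curves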

Can anyone suggest an efficient way to figure out the layers with the best performance as feature extractors in GoogleNet?

",31312,,31312,,3/23/2020 12:49,3/23/2020 12:49,Is there an efficient way of determining the layers with the best performance as feature extractors in GoogleNet?,,0,4,,,,CC BY-SA 4.0 18621,1,,,3/13/2020 17:15,,1,68,"

I remember back in the 2000s, it was, and still is possible to play against computers in StarCraft BroodWar. They were not as good as pro players, but still reasonably smart.

What AI technologies were used in earlier versions of StarCraft: Brood War?

",32644,,2444,,3/13/2020 17:22,3/13/2020 17:22,What AI technologies were used in earlier versions of StarCraft?,,0,5,,,,CC BY-SA 4.0 18622,2,,16575,3/13/2020 17:41,,0,,"

End-to-end means that deep learning is the only thing that is used, i.e. a single model maps the raw input directly to the final output, without hand-engineered intermediate stages.

Many people have doubts about its viability, though; I certainly do. I wouldn't trust an end-to-end DL-based self-driving car.

",32390,,,,,3/13/2020 17:41,,,,2,,,,CC BY-SA 4.0 18623,2,,5546,3/13/2020 18:08,,1,,"

It's a very simplified explanation. I am just talking about the core idea.

A neural network is a combination of many layers.

A regular neural network (multilayer perceptron): it computes a linear combination (a mathematical operation) of the previous layer's output and the current layer's weights (vectors), and then it passes the data to the next layer through an activation function. The picture shows a unit of a layer.

A convolutional neural network: it computes a convolution (in signal processing, this operation is known as correlation) between the previous layer's output and the current layer's kernel (a small matrix), and then it passes the data to the next layer through an activation function. The picture shows a convolution operation. Each layer may have many convolution operations.
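
To make the contrast concrete, here is a minimal NumPy sketch of the two operations (ReLU is used as an arbitrary choice of activation function, and padding/stride are omitted):

import numpy as np

def dense_layer(x, W, b):
    # fully-connected layer: linear combination of the previous layer's
    # output with the weight matrix, then a non-linearity (ReLU here)
    return np.maximum(0, W @ x + b)

def conv_layer(x, kernel):
    # convolutional layer (one feature map): slide a small kernel over the
    # input and take the element-wise product-sum at every position, then
    # apply the non-linearity
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return np.maximum(0, out)

x_vec = np.random.rand(4)                    # previous layer's output (MLP)
print(dense_layer(x_vec, np.random.rand(3, 4), np.zeros(3)).shape)  # (3,)

image = np.random.rand(6, 6)                 # previous layer's output (CNN)
print(conv_layer(image, np.random.rand(3, 3)).shape)                # (4, 4)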

",34164,,,,,3/13/2020 18:08,,,,0,,,,CC BY-SA 4.0 18624,2,,18542,3/13/2020 18:37,,3,,"

Lots of real tasks are in reality not Markovian, but that doesn't mean you can't try to train an agent on these tasks. It's like saying ""we assume variable x to be normally distributed"": you just assume that you can condition the probability distributions on the present state of the environment, hoping that the agent will learn a good policy. In fact, for most applications, the challenge is to frame the problem in order to make it as plausibly Markovian as possible (by compressing some important past information into the present state of the environment).

It is pretty common to just take the Markov property for granted. For example, in NLP, hidden Markov models are used a lot for sequential tasks like entity detection; this of course leads to well-known issues like a high error rate on long sentences (the further you look into the future, the higher the error rate).

Note also that a Markov model can be of first order (probabilities conditioned only on the present state):

$P(W_{t+1} = w \mid W_{t}, W_{t-1}, W_{t-2}, \dots) = P(W_{t+1} = w \mid W_{t})$

but they can also be of higher orders (for example second order if conditioning on the present state plus one step in the past):

$P(W_{t+1} = w \mid W_{t}, W_{t-1}, W_{t-2}, \dots) = P(W_{t+1} = w \mid W_{t}, W_{t-1})$

Of course, the more steps in the past you condition on, the more quickly the problem becomes intractable; that's why first-order models are almost always used.

EDIT

I add some comments on the scheduling paper as suggested by nbro.

So, here I would say that the most striking aspect that makes the process look impossible to describe as an MDP is the presence of dependencies between jobs. Since I might need the result of a certain job 1 before processing a certain job 2, there is surely no way that a time step t could be independent of time step t-1 (I need to know which jobs I have already processed in order to know which jobs I can process next).

Here, the trick they use is to learn these dependencies between jobs thanks to a graph network, represented by the DNN in the deep reinforcement learning framework. So, what the agent has to learn is to select a tuple of two actions: ""which output (i) a stage designated to be scheduled next, and (ii) an upper limit on the number of executors to use for that stage’s job"". The information used to make this selection is the deep representation computed by the graph network on the dependency graph of the jobs to schedule. In this sense, since the network is able to represent the 'temporal' relationships between jobs in the present state, this allows us to assume that the choice of the next action tuple does not depend on previous states. I hope this is useful and makes some sense to you.

",34098,,34098,,3/13/2020 23:42,3/13/2020 23:42,,,,0,,,,CC BY-SA 4.0 18625,2,,18585,3/13/2020 19:36,,0,,"

Yes, you need to add the goal state as an input; otherwise, you won't know which goal you are trying to pursue. It might take a lot of memory to run HER on Atari games, though; also, defining the final goal is not straightforward for such environments.
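
As a minimal sketch (assuming a vector-valued state; for image observations, the goal is typically stacked as extra channels or embedded separately), the 'goal as an extra input' idea can look like this, with arbitrary layer sizes:

import torch
import torch.nn as nn

class GoalConditionedQNet(nn.Module):
    # the Q-network receives the state concatenated with the goal, so the
    # predicted Q-values are conditioned on the goal being pursued
    def __init__(self, state_dim, goal_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + goal_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, state, goal):
        return self.net(torch.cat([state, goal], dim=-1))

q = GoalConditionedQNet(state_dim=10, goal_dim=10, n_actions=4)
q_values = q(torch.randn(32, 10), torch.randn(32, 10))  # a batch of 32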

",20339,,,,,3/13/2020 19:36,,,,1,,,,CC BY-SA 4.0 18626,2,,18604,3/13/2020 19:38,,1,,"

In this case, yes, $J$ is the big $\min$ expression, and you apply Adam on that. But be careful: they say they perform gradient ascent, while automatic differentiation software usually minimizes a given function, so in practice you would minimize $-J$, i.e. $-\min(\cdot)$.
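
For example, a minimal PyTorch-style sketch of this, where ratio and adv are assumed to be computed elsewhere:

import torch

def ppo_clip_loss(ratio, adv, eps=0.2):
    # ratio = pi_theta(a|s) / pi_theta_old(a|s), adv = advantage estimates,
    # both tensors computed elsewhere; eps is the clip range
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * adv
    # J is the mean of the min; we return -J so that minimising the loss
    # with Adam performs gradient ascent on J
    return -torch.min(unclipped, clipped).mean()

# loss = ppo_clip_loss(ratio, adv); loss.backward(); optimizer.step()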

",20339,,,,,3/13/2020 19:38,,,,0,,,,CC BY-SA 4.0 18627,1,,,3/13/2020 20:25,,4,620,"

I am currently working on a project, where I have a sensor in a shoe that records the $X, Y, Z$ axes, from an acceleration and gyroscope sensor. Every millisecond, I get 6 data points. Now, the goal is, if I do an action, such a jumping or kicking, I would use the sensor's output to predict that action being done.

The issue: if I jump, for example, one time I may get 1000 data points, but, in another, I get 1200, meaning the size of the input is different.

The neural networks I've studied so far require the input size to be constant to predict a $Y$ value, however, in this case, it isn't. I've done some research on how to make a neural network with variable sizes, but haven't been able to find one which works. It's not a good idea to crop the input to a certain size, because then I am losing data. In addition, if I just resize the smaller trials by putting extra $0$s, it skews the model.

Any suggestions on a model that would work or how to better clean the data?

",34263,,2444,,3/15/2020 15:13,3/15/2020 15:13,How should I deal with variable input sizes for a neural network classifier?,,1,2,,,,CC BY-SA 4.0 18629,2,,18576,3/14/2020 1:23,,3,,"

Neural networks seem to have a great deal of difficulty handling adversarial input, i.e., inputs with certain changes (often imperceptible or nearly imperceptible by humans) designed by an attacker to fool them.

This is not the same thing as just being highly sensitive to certain changes in inputs. Robustness against wrong answers in that case can be increased by reducing the probability of such inputs. (If only one in 10^15 possible images causes a problem, it's not much of a problem.) However, in the adversarial case reducing the space of problematic images doesn't reduce the probability of getting one because the images are specifically selected by the attacker.

One of the more famous papers in this area is ""Synthesizing Robust Adversarial Examples"", which produced not only examples where a few modified pixels or other invisible-to-humans modifications to a picture fooled a neural network-based image classifier, but also perhaps the first examples of 3D objects designed to fool similar classifiers and successfully doing so (from every angle!).

(Those familiar with IT security will no doubt recognise this as a familiar asymmetry: roughly, a defender must defend against all attacks launched against a system, but an attacker need find only one working attack.)

In ""A Simple Explanation for the Existence of Adversarial Examples with Small Hamming Distance"", Adi Shamir et al. propose a mathematical framework for analyzing the problem based on Hamming distances that, while currently a less practical attack than the MIT/Lab6 one, has some pretty disturbing theoretical implications, including that current approaches to preventing these attacks may be, in the end, ineffective. For example, he points out that blurring and similar techniques that have been used to try to defend against adversarial attacks can be treated mathematically as simply another layer added on top of the existing neural network, requiring no changes to the attack strategy.

(I attended a talk by Shamir a few months ago that was much easier going than the paper, but unfortunately I can't find a video of that or a similar talk on-line; if anybody knows of one please feel free to edit this answer to add a link!)

There's obviously still an enormous amount of research to be done in this area, but it seems possible that neural networks alone are not capable of defense against this class of attack, and other techniques will have to be employed in addition to make neural networks robust against it.

",34269,,34269,,3/14/2020 2:07,3/14/2020 2:07,,,,1,,,,CC BY-SA 4.0 18630,1,18632,,3/14/2020 2:41,,1,208,"

I am trying to code out a policy evaluation algorithm to find the $V^\pi(s)$ for all states. The following diagram below shows the MDP.

In this case, I let $p = q = 0.5$. The rewards for each state are independent of the action, i.e. $r(\sigma_0) = r(\sigma_2) = 0$, $r(\sigma_1) = 1$, $r(\sigma_3) = 10$. The terminal state is $\sigma_3$.

I have the following policy, {0:1, 1:0, 2:0}, where the key is the state and the value is the action: 0 for $a_0$ and 1 for $a_1$.

#Policy Iteration solver for FUN
class PolicyEvaluation:
    def __init__(self, policies):
        self.N = 3
        self.pi = policies
        self.actions = [0, 1] # a0 and a1
        self.discount = 0.7
        self.states = [i for i in range(self.N + 1)]


    def terminalState(self, state):
        return state == 3

    # assume p = q = 0.5
    def succProbReward(self, state):
        # (newState, probability, reward)
        spr_list = []
        if (state == 0 and self.pi[state] == 0):
            spr_list.append([1, 1.0, 1])
        elif (state == 0 and self.pi[state] == 1):
            spr_list.append([2, 1.0, 0])
        elif (state == 1 and self.pi[state] == 0):
            spr_list.append([2, 0.5, 0])
            spr_list.append([0, 0.5, 0])
        elif (state == 2 and self.pi[state] == 0):
            spr_list.append([1, 1.0, 0])
        elif (state == 2 and self.pi[state] == 1):
            spr_list.append([3, 0.5, 10])
            spr_list.append([2, 0.5, 0])
        return spr_list


def policyEvaluation(mdp):
    # initialize
    V = {} 
    for state in mdp.states:
        V[state] = 0

    def V_pi(state):
        return sum(prob * (reward + mdp.discount*V[newState]) for prob, reward, newState in
        mdp.succProbReward(state))

    while True:
    # compute new values (newV) given old values (V)
        newV = {}
        for state in mdp.states:
            if mdp.terminalState(state):
                newV[state] = 0
            else:
                newV[state] = V_pi(state)

        if max(abs(V[state] - newV[state]) for state in mdp.states) < 1e-10:
            break
        V = newV
        print(V)
    print(V)



pE = PolicyEvaluation({0:1, 1:0, 2:0})
print(pE.states)
print(pE.succProbReward(0))
policyIteration(pE)

I've tried to run the code above to find the values for each state; however, the values are not converging.

Is there something wrong that I did?

",32780,,2444,,4/16/2020 19:25,4/16/2020 19:25,Why isn't the implementation of my policy evaluation for a simple MDP converging?,,1,0,,,,CC BY-SA 4.0 18631,2,,18558,3/14/2020 2:47,,2,,"

A way to avoid computing the gradient of the SVM loss by hand is to use a differentiable programming framework, such as JAX. These frameworks will automatically calculate gradients using automatic differentiation.

If you can write down the SVM loss using numpy operations then you can use the framework's tools to get a function which evaluates the gradient with respect to any argument.

In JAX this would look like:

import jax
import jax.numpy as jnp

def hinge_loss(x, y, theta):
    # x is an n x d matrix, y is an n x 1 matrix of +/-1 labels
    y_hat = model(x, theta)  # model is your predictor (e.g. x @ theta), with parameters theta
    # jax.grad needs a scalar output, so average the per-example hinge losses
    return jnp.mean(jnp.maximum(0, 1 - y_hat * y))

# argnums=2 gives the gradient with respect to theta (the model parameters);
# any other argument can be selected the same way
hinge_loss_grad = jax.grad(hinge_loss, argnums=2)
",15176,,15176,,11/29/2022 18:18,11/29/2022 18:18,,,,0,,,,CC BY-SA 4.0 18632,2,,18630,3/14/2020 3:23,,2,,"

The issue is that in your list comprehension in def V_pi(state) you have

return sum(prob * (reward + mdp.discount*V[newState]) for prob, reward, newState in
        mdp.succProbReward(state))

whereas with the way you have defined the succProbReward output, it should be

return sum(prob * (reward + mdp.discount*V[newState]) for newState, prob, reward in
        mdp.succProbReward(state))

When I run this it converges immediately with a reward of 0 for all states, which I believe is correct for the policy you specified. If I change the policy it also seems to give reasonable results.

",15176,,,,,3/14/2020 3:23,,,,0,,,,CC BY-SA 4.0 18633,1,,,3/14/2020 4:28,,2,92,"

I am working through Sutton and Barto's RL book. So far in the text, when backup diagrams are drawn, the reward and next state are iterated together (i.e. the equations always have $\sum_{s',r}$), because the text uses the four-place function $p(s',r|s,a)$. Starting from a solid circle (state-action pair), each edge has a reward labeled along the edge and the next state labeled on the open circle. (See page 59 for an example diagram, or see Figure 3.4 here.)

However, exercise 3.29 asks to rewrite the Bellman equations in terms of $p(s'|s,a)$ and $r(s,a)$. This means that the reward is an expected value (i.e. we don't want to iterate over rewards like $\sum_r \cdots (r + \cdots)$), whereas the next states should be iterated (i.e. we want something like $\sum_{s'} p(s'|s,a) (\cdots)$).

I think writing the Bellman equations themselves isn't too difficult; my current guess is that they look like this: $$v_\pi(s) = \sum_a \pi(a|s) \left(r(s,a) + \gamma \sum_{s'} p(s'|s,a) v_\pi(s')\right)$$

$$q_\pi(s,a) = r(s,a) + \gamma \sum_{s'} p(s'|s,a) \sum_{a'} \pi(a'|s') q_\pi(s',a')$$

My problem instead is that I want to be able to draw the backup diagrams corresponding to these equations. Given the ""vocabulary"" for backup diagrams given in the book (e.g. solid circle = state-action pair, open circle = state, rewards along the edge, probabilities below nodes, maxing over denoted by an arc), I don't know how to represent the fact that the reward and next state are treated differently. Two ideas that don't seem to work:

  • If I draw a bunch of edges after the solid circle, that looks like I'm iterating over rewards.
  • If I come up with a special kind of edge that represents an expected reward, then it looks like only a single next state is being considered.
",33930,,,,,3/14/2020 4:28,How to draw backup diagram when reward is in expectation but next state is iterated?,,0,0,,,,CC BY-SA 4.0 18634,1,,,3/14/2020 6:52,,18,13891,"

The skip-gram and continuous bag of words (CBOW) are two different types of word2vec models.

What are the main differences between them? What are the pros and cons of both methods?

",9863,,2444,,12/22/2021 10:11,12/22/2021 10:11,What are the main differences between skip-gram and continuous bag of words?,,2,0,,,,CC BY-SA 4.0 18636,1,18638,,3/14/2020 8:11,,3,55,"

I'm new to the topic, but I've used some off the shelf knowledge about computer vision for classifying images.

For example, you can easily generate labels that can determine whether or not e.g. a cloud is in the image. However, what is the general type of problem called where you want to assign a value, or rate the image on a scale - in this example, the degree of cloudiness in the image?

What are useful algorithms or techniques for addressing this type of problem?

",34275,,2444,,3/14/2020 17:34,3/14/2020 17:34,What is the type of problem requiring to rate images on a scale?,,1,0,,,,CC BY-SA 4.0 18637,2,,18634,3/14/2020 14:14,,23,,"

So, as you're probably already aware, CBOW and skip-gram are just mirrored versions of each other. CBOW is trained to predict a single word from a fixed window of context words, whereas skip-gram does the opposite and tries to predict several context words from a single input word.

Intuitively, the first task is much simpler: this implies a much faster convergence for CBOW than for skip-gram; in the original paper (link below), they wrote that CBOW took hours to train, while skip-gram took 3 days.

By the same logic regarding the task difficulty, CBOW learns better syntactic relationships between words, while skip-gram is better at capturing semantic relationships. In practice, this means that, for the word 'cat', CBOW would retrieve as closest vectors morphologically similar words like plurals, i.e. 'cats', while skip-gram would consider morphologically different but semantically relevant words, like 'dog', much closer to 'cat' in comparison.

A final consideration deals with the sensitivity to rare and frequent words. Because skip-gram relies on single-word inputs, it is less prone to overfitting frequent words: even if frequent words are presented more times than rare words during training, they still appear individually, while CBOW is prone to overfitting frequent words because they appear several times along with the same context. This robustness to overfitting frequent words also makes skip-gram more efficient in terms of the number of documents required to achieve good performance, much less than CBOW (and it's also the reason for skip-gram's better performance in capturing semantic relationships).

Anyway, you can find some comparisons in the original paper (section 4.3). Mikolov et al. 2013

About the architecture, there's not much to say. They just randomly initialise a word embedding for each word; then a projection matrix N x D (number of context words times embedding dimension) is built at each iteration; there is no hidden layer: the vectors are just averaged together and then fed into an activation function to predict index probabilities in a vector of dimension V (the size of the vocabulary).
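
As a rough NumPy sketch of that forward pass (plain CBOW without a hidden layer; the sizes are arbitrary, and negative sampling / hierarchical softmax are omitted):

import numpy as np

V, D = 5000, 100                       # vocabulary size, embedding dimension
emb = 0.01 * np.random.randn(V, D)     # input embeddings, randomly initialised
W_out = 0.01 * np.random.randn(D, V)   # projection to vocabulary scores

def cbow_forward(context_ids):
    # average the context word vectors, project them to vocabulary scores and
    # turn these into a probability distribution (softmax); the index with
    # the highest probability is the predicted centre word
    h = emb[context_ids].mean(axis=0)          # (D,)
    scores = h @ W_out                         # (V,)
    probs = np.exp(scores - scores.max())
    return probs / probs.sum()

probs = cbow_forward(np.array([10, 42, 7, 99]))   # 4 context word indices
print(probs.argmax(), probs.shape)                # predicted word id, (5000,)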

For a more specific explanation (even Mikolov's paper lacks some details), I suggest checking this blog page (Words as Vectors), even though the model described there does apply a hidden layer, unlike the original architecture.

",34098,,2444,,3/16/2020 22:13,3/16/2020 22:13,,,,0,,,,CC BY-SA 4.0 18638,2,,18636,3/14/2020 14:36,,3,,"

The main distinction between tasks is 'classification' vs 'regression'. In classification, you would try to identify the presence or absence of a cloud in an image; if you want to predict the level of 'cloudiness' with continuous values, you are then performing a regression task.

I'm not aware of state-of-the-art models specific to this task for images, but you can potentially use whatever architecture you desire to perform regression (CNN, RNN, etc.); the only thing to pay attention to is the loss function you will use. There are specific functions for classification (which usually use the argmax function to turn probabilities into labels) and for regression (the most used is the mean squared error, with which the model tries to approximate the continuous values directly).
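
For example, a minimal Keras sketch of the same small network with the two different 'heads' and losses (the input features and layer sizes are placeholders; for images you would put a CNN in front):

from keras.layers import Dense
from keras.models import Sequential

def make_model(task):
    model = Sequential()
    model.add(Dense(64, activation='relu', input_shape=(32,)))  # placeholder features
    if task == 'classification':          # cloud present / not present
        model.add(Dense(1, activation='sigmoid'))
        model.compile(optimizer='adam', loss='binary_crossentropy')
    else:                                 # degree of cloudiness as a number
        model.add(Dense(1, activation='linear'))
        model.compile(optimizer='adam', loss='mean_squared_error')
    return model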

For a quick overview of loss functions I suggest this easy tutorial 5 Regression Loss Functions. Hope it might be of use.

",34098,,,,,3/14/2020 14:36,,,,1,,,,CC BY-SA 4.0 18639,2,,18627,3/14/2020 15:04,,2,,"

It is much simpler to process the data in a different way. Since you're using temporal data, a common practice is to define a priori a minimum time-step, usually called $\textit{granularity}$, which must be bigger than your sensor's responsiveness. Using this granularity value, you'll then be able to split your data into intervals, and you can then combine the instances belonging to an interval with a function that you like. The most common choice is obviously averaging the values, but summing them or taking a moving average could also be a choice.
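
For instance, with pandas this kind of aggregation is essentially a one-liner (100 ms is an arbitrary choice of granularity):

import numpy as np
import pandas as pd

# raw 1 ms accelerometer + gyroscope readings
t = pd.date_range('2020-01-01', periods=1200, freq='ms')
raw = pd.DataFrame(np.random.randn(1200, 6), index=t,
                   columns=['ax', 'ay', 'az', 'gx', 'gy', 'gz'])

# aggregate into fixed 100 ms intervals (the granularity); each recording
# becomes a short, regular sequence that is much easier to truncate or pad
# to a fixed length afterwards
windows = raw.resample('100ms').mean()    # or .sum(), or a rolling mean
print(windows.shape)                      # (12, 6)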

Don't think that in this way you will lose information; preprocessing is also called data cleaning for a reason, and raw data are not always better.

As a last note, I would suggest you look into 'machine learning for the quantified self'; there is a growing literature on using sensors to train predictive models of movements and body measures like heart rate. You might find something about preprocessing for these specific sensor applications.

",34098,,,,,3/14/2020 15:04,,,,0,,,,CC BY-SA 4.0 18640,2,,18606,3/14/2020 17:54,,1,,"

First of all, I am not very familiar with details of NLP and NLU systems and concepts, so I will provide an answer based on the slides entitled Natural language understanding in dialogue systems (2013) by David DeVaul, a researcher on the topic.

A dialogue system is composed of different parts or modules. Here's a diagram of an example of a dialogue system.

Each of these modules can introduce errors, which are then propagated to the rest of the pipeline. Of course, this is the first clear issue of such a dialogue system. Other issues or challenges include

  • ambiguity of natural language (and there are different types of ambiguity, i.e. see slide number 5),

  • synonyms (i.e. the dialogue system needs to handle different words or expressions that mean the same thing),

  • context-sensitivity (i.e. the same words or expressions can mean different things in different contexts)

  • semantic representation (i.e. how to represent semantics)

  • spontaneous speech (i.e. how to handle stuff like ""hm"", pauses, etc.)

",2444,,,,,3/14/2020 17:54,,,,0,,,,CC BY-SA 4.0 18641,2,,16899,3/14/2020 18:12,,1,,"

According to Dan Jurafsky, a researcher on NLP and NLU, the current hard problems in NLP are (see slide 6)

  • Question answering
  • Paraphrase
  • Summarisation
  • Dialogue

Other hard problems for which there are already some good solutions are

  • Sentiment analysis
  • Coreference resolution
  • Word sense disambiguation
  • Parsing
  • Machine translation
  • Information extraction

Ambiguity in natural language is one of the biggest challenges for NLP and NLU systems. Other challenges are (see slide 10)

  • non-standard words
  • idioms
  • tricky entity names
  • neologisms
  • world knowledge
  • segmentation issues
",2444,,,,,3/14/2020 18:12,,,,0,,,,CC BY-SA 4.0 18644,2,,18606,3/14/2020 19:10,,3,,"

Dialogue is a hard problem because it requires pretty advanced cognitive functions. Leaving aside all the lower levels of language analysis (phonology if dealing with speech, morphology and syntax), you quickly run into interpretation problems that require a lot of world knowledge.

Simple question and answer is fine, and restricted domains are somewhat easier as well. As soon as you get into a normal conversation, you will refer back to things you said before, so an NLP system would have to recognise that and resolve the reference accordingly. Typically in a conversation you would use variations of reference terms: the first time you mention an object you might describe it fully, but subsequently you will use shorter terms to refer to it.

There is also a structure to conversation. This is typically modelled as conversational moves, and usually moves will have corresponding response-moves. For example, a common sequence would be greeting - greeting. Then you might have question - response - feedback. This sounds fairly easy, but once you try to annotate a dialogue with such moves you will find that it is pretty hard. As far as I am aware, there is no 'grammar' equivalent of describing the structure of conversations.

Often, pragmatic meaning is interfering with the 'surface' meaning of utterances. A statement can actually function as a question, or a question can be a command (or a statement). The pragmatics of utterances depend on the context and also the relationship between the interlocutors. If I talk to my manager, I will use language differently than when I talk to my children.

Dialogue/conversations are hard to analyse. Because of that, descriptive frameworks are still fairly limited. You need to keep track of what has said before, as that can change the way an utterance has to be interpreted. Grammatical analysis is a fairly solved problem, and word sense disambiguation as well. But pragmatics and conversational structure are still on the bleeding edge of linguistic research; at least they were when I was still teaching Discourse Analysis at university a few years ago.

For that reason, chatbots are generally not very good. Sometimes they can fool people into believing they are human speakers, but this is usually done through trickery (""smoke and mirrors"") rather than competent handling of conversational structures. It's all in the little box in nbro's answer labelled ""DM""...

",2193,,,,,3/14/2020 19:10,,,,0,,,,CC BY-SA 4.0 18645,1,,,3/14/2020 20:23,,7,978,"

Reinforcement learning methods are considered to be extremely sample inefficient.

For example, in a recent DeepMind paper by Hessel et al., they showed that in order to reach human-level performance on an Atari game running at 60 frames per second they needed to run 18 million frames, which corresponds to 83 hours of play experience. For an Atari game that most humans pick up within a few minutes, this is a lot of time.

What makes DQN, DDPG, and others, so sample inefficient?

",14390,,2444,,10/13/2020 8:27,10/13/2020 8:27,Why are reinforcement learning methods sample inefficient?,,2,0,,,,CC BY-SA 4.0 18647,1,18656,,3/14/2020 21:32,,4,129,"

I am trying to understand an algorithm for correcting mislabeled data in the paper An algorithm for correcting mislabeled data (2001) by Xinchuan Zeng et al. The authors are suggesting to update the output class probability vector using the formula in equation 4 and class label in equation 5.

I am wondering:

  1. Are they updating labels while training, starting from very first back-propagation?

It seems like if we train on the same data and then predict labels on the same data, it would be the same as what the authors are suggesting. Does that make sense, or have I misunderstood?

",34290,,34290,,3/15/2020 21:10,3/15/2020 21:50,"Are the labels updated during training in the algorithm presented in ""An algorithm for correcting mislabeled data""?",,1,0,,,,CC BY-SA 4.0 18648,1,,,3/14/2020 21:59,,7,132,"

The formula for mean prediction using Gaussian Process is $k(x_*, x)k(x, x)^{-1}y$, where $k$ is the covariance function. See e.g. equation 2.23 (in chapter 2) from Gaussian Processes for Machine Learning (2006) by C. E. Rasmussen & C. K. I. Williams.

Oversimplifying, the mean prediction of the new point $y_*$ is the weighted average of previously observed $y$, where the weights are calculated by the $k(x_*,x)$ and normalized by $k(x,x)^{-1}$.
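
For concreteness, here is a minimal NumPy sketch of this mean prediction with an RBF kernel (the kernel, lengthscale and toy data are arbitrary choices):

import numpy as np

def rbf(A, B, lengthscale=1.0):
    # squared-exponential covariance between two sets of 1-D inputs
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

x = np.array([0.0, 1.0, 2.0, 3.0])       # observed inputs
y = np.sin(x)                             # observed outputs
x_star = np.array([1.5])                  # new input

K = rbf(x, x) + 1e-8 * np.eye(len(x))     # k(x, x), with jitter for stability
k_star = rbf(x_star, x)                   # k(x_*, x)
mean = k_star @ np.linalg.solve(K, y)     # k(x_*, x) k(x, x)^{-1} y
print(mean)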

Now, the first part $k(x_*, x)$ is easy to interpret. The closer the new data point lies to the previously observed data points, the greater their similarity, the higher will be the weight and impact on the prediction.

But how to interpret the second part $k(x, x)^{-1}$? I presume this makes the weight of the points in the clusters greater than the outliers. Am I correct?

",31988,,2444,,3/16/2020 22:04,1/13/2021 21:01,Interpretation of inverse matrix in mean calculation in Gaussian Process,,2,0,,,,CC BY-SA 4.0 18650,2,,18645,3/14/2020 22:06,,5,,"

I will try to give a broad answer, if it's not helpful I'll remove it.

When we talk about sample efficiency, we are actually talking about the number of interactions required for an agent to learn a good model of the environment. In general, I would say that there are two issues related to sample efficiency: (1) the size of the 'action' + 'environment states' space, and (2) the exploration strategy used.

Regarding the first point, in reinforcement learning it is really easy to encounter situations in which the number of combinations of possible actions and possible environment states explodes, becoming intractable. Let's, for example, consider the Atari games from the Rainbow paper you linked: the environment in which the agent operates in this case is composed of RGB images of size (210, 160, 3). This means that the agent 'sees' a vector of size 100800. The actions that an agent can take are simply modifications of this vector, e.g. I can move a character to the left, slightly changing the whole picture. Despite the fact that in lots of games the number of possible actions is rather small, we must keep in mind that there are also other objects in the environment which change position as well. What the other objects/enemies do obviously influences the choice of the best action to perform in the next time step. A high number of possible combinations of actions and environment states is associated with a high number of observations/interactions required to learn a good model of the environment. Up to now, what people usually do is compress the information of the environment (for example, by resizing and converting the pictures to grayscale), to reduce the total number of possible states to observe. DQL itself is based on the idea of using neural networks to compress the information gathered from the environment into a dense representation of fixed size.

As for the exploration strategy, we can again divide the issue into subcategories: (1) how we explore the environment, and (2) how much information we get from each exploration. Exploration is usually tuned through the epsilon-greedy hyper-parameter. Once in a while, we let the agent perform a random action, to avoid getting stuck in suboptimal policies (like not moving at all to avoid falling into a trap; eventually, thanks to a random exploratory action, the agent will try to jump and learn that it gives a higher reward). Exploration comes with the cost of more simulations to perform, so people quickly realised that we can't rely only on more exploration to train better policies. One way to boost performance is to leverage not only the present interaction but also past interactions as well; this approach is called experience replay. The underlying idea is to update the q-values depending also on weighted past rewards, stored in a memory buffer. Other approaches aim at computational efficiency rather than decreasing the amount of simulations. An old proposed technique that follows this direction is prioritised sweeping (Moore et al. 1993), in which big changes in q-values are prioritised, i.e. q-values that are stable over iterations are basically ignored (this is a really crude way to put it; I have to admit that I still have to grasp this concept properly). Both of these techniques were actually applied in the Rainbow paper.
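
As an illustration of that idea, the core of (uniform) experience replay is nothing more than a buffer of past transitions that is sampled repeatedly during training; a minimal sketch, not the prioritised variant:

import random
from collections import deque

class ReplayBuffer:
    # past transitions are stored and reused in random mini-batches, so each
    # environment interaction contributes to many updates instead of one
    def __init__(self, capacity=100000):
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        return random.sample(list(self.buffer), batch_size)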

On a more personal level (just pure opinions of mine from here), I would say that the gap between RL agents and humans comes from the fact that we (humans) have a lot of common-sense knowledge we can leverage, and somehow we are able, through cognitive heuristics and shortcuts, to pay attention to what is relevant without even being aware of it. RL agents learn to interact with an environment without any prior knowledge; they just learn some probability distribution through trial and error, and if something completely new happens they have no ability at all to pick an action based on external knowledge. One interesting future direction, in my opinion, is reward modelling, described in this video: https://youtu.be/PYylPRX6z4Q

I particularly like the emphasis on the fact that the one thing humans are truly good at is judging. We don't know how to design proper reward functions because, again, most of the actions we perform in real life are driven by rewards of which we are not aware, but we are able to say at a glance whether an agent is performing a task properly or not. Incorporating this 'judging power' into RL exploration seems to be a really powerful way to increase sample efficiency in RL.

",34098,,34098,,4/10/2020 1:05,4/10/2020 1:05,,,,0,,,,CC BY-SA 4.0 18654,1,20759,,3/15/2020 20:23,,2,364,"

I have a basic MCTS agent for the game of Hex (a turn-based game). I want to tune the parameters of UCT (the Cp parameter) and the number of rollouts.

Where should I begin? The problem is that the agent is smart enough to always win if it plays first against another agent, so I don't know how to evaluate each pair of hyperparameters.

If anyone has any ideas let me know.

",33307,,,,,4/29/2020 11:29,How to apply hyperparameter optimization on Monte Carlo Tree Search?,,1,0,,,,CC BY-SA 4.0 18656,2,,18647,3/15/2020 21:50,,2,,"

I think that making some drawings might help.

Below I tried to draw the model architecture. We start with a classic feed-forward structure: an input represented by a vector I of length f (the number of features), a hidden layer H which does not have a fixed size, and an output O of length c (the number of classes). Then we have 3 extra vectors beyond the usual ones: a vector U that they refer to as an input (a bit confusing, I have to say), and two vectors V and P representing class probabilities. All these vectors have length c.

P is what we want to learn: new class probabilities for each instance, which should correct the wrong labels of misclassified instances in the initial dataset. So, in some sense, the aim of the work is not to train a model to make predictions but rather to train a model to clean a training dataset. I think it's important to stress this point because we talk about training, but there is actually no test after the training; we just end up with a modified training dataset with some instances relabelled. The relabelling depends on the learning of V rather than P (because, as I will also say later on, the final arrow that goes from V to P is the identity function). V depends on U, which in turn depends on V; again, this sounds a bit confusing, but the trick lies in the initialisation.

In the second picture I just copied some formulas from the paper. They don't specifically say how they get the vector P in the first place, but I treat it simply as given, because we need it to initialise U by applying the inverse sigmoid function element-wise. We also need to define the hyper-parameter D, the initial probability value for the true class V$_y$ in the initial dataset. In the paper they set it to .95. Using D we can initialise the vector V.
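Just to make the initialisation concrete, here is a minimal sketch of how I read it (the element-wise logit for U and the value D = .95 come from the description above; how the remaining probability mass of V is spread over the other classes is my own assumption, the exact formulas are in the paper):

import numpy as np

def init_U_V(P, y, D=0.95):
    # P: initial class-probability vector (length c), y: index of the given label
    P = np.clip(P, 1e-7, 1 - 1e-7)       # avoid division by zero in the logit
    U = np.log(P / (1.0 - P))            # inverse sigmoid, applied element-wise
    c = len(P)
    V = np.full(c, (1.0 - D) / (c - 1))  # assumption: remaining mass spread uniformly
    V[y] = D                             # probability D on the labelled (true) class
    return U, V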

Once we initialise U and V we have all the elements to iterate over the dataset.

A first thing to notice is that they don't use back-propagation to update U, V and P; they just define some updating rules (which require defining some other initial parameters, $u_0$ and $L_p$). I want to stress again that the mapping between V and P is just the identity function; I guess they define it just to avoid confusion, because in the paper P appears as an input element but is nevertheless also the output we want to learn. Back-propagation is used only to update the weights in H.

ANSWERS TO THE QUESTIONS:

So we can finally say that, regarding your first question, the answer is yes: they do start updating the class probability vectors P from the very first iteration, even though it is not clear how they initialise them (or where they got them from).

Regarding the second question, instead, I would say no: this is definitely not the same as training a model with fixed labels and then making predictions over the training dataset again. The whole point is that similar training instances will be initialised with the same P, and they will also produce similar U and V vectors. For misclassified instances, U will be updated with larger changes, because of the different output O produced. For example, an image of a 2 will generate a different output than an image of a 5, and this would be reflected first on U and then on V and P. If training with fixed labels instead, you would just force the weights in the hidden layer to learn a function that treats the representations of a 2 and a 5 in a similar way, leading to low accuracy, because you would be telling the model ""Hey, the straight line in the 5 is also important to recognise a 2"".

I have to say that I had never read about this dataset-cleaning approach, but it is interesting, and their results show that the cleaned version of the dataset leads to better performance, which is notable because machine learning analyses usually take the correctness of the labels for granted.

Hope this is of help in some ways!

",34098,,,,,3/15/2020 21:50,,,,7,,,,CC BY-SA 4.0 18657,2,,10644,3/16/2020 1:41,,0,,"

I would approach it from a direction different from @Phylliida, though there seems to be nothing wrong with her answer.

IMHO, when AI

  • A: is sufficiently general,
  • B: is able to direct its own evolution,
  • C: has control (whether direct or indirect) of all the resources needed to grow and evolve, and
  • D: has the goal of growing and evolving to solve a big, important problem,

then the ""singularity"" will have been reached.

""sufficiently general"" means that its growth is not limited by the code that initially defines it: it can re-write its own code (through ""offspring in a sandbox"", or directly; doesn't matter which).

Genetic programming is currently clumsy but is indeed sufficiently general.

C is something that would most likely need to be given to it, so nobody will be able to unplug it.

D is easy. I'd like to see an effort in the Futurist and AI communities to choose such goals.

This isn't a mathematical definition; it's something more mundane; but I think it's to the point.

",28348,,,,,3/16/2020 1:41,,,,0,,,,CC BY-SA 4.0 18658,1,18687,,3/16/2020 2:36,,1,314,"

How can I prove that gradient descent doesn't necessarily find the global optimum?

For example, consider the following function

$$f(x_1, x_2, x_3, x_4) = (x_1 + 10x_2)^2 + 5x_2^3 + (x_2 + 2x_3)^4 + 3x_1x_4^2$$

Assume also that we can't find the optimal value for the learning rate because of time constraints.

",33875,,2444,,3/16/2020 23:12,3/18/2020 16:20,How to prove that gradient descent doesn't necessarily find the global optimum?,,3,1,,,,CC BY-SA 4.0 18659,1,,,3/16/2020 2:59,,4,88,"

My aim is to train a model for predicting diseases. Now, according to this Wikipedia article, diseases are classified based on the following criteria in general:

  • Causes (of the disease)
  • Pathogenesis (the mechanism by which the disease progresses)
  • Age
  • Gender
  • Symptoms (of the disease)
  • Damage (caused by the disease)
  • Organ type (e.g. heart disease, liver disease, etc.)

Are these features used for predicting diseases universally (i.e. for all types of diseases)? I don't think so. There can be other attributes as well. For example, travel history in the case of the coronavirus.

So, are there better features for predicting diseases? Or which ones among them are better than the others, when patients specify their health issues?

",34306,,2444,,11/19/2020 13:03,11/19/2020 13:03,How should I select the features for predicting diseases (in particular when patients specify their health issues)?,,2,1,,,,CC BY-SA 4.0 18660,1,,,3/16/2020 9:48,,2,68,"

I am doing literature research on algorithms for correcting mislabeled data using multilayer perceptrons. I found an ""old"" paper, An algorithm for correcting mislabeled data (2001) by Xinchuan Zeng et al. Please share if you are aware of recent/current work on this topic, along with brief thoughts. Thanks in advance.

",34290,,34290,,3/17/2020 15:56,11/3/2022 22:53,Recent algorithms for correcting mislabeled data using multilayer perceptrons,,1,3,,,,CC BY-SA 4.0 18661,2,,18645,3/16/2020 9:49,,4,,"

This is mostly because humans already have information (priors) when they start learning the game, which makes them learn it more quickly. We already know to jump on monsters or avoid them, or to go for gold-looking objects.

When you remove these priors, you can see that a human is worse at learning these games (link).

Some experiments they tried in the study to remove these priors were replacing all notable objects with colored squares, reversing and scrambling controls, changing gravity, and generally replacing all sprites with random pixels. All these experiments made human learning much harder, and increased the death rate, the time needed and the number of states visited in the game.

If we want a reinforcement learning algorithm to perform as well as a human, we will have to somehow include these priors that we have as humans before training the network. This, of course, has not been done yet (as far as I know).

",29671,,29671,,3/16/2020 10:54,3/16/2020 10:54,,,,0,,,,CC BY-SA 4.0 18662,1,,,3/16/2020 11:58,,1,310,"

According to this video tutorial, Vanishing Gradient Tutorial, the sigmoid function and the hyperbolic tangent can produce the vanishing gradient problem.

What other activation functions can lead to the vanishing gradient problem?

",9863,,2444,,3/16/2020 14:42,3/16/2020 14:42,Which activation functions can lead to the vanishing gradient problem?,,0,1,,,,CC BY-SA 4.0 18663,1,18669,,3/16/2020 11:59,,6,7561,"

I was recently asked in an interview to calculate the number of parameters of a convolutional layer. I am deeply ashamed to admit I didn't know how to do that, even though I've been working with and using CNNs for years now.

Given a convolutional layer with ten $3 \times 3$ filters and an input of shape $24 \times 24 \times 3$, what is the total number of parameters of this convolutional layer?

",32528,,2444,,12/18/2021 12:34,12/18/2021 12:34,How to calculate the number of parameters of a convolutional layer?,,2,0,,,,CC BY-SA 4.0 18668,1,,,3/16/2020 14:07,,3,67,"

I want to use a neural network to find correlated columns in a .csv file and output them. The input .csv file has multiple columns with 0 and 1 (like Booleans) in it. The file contains the assignment of people to interests.

Example .csv input:

UserID   History  Math  Physics  Art  Music  ...
User1    0        1     1        0    0      ...
User2    0        0     0        1    1      ...
User3    0        1     1        1    1      ...
User4    1        0     1        1    0      ...
...

In this case, the output should be something like: {math,physics}, {art,music}, {history,physics,art} - I exclude {math,physics,art,music} here because, in a later step, I want to exclude (at least some of) the sets that can be created through the combination of others.

At the moment, my problem is that I don't know which type of neural network could complete this task. How can I solve this problem?

The important thing is that a column can have more than one column it correlates with - so it's not like simple k-means clustering (as far as I understand it).

",34313,,,user9947,3/17/2020 9:58,3/17/2020 9:58,Neural network to extract correlated columns,,0,1,,,,CC BY-SA 4.0 18669,2,,18663,3/16/2020 14:29,,6,,"

What are the parameters in a convolutional layer?

The (learnable) parameters of a convolutional layer are the elements of the kernels (or filters) and biases (if you decide to have them). There are 1d, 2d and 3d convolutions. The most common are 2d convolutions, which are the ones people usually refer to, so I will mainly focus on this case.

2d convolutions

Example

If the 2d convolutional layer has $10$ filters of $3 \times 3$ shape and the input to the convolutional layer is $24 \times 24 \times 3$, then this actually means that the filters will have shape $3 \times 3 \times 3$, i.e. each filter will have the 3rd dimension that is equal to the 3rd dimension of the input. So, the 3rd dimension of the kernel is not given because it can be determined from the 3rd dimension of the input.

2d convolutions are performed along only 2 axes (x and y), hence the name. Here's a picture of a typical 2d convolutional layer where the depth of the kernel (in orange) is equal to the depth of the input volume (in cyan).

Each kernel can optionally have an associated scalar bias.

At this point, you should already be able to calculate the number of parameters of a standard convolutional layer. In your case, the number of parameters is $10 * (3*3*3) + 10 = 280$.

A TensorFlow proof

The following simple TensorFlow (version 2) program can confirm this.

import tensorflow as tf


def get_model(input_shape, num_classes=10):
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Input(shape=input_shape))
    model.add(tf.keras.layers.Conv2D(10, kernel_size=3, use_bias=True))
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(num_classes))

    model.summary()

    return model


if __name__ == '__main__':
    input_shape = (24, 24, 3)
    get_model(input_shape)

You should try setting use_bias to False to understand how the number of parameters changes.

General case

So, in general, given $M$ filters of shape $K \times K$ and an input of shape $H \times W \times D$, then the number of parameters of the standard 2d convolutional layer, with scalar biases, is $M * (K * K * D) + M$ and, without biases, is $M * (K * K * D)$.
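As a quick sanity check of this formula, here is a small helper written just for illustration (not part of TensorFlow):

def conv2d_num_params(num_filters, kernel_size, input_depth, bias=True):
    # Each filter has kernel_size * kernel_size * input_depth weights,
    # plus one scalar bias per filter if biases are used
    weights = num_filters * kernel_size * kernel_size * input_depth
    return weights + (num_filters if bias else 0)

print(conv2d_num_params(10, 3, 3))  # 280, matching the example above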

See also these related questions How is the depth of filters of hidden layers determined? and In a CNN, does each new filter have different weights for each input channel, or are the same weights of each filter used across input channels?.

1d and 3d convolutions

There are also 1d and 3d convolutions.

For example, in the case of 3d convolutions, the kernels may not have the same dimension as the depth of the input, so the number of parameters is calculated differently for 3d convolutional layers. Here's a diagram of 3d convolutional layer, where the kernel has a depth different than the depth of the input volume.

See e.g. Intuitive understanding of 1D, 2D, and 3D convolutions in convolutional neural networks.

",2444,,2444,,3/17/2020 1:01,3/17/2020 1:01,,,,1,,,,CC BY-SA 4.0 18670,2,,18663,3/16/2020 14:29,,3,,"

For a standard convolution layer, the weight matrix will have a shape of (out_channels, in_channels, kernel_sizes). In addition, you will need a vector of shape [out_channels] for biases. For your specific case, 2d, your weight matrix will have a shape of (out_channels, in_channels, kernel_size[0], kernel_size[1]).

Now, if we plugin the numbers:

  • out_channels = 10, you're having 10 filters
  • in_channels = 3, the picture is RGB in this case, so there are 3 channels (the last dimension of the input)
  • kernel_size[0] = kernel_size[1] = 3

In total you're gonna have 10*3*3*3 + 10 = 280 parameters.

",20430,,2444,,12/18/2021 12:27,12/18/2021 12:27,,,,0,,,,CC BY-SA 4.0 18671,1,18788,,3/16/2020 15:09,,5,148,"

I'm looking to develop a machine translation tool for a constructed language. I think that the example-based approach is the most suitable because the said language is very regular and I can have a sufficient amount of parallel translations.

I already know the overall idea behind the example-based machine translation (EBMT) approach, but I can't find any resource that describes a naive EBMT algorithm (or model) that would allow me to easily implement it.

So, I'm looking for either:

  • a detailed description,
  • pseudocode or
  • a sufficiently clear open-source project (maybe a GitHub one)

of a naive EBMT algorithm. So, I'm not looking for a software library that implements this, but I'm looking for a resource that explains/describes in detail a naive/simple EBMT algorithm, so that I am able to implement it.

Note that there are probably dozens of variations of EBMT algorithms. I'm only looking for the most naive/simple one.

I have already looked at the project Phrase-based Memory-based Machine Translator, but, unfortunately, it is not purely based on examples but also statistical, i.e. it needs an alignment file generated by, for example, Giza++ or Moses.

",,user34314,2444,,2/3/2021 16:53,2/3/2021 16:53,Is there any resource that describes in detail a naive example-based machine translation algorithm?,,1,1,,,,CC BY-SA 4.0 18672,1,,,3/16/2020 15:17,,0,41,"

Deep Blue is good at chess, but is more ""hand-coded"" or ""top-down"". https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)

AlphaGoZero is ""self-taught"", and at Go is very much super-human. https://en.wikipedia.org/wiki/AlphaGo_Zero

How does AlphaGoZero fare when it goes head-to-head with DeepBlue? Are there indicators like chess ratings?

",2263,,,,,3/16/2020 15:17,How does (or should) AlphaGoZero (which does chess) fare against Deep Blue?,,0,3,,,,CC BY-SA 4.0 18673,2,,18431,3/16/2020 15:59,,0,,"

If you are willing to take an evolutionary approach, you may employ the NEAT algorithm (NeuroEvolution of Augmenting Topologies) to train your bot. It will take some work setting it up and all, but it will then gradually improve over time.

Check out the following:

That should be enough to pique your interest and get you started. The last link points to a number of NEAT implementations available in a number of languages.

",24092,,,,,3/16/2020 15:59,,,,0,,,,CC BY-SA 4.0 18674,1,,,3/16/2020 16:33,,3,74,"

If you have an $18$-layer residual network versus a $32$-layer residual network, why would the former do better at object detection than the latter, if both models are trained using the same training data?

",34317,,2444,,3/16/2020 16:36,3/16/2020 22:03,Do deeper residual networks perform better or worse?,,1,0,,,,CC BY-SA 4.0 18675,2,,18674,3/16/2020 22:03,,1,,"

Just by having more parameters, the deeper model has a higher capacity than the smaller one. This means that, theoretically, it can learn to extract more complex features from the data. Additionally, more layers means that the model can extract even higher-level features from the data. So, generally speaking, deeper models will most of the time outperform shallow ones for more difficult tasks.

The downside is that, if you have a small amount of data, a high-capacity model has the ability to memorize the training set, which would lead to overfitting. Besides performance, deeper models require better hardware and longer training times. So, there are plenty of reasons for one to prefer a shallower model to a deeper one.

",26652,,,,,3/16/2020 22:03,,,,0,,,,CC BY-SA 4.0 18676,2,,18605,3/16/2020 22:35,,1,,"

The blog post Unsupervised Cross-lingual Representation Learning (2019), the related paper and slides by Sebastian Ruder (a researcher currently at DeepMind) summarize what you are looking for. In fact, the authors write

We will introduce researchers to state-of-the-art methods for constructing resource-light cross-lingual word representations and discuss their applicability in a broad range of downstream NLP applications, covering bilingual lexicon induction, machine translation (both neural and phrase-based), dialogue, and information retrieval tasks. We will deliver a detailed survey of the current cutting-edge methods, discuss best training and evaluation practices and usecases, and provide links to publicly available implementations, datasets, and pretrained models and word embedding collections.

",2444,,,,,3/16/2020 22:35,,,,1,,,,CC BY-SA 4.0 18677,2,,18658,3/16/2020 22:53,,2,,"

You can find a counterexample by yourself showing that, in general, GD is not guaranteed to find the global optimum!

I first advise you to choose a simpler function (than the one you are showing), with 2-3 optima, where one is the global optimum and the other(s) are local. You don't need neural networks or any other ML concept to show this, but only basic calculus (derivatives) and numerical methods (i.e. gradient descent). Just choose a very simple function with more than one optimum and apply the basic gradient descent algorithm. Then you can see that, if you start gradient descent close to a local optimum (i.e. you choose an initial value for $x$ or $\theta$, depending on your notation for the variable of the function) and apply gradient descent for some iterations, you will end up in that nearby local optimum, from which you cannot escape.
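To make this concrete, here is a minimal sketch (my own toy example, with approximate numbers) of plain gradient descent on $f(x) = x^4 - 2x^2 + 0.5x$, which has a local minimum near $x \approx 0.93$ and its global minimum near $x \approx -1.06$:

def f_prime(x):
    return 4 * x**3 - 4 * x + 0.5   # derivative of x^4 - 2x^2 + 0.5x

x = 1.2      # initial value chosen close to the local (non-global) minimum
lr = 0.01    # fixed, non-optimal learning rate
for _ in range(2000):
    x -= lr * f_prime(x)

print(x)     # ends up near 0.93 (the local minimum), not near -1.06 (the global one)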

See also the question Does gradient descent always converge to an optimum? and For convex problems, does gradient in Stochastic Gradient Descent (SGD) always point at the global extreme value?

",2444,,2444,,3/16/2020 23:00,3/16/2020 23:00,,,,0,,,,CC BY-SA 4.0 18681,1,,,3/17/2020 6:58,,2,237,"

I'm trying to develop a multi-step forecasting model using an LSTM network. The model takes three time steps as input and predicts two time steps. Both input and output columns are normalised using MinMaxScaler within the range of 0 and 1.

Please see the below model architecture

Model Architecture

model = Sequential()
model.add(LSTM(80,input_shape=(3,1),activation='sigmoid',return_sequences=True))
model.add(LSTM(20,activation='sigmoid',return_sequences=False))
model.add(Dense(2))

In this case, is it correct to use sigmoid as the activation function?

",24006,,,,,3/17/2020 10:48,Using sigmoid in LSTM network for multi-step forecasting,,3,0,,,,CC BY-SA 4.0 18682,1,21396,,3/17/2020 8:49,,5,847,"

From my understanding of the REINFORCE policy gradient method, we gently nudge the probabilities of actions based on the advantages. More specifically, the positive advantages increase the probabilities, negative advantages reduce the probabilities.

So, how do we compute the advantages given the real discounted rewards (aggregated rewards from the episode) and a policy network that only outputs the probabilities of actions?

",27366,,2444,,3/17/2020 22:27,5/22/2020 9:22,How to calculate the advantage in policy gradient functions?,,2,0,,,,CC BY-SA 4.0 18683,2,,18681,3/17/2020 8:57,,1,,"

Yes, due to the input and output being constrained between zero and one, that would be the only viable activation function.

",34324,,,,,3/17/2020 8:57,,,,0,,,,CC BY-SA 4.0 18684,2,,18681,3/17/2020 9:00,,1,,"

You have a problem in your code: you want to use ""sigmoid"" in the last layer. In the code you are showing, you are using a linear activation in the last layer.

",32493,,,,,3/17/2020 9:00,,,,0,,,,CC BY-SA 4.0 18685,1,18694,,3/17/2020 10:28,,3,247,"

I am working on a graffiti detection project. I need to analyze a data stream from a camera mounted sideways on a vehicle to identify graffiti on city walls and notify the authorities with the single best capture of the graffiti and its geolocation, etc.

I am trying to use a ResNet50 model pre-trained on ImageNet using transfer learning for my graffiti image dataset. The classification will be done on an edge device as network connectivity may not be reliable.

Suppose I have a series of frames that have been detected to contain graffiti, as the vehicle goes past it, but I only need to report one image (so not all frames containing graffiti in the series). How can I do that?

Ideally, I would like to report the frame where the camera is perpendicular to the wall. Why perpendicular? I think that images containing the graffiti when the camera is perpendicular to the wall will more clearly show the graffiti.

",30087,,2444,,3/23/2020 2:11,3/23/2020 2:11,How can I detect the frame from video streaming that contains a graffiti on city wall?,,1,6,,,,CC BY-SA 4.0 18686,2,,18681,3/17/2020 10:48,,1,,"

You should not limit yourself to sigmoid as the activation function on the last layer. Usually, you normalize your dataset, and when you test/evaluate the model you apply the inverse of the scaling transformation to the predictions, so you could just as easily use tanh, which outputs values in (-1, 1).
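For example, with scikit-learn you can scale the targets to [-1, 1] and undo the scaling on the predictions (a minimal sketch, where y and y_pred_scaled stand in for your own series and model output):

import numpy as np
from sklearn.preprocessing import MinMaxScaler

y = np.arange(10, dtype=float).reshape(-1, 1)     # placeholder target series
scaler = MinMaxScaler(feature_range=(-1, 1))
y_scaled = scaler.fit_transform(y)                # train the model on this, with tanh output

y_pred_scaled = y_scaled                          # stand-in for the model's predictions
y_pred = scaler.inverse_transform(y_pred_scaled)  # back to the original units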

",20430,,,,,3/17/2020 10:48,,,,0,,,,CC BY-SA 4.0 18687,2,,18658,3/17/2020 11:20,,1,,"

Well, GD terminates once the gradients are 0, right? Now, in a non-convex function, there can be points that do not belong to the global minimum and yet have 0 gradients. For example, such points can be saddle points or local minima.

Consider this picture and say you start GD at the x label.

GD will bring you to the flat area and will stop making progress there, as the gradients are 0. However, as you can see, the global minimum is to the left of this flat region.

By the same token, you have to show, for your own function, that there exists at least one point whose gradient is 0 and yet is not the global minimum.

In addition to that, the convergence guarantee for convex functions depends on annealing the learning rate appropriately. For example, if your learning rate is too high, GD can just keep overshooting the minimum. The visualization from this page might help you understand more about the behavior of GD.
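To see the overshooting effect concretely, here is a tiny sketch (my own example) of GD on the convex function f(x) = x^2, where any learning rate above 1 makes the updates diverge:

x = 1.0
lr = 1.1                  # too high: each update multiplies x by (1 - 2 * lr) = -1.2
for _ in range(20):
    x -= lr * 2 * x       # gradient of x^2 is 2x
print(x)                  # the iterates oscillate and grow instead of converging to 0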

",32621,,32621,,3/18/2020 16:20,3/18/2020 16:20,,,,8,,,,CC BY-SA 4.0 18692,1,,,3/17/2020 12:49,,5,287,"

I have an environment where an agent faces an equal opponent, and while I've achieved OK performance implementing DQN and treating the opponent as a part of the environment, I think performance would improve if the agent trains against itself iteratively. I've seen posts about it, but never detailed implementation notes. My thoughts were to implement the following (agent and opponent are separate networks for now):

  1. Bootstrap agent and opponent with initial weights (either random or trained against CPU, not sure)
  2. Use Annealing Epsilon Greedy strategy for N iterations
  3. After M iterations (M > N), copy agent network's weights to opponent's network
  4. Reset annealing epsilon (i.e. start performing randomly again to explore new opponent)?
  5. Repeat steps 2-4

Would something like this work? Some specific questions are:

  1. Should I ""reset"" my annealing epsilon strategy every time the opponent is updated? I feel like this is needed because the agent needs sufficient time to explore new strategies for this ""new"" opponent.
  2. Should the experience replay buffer be cleared out when the opponent is updated? Again, I think this is needed.

Any pointers would be appreciated.

",34328,,,,,3/17/2020 12:49,How to correctly implement self-play with DQN?,,0,0,,,,CC BY-SA 4.0 18693,1,,,3/17/2020 14:58,,2,186,"

I'm trying to train a Policy Gradient Agent with Baseline for my RL research. I'm using the built-in RL toolbox from MATLAB (https://www.mathworks.com/help/reinforcement-learning/ug/pg-agents.html) and have created my own Environment. The goal is to train the system to sample an underlying time series ($x$) given battery constraints ($\epsilon$ is the battery cost).

The general setup is as follows:

  • My Environment is a ""sensor"" system with exogenous input time-series and battery level as my States/Observations (size is 13x1).
  • Actions $A_t$ are binary: 0 = keep a model prediction $(\hat x)$; 1 = sampling time series $(x)$
  • Reward function is

$$ R = -[err(\tilde x, x) + A_t\cdot \epsilon ] + (-100)\cdot T_1 + (100) \cdot T_2 $$

where $err(\tilde x, x)$ is the RMSE between the sampled time series $\tilde x$ and the true time series $x$.

  • The Terminal State Rewards are -100 if sensor runs out of battery $T_1$ or 100 if reached the end of the episode with RMSE < threshold and remaining battery level $(T_2)$. The goal is to always end in $T_2$.

  • Each training Episode consists of a time-series of random length, and random initial battery level.

My current setup uses mostly the default RL settings from MATLAB, with a learning rate of $10^{-4}$ and the ADAM optimizer. The training is slow and shows a lot of reward oscillation between the two terminal states. The MATLAB RL toolbox also outputs a $Q_0$ value, about which it states:

Episode Q0 is the estimate of the discounted long-term reward at the start of each episode, given the initial observation of the environment. As training progresses, Episode Q0 should approach the true discounted long-term reward if the critic is well-designed,

Questions

  • Is my training and episodes too random? i.e., time-series of different lengths and random initial sensor setup.
  • Should I simplify my reward function to be just $T_2$?
  • Why doesn't $Q_0$ change at all?
",34330,,34330,,3/19/2020 21:48,3/19/2020 21:48,Policy Gradient Reward Oscillation in MATLAB,,0,0,,,,CC BY-SA 4.0 18694,2,,18685,3/17/2020 16:15,,2,,"

ResNet is an architecture for object recognition and you may use it to do your classification task. Fast RCNN may improve your results but is a more difficult architecture to implement. If you want to go in this direction the best place to start is the arxiv paper of the Fast R-CNN (arxiv.org/abs/1504.08083). If I am not wrong, there is an implementation of fast-rcnn in pytorch if you really want that.

You seem to be new to deep learning. I would strongly recommend starting simple. ResNet-50 will probably be more than enough for your application. Moreover, I suggest using a lib like fast.ai.

",34019,,,,,3/17/2020 16:15,,,,0,,,,CC BY-SA 4.0 18695,2,,5904,3/17/2020 16:43,,0,,"

I may have scratched the surface of a much larger problem when I asked this question. In the meantime, I have read the Lottery Ticket Hypothesis paper: https://arxiv.org/pdf/1803.03635.pdf

Basically, if you overparameterise your network you are more likely to find a random initialisation that performs well: A winning ticket. The paper above shows that you can actually prune away the unneeded parts of the network after training. However, you need to overparameterise the network initially in order to increase the chance of randomly sampling a winning ticket configuration.

I believe the case in my question above is a minimal example of this.

",14789,,,,,3/17/2020 16:43,,,,0,,,,CC BY-SA 4.0 18696,2,,18658,3/17/2020 17:51,,0,,"

There is no way you can be sure you have reached a global minimum. Steepest descent will converge toward a point where the gradient approaches zero. Depending on the initial conditions (i.e. the initial values of the weights), you can and will converge on some minimum. Notice that if you run your model several times with random weight initialization you will get slightly different results. What I find interesting is that, in general, the local minima seem to have roughly the same value. The cost function is some kind of surface in N-dimensional space, where N is the number of trainable parameters. We do not know what that surface is like or how many local minima exist.

",33976,,,,,3/17/2020 17:51,,,,1,,,,CC BY-SA 4.0 18699,1,,,3/17/2020 22:29,,2,38,"

I already trained a deep neural network called YOLO (You Only Look Once) with high-quality images (1920 by 1080 pixels) for a detection task. The results for mAP and IOU were 93% and 89%, respectively.

I want to decrease the quality of my training data set using some available filters, and then use those low-quality images along with the high-quality images to train the network again.

Does this method increase the accuracy (or, in general, performance) of the deep neural network (for a detection task)? Like mAP and IOU?

My data set is vehicle images.

mAP: mean average precision

IOU: intersection over union ( or overlap)

",34339,,34339,,3/17/2020 23:06,3/17/2020 23:06,Can the addition of low-quality images to the training dataset increase the network performance?,,0,1,,,,CC BY-SA 4.0 18701,1,,,3/18/2020 2:35,,2,395,"

The ongoing coronavirus pandemic of coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), as of 29 September 2020, has affected many countries and territories, with more than 33.4 million cases of COVID-19 have been reported and more than 1 million people have died. The live statistics can be found at https://www.worldometers.info/coronavirus/ or in the World Health Organization (WHO) site. Although countries have already started quarantines and have adopted extreme countermeasures (such as closing restaurants or forbidding events with multiple people), the numbers of cases and deaths will probably still increase in the next weeks.

Given that this pandemic concerns all of us, including people interested in AI, such as myself, it may be useful to share information about the possible current applications of AI to slow down the spread of SARS-CoV-2, to help infected people or people in the healthcare sector that have been uninterruptedly working for hours to attempt to save more lives, while putting at risk their own.

What are the existing AI technologies (e.g. computer vision or robotics tools) that are already being used to tackle these issues, such as slowing down the spread of SARS-CoV-2 or helping infected people?

I am looking for references that prove that the mentioned technologies are really being used. I am not looking for potential AI technologies (i.e. research work) that could potentially be helpful. Furthermore, I am not looking for data analysis tools (e.g. sites that show the evolution of the spread of coronavirus, etc.)

",2444,,2444,,9/29/2020 22:06,9/30/2020 19:56,What are the AI technologies currently used to fight the coronavirus pandemic?,,3,3,,,,CC BY-SA 4.0 18702,1,,,3/18/2020 2:49,,2,59,"

I'm trying to find out how AI can help with efficient customer service, specifically routing calls to the right agent. My use case is: given the context of a query from a customer and the agents' expertise, how can we do the matching?

Generally, how is this problem solved? Which sub-topic within AI is suitable for this problem? Classification, recommender systems, ...? Any pointers to open-source projects would be very helpful.

",9053,,,,,5/14/2020 21:57,Using AI to enhance customer service,,2,0,,,,CC BY-SA 4.0 18703,1,18704,,3/18/2020 7:52,,6,651,"

I often see blog posts or questions on here starting with the premise that ResNets solve the vanishing gradient problem.

The original 2015 paper contains the following passage in section 4.1:

We argue that this optimization difficulty is unlikely to be caused by vanishing gradients. These plain networks are trained with BN, which ensures forward propagated signals to have non-zero variances. We also verify that the backward propagated gradients exhibit healthy norms with BN. So neither forward nor backward signals vanish. In fact, the 34-layer plain net is still able to achieve competitive accuracy, suggesting that the solver works to some extent.

So what's happened since then? I feel like either it became a misconception that ResNets solve the vanishing gradient problem (because it does indeed feel like a sensible explanation that one would readily accept and continue to propagate), or some paper has since proven that this is indeed the case.

I'm starting with the initial knowledge that it's "easier" to learn the residual mapping for a convolutional block than it is to learn the whole mapping. So my question is on the level of: why is it "easier"? And why does the "plain network" do such a good job but then struggle to close the gap to the performance of ResNet. Supposedly if the plain network has already learned reasonably good mappings, then all it has left to learn to close the gap is "residual". But it just isn't able to.

",16871,,2444,,1/17/2021 14:11,1/17/2021 14:11,"If vanishing gradients are NOT the problem that ResNets solve, then what is the explanation behind ResNet success?",,1,0,,,,CC BY-SA 4.0 18704,2,,18703,3/18/2020 8:53,,6,,"

They explained in the paper why they introduce residual blocks. They argue that it's easier to learn residual functions $F(x) = H(x) - x$ and then add them to the original representation $x$ to get hidden representation $H(x) = F(x) + x$ than it is to learn hidden representation $H(x)$ directly from original representation. That's the main reason and empirical results show that they might be right. Better gradient propagation might be an additional bonus but that's not why they originally introduced the idea.
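To make the idea concrete, here is a minimal sketch of a residual block in Keras (a simplified version, not the exact block from the paper, which also uses batch normalization); the convolutions only have to learn the residual $F(x)$, which is then added back to the input $x$:

import tensorflow as tf

def residual_block(x, filters):
    # F(x): two convolutions that learn the residual
    f = tf.keras.layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    f = tf.keras.layers.Conv2D(filters, 3, padding='same')(f)
    # H(x) = F(x) + x: the identity skip connection adds the input back
    # (this assumes x already has `filters` channels)
    out = tf.keras.layers.Add()([f, x])
    return tf.keras.layers.ReLU()(out)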

"Normal" networks work too but, at some point, they become too deep and start working worse than shallower versions (they empirically showed that in the paper). Again, they argue that the reason for that might be that at deeper layers hidden representations become approximately similar $H_n \approx H_{n+1}$ because representation is already well learned and you only need some slight adjustments. That would mean that transformation for deeper layers is similar to identity transformation and that ordinary layers might have trouble learning that, while for residual blocks it would be easy to learn slight modification and add that to the already existing representation from the previous layer.

",20339,,2444,,1/17/2021 14:11,1/17/2021 14:11,,,,3,,,,CC BY-SA 4.0 18705,2,,18702,3/18/2020 10:10,,1,,"

This sounds to me like a use case for a chatbot. You would have different intents reflecting the types of user queries that your system can respond to. The intent matching can be done by pattern matching, machine learning (classification), or a combination of the two (hybrid). You can then use the chatbot to ask clarification questions or elicit more information to identify which live agent would be the best person to take over the call. Essentially each live agent would have a list of intents plus added information (such as geographical area etc) which you then compare against the data from the caller to find the best match.

If this is a voice call you'd need to put an ASR system at the front of the pipeline. Chatbots can usually do live-agent handover to then pass control to a human agent at any time in the conversation.

[Disclaimer: I work for a company that operates in exactly that area and whose system works as described above]

",2193,,,,,3/18/2020 10:10,,,,0,,,,CC BY-SA 4.0 18706,1,,,3/18/2020 11:57,,3,90,"

Whenever I tune my neural network, I usually take the common approach of defining some layers with some neurons.

  • If it overfits, I reduce the layers, neurons, add dropout, utilize regularisation.

  • If it underfits, I do the other way around.

But it sometimes feels illogical doing all these. So, is there a more principled way of tuning a neural network (i.e. find the optimal number of layers, neurons, etc., in a principled and mathematical sound way), in case it overfits or underfits?

",21936,,2444,,3/21/2020 14:43,3/21/2020 14:43,Are there principled ways of tuning a neural network in case of overfitting and underfitting?,,0,1,,,,CC BY-SA 4.0 18707,1,,,3/18/2020 13:45,,5,297,"

A book on evolutionary computation by De Jong mentions both the term evolutionary algorithms (EA) as well as evolutionary computation (EC). However, it remains unclear to me what the difference between the two is. According to Vikhar, EA forms a subset of EC. However, it remains unclear to me what sort of topics/algorithms would be considered EC but not EA. Is there a clear difference between the two? If so, what is this difference?

",34351,,2444,,5/18/2022 18:27,5/18/2022 18:57,What is the difference between evolutionary computation and evolutionary algorithms?,,2,0,,,,CC BY-SA 4.0 18708,1,,,3/18/2020 15:12,,3,128,"

I'm working currently on a problem and I'm using RL (bandit problem).

In my system, I have an agent that chooses an action among $k$ possible actions, and a user that decides whether the agent chooses the right action or not. If the user is satisfied with the decision made by the agent, he rewards with $+1$, otherwise $-1$.

Is this a good reward function, knowing that in my problem the values are in the range $[0, 1]$?

Are there any guidelines to follow for defining the reward function? Are there any references (books or articles) that tackle this problem and present a solution?

",34355,,2444,,3/19/2020 15:10,3/19/2020 15:10,What are the guidelines for defining a reward function in reinforcement learning (bandit problem)?,,0,6,,,,CC BY-SA 4.0 18709,1,,,3/18/2020 16:46,,2,40,"

I learned that, when creating neural networks, the go-to approach was to overfit and then regularize. However, I am now in a situation where, when I make the model more complex (more layers, more filters, ...), my scores become worse.

I am training a CNN to predict pollution 6 hours in advance. The input I give to my model is the pollution of the past 18 hours.

Can I safely say that, because there is probably a lot of noise in this data, that is the reason my model becomes worse when I increase its complexity?

",34359,,,,,3/18/2020 16:46,Why does model complexity increase my validation score by a lot?,,0,4,,,,CC BY-SA 4.0 18710,2,,18707,3/18/2020 18:05,,1,,"

As you can find on Wikipedia:

Evolutionary algorithms form a subset of evolutionary computation in that they generally only involve techniques implementing mechanisms inspired by biological evolution such as reproduction, mutation, recombination, natural selection, and survival of the fittest.

This means that other types of evolution, which are not necessarily inspired by biological evolution, are found in evolutionary computation but not in evolutionary algorithms. For example, learning classifier systems are in EC, as they are evolutionary, but not completely in EA, as they are not biologically inspired.

",4446,,145,,5/18/2022 18:57,5/18/2022 18:57,,,,2,,,,CC BY-SA 4.0 18711,1,,,3/18/2020 19:39,,1,32,"

The following is the MDS Objective.

Let's think of a senario where I apply MDS with/from the solution I obtained from PCA. Then I calculate the objective function on the initial PCA solution and MDS solution (after applying MDS on the former PCA solution). Then I would for sure assume that the objective function will decrease for the MDS solution compared with PCA solution. However, when I calculate the objective function respectively, MDS solution yields higher objective function value. Is this normal?

I am attaching my code below:

import os
import pickle
import gzip
import argparse
import time
import matplotlib.pyplot as plt
import numpy as np
from numpy.linalg import norm

from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.neural_network import MLPRegressor, MLPClassifier
from sklearn.preprocessing import LabelBinarizer
from sklearn import decomposition

from neural_net import NeuralNet, stochasticNeuralNet
from manifold import MDS, ISOMAP
import utils

def mds_objective(Z,X):
    sum = 0
    n,d = Z.shape
    for i in range(n):
        for j in range(i+1,n):
            sum += (norm(Z[i,:]-Z[j,:],2)-norm(X[i,:]-X[j,:],2))**2
    return 0.5*sum

dataset = load_dataset('animals.pkl')
X = dataset['X'].astype(float)
animals = dataset['animals']
n, d = X.shape
pca = decomposition.PCA(n_components = 5)
pca.fit(X)
Z = pca.transform(X)
plt.figure()
plt.scatter(Z[:, 0], Z[:, 1])
for i in range(n):
     plt.annotate(animals[i], (Z[i,0], Z[i,1]))
utils.savefig('PCA.png')

print(pca.explained_variance_ratio_)
print(mds_objective(Z,X))


dataset = load_dataset('animals.pkl')
X = dataset['X'].astype(float)
animals = dataset['animals']
n,d = X.shape

model = MDS(n_components=2)
Z = model.compress(X)

fig, ax = plt.subplots()
ax.scatter(Z[:,0], Z[:,1])
plt.ylabel('z2')
plt.xlabel('z1')
plt.title('MDS')
for i in range(n):
       ax.annotate(animals[i], (Z[i,0], Z[i,1]))
utils.savefig('MDS_animals.png')
print(mds_objective(Z,X))

It prints the following:

1673.1096816455256

1776.8183112784652

",34333,,-1,,6/17/2020 9:57,3/18/2020 19:39,Multiple-dimension scaling (MDS) objective for MDS and PCA,,0,3,,,,CC BY-SA 4.0 18712,1,,,3/18/2020 21:17,,4,240,"

I am using the cross-entropy cost function and calculating its derivatives with respect to different variables $Z, W$ and $b$ at different stages. Please refer to the image below for the calculation.

As far as I know, my derivation is correct for $dZ, dW, db$ and $dA$, but, if I refer to Andrew Ng's Coursera material, I see an extra $\frac{1}{m}$ for $dW$ and $db$, whereas there is no $\frac{1}{m}$ in $dZ$. Andrew's slides on the left show the derivatives, whereas the right side of the slides shows the NumPy implementation corresponding to those equations.

Can someone please explain why there is:

1) $\frac{1}{m}$ in $dW^{[2]}$ and $db^{[2]}$ in Andrew's slides in NumPy representation

2) missing $\frac{1}{m}$ for $dZ^{[2]}$ in Andrew's slides in both normal and NumPy representation.

Am I missing something or doing it in the wrong way?

",34365,,2444,,3/19/2020 21:17,1/14/2021 1:03,Why is my derivation of the back-propagation equations inconsistent with Andrew Ng's slides from Coursera?,,1,0,,,,CC BY-SA 4.0 18713,2,,18701,3/19/2020 1:44,,3,,"

According to the Baidu Research's blog post How Baidu is harnessing the power of AI in the battle against coronavirus (12-03-2020), there are already some artificial intelligence tools or algorithms being used to fight the coronavirus.

Given that I cannot confirm that these AI tools and algorithms I will mention are really being used in practice, I will only quote the parts of the blog post that potentially answer my original question.

To give some context, similar to HIV viruses, the virus that is causing the coronavirus pandemic, SARS-CoV-2 is capable of rapidly mutating, making vaccine development and virus analysis difficult.

AI-powered and non-contact infrared sensor system

Baidu has developed several tools that are effective in building awareness and screening populations, including an AI-powered, non-contact infrared sensor system that provides users with fast multi-person temperature monitoring that can quickly detect a person if they are suspected of having a fever, one of the many symptoms of the coronavirus. This technology is currently being used in Beijing's Qinghe Railway Station to identify passengers who are potentially infected where it can examine up to 200 people in one minute without disrupting passenger flow.

AI-powered pneumonia screening and lesion detection system

By leveraging PaddlePaddle and the semantic segmentation toolkit PaddleSeg, LinkingMed has developed an AI-powered pneumonia screening and lesion detection system, putting it into use in the hospital affiliated with XiangNan University in Hunan Province. The system can pinpoint the disease in less than one minute, with a detection accuracy of 92% and a recall rate of 97% on test data sets.

Automated HealthMap system

The Boston Children's Hospital used an automated HealthMap system that scans online news and social media reports for early warning signs of outbreaks, which led to the initial awareness that COVID-19 was spreading outside China.

Autonomous vehicles carry out non-contact tasks

Access to health care and resources at a moment's notice is vital for battling the spread of the coronavirus. Autonomous vehicles are playing a useful role in providing access to necessary commodities for health-care professionals and the public alike by delivering goods in infected areas and disinfecting hospitals, effectively minimizing person-to-person transmission and alleviating the shortage of medical staff.

Apollo, Baidu's autonomous vehicle platform, partnered with a local self-driving startup called Neolix to deliver supplies and food to the Beijing Haidian Hospital.

",2444,,-1,,6/17/2020 9:57,3/28/2020 21:07,,,,0,,,,CC BY-SA 4.0 18714,2,,18712,3/19/2020 1:56,,1,,"

TL;DR: This has to do with the way A. Ng has defined back propagation for the course.

Left Column

This is only with respect to one input example and so the $\frac{1}{m}$ factor reduces to 1 and can be omitted. He uses lower case to represent one input example (eg a vector $dz$) and upper case with respect to a (mini-)batch (eg a matrix $dZ$).

The $\frac{1}{m}$ factors in $dW,db$

In this definition of backprop, he ""defers"" multiplying by the $\frac{1}{m}$ factor until $dW,db$ rather than ""absorbing"" it into $dZ^{[2]}$. That is, the $dZ^{[2]}$ term is defined in a way that it does not have $\frac{1}{m}$.

Observe, if you move the $\frac{1}{m}$ factor to be in the definition of $dZ^{[2]}$ and remove it from the definitions of $dW,db$ you will still come out with the same values for all $dW,db$.
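A tiny numerical check of that claim (the shapes and values below are made up purely for illustration):

import numpy as np

m = 4                           # number of examples in the (mini-)batch
dZ = np.random.randn(3, m)      # gradient w.r.t. the pre-activations, one column per example
A_prev = np.random.randn(5, m)  # activations from the previous layer

# Option 1: defer the 1/m factor to dW (A. Ng's convention)
dW_deferred = (1 / m) * dZ @ A_prev.T

# Option 2: absorb the 1/m factor into dZ first
dW_absorbed = ((1 / m) * dZ) @ A_prev.T

print(np.allclose(dW_deferred, dW_absorbed))  # True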

Speculation

This ""deferred"" multiplication might have to do with numerical stability. Or simply a stylistic choice made by A. Ng. This might also prevent one from ""accidentally"" multiplying by $\frac{1}{m}$ more than once.

",28343,,28343,,3/19/2020 2:06,3/19/2020 2:06,,,,2,,,,CC BY-SA 4.0 18715,1,,,3/19/2020 5:52,,0,98,"

Some of the NLP applications taken from this link NLP Applications:

  • Machine Translation
  • Speech Recognition
  • Sentiment Analysis
  • Question Answering
  • Automatic Summarization
  • Chatbots
  • Market Intelligence
  • Text Classification
  • Character Recognition
  • Spell Check

Which are the NLP applications that are based on recurrent neural networks?

",9863,,1671,,4/23/2020 23:02,1/9/2023 10:06,Which NLP applications are based on recurrent neural networks?,,2,0,,,,CC BY-SA 4.0 18716,2,,18715,3/19/2020 8:29,,0,,"

Speech recognition and Character recognition are not part of NLP. Everything else on your list can in principle be done with RNNs. But the field is quickly moving towards using transformers.

",2227,,,,,3/19/2020 8:29,,,,2,,,,CC BY-SA 4.0 18717,1,18719,,3/19/2020 9:17,,1,151,"

Suppose we have a policy $\pi$ and we use SARSA to evaluate $Q^\pi(s, a)$, where $a$ is the action given by the policy $\pi$.

Can we say that $Q^\pi(s, a) = V^\pi(s)$?

The reason why I think this can be the case is that $Q^\pi(s, a)$ is defined as the value obtained from taking action $a$ and then following policy $\pi$ thereafter. However, the action $a$ taken is the one chosen according to $\pi$ for all $s \in S$. This seems to correspond to the value function equation $V^\pi(s_t) = r(s_t) + \gamma V^\pi(s_{t+1})$.

",32780,,2444,,3/19/2020 13:50,3/19/2020 14:16,What is the relationship between the Q and V functions?,,1,0,,,,CC BY-SA 4.0 18718,2,,16138,3/19/2020 10:05,,1,,"

The model has learnt the ""features"" for the type of inputs, e.g. faces. For the problem to be called one-shot, it needs to also correctly classify/compare any new samples. For example, in a face recognition application, any new person's images should also give a positive result against their own image and a negative result against any other seen or unseen image.

Since we are using the Euclidean distance of the final feature layer and not performing any final classification, we can say we are using the weights of a pretrained network and computing the final value using that distance, which is transfer learning. There is no backprop in this, but what you want to do with the embeddings, such as learning a threshold function, can be considered learning.

",27875,,,,,3/19/2020 10:05,,,,0,,,,CC BY-SA 4.0 18719,2,,18717,3/19/2020 10:30,,2,,"

Can we say that $Q^\pi(s, a) = V^\pi(s)$

No.

The correct relationship is this:

$$V^\pi(s) = \sum_a \pi(a|s) Q^\pi(s, a)$$

or, if you have a deterministic policy $a = \pi(s)$ you can instead write:

$$V^\pi(s) = Q^\pi(s, \pi(s))$$

Intuitively, this is because the $V^\pi(s)$ is the expected future return when following the policy $\pi$ from state $s$, whilst $Q^\pi(s, a)$ is the expected future return where it ignores the policy for only the next action $a$ (which will decide immediate reward $r$ and next state $s'$ independently of the policy), and thereafter follows $\pi$.

The above equations essentially show what happens when you apply the policy in state $s$ to decide which Q value(s) to use: they remove the independent choice of $a$ in $Q^\pi(s,a)$.
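As a tiny numerical illustration of the first equation (with made-up values):

import numpy as np

pi = np.array([0.2, 0.5, 0.3])   # pi(a|s) for three actions in state s
q = np.array([1.0, 2.0, -1.0])   # Q^pi(s, a) for the same three actions

v = np.sum(pi * q)               # V^pi(s) = sum_a pi(a|s) * Q^pi(s, a)
print(v)                         # 0.2*1.0 + 0.5*2.0 + 0.3*(-1.0) = 0.9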

One possible misunderstanding that you have is that ""on-policy"" means the same thing as the equations show when considering action values (Q values) - it does not. When any algorithm learns action values, it learns the same thing conceptually, i.e. the expected (and maybe discounted) sum of future rewards given state $s$ when making a free choice of $a$ for the next step, and thereafter strictly following the policy being evaluated.

What is different between on-policy and off-policy is which policy gets evaluated - for on-policy methods like SARSA you evaluate action values for the same policy that you use to generate actions. For off-policy methods like Q-learning you evaluate a different target policy. Both approaches have the same interpretation of what $Q(s,a)$ means otherwise, and have the same relationship between Q and V for their respective policies.

",1847,,1847,,3/19/2020 14:16,3/19/2020 14:16,,,,1,,,,CC BY-SA 4.0 18721,1,,,3/19/2020 12:25,,1,79,"

This has been a mystery to me.

All the walking robots look like idiots now. But we do have a lot of simulation-based results (Flexible Muscle-Based Locomotion for Bipedal Creatures), so why can't we just apply the simulation results to a real robot and let it walk, not like an idiot, but like a running ostrich?

With the main loop running at more than 60 fps, I fail to see how the program could possibly fail to stop the robot from losing balance. When I balance a stick on my hand, I can probably only manage about 5 fps.

We have not only supercomputers connected to the robots, but also reinforcement learning algorithms at our disposal, so what has been the bottleneck in the problem of bipedal walking?

",34382,,,user9947,3/19/2020 16:16,3/19/2020 16:16,Why haven't we solved the problem of bipedal walking?,,0,2,,,,CC BY-SA 4.0 18723,1,,,3/19/2020 17:59,,4,196,"

In the original ResNet paper they talk about using plain identity skip connections when the input and output of a block have the same dimensions.

When the input and output have different dimensions they propose two options:

(A) Use an identity mapping padded with zeros to make up for the extra dimensions

(B) Use a ""projection"".

which (after some digging around in other people's code) I see as meaning: do a convolution with a 1x1 kernel with trainable weights.

(B) is confusing to me because it seems to ruin the point of ResNet by making the skip connection trainable. Then the main path is not really learning a ""residual"" relative to an identity transformation. So at this point, I'm no longer sure how to interpret the intent or expected effect of this type of block. And I would think that one should justify doing it in the first place instead of just not putting a skip connection there at all (which in my mind is the status quo before this paper).

So can anyone help explain away my confusion here?

",16871,,,,,3/20/2020 12:43,"If the point of the ResNet skip connection is to let the main path learn the residual relative to identity, why are there convolutional skips?",,1,8,,,,CC BY-SA 4.0 18724,2,,18431,3/19/2020 18:33,,0,,"

What about GANs or genetic algorithms?

The first idea (GAN) is that you basically create 2+ random bots that fight each other, and they keep adjusting their weights so that they can beat the other bot. That means that those 2+ bots keep improving their ""fighting performance"" for as long as you want, eventually becoming even better than humans.

The second idea (genetic algorithms) is to generate a lot of bots that genetically differ from each other by just a slight mutation. You make them fight, and the best-performing/last-standing one becomes the new parent from which the next ""lot of bots"" gets generated.

",32751,,,,,3/19/2020 18:33,,,,0,,,,CC BY-SA 4.0 18727,2,,15976,3/19/2020 20:30,,0,,"

A little more information about the documents would be helpful. I am guessing that your scenario has webpages from different websites, you're feeding HTML pages to the network, and the page contains the website name or URL, which the network is picking up on and using to label. I am assuming you're using an RNN or a similar network for the classification task.

Your question is to identify when your network is using the source identifying feature (name or url), and instruct it to not use this feature. If you can do some preprocessing and identify where in the document the source name/url is, you may tag each word/character in the input with this information. You may use this tag to add a large regularizing loss term which brings down the weights connected to the tagged input words/characters in the first layer.

However, a better strategy would be to remove all instances of the header/URL that contain website info, as @clement-hui suggested. This is cleaner and easier to implement than the above.

Perhaps your network is identifying the document source slightly more indirectly, and the above preprocessing is not sufficient/useful. In this case, you may want to make random chunks of your document and give each chunk the label corresponding to that document. Here it is more likely that some chunks will not have the source-identifying information, and the network will be forced to pick a strategy which labels using the content rather than the source.

At test time you may either feed the whole document or random chunks from the document and use majority voting to get the final answer.
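A minimal sketch of the chunking plus majority-voting idea, assuming you already have a trained model with some predict_label method and a tokenize helper (both names are hypothetical):

from collections import Counter

def predict_document(model, tokenize, document, chunk_size=200):
    tokens = tokenize(document)
    # split the document into fixed-size chunks
    chunks = [tokens[i:i + chunk_size] for i in range(0, len(tokens), chunk_size)]
    # classify each chunk independently, then take a majority vote
    votes = [model.predict_label(chunk) for chunk in chunks]
    return Counter(votes).most_common(1)[0][0]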

",20799,,,,,3/19/2020 20:30,,,,0,,,,CC BY-SA 4.0 18728,1,,,3/20/2020 1:36,,3,40,"

I have an input vector $X$, which contains a series of measurements within a period, e.g. 100 measurements in 1 sec. The goal is to predict an event, let's say, moving forward, backward or static.

I don't want to predict the output just by looking at one series of measurements, but by looking at a window of $n$ vectors $X$ of measurements, making the prediction dependent on the previous measurements, because of the noise in the measurements.

Is there a way RNN can help me with this? Many to one architecture? LSTM? CNN of 1D + LSTM + dense?

",34390,,2444,,3/20/2020 2:09,3/20/2020 2:09,How to predict an event (or action) based on a window of time-series measurements?,,0,3,,,,CC BY-SA 4.0 18729,1,,,3/20/2020 1:45,,4,558,"

I am reading the book ""Artificial Intelligence: A Modern Approach"" by Stuart and Norvig. I'm having trouble understanding a step of the recursive best-first search (RBFS) algorithm.

Here's the pseudocode.

What does the line s.f <- max(s.g + s.h, node.f) do?

Here's a diagram of the execution of RBFS.

",23895,,2444,,3/20/2020 3:01,3/22/2020 4:29,What does the statement with the max do in the recursive best-first search algorithm?,,1,4,,,,CC BY-SA 4.0 18730,1,18731,,3/20/2020 8:13,,2,73,"

I'm relatively new to image classification. Currently, I am trying to classify insect images, using a convolutional neural network (CNN). When I ask a human expert to identify an insect, I usually provide 2 photos: back and face. It seems that sometimes one feature stands out and allows identification with high certainty (""spots on the back - definitely a ladybug""), while other times you need to cross-reference both angles (""grey back could mean a few things, but after cross-referencing with the eyes - it's a moth"").

How is it customary to implement this? Naively I was considering:

  1. Two separate networks, one for backs and one for faces? If so, what formula is best for weighing in their outputs?

  2. Single network, but separate dual classifications - e.g. ""moth face"", ""moth back"", ""ladybug face"", ""ladybug back""?

  3. A single network, feed everything naively (e.g. moths from different angles, all with the same classification ""moth"") and rely on the NN to sort it out itself?

",34389,,2444,,3/21/2020 14:31,3/21/2020 14:32,How to perform insect classification given two images of the same insect?,,1,0,,,,CC BY-SA 4.0 18731,2,,18730,3/20/2020 9:43,,1,,"

There are several ways you can do this.

One is to feed both images as input, so it can be a 2-input system or a single input with 6 channels (the two RGB images stacked).

As you suggested in the 1st point, you can make 2 networks, connect them at the end and add another layer for the final classification, or use the outputs from both and train another classifier on them (like gradient boosting). You can look up ensemble techniques and ensembles of neural networks for more.
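A minimal Keras sketch of the first option, with two convolutional branches merged before the final classification (the input size and layer widths are placeholders):

from tensorflow.keras import layers, Model, Input

def build_two_view_model(num_classes):
    def branch(inp):
        x = layers.Conv2D(32, 3, activation='relu')(inp)
        x = layers.MaxPooling2D()(x)
        x = layers.Flatten()(x)
        return x
    back_in = Input((224, 224, 3))   # photo of the back
    face_in = Input((224, 224, 3))   # photo of the face
    merged = layers.Concatenate()([branch(back_in), branch(face_in)])
    out = layers.Dense(num_classes, activation='softmax')(merged)
    return Model([back_in, face_in], out)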

For the dual-classification and naive approaches, you will again have a problem if there are some insects that can only be identified from features at a specific angle; if so, you can then discard the other angles (e.g. ignore the moth face if the moth back is the distinguishing feature).

",27875,,2444,,3/21/2020 14:32,3/21/2020 14:32,,,,0,,,,CC BY-SA 4.0 18732,2,,18723,3/20/2020 12:43,,3,,"

Well, I found an answer that satisfies me.

The zero-padded identity is not ideal. Suppose we're mapping from 64 channels to 128 channels. Then the zero-padded identity will map to an output where half of the channels are the same as the inputs, and the other half are all zeros. So that means the main path is learning a residual for half of the output channels, and learning a mapping from the ground up for the other half of the channels.

Now, the alternative is to use option (B), which is the single 1x1 convolution. After reading their ResNetV2 paper I realised that they really double down on this concept of maintaining a ""clean"" alternative path all the way up and down the network. So then the question is, what's ""cleaner"": a block of two convolutions of the form 3x3x64 then 3x3x128, or a shortcut with a 1x1x128 convolution? The shortcut is a tempting choice here. And in fact, in the original paper they show empirically that the convolutional shortcut is better than the identity shortcut. It might be worth running a test to see if the convolutional shortcut is better than no shortcut at all, but until I decide to run such a test, I'll presume the authors would have mentioned it. The other thing worth noting is that there are only a few places in the whole network where the dimensions need to be increased, so maybe the impact is small.
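For concreteness, here is a minimal PyTorch sketch of the two shortcut options for a block that doubles the channels and halves the spatial size (batch norm is omitted for brevity, and the class/argument names are mine, not the paper's):

import torch.nn as nn
import torch.nn.functional as F

class DownsampleBlock(nn.Module):
    def __init__(self, in_ch=64, out_ch=128, projection=True):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1, bias=False)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1, bias=False)
        # option (B): a trainable 1x1 convolution on the shortcut path
        self.proj = nn.Conv2d(in_ch, out_ch, 1, stride=2, bias=False) if projection else None

    def forward(self, x):
        out = self.conv2(F.relu(self.conv1(x)))
        if self.proj is not None:
            shortcut = self.proj(x)            # option (B)
        else:
            # option (A): strided identity, zero-padded in the channel dimension
            shortcut = F.pad(x[:, :, ::2, ::2], (0, 0, 0, 0, 0, out.size(1) - x.size(1)))
        return F.relu(out + shortcut)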

",16871,,,,,3/20/2020 12:43,,,,0,,,,CC BY-SA 4.0 18735,2,,16004,3/20/2020 17:09,,1,,"

If the noise is confined to a particular spectral band, Fourier transform followed by filtering, followed by an inverse Fourier transform will work. If it is multiplicative noise, filtering the Fourier transform of the logarithm of the signal might work.
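As a minimal numpy sketch of the first idea (the cutoff frequencies are arbitrary placeholders, to be chosen from the actual noise band):

import numpy as np

def bandstop(signal, fs, f_lo, f_hi):
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    # zero out the spectral band that contains the noise
    spectrum[(freqs >= f_lo) & (freqs <= f_hi)] = 0
    return np.fft.irfft(spectrum, n=len(signal))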

Really, the nature of the noise determines what's possible and the best way to remove it.

",28348,,,,,3/20/2020 17:09,,,,0,,,,CC BY-SA 4.0 18738,1,,,3/21/2020 10:55,,2,61,"

I am approaching the implementation of the OpenPose algorithm for realtime human body pose estimation.

According to the official paper OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields, $L$ and $S$ fields (body part maps and part affinity fields) are estimated. These have the same size as the input image and, according to the paper, should be output at a given step in the forward pass (after a given number of $L$ stages and $S$ stages). However, since the image is first passed through the initial layers of the VGG-19 model before entering these stages, the spatial dimensions are reduced, and the features that finally enter the $L$ and $S$ stages have a different dimensionality.

All the network is convolutional, there's no FC layer at all. The VGG-19 part is the only one that contains MaxPooling layers, hence affecting the spatial relations and size of the receptive fields.

My point is, after stage execution, I get tensors of shape [batch_size, filter_number, 28, 28]. The issue is that the paper is not stating how to decode this information into the $L$ and $S$ maps of size $224 \times 224$.

Following a traditional approach and decoding the final tensors with a linear layer from, let's say, $15000 \rightarrow (224 * 224 * \text{ number of body parts }) + (224 * 224 * \text{ number of limbs } * 2)$ (a very huge number!) is out of the question for any domestic computer; I presume I would need at least 128 GB of RAM installed, which is not the case.

Another solution is to remove the max-pooling layers from the VGG-19 part, but then, although the map size is preserved at $224$ instead of $28$, the huge amount of computation and the number of values that need to be stored also lead to memory errors.

So, the problem is, how can I get to a final output of $224 \times 224$ without FC layers, from a tensor of shape [batch_size, bodyparts, 28, 28]?

Not an easy answer. I will check a TensorFlow implementation I have seen around to see how the problem was solved.

Any parallel ideas are greatly welcome.

",22869,,2444,,3/21/2020 14:24,3/21/2020 14:24,"How can I get to a final output of shape $224 \times 224$, without FC layers, from a tensor of specific shape, in OpenPose?",,0,4,,,,CC BY-SA 4.0 18741,1,,,3/21/2020 13:06,,1,43,"

I'm looking for a community or competition website related to human aggression detection using Deep Learning in a video. Also, I'm looking for a dataset of human aggression activities.

Any suggestions would be appreciated.

",18245,,,,,3/21/2020 13:06,"Human Aggression Detection Community, Competition and dataset",,0,3,,,,CC BY-SA 4.0 18742,1,,,3/21/2020 14:36,,3,149,"

I have a dataset and want to be able to construct a graph from it in a supervised fashion.

Let's assume I have a dataset with N nodes, each node has e.g. 10 features. Out of these N nodes, I want to learn a graph, i.e. an $N \times N$ adjacency matrix. So, I start with $N$ nodes and all I know is a 10-dimensional feature vector for each node. I have no knowledge about the relation between these nodes and want to figure it out.

Here is an example for $N=6$, but in practice $N$ is not fixed.

So the output I would like to get here is a $6\times6$ adjacency matrix, representing the relations between the nodes (undirected).

Note: N is arbitrary and not fixed. So an algorithm should be able to perform on any given N.

My dataset is labeled. For the training dataset, I have the desired adjacency matrix for each collection of input nodes, which is filled with $0$s and $1$s.

However, the output of the algorithm could also be an adjacency matrix filled with non-integer numbers in $[0,1]$, giving some kind of probability of the nodes being connected (preferably close to $0$ or $1$ of course). So I could easily give a number as the label for each node. In the above example, the labels for the three connected nodes could be class $1$, and so on.

Is there any kind of supervised learning algorithm (e.g. some sort of graph neural network) that can perform these tasks?

",34411,,34411,,4/24/2020 13:41,4/24/2020 13:41,How can I learn a graph given nodes with features in a supervised fashion?,,1,3,,,,CC BY-SA 4.0 18743,2,,17456,3/21/2020 16:20,,1,,"

There are other sources that will lead to different results in addition to weight initialization, for example dropout layers. Make sure you specify the random seed. Also, when reading data using flow_from_directory, make sure you set shuffle to False, or, if you do not, set the random seed there as well. If you use transfer learning, make that part of your network non-trainable. Some networks have dropout in them and do not provide a way to set the random seed. If you are using a GPU, there are even more issues to contend with.
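A minimal sketch of fixing the usual sources of randomness, assuming TensorFlow 2.x / Keras (GPU-level non-determinism may still remain on top of this):

import os, random
import numpy as np
import tensorflow as tf

os.environ['PYTHONHASHSEED'] = '0'
random.seed(42)
np.random.seed(42)
tf.random.set_seed(42)
# and for the data generator, e.g. flow_from_directory(..., shuffle=False) or pass seed=42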

",33976,,,,,3/21/2020 16:20,,,,0,,,,CC BY-SA 4.0 18744,1,,,3/21/2020 17:40,,0,201,"

In the regularized cost function, an L2 regularization cost has been added.

Here we have already calculated cross entropy cost w.r.t $A, W$.

As mentioned in the regularization notebook (see below), in order to derive the regularized $J$ (cost function), the changes only concern $dW^{[1]}$, $dW^{[2]}$ and $dW^{[3]}$. For each, you have to add the regularization term's gradient. (No impact on $dA^{[2]}$, $db^{[2]}$, $dA^{[1]}$ and $db^{[1]}$?)
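For reference, the regularized cost I am assuming here (as given in the notebook) is

$$J_{regularized} = \underbrace{-\frac{1}{m}\sum_{i=1}^{m}\left(y^{(i)}\log a^{[L](i)} + (1-y^{(i)})\log\left(1-a^{[L](i)}\right)\right)}_{\text{cross-entropy cost}} + \underbrace{\frac{\lambda}{2m}\sum_{l}\sum_{k}\sum_{j}\left(W_{k,j}^{[l]}\right)^2}_{\text{L2 regularization cost}}$$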

But when I do it using the chain rule, I get changed values for $dA^{[2]}$, $dZ^{[2]}$, $dA^{[1]}$, $dW^{[1]}$ and $db^{[1]}$.

Please refer below to how I calculated this.

Can someone explain why I am getting different results?

What is the derivative of the L2 regularization cost w.r.t. $dA^{[2]}$? (in equation 1)

So my questions are

1) What is the derivative of the L2 regularization cost w.r.t. $dA^{[2]}$?

2) How does adding the regularization term not affect $dA^{[2]}$, $db^{[2]}$, $dA^{[1]}$ and $db^{[1]}$ (i.e. the $dA$'s and $db$'s) but change the $dW$'s?

",34365,,,user9947,3/24/2020 9:18,3/24/2020 9:18,Derivation of regularized cost function w.r.t activation and bias,,0,3,,,,CC BY-SA 4.0 18745,1,18746,,3/21/2020 20:26,,7,956,"

I recently wrote an application using a deep learning model designed to classify inputs. There are plenty of examples of this using images of irises, cats, and other objects.

If I trained a data model to identify and classify different types of irises and I show it a picture of a cat, is there a way to add in an ""unknown"" or ""not a"" classification or would it necessarily have to guess what type of iris the cat most looks like?

Further, I could easily just add another classification with the label ""not an iris"" and train it using pictures of cats, but then what if I show it a picture of a chair (the list of objects goes on).

Another example would be in natural language processing. If I develop an application that takes the input language and spits out ""I think this is Spanish"", what if it encounters a language it doesn't recognize?

",34021,,2444,,1/17/2021 12:14,1/17/2021 12:14,How should the neural network deal with unexpected inputs?,,1,2,,,,CC BY-SA 4.0 18746,2,,18745,3/21/2020 23:11,,11,,"

This is a very important problem that is usually overlooked. In fact, when training a neural network, there's often the implicit assumption that the data is independent and identically distributed, i.e. you do not expect the data to come from a distribution different from the distribution from which your training data comes. There's also the implicit assumption that the data comes from the same family of distributions (e.g. only Gaussians) and that all your training examples are independently drawn from the same distribution (specific mean and variance). Of course, this is a big limitation!

A partial solution to your problem is to use a Bayesian neural network (BNN). The idea of a BNN is to associate, rather than a single number, a distribution (usually a Gaussian distribution) with each unit (or neuron) of the neural network. Therefore, for each unit of the network, there are two learnable parameters: the mean and variance of a Gaussian distribution. Consequently, a BNN usually has double the number of parameters of a conventional (or non-Bayesian) neural network. However, by learning a distribution for each parameter, you also learn the uncertainty about the potential true value of each unit, based on the available training data.

The forward passes of such a BNN are stochastic, i.e. you sample from each of these Gaussian distributions for every forward pass, so the output of the network is also stochastic (i.e. given the same input example, the output may be different each time).

If your dataset is small, one expects the BNN to have wide Gaussian distributions, i.e. high uncertainty about the true value of the units. So, one expects a BNN to be able to deal with unexpected inputs more robustly. To be more precise, if you train a BNN with a small dataset, the hope is that Gaussian distributions will be wide and thus the outputs of the BNN will be highly variable (i.e. the model is highly uncertain). The more data is gathered, the less uncertain the BNN should be.
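As a minimal sketch of how this uncertainty could be used in practice, assuming bnn_forward is a stochastic forward pass of such a network (a hypothetical function that returns class probabilities and samples the weights internally on every call):

import numpy as np

def predict_with_uncertainty(bnn_forward, x, num_samples=100):
    # run several stochastic forward passes on the same input
    samples = np.stack([bnn_forward(x) for _ in range(num_samples)])
    mean = samples.mean(axis=0)   # averaged class probabilities
    std = samples.std(axis=0)     # spread across passes = uncertainty
    return mean, std

A large spread for a given input is a hint that the input is unlike the training data, so you could, for example, refuse to classify it in that case.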

This doesn't completely solve your problem, but it should at least mitigate it, i.e. if you provide an unseen example to the BNN, then ideally it should be uncertain about the actual label of that input.

For simplicity, I didn't explain certain details of BNNs, but, at least, this answer gives you a potential solution. Of course, this doesn't exclude the possibility of having an "unknown" class. The approaches are not mutually exclusive. There may also be other solutions, but I am not aware of any.

",2444,,2444,,1/17/2021 12:14,1/17/2021 12:14,,,,0,,,,CC BY-SA 4.0 18747,2,,18742,3/22/2020 0:00,,2,,"

It's perfectly reasonable to apply 'traditional' Deep Learning approaches to try and learn an adjacency matrix (a matrix is just a vector of vectors, which can be flattened into a single output vector) but you might need a lot of training data as N gets larger.

Your outputs could certainly have the form of an adjacency matrix, as you describe. Whether it's more useful to have 'boolean' (either 0 or 1) or 'probabilistic' entries in the matrix depends both on the data and the specifics of your end application.

",42,,,,,3/22/2020 0:00,,,,4,,,,CC BY-SA 4.0 18749,2,,18729,3/22/2020 4:29,,1,,"

This is probably more easily understood as the collapse/restore macro. The idea is that the previously explored state was collapsed and only the minimum f-cost from the sub-tree was stored. This represents the best unexpanded state in the subtree that was collapsed.

When restoring the portion of the collapsed tree, the f-cost of the restored node could either be the original f-cost (g+h), or it could be the stored f-cost if it is larger. By taking the max, the code ensures that states that are restored maintain at least the cost of the previously best unexpanded state. (If the g+h cost is larger, then we know the state wasn't previously expanded and it wasn't previously the state on the fringe with the minimum edge cost.)
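In code form, the line in question sits inside the loop that (re)generates the successors; a rough Python transcription of that part of the book's pseudocode (successors, cost and h are placeholders for the problem-specific functions) would be:

for s in successors(node):
    s.g = node.g + cost(node, s)
    # the child's f-value is at least the backed-up bound stored in node.f
    s.f = max(s.g + h(s), node.f)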

The linked paper gives several examples where similar ideas are used during search.

",17493,,,,,3/22/2020 4:29,,,,1,,,,CC BY-SA 4.0 18751,1,18752,,3/22/2020 7:35,,4,105,"

[TL;DR]

I generated two classes Red and Blue on a 2D space. Red are points on Unit Circle and Blue are points on a Circle Ring with radius limits (3,4). I tried to train a Multi Layer Perceptron with different number of hidden layers, BUT all the hidden layers had 2 neurons. The MLP never reached 100% accuracy. I tried to visualize how the MLP would classify the points of the 2D space with Black and White. This is the final image I get:

At first, I was expecting that the MLP could classify 2 classes on a 2D space with 2 neurons at each hidden layer, and I was expecting to see a white circle encapsulating the red points and the rest being black space. Is there a (mathematical) reason why the MLP fails to create a closed shape, but rather seems to go from infinity to infinity on the 2D space? (Notice: if I use 3 neurons at each hidden layer, the MLP succeeds quite fast.)

[Notebook Style]

I generated two classes Red and Blue on a 2D space.
Red are points on Unit Circle

size_ = 200
classA_r = np.random.uniform(low = 0, high = 1, size = size_)
classA_theta = np.random.uniform(low = 0, high = 2*np.pi, size = size_)
classA_x = classA_r * np.cos(classA_theta)
classA_y = classA_r * np.sin(classA_theta)

and Blue are points on a Circle Ring with radius limits (3,4).

classB_r = np.random.uniform(low = 2, high = 3, size = size_)
classB_theta = np.random.uniform(low = 0, high = 2*np.pi, size = size_)
classB_x = classB_r * np.cos(classB_theta)
classB_y = classB_r * np.sin(classB_theta)

I tried to train a Multi Layer Perceptron with different number of hidden layers, BUT all the hidden layers had 2 neurons.

hidden_layers = 15
inputs = Input(shape=(2,))
dnn = inputs
for l_no in range(hidden_layers):
    dnn = Dense(2, activation='tanh', name = ""layer_{}"".format(l_no))(dnn)
outputs = Dense(2, activation='softmax', name = ""layer_out"")(dnn)

model = Model(inputs=inputs, outputs=outputs)

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

The MLP never reached 100% accuracy. I tried to visualize how the MLP would classify the points of the 2D space with Black and White.

limit = 4
step = 0.2
grid = []
x = -limit
while x <= limit:
    y = -limit
    while y <= limit:
        grid.append([x, y])
        y += step
    x += step
grid = np.array(grid)
prediction = model.predict(grid)

This is the final image I get:

xs = []
ys = []
cs = []
for point in grid:
    xs.append(point[0])
    ys.append(point[1])
for pred in prediction:
    cs.append(pred[0])

plt.scatter(xs, ys, c = cs, s=70, cmap = 'gray')
plt.scatter(classA_x, classA_y, c = 'r', s= 50)
plt.scatter(classB_x, classB_y, c = 'b', s= 50)
plt.show()

At first, I was expecting that the MLP could classify 2 classes on a 2D space with 2 neurons at each hidden layer, and I was expecting to see a white circle encapsulating the red points and the rest being black space. Is there a (mathematical) reason why the MLP fails to create a closed shape, but rather seems to go from infinity to infinity on the 2D space? (Notice: if I use 3 neurons at each hidden layer, the MLP succeeds quite fast.)

To see what I mean by a closed shape, take a look at the second image, which was generated by using 3 neurons at each layer:

for l_no in range(hidden_layers):
    dnn = Dense(3, activation='tanh', name = ""layer_{}"".format(l_no))(dnn)

[According to Marked Answer]

from keras import backend as K
def x_squared(x):
    x = K.abs(x) * K.abs(x)
    return x
hidden_layers = 3
inputs = Input(shape=(2,))
dnn = inputs
for l_no in range(hidden_layers):
    dnn = Dense(2, activation=x_squared, name = ""layer_{}"".format(l_no))(dnn)
outputs = Dense(2, activation='softsign', name = ""layer_out"")(dnn)
model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='adam',
              loss='mean_squared_error',
              metrics=['accuracy'])

I get:

",34425,,34425,,3/22/2020 9:49,3/22/2020 9:49,Why MLP cannot approximate a closed shape function?,,1,0,,,,CC BY-SA 4.0 18752,2,,18751,3/22/2020 8:38,,3,,"

In neural networks, the family of functions and the shapes that they can make for decision surfaces is determined by the activation function you use (in your case, tanh or hyperbolic tangent).

Assuming at least one hidden layer, then the universal approximation theorem applies. How closely you can approximate any given function is limited by the number of neurons, and not strongly by the choice of activation function. However, the choice of activation function is still relevant to how good an approximation is. When you get to low numbers of neurons in a hidden layer, then the approximations are more strongly tied to the nature of the activation function.

With one neuron in a hidden layer, you can only approximate some affine transformation of the activation function. Other low numbers, such as 2, 3 etc, will still show strong tendencies for certain families of shapes. This is very similar conceptually to using limited number of frequencies in a Fourier transform - if you limit yourself to only $a_1 \text{sin}(x) + a_2 \text{sin}(2x)$ to approximate a function, then you will definitely notice the sinusoidal building blocks in any output.

I suspect that if you changed the activation function in the first hidden layer to $f(x) = x^2$ then you could get a good result with two neurons per layer. If you then took that network and tried to train it on a simple linear split, it would fail, always producing some curved closed surface that covered the training examples as best that it could - kind of the opposite problem as you are seeing with your circular pattern fitted to NN with tanh activations throughout.

One interesting thing about using $f(x) = x^2$ is that this is a deliberate choice (given knowledge about how you constructed the example) to map your input space to a new space where examples can be linearly separated. In fact this appears to be what layers in multi-layer NNs learn - each layer incrementally and progressively maps its input space to a new space where examples can be better separated in a linear fashion.
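To make this concrete: if the two units of the first hidden layer could compute $h_1 = x^2$ and $h_2 = y^2$ (e.g. with the squared activation and weight vectors $(1, 0)$ and $(0, 1)$), then $h_1 + h_2 = x^2 + y^2 = r^2$ is exactly the squared radius, and a single downstream unit can separate the two classes with the purely linear rule $h_1 + h_2 < c$, for some threshold $c$ between the squared radii of the inner and outer circles.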

",1847,,1847,,3/22/2020 8:47,3/22/2020 8:47,,,,3,,,,CC BY-SA 4.0 18753,1,18777,,3/22/2020 9:52,,4,759,"

I did a simple Actor-Critic implementation in Keras using 2 networks where the critic learns the Q-Values of every action, and the actor predicts probabilities for choosing each action. In training, the target probabilities for the actor was a one-hot vector with 1.0 in the maximum Q-Value prediction position and 0.0 in all the rest, and simply used fit method on the actor model with mean squared error loss function.

However, I'm not sure what to set as the target when switching to A2C. In all the guides I saw it's mentioned that the critic now learns one value per state, not one value per action in the action space.

This change makes it unclear how to set the target vector for the actor. The guides/SE questions I went over did not explain this point and simply said that we can calculate the advantage value using the value function (here, here and here) for the current and next state, which is fine, except we can only do that for the specific action taken and not for every action in the action space, because we don't know the value of the next state for every action.

In other words, we only know A(s,a) for our memorized a, and we know nothing about the advantage of other actions.

One of my guesses was that you still calculate the Q-Values because, after all, the value function is defined by the Q-Values: the value function is the sum over every action $a$ of $Q(s,a) \cdot p(a)$, where $p(a)$ is the probability the actor assigns to action $a$. So does the critic need to learn the Q-Values, sum their products with the probabilities generated by the policy network (actor), and then calculate the advantages of every action?

It's even more confusing because in one of the guides they said that the critic actually learns the advantage values, and not the value function (like all the other guides said), which is strange because you need to use the critic to predict the value function of the state and the next state. Also, the advantage function is per-action and in the implementations I see the critic has one output neuron.

I think that what's being done in the examples I saw was to train the actor to fit a one-hot vector for the selected action (not the best action by the critic), but modify the loss-function value using the advantage value (possibly to influence the gradient). Is that the case?

",32950,,,,,3/24/2020 10:58,How to set the target for the actor in A2C?,,1,0,,,,CC BY-SA 4.0 18754,1,,,3/22/2020 13:46,,3,41,"

I'm trying to develop a better understanding of the concept of ""out-of-distribution"" (generalization) in the context of Bengio's ""Moving from System 1 DL to System 2 DL"" and the concept of ""(meta)-transfer learning"" in general.

These concepts seem to be very strongly related, maybe even almost referring to the same thing. So, what are similarities and differences between these two concepts? Do these expressions refer to the same thing? If the concepts are to be differentiated from each other, what differentiates the one concept from the other and how do the concepts relate?

",34432,,2444,,3/23/2020 22:55,3/23/2020 22:55,"What is the difference between ""out-of-distribution (generalisation)"" and ""(meta)-transfer learning""?",,0,1,,,,CC BY-SA 4.0 18755,1,,,3/22/2020 15:56,,3,83,"

I am currently learning neural networks using data from Touchscreen Input as a Behavioral Biometric. Basically, I am trying to predict "User ID" by training the neural network model shown below.

import time
import os
BATCH_SIZE=32
embedding_size=256
sequence_length=200
BUFFER_SIZE=10000
input_size=41
learning_rate=0.001

inputs_as_tensors=tf.data.Dataset.from_tensor_slices(train_data_features_array)
targets_as_tensors=tf.data.Dataset.from_tensor_slices(train_data_labels_categorical_array)
training_data=tf.data.Dataset.zip((inputs_as_tensors,targets_as_tensors))
#training_data=training_data.batch(sequence_length,drop_remainder=True)
training_dataset=training_data.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
print(training_dataset)

def build_model(vocab_size, batch_size):
    modelf = tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation="sigmoid", input_shape=(None, 10)),
        tf.keras.layers.Dense(30, activation="relu", use_bias=True),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(vocab_size)
    ])
    return modelf

def training_step(inputs,targets,optimizer):
    with tf.GradientTape() as tape:
        predictions=model(inputs)
        loss=tf.reduce_mean(tf.keras.losses.categorical_crossentropy(targets,predictions,from_logits=True))
        
        grads=tape.gradient(loss,model.trainable_variables)
        optimizer.apply_gradients(zip(grads,model.trainable_variables))
        return loss,predictions

model=build_model(input_size,BATCH_SIZE)
i=0
inner_loop=0
checkpoint_dir ='Moses_Model_x'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{i}")

while(1):
    start = time.time()
    for x,y in training_dataset:
        loss,predictions=training_step(x,y,tf.keras.optimizers.RMSprop(learning_rate=0.002))
    print ('Epoch {} Loss {:.4f}'.format(i, loss))
    print ('Time taken for iteration {} is {} sec\n'.format(i,time.time() - start))
    model.save_weights(checkpoint_prefix.format(i=i))
    i=i+1

However, the loss value is actually increasing. Is there anything I have to change, or is something wrong in my code?

",34435,,27229,,9/24/2021 4:12,9/24/2021 4:12,Why is the loss associated with my neural network increasing?,,0,3,,,,CC BY-SA 4.0 18756,1,,,3/22/2020 20:22,,3,93,"

I am trying to solve the problem of an agent dynamically discovering an environment (starting with no information about it) and exploring as much of the environment as possible without crashing into obstacles. I have the following environment:

where the environment is a matrix. In this the obstacles are represented by 0's and the free spaces are represented with 1s. The position of the agent is given by a label such as 0.8 in the matrix.

The initial internal representation of the environment of the agent will look something like this with the agent position in it .

Every time it explores the environment it keeps updating its own map:

The single state representation is just the matrix containing-

  • 0 for obstacles
  • 1 for unexplored regions
  • 0.8 for position of the agent
  • 0.5 for the places it has visited once
  • 0.2 for the places it has visited more than once

I want the agent to not hit the obstacles and to go around them.

The agent should also not be stuck in one position and try to finish the exploration as quickly as possible.

This is what I plan to do:

In order to prevent the agent from getting stuck in a single place, I want to punish the agent if it visits a single place multiple times. I want to mark the place the agent has visited once as 0.5 and if it has visited it more than once that place will be labelled 0.2

The reason I am marking a place it has visited only once as 0.5 is because if there is a scenario where in the environment there is only one way to go into a region and one way to come out of that region, I don't want to punish this harshly.

Given this problem, I am thinking of using the following reward system-

  • +1 for every time it takes an action that leads to an unexplored region
  • -1 for when it takes an action that crashes into an obstacle
  • 0 if it visits the place twice(i.e 0.5 scenario)
  • -0.75 is it visits a place more than twice

The action space is just-

  • up
  • down
  • left
  • right

Am I right in approaching the problem this way? Is reinforcement learning the solution for this problem? Is my representation of the state, action and reward system correct? I am thinking that DQN is not the right way to go because the definition of a terminal state is hard in this problem; what method should I use to solve it?

",34210,,9863,,3/24/2020 12:13,3/24/2020 12:13,"Representation of state space, action space and reward system for Reinforcement Learning problem",,0,0,,,,CC BY-SA 4.0 18757,2,,18608,3/22/2020 20:26,,1,,"

Anything that is not linearly separable cannot be solved by perceptrons, unless you use feature maps on the data to map it to a higher dimension in which it is linearly separable.

As a simple, concrete example, a perceptron can't learn the XOR function.
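A minimal sketch showing this with scikit-learn's linear Perceptron (the four XOR points cannot all be classified correctly by a linear separator, so the accuracy never reaches 1.0):

import numpy as np
from sklearn.linear_model import Perceptron

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])                      # XOR labels
clf = Perceptron(max_iter=1000, tol=None).fit(X, y)
print(clf.score(X, y))                          # at most 0.75, never 1.0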

This page might help you further.

",32621,,,,,3/22/2020 20:26,,,,2,,,,CC BY-SA 4.0 18758,2,,18576,3/23/2020 0:34,,3,,"

From my experience in industry, a lot of data science (operating on customer information, stored in a database) is still dominated by decision trees and even SVMs. Although neural networks have seen incredible performance on ""unstructured"" data, like images and text, there still do not appear to be great results extending to structured, tabular data (yet).

At my old company (loyalty marketing with 10 million+ members) there was a saying, ""You can try any model you like, but you must try XGBoost"". And let's just say that I did try comparing it to a neural network, and ultimately I did go with XGBoost ;)

",18086,,,,,3/23/2020 0:34,,,,4,,,,CC BY-SA 4.0 18759,1,,,3/23/2020 7:41,,6,1281,"

What are the pros and cons of LSTM vs Bi-LSTM in language modelling? What was the need to introduce Bi-LSTM?

",9863,,2444,user9947,12/22/2021 10:07,12/22/2021 10:08,What are pros and cons of Bi-LSTM as compared to LSTM?,,1,0,,,,CC BY-SA 4.0 18760,1,18761,,3/23/2020 8:43,,4,1151,"

Neural networks are commonly used for classification tasks, in fact from this post it seems like that's where they shine brightest.

However, when we want to classify using neural networks, we often have the output layer to take values in $[0,1]$; typically, by taking the last layer to be the sigmoid function $x \mapsto \frac{e^x}{e^x +1}$.

Can neural networks with a sigmoid as the activation function of the output layer approximate continuous functions? Is there an analogue to the universal approximation theorem for this case?

",31649,,2444,,3/28/2020 0:38,3/28/2020 1:38,Can neural networks with a sigmoid as the activation function of the output layer approximate continuous functions?,,1,4,,,,CC BY-SA 4.0 18761,2,,18760,3/23/2020 14:27,,3,,"

As far as I know, the sigmoid is often used as the activation function of the output layer mainly because it is a convenient way of producing an output $p \in [0, 1]$, which can be interpreted as a probability, although that can be misleading or even wrong (if you interpret it as an uncertainty too).

You may require the output of the neural network to be a probability, for example, if you use a cross-entropy loss function, although you could in principle produce only $0$s or $1$s. The probability $p$ can then be used to decide the class (or label) of the input. For example, if $p > \alpha$, then you purposedly decide that the input belongs to class $1$, otherwise, it belongs to class $0$. The parameter $\alpha$ is called the classification (or decision) threshold. The choice of this threshold can actually depend on the problem and it is one of the reasons people use the AUC metric, i.e. to avoid choosing this classification threshold.

Can neural networks with a sigmoid as the activation function of the output layer approximate continuous functions? Is there an analogue to the universal approximation theorem for this case?

The most famous universal approximation theorem for neural networks assumes that the activation functions of the units of the only hidden layer are sigmoids, but it does not assume that the output of the network will be squashed to the range $[0, 1]$. To be more precise, the UAT (theorem 2 of Approximation by Superpositions of a Sigmoidal Function, 1989, by G. Cybenko) states

Let $\sigma$ be any continuous sigmoidal function. Then finite sums of the

$$G(x) = \sum_{j=1}^N \alpha_j \sigma (y_j^T x + \theta_j)$$

are dense in $C(I_n)$.

In other words, given any $f \in C(I_n)$ and $\epsilon > 0$, there is a sum, $G(x)$, of the above form, for which

$$|G(x) - f(x)| < \epsilon $$

Here, $f$ is the continuous function that you want to approximate, $G(x)$ is a linear combination of the outputs of $N$ (which should be arbitrarily big) units of the only hidden layer, $I_n$ denotes the $n$-dimensional unit cube, $[0, 1]^n$, $C(I_n)$ denotes the space of continuous functions on $I_n$, $x \in I_n$ (so the assumption is that the input to the neural network is an element of $[0, 1]^n$, i.e. a vector $x \in \mathbb{R}^n$, whose entries are between $0$ and $1$) and $y_j$ and $\theta_j$ are respectively the weights and bias of the $j$ unit. The assumption that $f$ is a real-valued function means that $f$ can take any value on $\mathbb{R}$ (i.e. $f: [0, 1]^n \rightarrow \mathbb{R}$). You should note that $G(x)$ is the output of the neural network, which is a combination (where the coefficients are $\alpha_j$) of the outputs of the units in the only hidden layer, so there's no restriction on the output of $G(x)$, unless you restrict $\alpha_i$ (but, in this theorem, there's no restriction on the values $\alpha_j$ can take).

Of course, if you restrict the output of the neural networks to the range $[0, 1]$, you cannot approximate all continuous functions of the form $f: [0, 1]^n \rightarrow \mathbb{R}$ (because not all of these functions will have the codomain $[0, 1]$)! However, the sigmoid has an inverse function, i.e. the logit, so you can reverse the output of such a neural network. So, in this sense (i.e. by reversing the output of the sigmoid), a neural network with a sigmoid as the activation function of the output layer can potentially approximate any continuous function too.
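To spell out the last point, the inverse of the sigmoid is the logit,

$$\sigma^{-1}(p) = \log \frac{p}{1-p},$$

so if a network outputs $\sigma(G(x))$, applying the logit to the output recovers $G(x)$, and the approximation guarantee above then applies to $G$.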

The UAT above only states the existence of $G(x)$ (i.e. it's an existence theorem). It doesn't tell you how you can find $G(x)$. So, if you use a sigmoid as the activation function of the output layer or not is a little bit orthogonal to the universality of neural networks.

",2444,,-1,,6/17/2020 9:57,3/28/2020 1:38,,,,15,,,,CC BY-SA 4.0 18762,1,,,3/23/2020 15:07,,1,390,"

I have this simple neural network in Python which I'm trying to use to approximate the tanh function. As inputs I have x - inputs to the function, and as outputs I want tanh(x) = y. I'm also using the sigmoid function as the activation function of this neural network.

import numpy
# scipy.special for the sigmoid function expit()
import scipy.special
# library for plotting arrays
import matplotlib.pyplot
# ensure the plots are inside this notebook, not an external window
%matplotlib inline

# neural network class definition
class neuralNetwork:


    # initialise the neural network
    def __init__(self, inputnodes, hiddennodes, outputnodes, learningrate):
        # set number of nodes in each input, hidden, output layer
        self.inodes = inputnodes
        self.hnodes = hiddennodes
        self.onodes = outputnodes

        # link weight matrices, wih and who
        # weights inside the arrays are w_i_j, where link is from node i to node j in the next layer
        # w11 w21
        # w12 w22 etc 
        self.wih = numpy.random.normal(0.0, pow(self.hnodes, -0.5), (self.hnodes, self.inodes))
        self.who = numpy.random.normal(0.0, pow(self.onodes, -0.5), (self.onodes, self.hnodes))

        # learning rate
        self.lr = learningrate

        # activation function is the sigmoid function
        self.activation_function = lambda x: scipy.special.expit(x)  

        pass


    # train the neural network
    def train(self, inputs_list, targets_list):
        # convert inputs list to 2d array
        inputs = numpy.array(inputs_list, ndmin=2).T
        targets = numpy.array(targets_list, ndmin=2).T

        # calculate signals into hidden layer
        hidden_inputs = numpy.dot(self.wih, inputs)
        # calculate the signals emerging from hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)

        # calculate signals into final output layer
        final_inputs = numpy.dot(self.who, hidden_outputs)
        # calculate the signals emerging from final output layer
        final_outputs = self.activation_function(final_inputs)

        # output layer error is the (target - actual)
        output_errors = targets - final_outputs
        # hidden layer error is the output_errors, split by weights, recombined at hidden nodes
        hidden_errors = numpy.dot(self.who.T, output_errors) 

        # BACKPROPAGATION & gradient descent part, i.e updating weights first between hidden
        # layer and output layer, 
        # update the weights for the links between the hidden and output layers
        self.who += self.lr * numpy.dot((output_errors * final_outputs * (1.0 - final_outputs)), numpy.transpose(hidden_outputs))

        # update the weights for the links between the input and hidden layers, second part of backpropagation.
        self.wih += self.lr * numpy.dot((hidden_errors * hidden_outputs * (1.0 - hidden_outputs)), numpy.transpose(inputs))
        pass


    # query the neural network
    def query(self, inputs_list):
        # convert inputs list to 2d array
        inputs = numpy.array(inputs_list, ndmin=2).T

        # calculate signals into hidden layer
        hidden_inputs = numpy.dot(self.wih, inputs)
        # calculate the signals emerging from hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)

        # calculate signals into final output layer
        final_inputs = numpy.dot(self.who, hidden_outputs)
        # calculate the signals emerging from final output layer
        final_outputs = self.activation_function(final_inputs)

        return final_outputs

Now I try to query this network. This network has three input nodes, one for each x (one node per input). This network also has 3 output nodes, so it should map the inputs to the given outputs, where the outputs are y = tanh(x).

# number of input, hidden and output nodes
input_nodes = 3
hidden_nodes = 8
output_nodes = 3
learning_rate = 0.1

# create instance of neural network
n = neuralNetwork(input_nodes,hidden_nodes,output_nodes, learning_rate)

realInputs = []
realInputs.append(1)
realInputs.append(2)
realInputs.append(3)

# for x in (-3, 3):
#     realInputs.append(x)
#     pass

expectedOutputs = []
expectedOutputs.append(numpy.tanh(1));
expectedOutputs.append(numpy.tanh(2));
expectedOutputs.append(numpy.tanh(3));

for y in expectedOutputs:
    print(y)
    pass

training_data_list = []

# epochs is the number of times the training data set is used for training
epochs = 200

for e in range(epochs):
    # go through all records in the training data set
    for record in training_data_list:
        # scale and shift the inputs
        inputs = realInputs
        targets = expectedOutputs
        n.train(inputs, targets)
        pass
    pass

n.query(realInputs)

Outputs: desired vs ones from network with same data as training data:

0.7615941559557649
0.9640275800758169
0.9950547536867305


array([[-0.21907413],
       [-0.6424568 ],
       [-0.25772344]])

My results are completely wrong. I'm a beginner with neural networks, so I wanted to build a neural network without frameworks like TensorFlow... Could someone help me? Thank you.

",34440,,34440,,3/24/2020 10:41,4/18/2021 11:48,Simple three layer neural network with backpropagation is not approximating tanh function,,1,2,,,,CC BY-SA 4.0 18764,1,,,3/23/2020 15:33,,1,71,"

Assume we are given a training dataset $D = \{ (x_i, y_i)\}_{i=1}^{N}$.

My question is: which is better?

  1. A multivariate regression with basis expansion with independent matrix $X$ and dependent matrix $Y$, such that $X \in K; K \subset \mathbb R^n$ and $Y \in \mathbb R^m$ with training data $D$.

Or

  1. A neural network which takes $n$ input variables and returns $m$ output with training data $D$

Without a doubt, the multivariate regression option is better with its basis polynomials, because it can adapt to any curve required in any dimension and doesn't need as large a dataset as neural networks do. Then why are neural networks used more than multivariate regression?

Note: Prefer explaining the mechanism of neural network used as regression in your answers. To help us know the degree of flexibility of both.

Edit: You may prefer choosing your own loss function in case you need.

",34312,,34312,,3/24/2020 3:29,3/24/2020 5:01,Which one is better: multivariate regression with basis expansion or neural networks?,,0,2,,,,CC BY-SA 4.0 18765,2,,3753,3/23/2020 17:17,,2,,"

Since I can't comment, there are a few caveats to previous answers.

For instance, if you knew beforehand what the expected boundary function for that variable was, then you could transform it first. For instance, if you knew one feature was expected to be sinusoidal, you could transform your data (theta) using $f(\theta) = a\sin(\theta)$ first and then expect the variable to be linear. At that point it would fit a linear function and solve for the amplitude instead. Of course, in large problems, this may be impractical, but it is still possible.

Likewise with quadratic data. I could add 1 new feature for every continuous variable that looks like $x^2$ and capture any quadratic dependence.

Another solution in your shown example would be that, if you knew the data was piece-wise, you could train 3 small, easily trained NNs with different boundaries. During each new training, you set new boundaries for your functions and see which produces the best results. In your case, this would be something close to $[-\infty,-0.5], [-0.5,0.5]$, and $[0.5,\infty]$. You could argue that this is a single piece-wise neural net just like any traditional piece-wise function. While this last approach seems very simplistic/impractical, it is actually done in practice. For instance, in autonomous driving, a separate NN could be trained for object detection (humans, lights, signs, etc.), another for steering wheel positioning for different speeds, conditions, and turning radii, another for how to accelerate, etc., and then the system runs them all and combines their outputs into a complete self-driving system. Each sub-network then handles its own piece of the puzzle, i.e. the region of ""driving space"" it is responsible for.

So while I would say the simple answer is no, there are still ways around it if you have some prior knowledge.

As a response for the Universal Approximation Theorem, see this wiki, which says

One of the first versions of the theorem was proved by George Cybenko in 1989 for sigmoid activation functions. It was later shown that the class of deep neural networks is a universal approximator if and only if the activation function is not polynomial.

",34449,,34449,,4/1/2020 17:23,4/1/2020 17:23,,,,0,,,,CC BY-SA 4.0 18767,2,,18759,3/23/2020 20:35,,7,,"

I would say that the logic behind the introduction was more empirical than technical. The only difference between LSTM and Bi-LSTM is the possibility for Bi-LSTM to leverage future context chunks to learn better representations of single words. There is no special training step or units added, the idea is just to read a sentence forward and backward to capture more information.

And as trivial as the idea sounds, it works, in fact, in the original paper the authors managed to achieve state-of-the-art scores in three tagging tasks, namely part-of-speech tagging, chunking and named entity recognition.

Even though it must be said that these scores were not dramatically higher compared to other models, and also the complete architecture included a Conditional Random Field on top of the Bi-LSTM.

Probably the most important aspect to stress out is that the authors performed two interesting comparison tests: one using random embedding initialisation and another one using only words (unigrams) as input features. Under these two test conditions, Bi-LSTM (with CRF on top) outperformed significantly all other architectures, proving that Bi-LSTM representations are more robust than representation learned by other models.

I would like also to make a side note regarding human reading. It makes sense to consider unidirectional sequence models as the most reasonable way to emulate human reading, because we experience reading as a movement of the eyes that goes from one direction to the opposite. But the reality is that saccades (really rapid unconscious eye movements) and other eye movements play an enormous role in reading. This means that we humans also continuously look at past and future words in order to understand the purpose of the word or sentence we're processing. Of course, in our case these movements are directed by implicit knowledge and habits that allow us to direct our attention only to important words/parts (for example, we barely read conjunctions), and it is interesting to notice that state-of-the-art models based on transformers now try to learn exactly this: where to pay attention, rather than single probabilities for each word in a vocabulary.

",34098,,2444,user9947,12/22/2021 10:08,12/22/2021 10:08,,,,0,,,,CC BY-SA 4.0 18768,2,,18762,3/24/2020 1:26,,-1,,"

This is because of Vanishing Gradient Problem

What is Vanishing Gradient Problem ?

When we do backpropagation, i.e. moving backward through the network and calculating gradients of the loss (error) with respect to the weights, the gradients tend to get smaller and smaller as we keep moving backward through the network. This means that the neurons in the earlier layers learn very slowly compared to the neurons in the later layers of the hierarchy. The earlier layers in the network are the slowest to train.

Reason

The sigmoid function squishes a large input space into a small output space between 0 and 1. Therefore a large change in the input of the sigmoid function will cause only a small change in the output. Hence, the derivative becomes small.
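Concretely, the sigmoid derivative is $\sigma'(x) = \sigma(x)(1 - \sigma(x))$, which is at most $0.25$ (attained at $x = 0$), so each layer of backpropagation multiplies the gradient by a factor of at most $0.25$ coming from the activation alone; through $L$ sigmoid layers that factor is at most $0.25^L$, which shrinks quickly as $L$ grows.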

Solution:

Use an activation function such as ReLU.

Reference:

Vanishing Gradient Solution

",9863,,,,,3/24/2020 1:26,,,,2,,,,CC BY-SA 4.0 18769,1,,,3/24/2020 1:56,,1,141,"

I'd like to develop an MCTS-like (Monte Carlo Tree Search) algorithm for program induction, i.e. learning programs from examples.

My initial plan is for nodes to represent programs and for the search to expand nodes by revising programs. Each edge represents a possible revision.

Many expansions involve a single program: randomly resample a subtree of the program, replace a constant with a variable, etc. It's straightforward to use these with MCTS.

Some expansions, however, generate a program from scratch (e.g. sample a new program). Others use two or more programs to generate a single output program (e.g. crossover in Genetic Programming).

These latter types of moves seem nonstandard for vanilla MCTS, but I'm not deeply familiar with the literature or what technical terms might be most relevant.

How can I model expansions involving $N \ne 1$ nodes? Are there accepted methods for handling these situations?

",33629,,,,,4/18/2021 5:39,MCTS moves with multiple parents,,1,0,,,,CC BY-SA 4.0 18770,2,,18769,3/24/2020 2:04,,1,,"

Approach 1: One way would be to switch from nodes representing programs to nodes representing sets of programs. The root node would represent the empty set $\{\}$, to which expansions could be applied only if they can generate a program from scratch. The first such expansion would produce some program $p$, so the root would now have child $\{p\}$. The second expansion would produce $p'$, so the root would now also have child $\{p'\}$, which would itself also have the child $\{p, p'\}$.

One downside to this approach is that, even assuming reasonable restrictions (e.g. moves can use at most 2 programs, pairs cannot have identical elements, element order doesn't matter), the branching factor will grow combinatorially, because each new program can be combined with all (or most) of the previously generated programs.

Approach 2: Another way would be to switch from nodes representing programs to nodes representing meta-programs describing an initial program and a series of revisions to that program. For example, MCTS might search for expressions in something like this grammar:

Program  -> Empty
          | Program > OneProgramRevision
          | Program Program >> TwoProgramRevision
OneProgramRevision -> AddDatumAsException
                    | SampleStatement
                    | RegenerateSubtree
                    | Delete
                    | ...
TwoProgramRevision -> Concatenate
                    | Crossover
                    | ...

This is just a cartoon: one would need to add details parameterizing each of the various revisions.

",33629,,33629,,3/24/2020 2:21,3/24/2020 2:21,,,,0,,,,CC BY-SA 4.0 18772,1,,,3/24/2020 7:51,,2,41,"

Currently, I'm making a chatbot that is going to run on a website, so I was wondering: is it better to train the chatbot with intention files, or to use the database as the intention file? If it's the latter, then how would I do it? With SQLite or with Excel? Any guides or tutorials would be appreciated.

I'm planning to use Flask + Python + Html for the chatbot.

",33537,,33537,,3/24/2020 8:06,3/24/2020 9:23,Is it better to rely on an intention file or a database for a web chatbot?,,1,0,,,,CC BY-SA 4.0 18773,1,,,3/24/2020 9:02,,1,203,"

I’m currently trying to train a BART, which is a denoising Transformer created by Facebook researchers. Here’s my Transformer code

import math
import torch
from torch import nn
from Constants import *

class Transformer(nn.Module):
    def __init__(self, input_dim: int, output_dim: int, d_model: int = 200, num_head: int = 8, num_e_layer: int = 6,
                 num_d_layer: int = 6, ff_dim: int = 1024, drop_out: float = 0.1):
        '''
        Args:
            input_dim: Size of the vocab of the input
            output_dim: Size of the vocab for output
            num_head: Number of heads in mutliheaded attention models
            num_e_layer: Number of sub-encoder layers
            num_d_layer: Number of sub-decoder layers
            ff_dim: Dimension of feedforward network in mulihead models
            d_model: The dimension to embed input and output features into
            drop_out: The drop out percentage
        '''
        super(Transformer, self).__init__()
        self.d_model = d_model
        self.transformer = nn.Transformer(d_model, num_head, num_e_layer, num_d_layer, ff_dim, drop_out,
                                          activation='gelu')
        self.decoder_embedder = nn.Embedding(output_dim, d_model)
        self.encoder_embedder = nn.Embedding(input_dim, d_model)
        self.fc1 = nn.Linear(d_model, output_dim)
        self.softmax = nn.Softmax(dim=2)
        self.positional_encoder = PositionalEncoding(d_model, drop_out)
        self.to(DEVICE)

    def forward(self, src: torch.Tensor, trg: torch.Tensor, src_mask: torch.Tensor = None,
                trg_mask: torch.Tensor = None):
        embedded_src = self.positional_encoder(self.encoder_embedder(src) * math.sqrt(self.d_model))
        embedded_trg = self.positional_encoder(self.decoder_embedder(trg) * math.sqrt(self.d_model))
        output = self.transformer.forward(embedded_src, embedded_trg, src_mask, trg_mask)
        return self.softmax(self.fc1(output))

class PositionalEncoding(nn.Module):
    def __init__(self, d_model, dropout=0.1, max_len=5000):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(p=dropout)
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        pe = pe.unsqueeze(0).transpose(0, 1)
        self.register_buffer('pe', pe)

    def forward(self, x):
        # add the (fixed) positional encodings to the embeddings, then apply dropout
        x = x + self.pe[:x.size(0), :]
        return self.dropout(x)

and here’s my training code

def train(x: list):
    optimizer.zero_grad()
    loss = 0.
    batch_sz = len(x)
    max_len = len(max(x, key=len)) + 1  # +1 for EOS xor SOS
    noise_x = noise(x)
    src_x = list(map(lambda s: [SOS] + [char for char in s] + [PAD] * ((max_len - len(s)) - 1), noise_x))
    trg_x = list(map(lambda s: [char for char in s] + [EOS] + [PAD] * ((max_len - len(s)) - 1), x))
    src = indexTensor(src_x, max_len, IN_CHARS).to(DEVICE)
    trg = targetsTensor(trg_x, max_len, OUT_CHARS).to(DEVICE)
    names = [''] * batch_sz

    for i in range(src.shape[0]):
        probs = transformer(src, trg[:i + 1])
        loss += criterion(probs, trg[i])

    loss.backward()
    optimizer.step()

    return names, loss.item()

As you can see in the train code, I am training it ""sequentially"" by inputting the first letter of the data, then computing the loss with the output, then inputting the first and second characters and doing the same thing, and so on and so forth.

This doesn't seem to be training properly, though, as the denoising is totally off. I thought maybe there's something wrong with my code, or that you simply can't train Transformers this way.

I'm taking first-name data, then noising it, then training the Transformer to denoise it, but the output of the Transformer doesn't look remotely like the denoised version, or even the noised version, of the name. I built a denoising autoencoder using LSTMs and it did way better, but I feel like BART should be far outperforming LSTMs because it's supposedly a state-of-the-art NLP neural network model.

",30885,,30885,,4/2/2020 11:40,12/31/2021 0:03,Can you train Transformers sequentially?,,1,3,,,,CC BY-SA 4.0 18774,2,,18772,3/24/2020 9:23,,1,,"

Recognising intents is only a small step in developing a chatbot. It's fine to use an ML classifier with training data for that, no need to keep the original list of intents.

However, you should really think about the next step: how are you getting your bot to conduct a dialogue, rather than firing off single responses to user queries. That is where things get difficult, and that is also what distinguishes a good chatbot from a simple-minded Eliza-clone.

The programming language/framework you use is not relevant.

",2193,,,,,3/24/2020 9:23,,,,0,,,,CC BY-SA 4.0 18777,2,,18753,3/24/2020 10:58,,2,,"

In short, my last sentence was the correct answer. The ""target"" is a one-hot with the selected action, but there's a trick.


A2C Loss Function

A very crucial part of the A2C implementation that I missed is the custom loss function that takes the advantage into account. The loss function multiplies the advantage by the negative log of the probability that the current policy assigns to the action that was selected.

The trick is that, if the advantage is negative, the loss function will switch sign, so the gradients will be applied in the opposite direction.

In one dimension it's easier to understand. Let's say my target prediction is 1 and my actual prediction is 0.6. A simple loss would be defined as target - prediction, or in this case 0.4, and future predictions will be closer to one. If my prediction was 1.4, then the loss would be -0.4. A negative loss would mean predicting a lower result in the future, and a positive loss would mean predicting a higher result in the future.

If the sign of the loss function is switched, the prediction will actually move away from 1.

The same thing happens when you multiply the advantage in the loss function. A negative advantage would mean that this action is worse than the value of the state so we need to avoid it, and a positive advantage means that the action is encouraged.


In Keras (Tensorflow 2.0):

Here's the custom loss function:

def custom_actor_loss(y_true, y_prediction, advantage):
    # Clip the predicted probabilities to avoid taking the log of 0
    prediction = K.clip(y_prediction, 1e-8, 1 - 1e-8)
    # y_true is one-hot, so only the log-probability of the selected action survives
    log_probabilities = y_true * K.log(prediction)

    # Advantage-weighted negative log-likelihood (the policy-gradient loss)
    return K.sum(-log_probabilities*advantage)

The values are clipped because log of 0 is undefined.

And the rest of the network building:

input_layer = Input(shape=self._state_size, name='state_in')
advantage = Input(shape=[1], name='advantage')
target_prediction = Input(shape=self._actions_num, name='target')

inner_layer = Dense(units=layer_size, activation='relu')(input_layer)
actor_out = Dense(units=self._actions_num, activation='softmax', name='actor_out')(inner_layer)

self._actor = Model([input_layer, target_prediction, advantage], actor_out, name='actor')
self._actor.add_loss(custom_actor_loss(actor_out, target_prediction, advantage))
self._actor.compile(optimizer=Adam(learning_rate=actor_learning_rate))

And in the training loop (where future_rewards_prediction and critic_prediction are the critic's outputs for the next and current state, respectively, except for the terminal state, where future_rewards_prediction is set to 0):

# Train actor
target_probabilities = np.zeros([1, self._actions_num])
target_probabilities[0][memory[step_idx].action] = 1.0
advantage = memory[step_idx].reward + future_rewards_prediction * self._future_discount - critic_prediction
self._actor.fit((memory[step_idx].state, target_probabilities, advantage), verbose=0)

*Notice how I don't really specify a y in my fit call. This is because of an issue I had when trying to implement a custom loss function in Keras which was solved by this answer.

",32950,,,,,3/24/2020 10:58,,,,0,,,,CC BY-SA 4.0 18778,1,20823,,3/24/2020 11:16,,0,109,"

A2C loss is usually defined as advantage * (-log(actor_predictions)) * target, where target is a one-hot vector (with some clipping/noise/etc.) with the selected action.

Does this mean that we get larger losses for smaller mistakes?

If, for example, the agent has predicted $\pi(a|s)=0.9$ but the advantage is negative, this would mean a larger mistake than if the agent had predicted $\pi(a|s)=0.1$. However, putting the numbers in the formula gives a larger loss for the 0.1 prediction.

Assuming advantage=-1, advantage * (-log(actor_predictions)) * target would mean:

$$ -1 \cdot (-\log(0.9)) \cdot 1 = \log(0.9) \approx -0.045 $$ $$ -1 \cdot (-\log(0.1)) \cdot 1 = \log(0.1) = -1 $$ (using base-10 logarithms)

Is my understanding correct?

",32950,,,,,5/1/2020 13:06,Is A2C loss function taking smaller steps for larger mistakes?,,1,0,,,,CC BY-SA 4.0 18779,1,,,3/24/2020 16:02,,2,23,"

Consider a stochastic process $\{X_t \colon t \in T\}$ indexed by a set $T$. We assume for simplicity that $T \subseteq \mathbb{R}^n$. We assume that, for any choice of indexes $t_1, \dots, t_n$, the random variables $(X_{t_1}, \dots, X_{t_n})$ are jointly distributed according to a multivariate Gaussian distribution with mean $\mu = (0, \dots, 0)$, for a given covariance matrix $\Sigma$.

Under these assumptions, the stochastic process is completely determined by the 2nd-order statistics. Hence, if we assume a fixed mean at $0$, then the stochastic process is fully defined by the covariance matrix. This matrix can be defined in terms of the covariance function $$ k(t_i, t_j) = \mbox{cov}(X_{t_i}, X_{t_j}). $$ It is well-known that functions $k$ as defined above are admissible kernels. This fact is often used in probabilistic inference, when performing regression or classification.

Several functions can be suitable kernels, but only a few are used in practice, depending on the application.
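For concreteness (this is just one example, not the full list I am asking for), a commonly used kernel is the squared-exponential (RBF) kernel
$$ k(t_i, t_j) = \sigma^2 \exp\left(-\frac{\|t_i - t_j\|^2}{2\ell^2}\right), $$
where $\sigma^2$ is a variance parameter and $\ell$ is a length-scale parameter.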

Given a large amount of related literature, can someone provide an up-to-date list of functions commonly used as kernels in this context?

",34464,,2444,,12/12/2021 13:11,12/12/2021 13:11,Is there an up-to-date list of suitable kernels for Gaussian processes?,,0,0,,,,CC BY-SA 4.0 18780,1,,,3/24/2020 17:00,,0,42,"

I have to perform a regression on three curves as shown in the following plot. Here, accA (y-axis) is the dependent variable, and w (x-axis) is the independent variable. The sum of the three curves adds up to 1.

To perform regression for the three curves, I would like to use a neural network. What architecture/set-up should I use for this task?

",34464,,2444,,3/24/2020 20:01,3/24/2020 20:01,Non-linear regression with a neural network,,0,3,,,,CC BY-SA 4.0 18782,2,,18594,3/24/2020 19:17,,0,,"

The hill-climbing algorithm to implement is as follows:

  1. The algorithm should take four inputs: as always, there will be a multiset S and integer k, which are the Subset and Sum for the Subset Sum problem; in addition, there will be two integers q and r, with roles defined below.
  2. Do the following q times:

(a) Choose a random subset (multiset) $S_0$ of S as the current subset.

(b) Do the following (hill climbing) r times:

i. Find a random neighbor T (see definition of neighbor below) of the current subset.

ii. If neighbor T has smaller residue, then make T the current subset.

(c) Keep track of the residue of the final current subset when starting with subset $S_0$.

  3. Return the smallest residue of the q subsets tested by the algorithm.

Definition: Subset (multiset) B ⊆ S is a neighbor of a subset A of S if you can transform A into B by moving one or two integers from A to B, or by moving one or two integers from B to A, or by swapping one integer in A with one integer in B. An easy way to generate a random neighbor B of a subset A of S is as follows (a Python sketch of this procedure is given after the list):

  1. Order the elements of S as $x_1, x_2, ..., x_n$.
  2. Initialize B to be a clone of A.
  3. Choose two distinct random indices i and j, where $1 \leq i, j \leq n$.
  4. if $x_i$ is in A, remove it from B. Otherwise, add $x_i$ to B.
  5. if $x_j$ is in A, then with probability 0.5, remove it from B. If $x_j$ is not in A, then with probability 0.5, add $x_j$ to B.
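
Here is a minimal Python sketch of the neighbor-generation and hill-climbing steps described above. It is only an illustration under stated assumptions: subsets are represented as sets of indices into S, and the residue is assumed to be |sum(subset) - k| for the Subset Sum target k.

import random

def residue(S, subset, k):
    # Residue of a candidate subset: how far its sum is from the target k
    return abs(sum(S[i] for i in subset) - k)

def random_neighbor(S, A):
    # B starts as a clone of A, then membership of x_i is flipped,
    # and membership of x_j is flipped with probability 0.5
    B = set(A)
    i, j = random.sample(range(len(S)), 2)
    B.symmetric_difference_update({i})
    if random.random() < 0.5:
        B.symmetric_difference_update({j})
    return B

def hill_climb(S, k, q, r):
    best = None
    for _ in range(q):
        # (a) start from a random subset of S
        current = {i for i in range(len(S)) if random.random() < 0.5}
        for _ in range(r):
            # (b) move to a random neighbor only if it has a smaller residue
            neighbor = random_neighbor(S, current)
            if residue(S, neighbor, k) < residue(S, current, k):
                current = neighbor
        res = residue(S, current, k)       # (c) residue of the final current subset
        best = res if best is None or res < best else best
    return best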
",33875,,,,,3/24/2020 19:17,,,,0,,,,CC BY-SA 4.0 18783,1,24472,,3/24/2020 20:21,,2,80,"

I am training an AlexNet neural network with about 12000 images, of which 80% are for training, 10% for validation, and another 10% for testing. I have a problem in my plots: there is a big fluctuation at epoch 47. How can I get a smooth plot? What is the problem?

I tried to increase my validation data, because the fluctuation was in the validation loss, but nothing changed. I decreased the learning rate, but then it gets stuck in a local optimum.

",33792,,2444,,11/7/2020 11:27,11/7/2020 11:27,What could cause a big fluctuation of the loss in the last epochs of training an AlexNet?,,1,0,,,,CC BY-SA 4.0 18785,1,18796,,3/24/2020 23:28,,5,1022,"

I know they do not work in the same way, but an input layer sends the input to $n$ neurons with a set of weights; based on these weights and the activation function, each neuron produces an output that can be fed to the next layer.

Aren't the filters the same, in the sense that they convert an "image" to a new "image" based on the weights that are in that filter, and that the next layer uses this new "image"?

",34359,,2444,,1/19/2021 15:53,10/13/2021 11:24,Can neurons in MLP and filters in CNN be compared?,,2,0,,,,CC BY-SA 4.0 18787,1,,,3/25/2020 1:18,,3,107,"

There are (two-player, perfect information) combinatorial games for which, at any configuration of the game, a winning move (if there is one) can be quickly computed by a short program. This is the case for the following game, which starts with a bunch of matches: each player alternately removes 1, 2 or 3 matches, and the player who removes the last one wins. This is also the case for the Nim game.

On the other hand, understanding the winning strategy of games like Go or chess seems hopeless. However, some machine-learning-based programs, like AlphaGo Zero, are able to ""learn the strategy"" of complex games, using as input data only the rules of the game. I don't really know how these algorithms work, but here is my vague question:

For a simple game like Nim, can such an algorithm actually find a winning move in any winning configuration of the game?

The number of configurations of Nim is infinite, but the algorithm will only consider a finite number of configurations during its ""training"". It seems imaginable that, if this training phase is long enough, then the program will be able to capture the winning strategy, like a human would.

",34474,,34474,,3/25/2020 16:38,3/25/2020 20:41,Can an artificial intelligence be unbeatable at simple games?,,1,0,,,,CC BY-SA 4.0 18788,2,,18671,3/25/2020 2:47,,1,,"

I have not found any simple implementation of a naive EBMT system, but I found some articles, papers and books that may be helpful (although I haven't read them, apart from the first and last one), so I will list them below.

The web article Example-based machine translation provides a decent high-level explanation of example-based machine translation.

The paper Example-Based Machine Translation: A New Paradigm (2002) by Chunyu Kit et al. also seems to provide a detailed description of the EBMT approach, so this paper should provide you with details you need to implement an EBMT system.

The paper A framework of a mechanical translation between Japanese and English by analogy principle (1984) by Makoto Nagao introduced the example-based machine translation approach, so it will be at least historically relevant.

Additionally, the paper Example-Based Machine Translation of the Basque Language and the book Recent Advances in Example-Based Machine Translation (2003), which is not apparently freely available online, could also be useful.

Finally, the article Machine Translation. From the Cold War to Deep Learning gives a nice high-level overview of the main machine translation approaches, so that you can understand the differences between EBMT and other approaches (especially, in case you are not able to distinguish between EBMT and other MT, e.g. those that use a parallel corpus, such as the supervised statistical machine translation approaches).

",2444,,2444,,3/25/2020 11:13,3/25/2020 11:13,,,,0,,,,CC BY-SA 4.0 18791,1,,,3/25/2020 10:41,,2,91,"

We have a directed connected graph, where nodes are places one can go to and edges are the "roads" between places. We have K agents whose goal is to meet in one node. Agents start in different nodes. Note that more than one agent can be at one node at the same time, and that all agents move by one node at every turn (they move synchronously).

We have two variants of this task:

  1. in each turn every agent must move

  2. an agent may pass on moving.

For a chosen variant, I have to find an algorithm to complete this task, but it cannot be a state-space search algorithm.

I've been sitting on this for a while, but I cannot think of anything.

I've been thinking that agents could know each other's positions in order to choose where to go, but that is a state-space search. I also thought that, if agents meet, they could continue together. But I'm looking for an alternative to state-space search algorithms.

",34483,,2444,,8/18/2021 9:50,9/12/2022 13:07,Agents meeting in a directed connected graph,,1,0,,,,CC BY-SA 4.0 18792,1,,,3/25/2020 10:49,,1,39,"

I've been reading some papers on human pose estimation and I'm starting to see the terms top-down and bottom-up crop up a lot. For example in this paper:

Our hourglass module differs from these designs mainly in its more symmetric distribution of capacity between bottom-up processing (from high resolutions to low resolutions) and top-down processing (from low resolutions to high resolutions)

Okay, so what are the main observations or distinctions of top-down vs bottom-up? Why does it make sense to have a paradigm in which we talk about these specifically?

",16871,,,,,3/25/2020 10:49,What are the main points of the top-down vs bottom-up paradigm in neural networks?,,0,0,,,,CC BY-SA 4.0 18793,1,,,3/25/2020 10:55,,2,33,"

I used to work as an analyst in a financial project where we had functions $f$ determining the price, and sometimes the inputs $x$ jumped in such a way as to produce anomalous results. We had to report an explanation, and I wish to automate the process. It's not properly a question of AI, more one of information science. The idea is that, once you can determine, for a generic non-linear $f$, the ranking of relevance of the $x_i$ in explaining the result, you can generate a full explanation by:

  1. decompose $f$ as a composition of $f_j$, which are intermediate results with a definite meaning in the application domain (in this case, finance)
  2. apply the algorithm using the $f_j$ instead of $x_i$, and then iterate it to explain the $f_j$ in terms of $x_i$

The relevance is quantified by the information gain of each variable. This will be explained for an application in ranking the $x_i$ directly. We assume we start with a uniform distribution on the $x$ domain, calculate the derived probability density function for $f$, and the information entropy of $f$. Then we fix the $x_i$ one at a time and, for each, calculate the new p.d.f. of $f$ conditioned on that $x_i$ and the (lower) information entropy of $f$. The information gain is $IG(x_i)$. Choose as the first conditioning the $i$ with the largest information gain, then condition on the remaining $i$ in decreasing order of $IG_i$. So we could start, for example, with $(x_1,x_2,x_3)$, condition first on $x_2$, then on $(x_2,x_3)$, and then on $(x_2,x_3,x_1)$, getting the percentage contributions as $\frac{IG_i}{H_y}$. The successive terms $IG_i$ always add up to the total entropy $H_y$, since conditioning on all variables gives a point and zero entropy.
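
Here is a minimal Python sketch of the single-variable ranking step (my own illustration: it assumes $f$ is vectorized over rows of samples, uses histogram-based entropy estimates, and the names and bounds are hypothetical):

import numpy as np

def hist_entropy(values, bins=50):
    # Histogram-based estimate of the entropy of a 1D sample
    counts, _ = np.histogram(values, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum()

def rank_inputs(f, x_obs, low, high, n_samples=100_000, seed=0):
    # Rank the inputs x_i by the information gain obtained when conditioning f on x_i = x_obs[i]
    rng = np.random.default_rng(seed)
    d = len(x_obs)
    x = rng.uniform(low, high, size=(n_samples, d))   # uniform distribution on the x domain
    h_y = hist_entropy(f(x))                          # unconditional entropy of f
    gains = {}
    for i in range(d):
        x_cond = x.copy()
        x_cond[:, i] = x_obs[i]                       # condition on the observed value of x_i
        gains[i] = h_y - hist_entropy(f(x_cond))      # information gain of x_i
    return sorted(gains.items(), key=lambda kv: -kv[1]), h_y

The iterative conditioning on pairs, triples, etc. would then repeat the same computation on the already-conditioned sample.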

Any opinion on how to improve this ""automated function explanation"" is welcome

",34484,,,,,3/25/2020 10:55,Automated explanation - function results - simple attempt,,0,0,,,,CC BY-SA 4.0 18794,1,,,3/25/2020 11:07,,0,393,"

I am following the NLP course taught by Dan Jurafsky. In the video lectures Supervised Relation Extraction and Semi Supervised and Unsupervised Relation Extraction Jurafsky explains supervised, semi-supervised and unsupervised relation extraction.

But what are the pros and cons of every relation extraction method compared with the other two relation extraction methods?

",9863,,2444,,3/25/2020 14:30,8/20/2021 18:02,"What are the pros and cons of supervised, semi-supervised and unsupervised relation extraction in NLP?",,1,0,,,,CC BY-SA 4.0 18796,2,,18785,3/25/2020 13:31,,3,,"

tl;dr The equivalent to a neuron in a Fully-Connected (FC) layer is the kernel (or filter) of a Convolution layer

Differences

The neurons of these two types of layers have two key differences. These are that the convolution layers implement:

  • Sparse connectivity, i.e. each neuron is connected only to an area of the input, not the whole.
  • Weight sharing, i.e. similar connections end up having the same weights. This is usually visualized as the same filter traversing the image.

Besides these two key differences, there are some other technical details, e.g. how the biases are implemented. Other than that they perform the same operation.

What causes some confusion is that the input of a CNN is usually 2- or 3-dimensional, while the input of an FC layer is usually 1-dimensional. Neither is mandatory, however. To better help visualize the differences between the two I made a couple of figures illustrating the differences between a conv-layer and a FC one, both in 1D.

Sparse connectivity

On the left are two FC neural networks. On the right, are layers with sparse connections.

Weight sharing

On the left is a sparsely connected network. The colors represent the different values of the weights. On the right is the same network with weight-sharing. Note that similar weights (i.e. arrows with the same direction in each layer) have the same value.


To answer your other questions:

Are filters not the same in the way that they convert an "image" to a new "image" based on the weights that are in that filter? And that the next layers use these new "images"?

Yes, if the input of a convolution layer is an image, so will the output. The next layer will also operate on an image.

However, I'd like to note that not all convolution layers accept images as their inputs. There are 1D and 3D convolutional layers as well.

",26652,,2444,,5/15/2021 11:29,5/15/2021 11:29,,,,0,,,,CC BY-SA 4.0 18797,1,,,3/25/2020 14:11,,3,140,"

The book Multiple View Geometry in Computer Vision by Richard Hartley and Andrew Zisserman talks about lines, points and conics. A conic is a curve described by a second-degree equation in the plane, so a parabola would be an example of a conic. The purpose and usage of points and lines in computer vision are quite clear. For example, a set of points defines an object, or we can project a 3-dimensional point to a 2-dimensional point in the image plane, or a line represents the space to look for the corresponding point in the second plane of another point in the first plane (epipolar geometry). However, probably because I haven't yet read the part of the book related to the applications of conics in computer vision, it's not clear to me why we even care about conics in computer vision.

So, why are conics important in computer vision? Note that I know that conics are defined by points and, given the point-line duality, they can also be defined by lines, but this still doesn't enlighten me on the purpose of conics in computer vision. So, I am looking for applications where conics are used to define the underlying CV model, in a similar way that points and lines are used to describe the pinhole camera model.

",2444,,2444,,3/25/2020 14:26,3/25/2020 14:26,Why are conics important in computer vision?,,0,0,,,,CC BY-SA 4.0 18798,1,18833,,3/25/2020 14:46,,7,429,"

What are some (good) online courses for deep reinforcement learning?

I would like the course to cover both programming and theory. I really liked David Silver's course, but it dates from 2015 and doesn't really teach deep Q-learning.

",34488,,2444,,3/26/2020 14:24,1/14/2022 19:35,What are some online courses for deep reinforcement learning?,,2,0,,,,CC BY-SA 4.0 18799,1,18804,,3/25/2020 14:47,,1,353,"

For the A* algorithm, a heuristic $h$ is consistent if, for every node $n$ and every successor $m$ of $n$ generated by any action $a$, $h(n) \leq c(n,m) + h(m)$.

Suppose the edge between $n$ and $m$, with cost $c(n,m)$, is removed. How do we define consistency for the nodes $n$ and $m$, since $m$ is no longer generated from $n$?

",32780,,16909,,3/26/2020 22:40,3/28/2020 3:46,What does a consistent heuristic become if an edge is removed in A*?,,1,0,,,,CC BY-SA 4.0 18800,2,,18794,3/25/2020 16:09,,1,,"

Supervised

Pros:

  • highest accuracy

Cons:

  • need a large human-labeled training set
  • brittle (doesn't work well with examples that are in a different genre from the training set)

Semi-supervised

Relation bootstrapping

Pros:

  • only requires a small set of labeled data (seed relations)

Cons:

  • complex iterative process

Distant supervision

Pros:

  • training happens in one go (no iterative process)

Cons:

  • requires a big database of relations

Unsupervised

Pros:

  • don't need any labeled data

Cons:

  • need to process a huge quantity of unlabeled data (usually web crawling)
",34485,,34485,,3/28/2020 15:50,3/28/2020 15:50,,,,0,,,,CC BY-SA 4.0 18802,1,,,3/25/2020 19:49,,1,62,"

Are there any known error bounds for the TD(0) algorithm for the value function after a finite number of iterations?

$$ \Delta_t=\max_{s \in \mathcal{S}}|v_t(s)-v_\pi(s)|$$ $$v_{t+1}(s_t)=v_t(s_t)+\alpha(r+v_t(s_{t+1})-v_t(s_t))$$

",33227,,,,,9/12/2020 22:00,Are there known error bounds for TD(0) with a constant learning rate?,,1,0,,,,CC BY-SA 4.0 18803,2,,18787,3/25/2020 20:24,,2,,"

There is actually a GitHub project about 'solving' Nim that implements a certain type of Q-learning reinforcement learning algorithm (described in the undergraduate thesis of Erik Jarleberg (Royal Institute of Technology) entitled ""Reinforcement learning on the combinatorial game of Nim"") that supposedly finds the optimal strategy lying inside the game, the same one a human can find.

The project uses Python code, 'only' 114 lines of it, so it can be run on your own machine if you are interested and want to test it. The GitHub page also says:

There is a known optimal strategy for the game which is credited to Charles L. Bouton of Harvard which can serve as a benchmark for evaluating the performance on our Q-learning agent.

(That is the quote I am referring to regarding it achieving good enough / optimal results.)

Q-learning is part of the reinforcement learning algorithm family, where the so-called agent learns how to play the game by getting rewards for its actions, and it tries to maximize the total amount of reward with the following strategy:

The goal of the agent is to maximize its total reward. It does this by adding the maximum reward attainable from future states to the reward for achieving its current state, effectively influencing the current action by the potential future reward. This potential reward is a weighted sum of the expected values of the rewards of all future steps starting from the current state.

The quote and more information can be found on the Q-learning Wikipedia page.
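
To make the idea concrete, here is a minimal, hypothetical sketch of tabular Q-learning by self-play for the single-pile "remove 1-3 matches, last match wins" game mentioned in the question (this is my own illustration, not the code of the linked GitHub project):

import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.5, 1.0, 0.1
Q = defaultdict(float)                        # Q[(matches_left, action)]

def legal_actions(matches):
    return [a for a in (1, 2, 3) if a <= matches]

def choose(matches):
    # Epsilon-greedy action selection
    actions = legal_actions(matches)
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(matches, a)])

for episode in range(50000):
    matches = random.randint(1, 20)
    prev = None                               # (state, action) of the player who moved one ply earlier
    while matches > 0:
        action = choose(matches)
        new_matches = matches - action
        if new_matches == 0:                  # current player took the last match and wins
            Q[(matches, action)] += ALPHA * (1.0 - Q[(matches, action)])
            if prev:                          # the other player therefore lost
                Q[prev] += ALPHA * (-1.0 - Q[prev])
        elif prev:
            # The opponent's best reply is a loss for the previous mover, hence the minus sign
            best_next = max(Q[(new_matches, a)] for a in legal_actions(new_matches))
            Q[prev] += ALPHA * (GAMMA * -best_next - Q[prev])
        prev = (matches, action)
        matches = new_matches

# The greedy policy should now leave a multiple of 4 matches whenever possible
print({m: max(legal_actions(m), key=lambda a: Q[(m, a)]) for m in range(1, 10)})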

For further interest in easy games tackled with reinforcement learning, please have a look at the following paper: Playing Atari with Deep Reinforcement Learning. There is also a Udemy class on that paper and its findings.

",11810,,11810,,3/25/2020 20:41,3/25/2020 20:41,,,,3,,,,CC BY-SA 4.0 18804,2,,18799,3/25/2020 23:28,,2,,"

Consistency is a property of heuristics. You can think of consistency as the common-sense idea that our estimate of the time to go from $A$ to $C$ cannot be more than the actual time to go from $A$ to $B$, plus our estimate of the time to go from $B$ to $C$.

Suppose we remove a given edge $(n,m)$, with cost $c(n,m)$, from our graph, but our heuristic function gives the same value $h(n)$ as before. Can the function become inconsistent? Let's try a proof sketch:

  1. Since the function was consistent before we removed this edge, it must already be the case that $h(n) \leq c(n, m') + h(m')$ for every other node $m'$ adjacent to $n$.
  2. Removing the edge $c(n,m)$ doesn't change these relationships, because the other edges still have the same costs as before, and the heuristic function still gives the same value for every node.
  3. The same argument may be applied to node $m$.
  4. Since none of these relationships have changed, $h$ is still consistent.
",16909,,2444,,3/28/2020 3:46,3/28/2020 3:46,,,,2,,,,CC BY-SA 4.0 18805,2,,17803,3/25/2020 23:53,,2,,"

(All notations are based on Understanding ML: From Theory to Algorithms.) The layman's statement of the NFL theorem is super misleading. The comparison between PAC learnability and NFL is kind of baseless, since both proofs are built on a different set of assumptions.

Let's review the definition of PAC learnability:

A hypothesis class $H$ is PAC learnable if there exist a function $m_H : (0, 1)^2 \to \mathbb{N}$ and a learning algorithm with the following property: For every $\epsilon, \delta \in (0, 1)$, for every distribution $D$ over $X$, and for every labelling function $f : X \to \{0, 1\}$, if the realizable assumption holds with respect to $H, D, f$, then when running the learning algorithm on $m \geq m_H(\epsilon, \delta)$ i.i.d. examples generated by $D$ and labeled by $f$, the algorithm returns a hypothesis $h$ such that, with probability of at least $1 - \delta$ (over the choice of the examples), $L_{(D,f)}(h) \leq \epsilon$.

An important point in this definition is that the complexity bound (i.e. the value of $m$) holds irrespective of the distribution $D$ (this is known as being distribution free). Since, in the proofs, we assume the error to be $1$, i.e. if $f(x) \neq h(x)$ then we assign error $=1$, the quantity $L_D(A(S))$, which is defined as the true probability of error of the classifier ($A(S) = h_S$), will be the same as $\Bbb E_{S \sim D^{m}}(h_S)$. Also, the realizable assumption is not very important here.

Now let's review the definition of NFL:

Let $A$ be any learning algorithm for the task of binary classification with respect to the $0-1$ loss over a domain $X$. Let $m$ be any number smaller than $|X|/2$, representing a training set size. Then, there exists a distribution $D$ over $X \times \{0, 1\}$ such that:

  1. There exists a function $f : X \to \{0, 1\}$ with $L_{D}(f) = 0$ (i.e. realizable).
  2. With probability of at least $1/7$ over the choice of $S \sim D^m$, we have that $L_D(A(S)) \geq 1/8$.

NOTE: For the second statement, it suffices to show that $\Bbb E_{S \sim D^{m}}L_D(A'(S)) \geq 1/4$, which can be shown using Markov's inequality. Also, the definition implies that we consider all possible functions $f : X \to \{0, 1\}$ and our learning algorithm can pick any such function $f$, which somewhat implies that the set $X$ has been shattered.

If you read the definition, it clearly states that there exists a $D$, which is clearly different from the distribution-free assumption of PAC learnability. Also note that we are restricting the sample size $m$ to at most $|X|/2$. You would be able to falsify the second statement by simply picking a bigger $m$, and thus your class is suddenly PAC learnable. Thus, the point NFL is trying to make is that:

Without an inductive bias, i.e. if you pick all possible functions $f : X \to \{0, 1\}$ as your hypothesis class, you would not be able to achieve, for all $D$, an error of at most $1/8$ with probability greater than $6/7$, given that your sample size is at most $|X|/2$.

To prove this, you only have to pick a distribution for which this holds. In the proof in the book, they have used the uniform distribution, which is the margin between the 2 types of distribution. So the idea is: let's say you have sampled $m = \frac{|X|}{2}$ points, and your learning algorithm returns a hypothesis as per the ERM rule (it doesn't really matter) on the sampled points. Now you want to comment on the error over the $2m$ points and the true distribution (the uniform distribution in this case). So clearly, the probability of picking a point outside your sampled points (unseen points) is $0.5$. Also, $A(S) = h_S$ will have a $0.5$ probability of agreeing with the actual label of an unseen point (among all $h$ which agree with the sampled points, half will assign $1$ to an unseen point while the other half will assign $0$), which makes the total probability of making an error $0.25$ over the true distribution, or $\Bbb E_{S \sim D^{m}}L_D(A(S)) = 1/4$.
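
In symbols, the counting argument of the previous paragraph (assuming the hypothesis makes no errors on the sampled points) is just
$$ \Bbb E_{S \sim D^{m}}L_D(A(S)) = \underbrace{P(x \notin S)}_{=1/2} \cdot \underbrace{P(h_S(x) \neq f(x) \mid x \notin S)}_{=1/2} = \frac{1}{4}. $$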

Note that we have picked the uniform distribution, but this will also hold for distributions which assign probability $p \leq 0.5$ to the sampled points: then the probability of picking a point outside your sampled points (unseen points) is $\geq 0.5$, and thus the error is $\geq 0.25$, and thus the uniform distribution is the mid point. Another important point to note is that, if we pick $m+1$ points, we will definitely do better, but then it's kind of overfitting.

This basically translates to why a hypothesis class with infinite VC dimension is not PAC learnable: it shatters every set of size $|X|$, and we have already seen, in the NFL theorem, the implications of picking a hypothesis class which shatters a set of size $|X|$.

This is the informal description of how the NFL theorem was arrived at. You can find the entire explanation in this lecture after which the proof in the book will start to make much more sense.

Thus, inductive bias (restricting the hypothesis class to some possible good candidates for $h$) is quite important, as can be seen from the effects of having no inductive bias at all.

",,user9947,,user9947,3/26/2020 12:34,3/26/2020 12:34,,,,0,,,,CC BY-SA 4.0 18806,2,,18791,3/26/2020 0:26,,1,,"

It's not possible to solve version 1) of the problem in general. To see why, consider a graph with 2 cities, and 2 agents, where the agents start in opposite nodes. Since both agents need to move every turn, they will never meet in the same city.

For version 2), I'm going to make some assumptions that aren't completely clear from your text:

  • Agents have only local information. They can't see the global structure of the graph, and just all agree to meet at city X, because they don't know that X exists. They also can't see where other agents are initially.
  • Cities are labelled, or otherwise identifiable to the agents. That is, an agent can tell whether they've visited a city before or not.
  • Agents can tell how many agents are in their current city, and in all adjacent cities.
  • There are a finite number of cities, and it's always possible to get from city A to city B, for any pair of cities (A,B).
  • Agents have no way to communicate with each other, except by being present in the same space or not.

In this setup, the following heuristic will always work:

  • If you've never seen another agent, move to a random adjacent city, or stay in place, with a uniform probability of taking each possible action.

  • If you've never seen another agent before, but you see one or more agents now, then switch permanently to the following strategy:

    1. If you can see an adjacent city with more agents than your city, move there now. If you can see several such cities, move to the largest one. Pick randomly in case of a tie.
    2. If an adjacent city has the same number of agents as yours, flip a coin to decide whether to go or stay. Eventually, one city ends up with more than the other, and then step 1 causes all agents to end up in the same city.
    3. Otherwise, if you can see other adjacent cities with fewer agents, just stay put. They'll come to you.
    4. If all adjacent agents are empty and there's a total of $n$ agents you can see in your city right now, then remember $n$.
    5. As long as you still can't see more than $n$ agents in total, stay put with probability $(n-1)/n$ and move to a random adjacent city with probability $1/n$.
    6. If you stayed put per step 5, and you still can only see n agents, and all but one of them is in your current city, move to the city with the single agent who moved. Otherwise, if 0 or more than 1 agent moved, stay put. The agents who moved come back, and we go to step 5.
    7. After successfully moving, go to step 1 again.

You can see that once agents meet up in a group, that group never shrinks. Further, groups of agents are always trying to move around together. You can prove that a group moves an average of about $\frac{1}{2e}$ every step. It may take a while, but eventually every group will bump into every other group, resulting in all the agents being in the same city.

",16909,,,,,3/26/2020 0:26,,,,2,,,,CC BY-SA 4.0 18808,1,,,3/26/2020 3:27,,3,41,"

Is there any way of generating fixed-length sequences with RNNs? I want to tell my character level RNN to generate a name of length 3, 4, 5 and so on. I haven't found anything online like this, but my intuition tells me that, if I append the sequence length (e.g. 5) at each RNN input, the RNN should be able to do this. Does anyone know if this task is possible?

",32023,,2444,,3/26/2020 12:07,3/26/2020 12:07,Is there any way of generating fixed-length sequences with RNNs?,,0,2,,,,CC BY-SA 4.0 18809,1,18814,,3/26/2020 5:22,,4,813,"

VAE is trained to reduce the following two losses.

  1. KL divergence between inferred latent distribution and Gaussian.

  2. the reconstruction loss

I understand that the first one regularizes VAE to get structured latent space. But why and how does the second loss help VAE to work?

During the training of the VAE, we first feed an image to the encoder. Then, the encoder infers mean and variance. After that, we sample $z$ from the inferred distribution. Finally, the decoder gets the sampled $z$ and generates an image. So, in this way, the VAE is trained to make the generated image to be equal to the original input image.

Here, I cannot understand why the sampled $z$ should reconstruct the original image: since $z$ is sampled, it seems that $z$ does not have any relationship with the original image.

But, as you know, VAE works well. So I think I miss something important or understand it in a totally wrong way.

",18139,,2444,,11/7/2020 1:02,11/7/2020 1:02,Why does the variational auto-encoder use the reconstruction loss?,,1,0,,,,CC BY-SA 4.0 18810,1,18811,,3/26/2020 6:59,,2,1557,"

I'm using PDDL to generate a plan to solve this tower of Hanoi puzzle. I'll give the problem, the rules, the domain and fact sheet for everything.

The PDDL planner is telling me that the goal can be simplified to false; however, I know for a fact that this puzzle is solvable.

Puzzle:

There are 3 posts, each with rings on it, described from bottom to top. The first post has the second-largest ring. The second post has the smallest ring, with the second-smallest ring on top of it. The third post has the third-largest ring, with the largest ring stacked on top of it.

Rules:

The rules of this game are that you may only stack a ring on top of a larger ring. Your goal is to get all of the rings onto the same post, stacked from largest to smallest.

My Code

Domain

(define (domain hanoi)
  (:requirements :strips)
  (:predicates (clear ?x) (on ?x ?y) (smaller ?x ?y))

  (:action move
    :parameters (?disc ?from ?to)
    :precondition (and (smaller ?to ?disc) (on ?disc ?from) 
               (clear ?disc) (clear ?to))
    :effect  (and (clear ?from) (on ?disc ?to) (not (on ?disc ?from))  
          (not (clear ?to))))
  )

Problem

(define (problem hanoi5)
  (:domain hanoi)
  (:objects peg1 peg2 peg3 d1 d2 d3 d4 d5)
  (:init 
    (smaller peg1 d1) (smaller peg1 d2) (smaller peg1 d3)
    (smaller peg1 d4) (smaller peg1 d5)
    (smaller peg2 d1) (smaller peg2 d2) (smaller peg2 d3)
    (smaller peg2 d4) (smaller peg2 d5)
    (smaller peg3 d1) (smaller peg3 d2) (smaller peg3 d3)
    (smaller peg3 d4) (smaller peg3 d5)

    (smaller d2 d1) (smaller d3 d1) (smaller d3 d2) (smaller d4 d1)
    (smaller d4 d2) (smaller d4 d3) (smaller d5 d1) (smaller d5 d2)
    (smaller d5 d3) (smaller d5 d4)

    ;(clear peg2) (clear peg3) (clear d1)
    ;(on d5 peg1) (on d4 d5) (on d3 d4) (on d2 d3) (on d1 d2))
    (clear d2) (clear d4) (clear d1)
    (on d2 peg1) (on d5 peg2) (on d4 d5) (on d3 peg3) (on d1 d3))

  (:goal (and (on d5 d4) (on d4 d3) (on d3 d2) (on d2 d1)))
)

I'm really at a loss here. Thank you!

",34495,,,,,3/26/2020 7:46,Can't solve Towers of Hanoi in PDDL,,1,0,,,,CC BY-SA 4.0 18811,2,,18810,3/26/2020 7:46,,1,,"

Ah hah!

The way I had defined the disks made d5 the LARGEST disk, not the smallest. So, the last few lines of the file should be:

   (clear d4) (clear d2) (clear d5)
    (on d4 peg1) (on d1 peg2) (on d2 d1) (on d3 peg3) (on d5 d3))

  (:goal (and (on d1 d2) (on d2 d3) (on d3 d4) (on d4 d5)))
)

However

If I wanted the opposite to be true, i.e. d1 to be the largest, the changes would have been as follows:

; Swapped ?disc & ?to in (smaller ...) statement
:precondition (and (smaller ?disc ?to) (on ?disc ?from) 

And

; Swapped all d* and peg* objs within (smaller ...) statements
(smaller d1 peg1) (smaller d2 peg1) (smaller d3 peg1)
(smaller d4 peg1) (smaller d5 peg1)
(smaller d1 peg2) (smaller d2 peg2) (smaller d3 peg2)
(smaller d4 peg2) (smaller d5 peg2)
(smaller d1 peg3) (smaller d2 peg3) (smaller d3 peg3)
(smaller d4 peg3) (smaller d5 peg3)

:)

",34495,,,,,3/26/2020 7:46,,,,0,,,,CC BY-SA 4.0 18813,1,,,3/26/2020 13:08,,2,106,"

To my understanding, the NFL theorem states that we cannot have a hypothesis class (let's assume it is an approximator like a NN in this case) that achieves an error $\leq \epsilon$ with probability greater than a certain $p$, given that the number of points from which we can sample is upper bounded by $m$.

Whereas the UAC (universal approximation theorem) states that an approximator like a NN, given enough hidden units, can approximate any function (to my knowledge, the function must be bounded).

The point where these 2 clash (as per my knowledge) is that, if we increase the parameters in a NN, the UAC will start to hold, but the VC dimension will increase (or the hypothesis class becomes richer), and for the same $m$ our $\epsilon$ increases or $p$ decreases (not sure which one is affected).

So what are the gaps in my knowledge here? How do we make these 2 consistent with each other?

",,user9947,2444,,3/26/2020 13:47,3/26/2020 13:47,Are No Free Lunch theorem and Universal Approximation theorem contradictory in the context of neural networks?,,0,0,,,,CC BY-SA 4.0 18814,2,,18809,3/26/2020 13:13,,2,,"

The VAE uses the ELBO loss, which is composed of the KL term and the likelihood term. The ELBO loss is a lower bound on the evidence of your data, so if you maximize the ELBO you also maximize the evidence of the given data, which is what you indirectly want to do, i.e. you want the probability of your given data (i.e. the data in your dataset) to be high (because you want to use the VAE for the generation of inputs similar to the ones in your dataset). So, the idea is that you optimize both the KL term and the reconstruction (or likelihood) term jointly (i.e. the ELBO). Why? Because, as I just said, the ELBO is the Evidence Lower BOund on the given data, so, by maximizing it, you are also maximizing the evidence of your data. In other words, if you maximize the ELBO, you are finding a decoder that will have a high probability of reconstructing your inputs (i.e. the likelihood term), but, at the same time, you want your encoder to be constrained (i.e. KL term). Please, read this answer for further details.

Here, I cannot understand why the sampled $z$ should make the original image, since the $z$ is sampled, it seems that the $z$ does not have any relationship between the original image.

The relationship is that you will be maximizing the ELBO, which implies (and you can see this implication only if you are familiar with the ELBO loss) you will be minimizing the KL divergence between your posterior and the prior to generate the samples $z$ (i.e. minimizing because there will be a minus in front of the KL term in the ELBO loss) and maximizing the probability of the reconstructed input. More precisely, $z$ is used to reconstruct the input (i.e. the decoder does this), which is then used to calculate the reconstruction loss.

In the mathematical formulations, you will see that the likelihood term of the ELBO is $p(x \mid z)$, i.e. the likelihood of the input $x$ given $z$. The $z$ is the input to the decoder, which produces a reconstruction of $x$. In practice, people will e.g. use the cross-entropy to then calculate the ""reconstruction loss"" (e.g. see this PyTorch implementation), which should correspond to this likelihood term $p(x \mid z)$. Why does the cross-entropy correspond to a likelihood? Because you can actually prove that the cross-entropy is equivalent to the negative log-likelihood. (Also, note that, in the ELBO loss, $p(x \mid z)$ does not appear, but the logarithm of $p(x \mid z)$ appears, but, for simplicity, I have used $p(x \mid z)$ rather than $\log p(x \mid z)$ above.)
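For concreteness, here is a minimal PyTorch sketch of that loss (my own illustration, assuming a Bernoulli decoder whose outputs are probabilities in $[0,1]$, e.g. via a sigmoid, and a diagonal Gaussian encoder, as in the PyTorch implementation linked above):

import torch
import torch.nn.functional as F

def vae_loss(reconstructed_x, x, mu, log_var):
    # Reconstruction term: corresponds to -log p(x | z), here the binary cross-entropy
    reconstruction = F.binary_cross_entropy(reconstructed_x, x, reduction='sum')
    # KL(q(z | x) || p(z)) in closed form, for a diagonal Gaussian posterior and a standard normal prior
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    # Negative ELBO: minimizing this maximizes the ELBO
    return reconstruction + kl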

",2444,,2444,,3/26/2020 20:48,3/26/2020 20:48,,,,2,,,,CC BY-SA 4.0 18817,2,,18798,3/26/2020 14:40,,5,,"

For the programming part, I suggest this YouTube channel by Phil Tabor (he also has a website: neuralnet.ai). I found his videos really useful while I was attending reinforcement learning classes at university. He covers basic algorithms like value iteration and policy iteration, and also more advanced ones like deep Q-learning, covering all the main Python libraries (Keras, TensorFlow, PyTorch). Hope it will help you as well!

",34098,,,,,3/26/2020 14:40,,,,0,,,,CC BY-SA 4.0 18818,1,,,3/26/2020 15:22,,2,75,"

I've created a deep Q network. My model does not get better, and I can't see what I'm doing wrong. I'm new to RL.

Replay Memory

class ReplayMemory(object):

    def __init__(self, input_shape, mem_size=100000):
        self.states = np.zeros((mem_size, input_shape))
        self.actions = np.zeros(mem_size, dtype=np.int32)
        self.next_states = np.zeros((mem_size, input_shape))
        self.rewards = np.zeros(mem_size)
        self.terminals = np.zeros(mem_size)

        self.mem_size = mem_size
        self.mem_count = 0

    def push(self, state, action, next_state, reward, terminal):

        idx = self.mem_count % self.mem_size

        self.states[idx] = state
        self.actions[idx] = action
        self.next_states[idx] = next_state
        self.rewards[idx] = reward
        self.terminals[idx] = terminal

        self.mem_count += 1

    def sample(self, batch_size):
        batch_index = np.random.randint(0, min(self.mem_count, self.mem_size), batch_size)

        states = self.states[batch_index]
        actions = self.actions[batch_index]
        next_states = self.next_states[batch_index]
        rewards = self.rewards[batch_index]
        terminals = self.terminals[batch_index]

        return (states, actions, next_states, rewards, terminals)

    def __len__(self):
        return min(self.mem_count, self.mem_size)

DQN Agent

class DQN_Agent(object):

  def __init__(self, n_actions,n_states, ALPHA=0.001, GAMMA=0.99, eps_start=1 , eps_end=0.01, eps_decay=0.005):
      self.n_actions = n_actions
      self.n_states = n_states

      self.memory = ReplayMemory(n_states)

      self.ALPHA = ALPHA
      self.GAMMA = GAMMA

      self.eps_start = eps_start
      self.eps_end = eps_end
      self.eps_decay = eps_decay


      self.model = self.create_net()
      self.target = self.create_net()

      self.target.set_weights(self.model.get_weights())

      self.steps_counter = 0

  def create_net(self):

    model = Sequential([
        Dense(64, activation=""relu"", input_shape=(self.n_states,)),
        Dense(32, activation=""relu""),
        Dense(self.n_actions)
    ])

    model.compile(loss=""huber_loss"", optimizer=Adam(lr=0.0005))

    return model

  def select_action(self, state):
      ratio = self.eps_end + (self.eps_start-self.eps_end)*np.exp(-1*self.eps_decay*self.steps_counter)
      rand = random.random()
      self.steps_counter += 1

      if ratio > rand:
          #print(""random"")
          return np.random.randint(0, self.n_actions)
      else:
          #print(""not random"")
          return np.argmax(self.model.predict(state))


  def train_model(self, batch_size):
      if len(self.memory) < batch_size:
          return None

      states, actions, next_states, rewards, terminals = self.memory.sample(batch_size)

      q_curr = self.model.predict(states)
      q_next = self.target.predict(next_states)
      q_target = q_curr.copy()


      batch_index = np.arange(batch_size, dtype=np.int32)

      q_target[batch_index, actions] = rewards + self.GAMMA*np.max(q_next, axis=1)*terminals

      _ = self.model.fit(states, q_target, verbose = 0)

      if self.steps_counter % 10 == 0:
          self.target.set_weights(self.model.get_weights())

My training loop

n_games = 50000
agent = DQN_Agent(2, 4)

scores = []
avg_scores = []

for epoch in range(n_games):
    done = False
    score = 0
    state = env.reset()

    while not done:
       #env.render()
       action = agent.select_action(state.reshape(1,-1)) 
       next_state, reward, done, _ = env.step(action)


       score += reward

       agent.memory.push(state, action, next_state, reward, done)

       state = next_state
       agent.train_model(64)


    avg_score = np.mean(scores[max(0, epoch-100):epoch + 1])
    avg_scores.append(avg_score)
    scores.append(score)
    print(score, avg_score)
",34504,,2444,,3/26/2020 16:47,3/26/2020 16:47,Why is my DQN model not getting better?,,0,0,,,,CC BY-SA 4.0 18820,1,19962,,3/27/2020 9:08,,3,58,"

In the apprenticeship learning algorithm described by Ng et al. in Apprenticeship Learning via Inverse Reinforcement Learning, they mention that expert trajectories come in the form of $\{s_0^i, s_1^i, \dots\}_{i=1}^m$. However, they also mention that $s_0$ is drawn from a distribution $D$. Do all expert trajectories then have to have the same starting state? Why is it not possible to compute the feature expectation based on a single trajectory?

",32780,,2444,,3/27/2020 10:35,4/3/2020 13:54,Do all expert trajectories have the same starting state in apprenticeship learning?,,1,0,,,,CC BY-SA 4.0 18821,1,,,3/27/2020 12:30,,1,18,"

There are many examples in Python which have a ready-made data set; for example, there is pre-trained T-shirt data and thousands of images, and within a few minutes the model will tell how many T-shirt images there are in those folders.

But how is that detect.tflite data itself created from scratch, step by step? If I wanted to manually write that data in a CSV, how should it be done?

Ultimately, I have data which contains: click event, keyboard event, process exe name, title of the window, and timestamp.

I want to detect what exactly that user is doing on his computer and who told him to do what; I want the software to tell me in words and diagrams.

This is definitely a big piece of software, but the data classification part is unclear to me.

I need some guidance.

",34514,,,,,3/27/2020 12:30,Data classification model to detect a process in an event log,,0,0,,,,CC BY-SA 4.0 18822,1,,,3/27/2020 14:26,,3,98,"

I stumbled upon a job offer from a company that was looking for someone who was good with Reinforcement Learning (applied to finance) and something in their offer caught my eye. It goes something like this:

We want you to be able to study the price dynamic (of a stock I suppose) and its evolution in order to extract a Joint PDF that will be used in the Optimal Stochastic Control of a Loss Function (or gain)

The thing is, I understand what each of these things means and how they are used separately (from my background in control theory & dynamical systems), and I have worked with fitting joint PDFs and copulas before, but I don't understand how a joint PDF would help with the ""Optimal Stochastic Control of a Loss Function""? Thanks.

",34516,,,user9947,3/28/2020 12:42,3/28/2020 12:42,What does a joint probability density function have to do with Stochastic Optimal Control and Reinforcement Learning?,,1,8,,,,CC BY-SA 4.0 18823,1,,,3/27/2020 15:03,,1,76,"

Last year it was announced that Deepmind's Starcraft playing bot AlphaStar was taking on human players in the Starcraft ladder system (some kind of league system as far as I can tell) and that it had reached the Grandmaster level.

Since then I haven't really heard anything anymore about the progress of Alphastar. Given that I don't know anything about Starcraft I was wondering whether somebody has a better clue as to what Alphastar is up to? Is it still playing online? Or when did it stop playing? What was the improvement trajectory during the time it played online? Basically, how did this pan out, as seen from the perspective of the Starcraft community?

",2227,,,,,3/27/2020 15:03,Is AlphaStar still competing in the Star Craft ladder?,,0,0,,,,CC BY-SA 4.0 18825,2,,18822,3/27/2020 15:57,,1,,"

Extracting a joint PDF just means that you create a model that models the behavior of several variables combined instead of in isolation.

If these variables aren't independent and your loss function is influenced by all of them, you obviously have to learn this joint PDF to minimize your loss.

So I don't see this statement as particularly mysterious.

",2227,,,,,3/27/2020 15:57,,,,3,,,,CC BY-SA 4.0 18826,1,,,3/27/2020 16:07,,2,156,"

In Goodfellow's paper, he says:

Hence, by inspecting Eq. 4 at $D^*_G (\mathbf{x}) = \frac{1}{2}$, we find $C(G) = \log \frac{1}{2} + \log \frac{1}{2} = -\log 4$. To see that this is the best possible value of $C(G)$

i.e. $D$ and $G$ loss should converge to $\log \frac{1}{2}$. This makes perfect sense. When I train a GAN in PyTorch with BCEloss, the loss for $D$ and $G$ converge to $\log(2)$, the negative of what Goodfellow states and what I'd expect.

What am I missing?

",34518,,2444,,3/27/2020 17:05,3/27/2020 17:05,Why does GAN loss converge to log(2) and not -log(2)?,,0,2,,,,CC BY-SA 4.0 18828,2,,5728,3/27/2020 17:51,,1,,"

I found a paper that gives a table of time complexities for different architectures using linear programming-based training: https://arxiv.org/abs/1810.03218

",34521,,,,,3/27/2020 17:51,,,,2,,,,CC BY-SA 4.0 18829,1,,,3/27/2020 18:28,,5,744,"

Batch norm is a technique where they essentially standardize the activations at each layer, before passing them on to the next layer. Naturally, this will affect the gradient through the network. I have seen the equations that derive the back-propagation equations for the batch norm layers. From the original paper: https://arxiv.org/pdf/1502.03167.pdf

However, I have trouble understanding if there is an intuitive understanding of what effect it actually has on the network. For instance, does it help with the exploding gradient problem, since the activations are rescaled, and the variance of them is constrained?

",18086,,2444,,3/27/2020 18:47,4/27/2020 0:01,What effect does batch norm have on the gradient?,,1,0,,,,CC BY-SA 4.0 18830,1,,,3/27/2020 19:39,,3,214,"

I've just started learning CSP and I find it quite exciting. Now I'm facing a nonogram solving problem and I want to solve it using backtracking with CSP.

The first problem that I face is that I cannot quite figure out what such variables, domains, and constraints could be. My first thought was to make every field (those squares) a variable with such a domain: $D_i = \{1,0\}$, where 1 means that a certain field has been colored black and 0 white.

So far I've been mostly learning binary constraints, and I was thinking of using the AC-3 or forward checking algorithm for propagation during the backtracking algorithm.

As far as I know, constraints of any arity can be represented as a set of binary constraints, so that would enable me to use the algorithms I mentioned. But that leads me to the problem of defining constraints. If every field were a variable, then each line and column would be a constraint, based on how certain lines should be colored (e.g. a line defined by the numbers 2,3,2).

But it's all new and quite hard for me to imagine and come up with. I've been reading some articles and papers on that but they were too advanced for me.

So, does anybody have an idea how can I formulate a nonogram problem as a constraint satisfaction problem?

",34483,,2444,,12/19/2021 20:24,12/19/2021 20:24,How can I formulate a nonogram problem as a constraint satisfaction problem?,,0,3,,,,CC BY-SA 4.0 18831,1,,,3/27/2020 21:56,,4,36,"

I have a question about how weights are updated during back-propagation for some of my samples that have unknown labels (please note: unknown, not missing). The reason they are unknown is that this is genomic data, and generating these data would take 8 years of lab work! Nevertheless, I have genomic data for samples that have multiple labels (sex, age, organ, etc.), so this is a multi-class, multi-label problem.

For most classes, ALL labels are complete. For two or three classes, there are unknown labels. An example would be the developmental stage: for samples at age x and samples at age y the developmental stage is known, while for samples at age Z the developmental stage is unknown! (Generating this data is what would take the most time.) I would therefore like to include all this data during training, as it is indispensable. I would like to generate the sigmoid probability and assign the unknown label 'Z' as belonging to developmental stage 0 or 1 (known classes) based on a threshold (say >= 0.5). When one-hot encoding, the unknown labels simply have no ground truth, 0 for class developing and 0 for not-developing, as follows (example of 3 samples shown for the class in question):

  [[1., 0.
   [0., 1.
   [0., 0.  ......]]

The first row is known sample 1, the second is known sample 2, and the 3rd is unknown and therefore has no ground truth. It is this sample I would like to assign a label of known class 1 or 2, based on the 'highest probability'. Based on reading and discussions, this is the direction I will be taking for this task, as it can be validated in the lab later. So the approach is: include it in training and see what the network 'thinks' it is.

My question is: how does back-propagation handle these known and unknown samples with respect to weight updates?

I should note I have trained the network with ~90% validation performance. For all classes for which there is complete data, the predictions are great, and the same goes for classes for which there is unknown data. It can accurately classify the samples for which there are known developmental stages, and it does assign a probability value to those samples that have the 'unknown' label (0,0), so I would really like to know how back-prop is handling these samples for the classes where there are unknown ground-truth labels.

thank you!

",34530,,,,,3/27/2020 21:56,How do weights changes handles during back-propagation when there are unknown labels,,0,0,,,,CC BY-SA 4.0 18832,2,,18829,3/27/2020 23:36,,1,,"

""Naturally, this will affect the gradient through the network."" this statement is only partially true, let's see why by starting explaining the real aim of batch normalisation.

As the title of the paper suggests, the aim of batch normalisation is to decrease training time by reducing covariate shift. What is covariate shift? We can conceive it as the variation that can happen between the values of two layers of a network. We are all familiar with the idea that, if we have input features with different unit scales, like kilos and euros, most likely a lot of values will have a different order of magnitude, like thousands for the weight appearing alongside hundreds of thousands for the money. When applying the activation function to values of different orders of magnitude, this discrepancy will remain, causing the values in the first layer to assume a really broad range. This is not good, since high fluctuation means more time to converge to stable values. This is why the values fed to neural nets are always standardised.

The authors applied the same logic to the hidden layers, arguing that a deep neural net can be conceived as a repetition of itself (every hidden layer is an input layer that sends features to another hidden layer), ergo the features should be normalised at every layer. How can we do it? Normalising every batch is the most natural way to proceed, but in this way there is the risk of ending up transforming the internal representation of a layer, because normalising is not a linear transformation. This is why the authors propose a clever way to perform hidden-layer normalisation, which consists of a classic normalisation followed by a linear scaling performed with two trainable parameters $\beta$ and $\gamma$ (which appear in the last step of the batch norm sketch below).
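
Here is a minimal NumPy sketch of that normalise-then-scale step (my own illustration of the training-mode forward pass, assuming a 2D batch of features, not the original figure):

import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    # x has shape (batch_size, num_features); statistics are computed over the mini-batch
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)   # classic normalisation
    return gamma * x_hat + beta             # learnable scale (gamma) and shift (beta)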

A really important thing to notice is that the mean and variance used to perform the classic normalisation are the mean and variance calculated on the mini-batch. I will explain why this is important in a moment; first I want to stress that the $\beta$ parameter can actually increase overfitting when batch norm is naively stacked on top of other layers. The reason is that, as we see in the scale-and-shift step, $\beta$ is nothing but a bias term, added to perform the shift of the mean of the batch of hidden values. So, to avoid having an extra bias that leads to overfitting, the bias of the previous layer should be removed, leaving only the classic matrix of weight parameters.

Back to the gradient problem: we can see that batch norm in itself doesn't necessarily lead to increased performance, but it does provide an advantage in terms of the convergence of the hidden-layer values. The x axis on the two right subplots of the figure below represents the variation of the hidden values of a net trained with and without batch norm. When trained with batch norm, the hidden values reach stable ranges after a few iterations. This helps the net reach high accuracy in fewer iterations (first subplot on the left), but we can see that even without batch norm the net eventually reaches high accuracy.

The only help provided by batch norm to the gradient is the fact that, as noticed before, the normalisation is performed by calculating the mean and variance on individual mini-batches. This is important because this partial estimation of the mean and variance introduces noise. Similarly to dropout, which has a regularisation effect due to the noise generated by randomly deactivating part of the weights, batch norm can introduce regularisation through the noise due to the larger or smaller mean and variance estimated on individual batches. But still, batch norm was not introduced as a regularisation technique, and the equation you put in the question simply proves that it is possible to calculate the derivatives of the equations applied to perform batch norm.

",34098,,,,,3/27/2020 23:36,,,,3,,,,CC BY-SA 4.0 18833,2,,18798,3/28/2020 1:44,,6,,"

Let me first say that deep RL is just the combination of RL with deep learning. So, if you study RL and deep learning, then studying deep RL should be straightforward. For this reason, this answer will point the reader to potentially useful courses on RL (also because there aren't many free courses completely dedicated to deep RL), which have at least one lesson on deep RL or function approximation. I have only followed the course by Isbell and Littman and partially the course by David Silver, so I can't assure you that the other courses are good, but I found these two useful, although not perfect.

| Title | Instructor(s) | Focus on deep RL? | Topics | Free |
|---|---|---|---|---|
| Reinforcement Learning | Charles Isbell, Michael Littman | No | TD learning, convergence, function approximation, POMDP, options, game theory | Yes |
| Introduction to Reinforcement Learning with David Silver | David Silver | No | MDPs, planning, dynamic programming, model-free prediction and control, function approximation, policy gradients, exploration and exploitation | Yes |
| CS234: Reinforcement Learning Winter 2020 | Emma Brunskill | No | See the course schedule; lesson 6 is about DRL | Yes |
| Reinforcement Learning | NPTEL | No | Bandits, MDPs, policy gradients, dynamic programming, TD learning, function approximation, hierarchical RL, POMDP | Yes |
| Reinforcement Learning in the Open AI Gym | Phil Tabor | ? | SARSA, double Q-learning, Monte Carlo methods, Q-learning | Yes |
| Advanced Deep Learning & Reinforcement Learning | DeepMind | No | Video 14 discusses DRL topics | Yes |
| Advanced AI: Deep Reinforcement Learning in Python | Udemy | Yes, it seems | ? | No |
| Machine Learning: Beginner Reinforcement Learning in Python | Udemy | ? | ? | No |
| Deep Reinforcement Learning 2.0 | Udemy | ? | ? | No |
| Modern Reinforcement Learning: Deep Q Learning in PyTorch | Phil Tabor (Udemy) | ? | ? | No |
| Modern Reinforcement Learning: Actor-Critic Methods | Phil Tabor (Udemy) | ? | ? | No |

In any case, if you are familiar with RL and deep learning topics, I encourage you to directly read the DQN papers (both by DeepMind folks)

Of course, deep RL isn't just DQN, but these are two very important papers that you should read. Other key papers on deep RL can be found here.

Note that, depending on your experience with and knowledge of RL and DL, you may require a few iterations to fully understand these papers, but this applies every time you need to read a research paper.

",2444,,2444,,1/14/2022 19:35,1/14/2022 19:35,,,,0,,,,CC BY-SA 4.0 18834,2,,12034,3/28/2020 4:35,,1,,"

It's pretty easy to tell what's going on with AI Angel Angelica. If you've watched some of the videos, you'll notice the steady progression of the real-time rendering. The first videos were really jittery and there were a lot of problems with the movements (awkward mouth movements, fingers unable to move, etc.), compared to the more recent ones with realistic-ish mouth movements and full finger movements. So, they started off with a motion capture program with a special suit to capture movement. As they progressed, the suit got a lot more sensors and the programming itself was streamlined. As far as it being artificial intelligence, no, it's not. There's a person in a suit talking to a camera that's been programmed to pick up movement and translate it onto an avatar. Based on the movements from the most recent video, there are sensors on a few parts of the face, neck, shoulders, elbows, waist, hips, knees and feet, and probably 3 on each finger and 2 on each thumb. Based upon the camera movements, the programming is designed to follow the avatar, which is why it seems to follow her when she moves around; additionally, she's able to turn around while the camera remains in place. There is probably a face camera attached to the suit to track the minute facial changes (think of the movie Avatar) and two or three cameras mounted around the area to pick up the rest of the sensors (which could just be colored dots).

The setup is probably a greenroom with furniture in some of the places the furniture shows up in the 3d 'room' she's in. There is probably another camera or sensor that is used so she can move around when she shows off her computer setup. The model probably just uses a button to switch between them. I don't know if the voice is real or altered, but it has changed over the course of the videos. If it is real, the model is probably female speaking in real-time. Whether or not the model is also the developer is another question. I don't know.

Regardless, the real-time rendering and advanced 3d space analysis requires a pretty hefty computer setup, so whoever is doing this is on the level of a professional with professional-grade computing power. It is WAY beyond anything I'm capable of at the moment, so my hat's absolutely off to the developer and/or model. Extremely impressive.

",34537,,,,,3/28/2020 4:35,,,,0,,,,CC BY-SA 4.0 18835,1,,,3/28/2020 5:03,,3,157,"

I am learning deep learning from Andrew Ng's tutorial Mini-batch Gradient Descent.

Can anyone explain the similarities and dissimilarities between batch GD and mini-batch GD?

",9863,,2444,,1/8/2022 21:10,1/8/2022 21:39,What is the difference between batch and mini-batch gradient decent?,,2,0,,,,CC BY-SA 4.0 18836,2,,18835,3/28/2020 7:07,,3,,"

It is really simple.

In gradient descent not using mini-batches, you feed your entire training set of data into the network and accumulate a cost function based on this full set of data. Then you use gradient descent to adjust the network weights to minimize the cost. Then you repeat this process until you get a satisfactory level of accuracy. For example, if you have a training set consisting of 50,000 samples, you would feed all 50,000 samples along with the 50,000 labels into the network, then perform gradient descent and update the weights. This is a slow process because you have to process 50,000 inputs to do just one step of gradient descent.

To make things go faster instead of running all 50,000 inputs through the network, you split up the training set into ""batches"". For example, you could break the training set up into 50 batches each containing 1000 samples. You would feed the network the first batch of 1000 samples, accumulate the loss value then perform gradient descent and adjust the weights. Then you feed in the next batch of 1000 samples and repeat the process. So, now, instead of only getting one step of gradient descent for 50,000 samples, you get 50 steps of gradient descent. This method of using batches leads to a much faster convergence of the network.
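To make the loop concrete, here is a minimal sketch with a plain linear model and made-up data (all names and sizes are illustrative, not from the question):

import numpy as np

# hypothetical data: 50,000 samples split into mini-batches of 1,000
X, y = np.random.randn(50000, 10), np.random.randn(50000)
w, lr, batch_size = np.zeros(10), 0.01, 1000

for start in range(0, len(X), batch_size):
    xb, yb = X[start:start + batch_size], y[start:start + batch_size]
    error = xb @ w - yb                 # loss is accumulated on this batch only
    grad = xb.T @ error / len(xb)       # gradient of the mean squared error
    w -= lr * grad                      # one gradient descent step per batch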

",33976,,2444,,3/28/2020 12:41,3/28/2020 12:41,,,,0,,,,CC BY-SA 4.0 18837,1,,,3/28/2020 7:59,,5,654,"

I am a very beginner in the field of AI. I am basically a Pharma Professional without much coding experience. I use GUI-based tools for the neural network.

I am trying to develop an ANN that receives as input a protein sequence and produces as output a drug molecule. Drug molecules can be represented as fixed-length binary (0-1). This length is 881 bits.

However, I do not know how to transform protein sequences of variable length into a fixed-length binary representation.

So, how should I deal with variable-length inputs for a neural network? What is the best way?

",34540,,18758,,12/17/2021 0:16,12/17/2021 0:18,How should I deal with variable-length inputs for neural networks?,,1,1,,,,CC BY-SA 4.0 18838,2,,18579,3/28/2020 8:51,,1,,"

Searle's Chinese room is analogical and is intended to present an easy-to-understand picture of the essential elements and processes of the digital computer. In the room, the man (CPU) has a book of instructions (program) for responding to Chinese input questions. That is just one program of many possible programs the room could run. Each different program would be a different instruction book. SOAR would be just one of those books.

",17709,,,,,3/28/2020 8:51,,,,2,,,,CC BY-SA 4.0 18839,2,,18837,3/28/2020 10:37,,4,,"

The most common way people deal with inputs of varying length is padding.

You first define the desired sequence length, i.e. the input length you want your model to have. Then any sequences with a shorter length than this are padded either with zeros or with special characters so that they reach the desired length. If an input is larger than your desired length, usually you'd split it into multiple inputs.
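As a small illustration, here is a hypothetical helper (the function name and pad token are placeholders) that pads shorter sequences and splits longer ones into fixed-length inputs:

def pad_or_split(seq, desired_length, pad_token=0):
    # return a list of fixed-length sequences built from `seq`
    chunks = [seq[i:i + desired_length] for i in range(0, len(seq), desired_length)] or [[]]
    return [chunk + [pad_token] * (desired_length - len(chunk)) for chunk in chunks]

pad_or_split([5, 1, 7], 5)        # [[5, 1, 7, 0, 0]]
pad_or_split(list(range(7)), 5)   # [[0, 1, 2, 3, 4], [5, 6, 0, 0, 0]]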

",26652,,18758,,12/17/2021 0:18,12/17/2021 0:18,,,,0,,,,CC BY-SA 4.0 18840,1,,,3/28/2020 12:59,,2,292,"

I'm trying to perform a segmentation task on images of multiple sizes using fully convolutional neural networks.

Currently, I'm using EfficientNet as a feature extractor, and adding a deconvolution/backwards convolution/transposed convolution layer as described in the original Fully Convolutional Networks for Semantic Segmentation paper.

But this transposed convolution layer doesn't return a filter of a size equivalent to the original image for images of varying sizes.

For example, let's assume the original image is $100 \times 100$, and the last layer contains filters of size $50 \times 50$. To get a filter of the same size as the original, you would need a transposed convolution layer of size $51 \times 51$.

Now, assume you passed in an image of size $200 \times 200$. The last layer would contain filters of size $100 \times 100$. That same transposed convolutional filter of size $51 \times 51$ would result in an output of size $150 \times 150$.

Is there any way to make it so that a fully convolutional network always returns an image of the same size as the original?

",27240,,2444,,6/14/2020 11:02,11/21/2021 6:06,Can a fully convolutional network always return an image of the same size as the original?,,1,1,,,,CC BY-SA 4.0 19840,1,,,3/28/2020 14:45,,3,210,"

After playing around with normal Q-learning I have decided to switch to deep Q-learning and I have encountered this problem.

As I understand, for a task with a discrete action space, where there are 4 possible actions (let's say left, right, up, down), my DQN needs to have four outputs. Then the argmax of the prediction is taken, which will be my predicted action (if argmax(prediction)==2, then I will pick the third action, in my case up).

If I use Mean Squared Error loss for my network, then the dimension of the output needs to be the same as the dimension of the expected target variables. I am calculating the target variables using the following code:

target = rewards[i] + GAMMA * max(Qs_next_state[i]) which gives me a single number (while the predicted output is four-dimensional). As a workaround, I decided to use a custom loss function:

def custom_loss(y_true, y_pred): # where y_true is target, y_pred is output of neural net
    return ((max(y_pred) - y_true)**2) / BATCH_SIZE

But I am not sure if it is correct; from what I have read in tutorials/papers, loss functions do not have this max() in them. But how else am I going to end up with the same dimensionality between targets and NN outputs? What is the correct approach here?

",26336,,,,,3/28/2020 14:45,Whats the correct loss function to use during deep Q-learning (discrete action space),,0,0,,,,CC BY-SA 4.0 19841,1,,,3/28/2020 16:18,,1,157,"

As I want to start coding a new Trading AI this year (first based on Python and later maybe in C++) I stumbled over the following question:

Today, I would like to make a pro/contra list with you in the area of deep learning vs machine learning. The difference should be clear to most of you. If not, here is a nice explanation from Hackernoon.

Up to now, I was convinced that this future project will be based on Tensorflow, Keras, etc. However, the following came to my mind afterward.

Most of you will probably have heard of Pluribus already. Dr. Sandholm and Mr. Brown (as work for his Ph.D.) were the first to program an AI that won against 6 poker world champions in No-Limit-Texas Holdem Poker. This seemed impossible because poker is a game of imperfect information. If you haven't read/seen their work until now, here's a link to a Facebook blog post Facebook, Carnegie Mellon build first AI that beats pros in 6-player poker. See also the paper Superhuman AI for multiplayer poker and this video.

From the work, it is clear that they wrote the whole thing in C++ and WITHOUT the use of any deep learning library but exclusively on the basis of machine learning. This was also confirmed to me by both of them via email. So it was possible to bring the AI in under 24H to a level that could beat 6 Poker World Champions without any problems.

The stock and crypto market is nothing else. A game of imperfect information. The prices of a crypto coin or stock are influenced by an incredible number of factors. This includes of course prices from the past but also current media (as currently seen with covid-19) and data from the economy.

We want to grab the data out of the "Big Players", like CoinCap API, CryptoAPI.io for all kinds of historical and new charts prices, etc. The same we will do with the Yahoo Finance API to grab data out of the stock market. Depending on the size of this project and how it will develop, we want to implement also some kind of NLP to grab the most out of Economy Data like dailyfx news to predict some in/decreases for some stocks but this is a future feature.

So, basically, the main question is: should we use neural networks (deep learning) or machine learning?

All this leads me to the conclusion that I am not sure what the better option for a trading bot would be. I know that the training of AI-based on deep learning would take much longer than based on machine learning, but is it safe to say that the results are really better?

What are the pros and cons of deep learning and machine learning to develop a trading system?

",35548,,2444,,12/11/2021 21:16,12/11/2021 21:16,What are the pros and cons of deep learning and machine learning to develop a trading system?,,0,0,,,,CC BY-SA 4.0 19842,1,,,3/28/2020 17:15,,1,301,"

I am new to RL, and I am thinking of doing a little project. The goal of the project is to learn an agent play the memory game with cards.

I already created the program for detecting the cards on the table (with YOLO) and classifying them what kind of object they are.

I want an agent to be able to play the memory game by itself, without being explicitly told the rules and such.

Any tips on how to get started to make the RL process easier?

",7344,,2444,,3/28/2020 17:25,3/30/2020 9:16,How can I develop a reinforcement learning agent that plays memory cards game?,,1,3,,,,CC BY-SA 4.0 19844,1,,,3/28/2020 18:26,,1,203,"

As a first step in many NLP courses, we learn about text preprocessing. The steps include lemmatization, removal of rare words, correcting typos etc. But I am not so sure about the actual effectiveness of doing such a step; in particular, if we are learning a neural network for a downstream task, it seems like modern state of the art (BERT, GPT-2) just take essentially raw input.

For instance, this ACL paper seems to show that the result of text preprocessing is mixed, to say the least.

So is text preprocessing really all that necessary for NLP? In particular, I want to contrast/compare against vision and tabular data, where I have empirically found that standardization usually actually does help. Feel free to share your personal experiences/what use cases where text preprocessing helps!

",18086,,18086,,3/28/2020 18:38,4/26/2020 10:00,Is text preprocessing really all that necessary for NLP?,,2,0,,,,CC BY-SA 4.0 19845,2,,19844,3/28/2020 20:58,,2,,"

It all depends on the quality of the data. Due to the old rule ""Garbage in, garbage out"", if you have bad-quality data (data redundancy, unstructured data, too much memory, etc.), your results won't be spectacular.

Otherwise, everybody could be a Data Scientist, because the only task would be to ""put raw text into a classifier"". Also, you should remember that BERT and GPT-2 are deep learning models, so they do not need much preprocessing. In classical machine learning (sentiment prediction, for example), preprocessing is needed much more.
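To illustrate what such a basic step can look like, here is a minimal, hypothetical cleaning function in plain Python (the stopword list and function name are only placeholders):

import re

def basic_preprocess(text, stopwords=frozenset({"the", "a", "an"})):
    # lowercase, strip punctuation, drop a tiny stopword list
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s]", " ", text)
    return [t for t in text.split() if t not in stopwords]

basic_preprocess("The movie was GREAT, really!")  # ['movie', 'was', 'great', 'really']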

In short, preprocessing is optional, but highly advisable.

",19476,,19476,,4/26/2020 10:00,4/26/2020 10:00,,,,1,,,,CC BY-SA 4.0 19846,1,19848,,3/28/2020 21:12,,2,476,"

If a certain hypothesis class $\mathcal{H}$ has a $\mathcal{VC}$ dimension $d$ over a domain $X$, how can I prove that $H$ will shatter all subsets of $X$ with size less than $d$, i.e. $\mathcal{H}$ will shatter $A \subset X$ where $|A| \leq d-1$?

",,user9947,2444,,5/19/2021 13:00,5/19/2021 13:00,"How do I prove that $\mathcal{H}$, with $\mathcal{VC}$ dimension $d$, shatters all subsets with size less than $d-1$?",,1,0,,,,CC BY-SA 4.0 19848,2,,19846,3/29/2020 0:29,,2,,"

We can show that it is not true by a counterexample. For example, take $X = \{1,2,3\}$ and the finite hypothesis class $\mathcal H = \{\{\},\{1\},\{2\},\{1,2\}\}$. By definition, in this case, the $\mathcal{VC}$ dimension of $\mathcal H$ over the domain $X$ is $d=2$. Although $A = \{3\} \subset X$ has a size smaller than the $\mathcal{VC}$ dimension, i.e. $|A| < d = 2$, it is not shattered by $\mathcal H$.

",4446,,2444,,5/19/2021 8:33,5/19/2021 8:33,,,,0,,,,CC BY-SA 4.0 19849,1,,,3/29/2020 2:12,,2,378,"

According to this blog post

The purpose of an activation function is to add some kind of non-linear property to the function

The sigmoid is typically used as an activation function of a unit of a neural network in order to introduce non-linearity.

Is ReLU a non-linear activation function? And why? If not, then why is it used as an activation function of neural networks?

",27529,,2444,,3/29/2020 2:29,5/26/2020 16:34,Is ReLU a non-linear activation function?,,2,0,,,,CC BY-SA 4.0 19850,2,,19849,3/29/2020 2:27,,1,,"

Short Answer: Yes

Visually:

If you look at the image from Wikipedia, it shows that ReLU (the blue line) is non-linear (the line is not straight; it bends at 0). You can also check the ""visual"" definition of a linear function on Wikipedia:

""In calculus and related areas, a linear function is a function whose graph is a straight line""

Mathematically:

A linear function of one variable can be defined as:

$ f(x) = ax + b $

If you plot that function in 2D, it will give you a straight line. Then, the form of a linear function of multiple variables is:

$ f(x_1, x_2, ..., x_n) = a_1x_1 + a_2x_2 + ... + a_nx_n + b $

If you plot that function again in the corresponding dimension, it also gives you a flat shape (a straight line in 2D, a hyperplane in general). And if you look at that function carefully, it is similar to the calculation that happens in a neuron. That's why the multiplication and addition performed by a neuron is a linear function:

$ f(x_1, x_2, ..., x_n) = w_1x_1 + w_2x_2 + ... + w_nx_n + b $

Adding more layers of linear functions doesn't make the function more ""complex"". For example, if you have $f(x)$ as below and then you put another layer with a linear function $g(x)$ on top of it:

$f(x) = ax + b$

$g(x) = cf(x) + d = cax + cb + d$

as the neural network is trained to find the values of $a, b, c, d$, we can group the constants from the formula above and rewrite it as:

$h(x) = mx + n$

with $m=ca$ and $n=cb+d$. So, without a non-linear function, stacking layers of a neural network is useless: it only gives you another ""simple"" linear function.

The ReLU formula is $f(x)=\max(0,x)$; it produces non-linearity, as you can't write it in the linear function format. Using this function will give you ""complexity"" when you add more layers on top of it.
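A quick numerical sketch of the argument above (made-up coefficients, NumPy only): stacking two linear functions collapses to a single linear function, while inserting ReLU between them does not.

import numpy as np

relu = lambda x: np.maximum(0, x)           # f(x) = max(0, x)

x = np.linspace(-2, 2, 5)
a, b, c, d = 1.5, 0.3, -2.0, 0.7

linear_stack = c * (a * x + b) + d          # g(f(x)) with both f and g linear
collapsed = (c * a) * x + (c * b + d)       # the single linear function h(x) = mx + n
print(np.allclose(linear_stack, collapsed)) # True: the stack adds nothing

relu_stack = c * relu(a * x + b) + d        # same stack with ReLU in between
print(np.allclose(relu_stack, collapsed))   # False: ReLU breaks the collapse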

",16565,,16565,,3/29/2020 2:55,3/29/2020 2:55,,,,0,,,,CC BY-SA 4.0 19851,1,,,3/29/2020 4:58,,6,132,"

I know that AI can be used to design printed circuit boards (PCBs), so it can be used to solve complex tasks.

Is there any programming language designed by deep learning (or any other AI technique)?

",33730,,2444,,11/4/2020 16:59,11/4/2020 16:59,Is there any programming language designed by deep learning?,,1,0,,,,CC BY-SA 4.0 19852,1,,,3/29/2020 8:28,,1,176,"

Let's assume I want to build a semantic segmentation algorithm, based on Multires-UNET. My GT-masks are messy and generated by a GAN, but they are getting better and better over time. The goal is knowledge expansion (based on the paper Noisy-Student).

Can you generally say that PreLU and Leaky Relu are better for noisy labels (or imperfect ones), like the situation in GANs in general?

",35557,,2444,,3/29/2020 13:03,12/4/2022 22:02,Are PreLU and Leaky ReLU better than ReLU in the case of noisy labels?,,1,0,,,,CC BY-SA 4.0 19853,1,,,3/29/2020 9:20,,1,284,"

I am a newbie to NLP and NLG. I am tasked to develop a system that generates a report based on a given data table. The structure of the report and the flow are predefined. I have researched several existing Python libraries, like BERT and SimpleNLG, but they don't seem to fit my need.

For example, given the input input_data(country = 'USA', industry = 'Coal', profit = '4m', trend = 'decline'), the output should be: ""The coal industry in USA declined by 4m.""

The input data array can be different combinations (and dynamic) based on a data table. I would like to know if there is any python package available, or any resource discussing a practical approach for this.

",35558,,,,,9/13/2022 11:58,Building a template based NLG system to generate a report from data,,0,2,,,,CC BY-SA 4.0 19855,1,19858,,3/29/2020 15:33,,2,45,"

I have been seeing notations on Expectations with their respective subscripts such as $E_{s_0 \sim D}[V^\pi (s_0)] = \Sigma_{t=0}^\infty[\gamma^t\phi(s_t)]$. This equation is taken from https://ai.stanford.edu/~ang/papers/icml04-apprentice.pdf and $Q^\pi(s,a,R) = R(s) + \gamma E_{s'\sim T(s,a,\cdot)}[V^\pi(s',R)]$ ,in the case of the Bayesian IRL paper.(https://www.aaai.org/Papers/IJCAI/2007/IJCAI07-416.pdf)

I understand that $s_0 \sim D$ means that the starting state $s_0$ is drawn from a distribution of starting states $D$. But how do we understand the latter with subscript ${s'\sim T(s,a,\cdot)}$ ? How is $s'$ drawn from a distribution of transition probabilities?

",32780,,2444,,3/29/2020 17:10,3/29/2020 17:10,"What does the notation ${s'\sim T(s,a,\cdot)}$ mean?",,1,0,,,,CC BY-SA 4.0 19856,1,27491,,3/29/2020 15:52,,1,155,"

I was reading online that tic-tac-toe has a state space of $3^9 = 19,683$. From my basic understanding, this sounds too large to use tabular Q-learning, as the Q table would be huge. Is this correct?

If that is the case, can you suggest other (non-NN) algorithms I could use to create a TTT bot to play against a human player?

",27629,,2444,,4/25/2021 10:46,4/25/2021 10:48,Non-Neural Network algorithms for large state space in zero sum games,,2,0,,,,CC BY-SA 4.0 19857,2,,19851,3/29/2020 15:55,,5,,"

There are certainly things like this.

I'd say a strong example is layered learning approaches, descended from Peter Stone's work.

A programming language is essentially a collection of useful shorthands for assembly-level instructions. Ultimately, everything you do in a programming language eventually gets executed in assembly. So making a programming language amounts to learning how to write short, reusable, assembly language programs that you can then use as building blocks to solve harder problems.

An example of this in action is Kelly & Heywood's approach to constructing 'Tangled Program Graphs' for reinforcement learning (IJCAI 2018). Here an evolutionary algorithm is used to learn short assembly programs, that can be combined into a graph to make more complex programs. This is similar to graphical programming languages like J.

",16909,,,,,3/29/2020 15:55,,,,0,,,,CC BY-SA 4.0 19858,2,,19855,3/29/2020 16:06,,2,,"

The dot ($\cdot$) at the end of $T(s,a,\cdot)$ stands for all possible states that we can reach from state $s$ by taking action $a$. As you know, there are probabilities for reaching those states, and the sum of these probabilities is equal to 1. Hence, $T(s,a,\cdot)$ is a probability distribution over next states, and $s' \sim T(s,a,\cdot)$ simply means that $s'$ is sampled from it.
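As a tiny illustration (the transition table and its values are made up), sampling $s'$ from this distribution looks like this:

import numpy as np

n_states = 4
T = np.zeros((n_states, 2, n_states))      # T[s, a] is a distribution over next states
T[0, 1] = [0.1, 0.7, 0.2, 0.0]             # each row T[s, a] sums to 1

s, a = 0, 1
s_next = np.random.choice(n_states, p=T[s, a])   # s' ~ T(s, a, .)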

",4446,,4446,,3/29/2020 16:12,3/29/2020 16:12,,,,0,,,,CC BY-SA 4.0 19859,1,,,3/29/2020 17:39,,1,166,"

I am trying to train a model that detects logos in documents. Since I am not really interested in what kind of logo there is, but simply if there is a logo, does it make sense to combine all logos into 1 logo class?

Or are ""logos"" too diverse to group them together (like some logos are round, some are rectangular, some are even text based etc.) and the diversity of features will just make it hard for the neural network to learn? Or doesn't it matter?

(I am currently trying out the YOLOv3 architecture to begin with. Any other suggestions better suited are also welcome)

",35565,,,,,3/29/2020 21:58,Object detection: combine many classes into one?,,1,0,,,,CC BY-SA 4.0 19860,1,,,3/29/2020 18:21,,1,30,"

I made CSP nonogram solver and I wanna test it on some bigger data.

Where can I find such data to test my program? I've been looking on the internet but I couldn't find anything.

",34483,,,,,3/29/2020 18:21,Where can I find example data for Nonogram solver?,,0,0,,,,CC BY-SA 4.0 19861,1,,,3/29/2020 18:47,,2,57,"

The naive concept of a general AI, or strong AI, or artificial general intelligence, is some kind of software that can answer questions like

What is the volume of a cube that is 1 m wide?

or even

Why are there only two political parties in the US?

The second question requires external knowledge and high-level reasoning. For example that US means USA in the context, the constitution and that having two parties is caused by the mathematical properties of the election system.

However, I would expect a newborn human child to be intelligent in the sense of intelligence that is used for general artificial intelligence, yet toddlers cannot answer these questions.

That is not because an infant is not intelligent, but because it is not educated, I think.

What is the apparent level of education of an artificial intelligence that could be called human-like?

",2317,,2444,,3/30/2020 2:09,3/30/2020 2:10,"Would a new human-like general artificial intelligence be more similar, in terms of eduction, to a toddler or an adult human?",,1,0,,,,CC BY-SA 4.0 19862,2,,19861,3/29/2020 19:20,,1,,"

A human-level AI should be able to learn and behave in the same way that humans learn and behave, otherwise, we shouldn't be calling it a human-level AI. So, either if it starts with more or less knowledge of a baby human, it should be able to learn similarly to a human and acquire more knowledge with experience.

What is the apparent level of education of an artificial intelligence that could be called human-like?

To answer your question more directly, the level of education of a human-like AI (even a newly created one) can potentially vary (i.e. different human-like AIs could have different levels of education), but a human-like AI will necessarily need to be able to increase its level of education and, in general, its knowledge (because humans can also do this).

Moreover, note that AGI is not necessarily restricted to human-level AI. So, other AGIs may not follow the same principles of humans.

",2444,,2444,,3/30/2020 2:10,3/30/2020 2:10,,,,0,,,,CC BY-SA 4.0 19863,2,,19859,3/29/2020 21:53,,2,,"

I think there is no absolute answer for this; often it's a kind of trial and error. In general, the CNN tries to generalize the problem, so using all logos with different augmentations and ground truths can lead to feature maps that are general enough for the CNN to find logos.

But if your logos are that diverse and embedded in colorful websites, the task seems quite difficult, especially if they vary in shape and form like you said. I think you definitely need an FPN (Feature Pyramid Network) to handle the different sizes and scales, combined with an RPN (Region Proposal Network) to find the logos multiple times in the websites (if that's necessary). For that you can use Mask-RCNN (https://github.com/matterport/Mask_RCNN). You can try to transfer-train it on an ImageNet backbone, for example, to reduce training time. I just tried Mask-RCNN to segment colored cells in medical images and it worked out quite well.

I used ImageLabel for labeling; it is quite intuitive and saves everything in a JSON file.

How large is your dataset and how complex are the logos? Do you have examples that you can show?

",35557,,35557,,3/29/2020 21:58,3/29/2020 21:58,,,,0,,,,CC BY-SA 4.0 19864,1,,,3/29/2020 22:38,,1,33,"

I am looking to do sequence classification using deep learning. The length of my sequences can vary from a few hundred to several tens of thousands of characters. I was wondering what is a good approach for doing this. I had success with splitting a sequence into subsequences a few hundred characters long and using LSTMs, but then one is faced with the task of putting the results of each of those together and it is nontrivial as well. Any help would be appreciated.

",35572,,,,,3/29/2020 22:38,length independent sequence classification methods,,0,0,,,,CC BY-SA 4.0 19866,1,,,3/30/2020 3:51,,2,37,"

Assume $\mathbf{X} \in R^{N, C}$ is the input of the softmax $\mathbf{P} \in R^{N, C}$, where $N$ is number of examples and $C$ is number of classes:

$$\mathbf{p}_i = \left[ \frac{e^{x_{ik}}}{\sum_{j=1}^C e^{x_{ij}}}\right]_{k=1,2,...C} \in R^{C} \mbox{ is a row vector of } \mathbf{P}$$

Consider example $i$-th, because softmax function $\mathbf{p}:R^C \mapsto R^C$ (eliminate subscript $i$ for ease notation), so the derivative of vector-vector mapping is Jacobian matrix $\mathbf{J}$:

$$\mathbf{J}_{\mathbf{p}}(\mathbf{x}) = \left[ \frac{\partial \mathbf{p}}{\partial x_1}, \frac{\partial \mathbf{p}}{\partial x_2}, ..., \frac{\partial \mathbf{p}}{\partial x_C} \right] = \begin{bmatrix} \frac{\partial p_1}{\partial x_1} & \frac{\partial p_1}{\partial x_2} & \dots & \frac{\partial p_1}{\partial x_C} \\ \frac{\partial p_2}{\partial x_1} & \frac{\partial p_2}{\partial x_2} & \dots & \frac{\partial p_2}{\partial x_C} \\ \dots & \dots & \dots & \dots \\ \frac{\partial p_C}{\partial x_1} & \frac{\partial p_C}{\partial x_2} & \dots & \frac{\partial p_C}{\partial x_C} \end{bmatrix} \in R^{C, C} $$

$\mathbf{J}_{\mathbf{p}}(\mathbf{x})$ is called the derivative of vector ${\mathbf{p}}$ with respect to vector $\mathbf{x}$

$$\mbox{1) Derivative in diagonal:}\frac{\partial p_{k}}{\partial x_{k}} = \frac{\partial}{\partial x_{k}}\left( \frac{e^{x_k}}{\sum_{j=1}^C e^{x_j}} \right)$$

$$ = \frac{\left( \frac{\partial e^{x_k}}{\partial x_k} \right)\sum_{j=1}^C e^{x_j} - e^{x_k}\left(\frac{\partial \sum_{j=1}^C e^{x_j}}{\partial x_k} \right)}{\left(\sum_{j=1}^C e^{x_j}\right)^2} \mbox{ (Quotient rule) }$$

$$ = \frac{e^{x_k}\sum_{j=1}^C e^{x_j} - e^{x_k} e^{x_k}}{\left(\sum_{j=1}^C e^{x_j}\right)^2} = \frac{e^{x_k}(\sum_{j=1}^C e^{x_j} - e^{x_k})}{\left(\sum_{j=1}^C e^{x_j}\right)^2}$$

$$ = \frac{e^{x_k}}{\sum_{j=1}^C e^{x_j}} \left(\frac{\sum_{j=1}^C e^{x_j} - e^{x_k}}{\sum_{j=1}^C e^{x_j}}\right) = \frac{e^{x_k}}{\sum_{j=1}^C e^{x_j}} \left(1 - \frac{e^{x_k}}{\sum_{j=1}^C e^{x_j}}\right)$$

$$\rightarrow \frac{\partial p_{k}}{\partial x_{k}} = p_{k}(1-p_{k})$$

$$\mbox{2) Derivative not in diagonal } k \neq c \mbox{ :} \frac{\partial p_{k}}{\partial x_{c}} = \frac{\partial}{\partial x_{c}}\left( \frac{e^{x_k}}{\sum_{j=1}^C e^{x_j}} \right)$$

$$ = \frac{\left( \frac{\partial e^{x_k}}{\partial x_c} \right)\sum_{j=1}^C e^{x_j} - e^{x_k}\left(\frac{\partial \sum_{j=1}^C e^{x_j}}{\partial x_c} \right)}{\left(\sum_{j=1}^C e^{x_j}\right)^2} \mbox{ (Quotient rule) }$$

$$ = \frac{0 - e^{x_k} e^{x_c}}{\left(\sum_{j=1}^C e^{x_j}\right)^2} = -\frac{e^{x_k}}{\sum_{j=1}^C e^{x_j}}\frac{e^{x_c}}{\sum_{j=1}^C e^{x_j}}$$

$$\rightarrow \frac{\partial p_{k}}{\partial x_{c}} = -p_{k}p_{c}$$

$$\rightarrow \mathbf{J}_{\mathbf{p}}(\mathbf{x}) = \begin{bmatrix} p_1(1-p_1) & -p_1p_2 & \dots & -p_1p_C \\ -p_2p_1 & p_2(1-p_2) & \dots & -p_2p_C \\ \vdots & \vdots & \ddots & \vdots \\ -p_Cp_1 & -p_Cp_2 & \dots & p_C(1-p_C) \end{bmatrix}$$

  • Focal Loss: $\displaystyle FL = -\sum_{k=1}^C y_{k} \alpha_{k}(1-p_k)^\gamma \log (p_k)$

$$\nabla_{\mathbf{x}} FL = \nabla_{\mathbf{p}}FL (\mathbf{J}_{\mathbf{p}}(\mathbf{x}))^T $$

$$\nabla_{\mathbf{p}} FL = \begin{bmatrix} \frac{\partial FL}{\partial p_1}\\ \frac{\partial FL}{\partial p_2}\\ \vdots \\ \frac{\partial FL}{\partial p_C} \end{bmatrix} \mbox{ where } \frac{\partial FL}{\partial p_k} = - y_k\alpha_k \left(-\gamma(1-p_k)^{\gamma-1} \log(p_k) + \frac{(1-p_k)^\gamma}{p_k} \right) = y_k \alpha_k\gamma(1-p_k)^{\gamma-1}\log(p_k) - y_k\alpha_k\frac{(1-p_k)^\gamma}{p_k}$$

$$\nabla_{\mathbf{x}} FL = \begin{bmatrix} \frac{\partial FL}{\partial p_1}\\ \frac{\partial FL}{\partial p_2}\\ \vdots \\ \frac{\partial FL}{\partial p_C} \end{bmatrix}^T \begin{bmatrix} \frac{\partial p_1}{\partial x_1} & \frac{\partial p_1}{\partial x_2} & \dots & \frac{\partial p_1}{\partial x_C} \\ \frac{\partial p_2}{\partial x_1} & \frac{\partial p_2}{\partial x_2} & \dots & \frac{\partial p_2}{\partial x_C} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial p_C}{\partial x_1} & \frac{\partial p_C}{\partial x_2} & \dots & \frac{\partial p_C}{\partial x_C} \end{bmatrix} = \begin{bmatrix} \sum_{k=1}^C \left(\frac{\partial FL}{\partial p_k}\frac{\partial p_k}{\partial x_1}\right)\\ \sum_{k=1}^C \left(\frac{\partial FL}{\partial p_k}\frac{\partial p_k}{\partial x_2}\right)\\ \vdots \\ \sum_{k=1}^C \left(\frac{\partial FL}{\partial p_k}\frac{\partial p_k}{\partial x_C}\right) \end{bmatrix}^T \in R^C $$

$$\mbox{Case 1: }\displaystyle \frac{\partial FL}{\partial p_k}\frac{\partial p_k}{\partial x_k} \forall k=1,2,...,C$$

$$\frac{\partial FL}{\partial p_k} \frac{\partial p_k}{\partial x_k} = y_k \alpha_k\gamma(1-p_k)^{\gamma-1}\log(p_k)p_k(1-p_k) - y_k\alpha_k\frac{(1-p_k)^\gamma}{p_k}p_k(1-p_k)$$

$$ = y_k \alpha_k (1-p_k)^{\gamma}(\gamma p_k \log(p_k) - 1 + p_k) $$

$$\mbox{Case 2: } (k \neq c)\displaystyle \frac{\partial FL}{\partial p_k}\frac{\partial p_k}{\partial x_c}$$

$$\frac{\partial FL}{\partial p_k} \frac{\partial p_k}{\partial x_c} = - y_k\alpha_k\gamma(1-p_k)^{\gamma-1}\log(p_k)p_kp_c + y_k\alpha_k\frac{(1-p_k)^\gamma}{p_k}p_kp_c$$

$$ = - y_k\alpha_k (1-p_k)^{\gamma-1}p_c(\gamma p_k \log(p_k) - 1 + p_k) $$

$$\mbox{For each } d=1,2,...,C \mbox{ : }\sum_{k=1}^C \left(\frac{\partial FL}{\partial p_k} \frac{\partial p_k}{\partial x_d}\right) = y_d \alpha_d (1-p_d)^{\gamma}(\gamma p_d \log(p_d) - 1 + p_d) + \sum_{c \neq d}^C \left( - y_d\alpha_d (1-p_d)^{\gamma-1}p_c(\gamma p_d \log(p_d) - 1 + p_d) \right) = y_d\alpha_d(1-p_d)^{\gamma-1}(\gamma p_d \log(p_d) - 1 + p_d)\left(1-p_d -\sum_{c \neq d}^C(p_c)\right) $$

$$\rightarrow \nabla_{\mathbf{x}} FL = \left[ y_d\alpha_d(1-p_d)^{\gamma-1}(\gamma p_d \log(p_d) - 1 + p_d)\left(1-p_d -\sum_{c \neq d}^C(p_c)\right) \right]_{d=1,2,...,C}$$

However, the problem is $\left(1-p_d -\sum_{c \neq d}^C(p_c)\right) = 0$ (because sum of all probabilities is 1) then the whole expression collapses to $0$.

Is there any wrong in my focal loss derivation?

Reference:

",28078,,2444,,3/30/2020 12:01,3/30/2020 12:01,Is there any wrong in my focal loss derivation?,,0,0,,,,CC BY-SA 4.0 19867,1,,,3/30/2020 6:58,,1,42,"

Suppose we had a series of single-dimensional data points $X = \{x_1, x_2, \dots, x_n \}$, where $n$ is the number of data points, and their corresponding output values $T = \{t_1, t_2, \dots, t_n \}$.

Now, I want to train a single neuron network given below to learn from the data (the model is bad, but I just wanted to try it out as an exercise).

The output function of this neuron would be a recursive function as:

$$ y = f(a_0 + a_1x + a_2 y) $$

where

$$ f(x) = \frac{1}{1 + e^{-x}} $$

for a given $x$.

The error function for such a model would be:

$$ e = \sum_{i=1}^N (y_i - t_i)^2 $$

How should I minimise this loss function? What are the derivatives that I need to use to update the parameters?

(Also, I am new to this problem, therefore it would be really helpful if you tell me sources/books to read about such problems.)

",35576,,2444,,3/30/2020 12:05,3/30/2020 12:05,How do we minimize loss for a single neuron with a feedback?,,0,0,,,,CC BY-SA 4.0 19868,1,,,3/30/2020 8:17,,3,226,"

My task is to solve an optimization problem with deep reinforcement learning. I read about several algorithms like DQN, PPO, DDPG, and A2C/A3C but use cases always seem to be problems like video games (sparse rewards, etc.) or robotics (continuous action spaces, etc.). Since my problem is an optimization issue, I wonder which algorithm is appropriate for my setting:

  • limited number of discrete actions (like 20)
  • high-dimensional states (like 250 values)
  • instant reward after every single action (not only at the end of an episode)
  • a single action can affect the state quite a lot

There's no ""goal"" like in a video game, an episode ends after a certain number of actions. I'm not quite sure which algorithm is appropriate for my use case.

",35578,,2444,,3/30/2020 12:11,8/30/2020 21:02,Which deep reinforcement learning algorithm is appropriate for my problem?,,1,2,,,,CC BY-SA 4.0 19870,2,,19856,3/30/2020 8:29,,0,,"

In the case of TicTacToe, you can make use of game theory. The entire search space can be represented by a game tree. Your bot must now be able to maximize the chance of winning.

You can make use of the minimax (maximin) algorithm. This is still computationally intensive on large search spaces. To improve efficiency, alpha-beta pruning can be applied to reduce the number of nodes explored in the game tree.
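As a rough, self-contained sketch of plain minimax (no alpha-beta pruning; the board encoding and function names are my own, not from a specific tutorial):

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):                       # board: tuple of 9 cells, 'X', 'O' or ' '
    for i, j, k in LINES:
        if board[i] != ' ' and board[i] == board[j] == board[k]:
            return board[i]
    return None

def minimax(board, player):              # score from X's point of view: +1 win, 0 draw, -1 loss
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, c in enumerate(board) if c == ' ']
    if not moves:
        return 0                         # draw
    scores = [minimax(board[:i] + (player,) + board[i+1:], 'O' if player == 'X' else 'X')
              for i in moves]
    return max(scores) if player == 'X' else min(scores)

empty = (' ',) * 9
best_first_move = max(range(9), key=lambda i: minimax(empty[:i] + ('X',) + empty[i+1:], 'O'))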

These are core AI concepts and will always perform better than neural networks on fully defined and relatively small search spaces. Neural networks perform better when it's too difficult to compute all the possible combinations of a game from a certain state.

You can have a look at this to build a TicTacToe bot.

",26384,,,,,3/30/2020 8:29,,,,7,,,,CC BY-SA 4.0 19871,1,,,3/30/2020 8:59,,1,171,"

Introduction

I am trying to set up a Deep Q-Learning agent. I have looked at the papers Playing Atari with Deep Reinforcement Learning and Deep Recurrent Q-Learning for Partially Observable MDPs, as well as at the question How does LSTM in deep reinforcement learning differ from experience replay?.

Current setup

I currently take the current state of the game, not as a picture, but rather as the position of the agent, the direction the agent is facing, and some inventory items. I currently only feed the state $S_t$ as input to a 3 layer NN (1 input, 2 hidden, 3 output) to estimate the Q-values of each action.

The algorithm that I use is almost the same as the one used in Playing Atari with Deep Reinforcement Learning, with the only difference that I do not train after each timestep but rather sample mini_batch*T at the end of each episode and train on that, where T is the number of time-steps in that episode.

The issue

Currently, the agent does not learn within 100 00 episodes, which is about 100 00 * 512 training iterations. This makes me think that something is not working, and this is where I realised that I do not consider any of the history of the previous steps.

What I currently struggle with is sending the states of multiple time-steps to the NN. The reason for this is the complexity of the game/program I am using. According to Deep Recurrent Q-Learning for Partially Observable MDPs, an LSTM could be a solution for this; however, I would prefer to manually code an RNN rather than using an LSTM. Would an RNN with something like the following structure not have a chance of working?

Also, as far as I know, RNNs need the inputs to be fed in sequence and not randomly sampled?

",35582,,-1,,6/17/2020 9:57,3/30/2020 9:33,Do RNN solves the need for LSTM and/or multiple states in Deep Q-Learning?,,0,0,,,,CC BY-SA 4.0 19872,2,,19842,3/30/2020 9:16,,1,,"

The main thing to keep in mind when designing a reinforcement learning agent is that you need to develop an interactive environment in which the agent can learn and define the possible moves the agent can make. In your case, the environment is the memory cards.

Next, you need to define how the agent can interact with the environment, that is, choosing a card, flipping a card, etc. You also need to define how the environment will behave when the agent chooses a move.

Finally, you need to define the reward (fitness) function; in this case, the score (number of pairs created) should be fine. You should have a look at some online RL competitions, such as AWS DeepRacer, to get a better understanding of the architecture.
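To make the environment part concrete, here is a hypothetical, minimal skeleton in the usual reset/step style (the class name, observation encoding and reward choice are my own, not a prescribed design):

import random

class MemoryGameEnv:
    def __init__(self, n_pairs=8):
        self.n_pairs = n_pairs
        self.reset()

    def reset(self):
        self.cards = list(range(self.n_pairs)) * 2   # each value appears twice
        random.shuffle(self.cards)
        self.face_up = [False] * len(self.cards)
        self.flipped = []                            # indices flipped this turn
        return self._observation()

    def _observation(self):
        # the agent only sees face-up card values; hidden cards are -1
        return [v if up else -1 for v, up in zip(self.cards, self.face_up)]

    def step(self, action):                          # action = index of the card to flip
        reward = 0
        if not self.face_up[action]:
            self.face_up[action] = True
            self.flipped.append(action)
        if len(self.flipped) == 2:
            i, j = self.flipped
            if self.cards[i] != self.cards[j]:       # no match: flip both back
                self.face_up[i] = self.face_up[j] = False
            else:
                reward = 1                           # one pair found
            self.flipped = []
        done = all(self.face_up)
        return self._observation(), reward, done, {}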

",26384,,,,,3/30/2020 9:16,,,,0,,,,CC BY-SA 4.0 19873,1,,,3/30/2020 10:02,,1,106,"

I'm trying to build a neural network between protein sequence and its drug fingerprint. My input size is 20000. The output size is 881. The sample size is 610.

Can I train such a huge neural network? If so, how, and with which tool?

",34540,,2444,,3/30/2020 12:24,8/27/2020 16:41,How can I process neural network with 25000 input nodes?,,2,1,,,,CC BY-SA 4.0 19874,2,,19873,3/30/2020 11:10,,0,,"

It sure is possible; consider that a CNN can handle a much bigger number of inputs. An image of size 512x512 already has 262144 input nodes when rearranged into a one-row vector. The trick since 2012/2014 is to use convolutions, and deep ones, e.g. stacking a lot of 3x3 convolutions. This is much less sensitive than a fully-connected dense network and needs significantly fewer parameters. For more, check out chapter 9 of Ian Goodfellow's Deep Learning book.

Tools for that are TensorFlow and Keras, based on Python, or TensorFlow.js for JavaScript; you can also use PyTorch, but its community is rather small in comparison.
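As a rough illustration only (the layer sizes are arbitrary; the 20000-long input and 881-dimensional binary output are taken from the question), a small 1D convolutional model in Keras could look like this:

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Conv1D(32, 9, activation="relu", input_shape=(20000, 1)),
    layers.MaxPooling1D(4),
    layers.Conv1D(64, 9, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(881, activation="sigmoid"),   # one sigmoid per fingerprint bit
])
model.compile(optimizer="adam", loss="binary_crossentropy")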

",35557,,,,,3/30/2020 11:10,,,,1,,,,CC BY-SA 4.0 19875,1,,,3/30/2020 11:27,,1,48,"

Why does DeconvNet (Zeiler, 2014) use ReLU in the backward pass (after unpooling)? Are not the feature maps values already positive due to the ReLU in the forward pass? So, why do the authors apply the ReLU again coming back to the input?

Update: let me explain my problem better:

given an input image $x$ and ConvLayer $CL$ composed of:

  1. a convolution
  2. an activation function ReLU
  3. a pool operation

$f$ is the output of ConvLayer given an input $x$, i.e. $f=CL(x)$.

So, the Deconv target is to ""reverse"" the output $f$ (the feature map) to restore an approximate version of $x$. To this aim, the authors define a function $CL^{-1}$ composed of 3 subfunctions:

a. unpool

b. activation function ReLU (useless in my opinion, because $f$ is already positive due to the application of the 2. step in $CL(f)$)

c. transposed convolution.

In other words $x\simeq CL^{-1}(f)$ where $CL^{-1} (f) = transpconv(relu(unpool(f)))$. But, if $f$ is the output computed as $f=CL(x)$, it is already positive, so the b. step is useless.

This is what I understood from the paper. Where am I wrong?

",2189,,2189,,3/30/2020 18:05,3/30/2020 18:05,Why do DeconvNet use ReLU in the backward pass?,,0,2,,,,CC BY-SA 4.0 19877,1,,,3/30/2020 12:05,,0,32,"

I've been reading this article on convolutional neural networks (I'm a beginner) - and I'm stuck at a point.

What I understand: We have a 4x4 input, and want to transform it to a 2x2 grid. I'm visualising this as a kernel sliding over the 4x4 grid, with just the right number of strides, so as to get 4 outputs which constitute the 2x2 grid (there are animations right above this part of the page in the link attached). The model chooses to represent the 4x4 grid as a vector of length 16. Also, the 4x16 transformation matrix when pre-multiplied to the input vector, produces the output vector, which is mapped back to a 2D grid. Is this right?

Moving on, another screenshot from the same page-

Is it that both the matrices are really the same, and a lot of weights just happen to be zero in the weight matrix, which is what the second matrix is depicting? In that case even, why so many zeros?

P.S. Thanks a lot for being so patient and reading to the end of this post, I really appreciate it. I'm a beginner, and really interested in these topics, and I'm self studying from various online resources - hence, any help is appreciated. I hope this is the right platform for this post.

",35585,,-1,,6/17/2020 9:57,3/30/2020 12:53,Interpreting I/O Transformation Matrix in Convolution,,0,2,,,,CC BY-SA 4.0 19879,1,20062,,3/30/2020 12:47,,6,2841,"

The image above, a screenshot from this article, describes discrete 2D convolutions as linear transforms. The idea used, as far as I understand, is to represent the 2 dimensional $n$x$n$ input grid as a vector of $n^2$ length, and the $m$x$m$ output grid as a vector of $m^2$ length. I don't see why this can't be generalised to higher-dimensional convolutions, since a transformation matrix can be constructed from one vector to another, no matter the length (right?)

My question: Aren't all discrete convolutions (not just 2D) linear transforms?

Are there cases where such a transformation matrix cannot be found?

",35585,,35585,,3/31/2020 8:54,4/13/2020 1:28,Aren't all discrete convolutions (not just 2D) linear transforms?,,2,10,,,,CC BY-SA 4.0 19880,1,,,3/30/2020 14:34,,1,78,"

We are given a computer vision classification task, that is, a task that asks us to predict the category of an image over $n$ predefined classes (the so-called closed set classification problem).

Question: Is it possible to give an estimate on what is the best accuracy one is likely achieve using an end-to-end CNN model (possibly, using a popular backbone) in this task? Do the performances of state-of-the-arts models on open datasets serve as a good reference? If someone claims that they achieve certain performance with some popular CNN architecture, how do we know s/he is not bragging?

You may or may not have access to the training dataset yet. The testing dataset shall be something close to the real-world production scenario. I know this is too vague, but just assume you have a fair judge.

Background: Product teams sometimes asks engineering teams for quick (and dirty) solutions. Engineering teams want to assess the feasibility before say ""Yes we can do $95\%$"" and officially launch (and be responsible) the projects.

",35592,,,,,3/31/2020 6:26,How to estimate the accuracy upper limit of any CNN model over a computer vision classification task,,1,1,,,,CC BY-SA 4.0 19881,1,,,3/30/2020 19:46,,2,5721,"

I am new to deep learning.

I am training a model and I am getting a root mean squared error (RMSE) greater on the test dataset than on the training dataset.

What could be the reason behind this? Is this acceptable to get the RMSE greater in test data?

",33670,,2444,,3/31/2020 12:48,4/21/2020 11:04,Is it normal to have the root mean squared error greater on the test dataset than on the training dataset?,,4,1,,,,CC BY-SA 4.0 19882,2,,19881,3/30/2020 20:52,,0,,"

RMSE stands for Root Mean Squared Error. As the name suggests, it is calculated by taking the square root over the mean of the squared errors of individual points.
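For reference, a one-function sketch of the computation in NumPy (variable names are mine):

import numpy as np

def rmse(y_true, y_pred):
    # square root of the mean of the squared errors of the individual points
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))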

It is normal for the test error to be higher than the train error and in most cases, the test error will be greater than the train error.

",21229,,,user9947,3/30/2020 21:37,3/30/2020 21:37,,,,1,,,,CC BY-SA 4.0 19883,2,,3668,3/30/2020 21:51,,2,,"

For a function of one variable, there are only two options for directions in the domain: left or right, so it becomes almost trivial, but you can still talk about gradient descent.

You would take steps to the left if the slope/derivative is positive and make steps to the right if the slope/derivative is negative--i.e. the opposite direction of the derivative (the 1d version of the ""gradient"" in gradient descent), which is equivalent to the higher dimensional case.
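A tiny sketch of this in code (the example function and step size are made up for illustration):

def gradient_descent_1d(f_prime, x0, lr=0.1, steps=100):
    # move opposite to the derivative: left when f'(x) > 0, right when f'(x) < 0
    x = x0
    for _ in range(steps):
        x -= lr * f_prime(x)
    return x

# example: minimise f(x) = (x - 3)^2, whose derivative is 2(x - 3)
x_min = gradient_descent_1d(lambda x: 2 * (x - 3), x0=0.0)   # converges towards 3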

",35605,,,,,3/30/2020 21:51,,,,0,,,,CC BY-SA 4.0 19885,1,,,3/31/2020 1:42,,1,34,"

I am performing an adversarial machine learning attack on a neural network for network traffic classification. For adding adversarial perturbations in features such as packet interarrival times and packet size, what norm should I use as a constraint? (eg. l1 norm, l2 norm, l-infinity norm, etc.)

",31240,,,,,3/31/2020 1:42,How do I decide which norm to use for placing a constraint on my adversarial perturbation?,,0,1,,,,CC BY-SA 4.0 19886,2,,12034,3/31/2020 2:58,,1,,"

I think it's a real bot. I've seen people talking about how its rendering has improved over time, and they claimed that was an improvement in the motion capture suit. I think that's simply because the program has improved over time. Imagine YOU created AI Angel (assuming it's really AI): would you be more worried about its functionality or its realism first? I would want it to act real before it looks real. If you listen closely, the voice has also slowly been improving as well, which makes sense for both explanations. If fake, the voice changer improved; but if real, they simply gave it a network of human communication to listen to and compare itself to. It's really quite easy (conceptually) to create a program that learns. If you want to learn more about how easy it can be to make an AI teach itself, I recommend Code Bullet on Youtube. He has a variety of videos in which he creates AIs meant to learn how to function in a game at the highest performance.

I know I covered evidence of both sides of the question, but I personally (as a nerdy boy) believe Angelica is really an A.I.

",35609,,,,,3/31/2020 2:58,,,,1,,,,CC BY-SA 4.0 19887,1,,,3/31/2020 4:17,,1,49,"

I am new to CNN. What I have learned so far about the filters is that when we are giving a training example to our model, our model updates the weights by gradient descent to minimize the loss function. So my question is how the weights are retained for a particular class label?

The question is vague as my knowledge is vague. It's my 4th hour to CNN.

For example, take the MNIST dataset with 10 labels. Let's say I give 1 image to my model initially. It will have a big loss on the forward pass. Now the backward pass runs and adjusts the weights to minimize the loss function for that label. When an example with a new label arrives for training, how will it update the weights of filters that have already been updated according to the previous label?

",35611,,2444,,3/31/2020 11:02,4/30/2020 15:11,How are the weights retained for filters for a particular class in a CNN?,,1,2,,,,CC BY-SA 4.0 19888,2,,19880,3/31/2020 6:26,,2,,"

There is no easy rule for this. You can use transfer learning to select a model that works well on image classification. However, the accuracy you achieve will be highly dependent on your training set. If your training set is ""similar"" in quantity and quality to what was used for the accuracy achieved by the transfer-learning model in some application, you have a reasonable chance of coming close to that accuracy. By similar in quantity I mean roughly the same number of images per class. By quality I mean things like the percentage of the pixels in the images occupied by the region of interest (ROI), the level of noise in the images, etc. Also, it depends on the nature of the classes. If the classes are widely different (elephants vs trees), the accuracy should be higher than if you try to classify closely related images (human faces).

",33976,,,,,3/31/2020 6:26,,,,1,,,,CC BY-SA 4.0 19889,1,21212,,3/31/2020 6:30,,4,85,"

Greetings to all respected colleagues!

I want to consult on the use of FPGAs and neurochips. I plan to use them in my laboratory project for programming control systems based on neural networks.

In my work, there are a lot of applications of neural networks, and I became interested in their programming on FPGAs and neurochips. But I don’t know a single example of a really made and working laboratory prototype in which a neural network is implemented on an FPGA or on a neurochip and controls something. If someone shares the link, I would carefully study it.

",32829,,2444,,3/31/2020 11:29,5/16/2020 21:04,Are there examples of neural networks (used for control) implemented on a FPGA or on a neurochip?,,1,6,,,,CC BY-SA 4.0 19890,2,,19879,3/31/2020 6:33,,1,,"

Convolutions are linear transformations. However, in typical applications, a non-linear activation function like ReLU is applied after the convolution to provide non-linearity; otherwise, a convolutional neural network would just amount to a single overall linear transformation.

",33976,,,,,3/31/2020 6:33,,,,0,,,,CC BY-SA 4.0 19891,1,,,3/31/2020 7:51,,5,2443,"

I am trying to implement a convolutional autoencoder with a dense layer at the bottleneck to do some dimensional reduction. I have seen two approaches for this, which aren't particularly scalable. The first was to introduce 2 dense layers (one at the bottleneck and one before & after that has the same number of nodes as the conv2d layer that precedes the dense layer in the encoder section:

input_image_shape=(200,200,3)
encoding_dims = 20

encoder = Sequential()
encoder.add(InputLayer(input_image_shape))
encoder.add(Conv2D(32, (3,3), activation="relu", padding="same"))
encoder.add(MaxPooling2D((2), padding="same"))
encoder.add(Flatten())
encoder.add(Dense(32*100*100, activation="relu"))
encoder.add(Dense(encoding_dims, activation="relu"))

#The decoder
decoder = Sequential()
decoder.add(InputLayer((encoding_dims,)))
decoder.add(Dense(32*100*100, activation="relu"))
decoder.add(Reshape((100, 100, 32)))
decoder.add(UpSampling2D(2))
decoder.add(Conv2D(3, (3,3), activation="sigmoid", padding="same"))

It's easy to see why this approach blows up, as there are two densely connected layers with 32*100*100 nodes each, or more, or in that ballpark, which is nuts.

Another approach I have found which makes sense for b/w images such as the MNIST stuff is to introduce an arbitrary number of encoding dimensions and reshape it (https://medium.com/analytics-vidhya/building-a-convolutional-autoencoder-using-keras-using-conv2dtranspose-ca403c8d144e). The following chunk of code is copied from the link, I claim no credit for it:

#ENCODER
inp = Input((28, 28,1))
e = Conv2D(32, (3, 3), activation='relu')(inp)
e = MaxPooling2D((2, 2))(e)
e = Conv2D(64, (3, 3), activation='relu')(e)
e = MaxPooling2D((2, 2))(e)
e = Conv2D(64, (3, 3), activation='relu')(e)
l = Flatten()(e)
l = Dense(49, activation='softmax')(l)
#DECODER
d = Reshape((7,7,1))(l)
d = Conv2DTranspose(64,(3, 3), strides=2, activation='relu', padding='same')(d)
d = BatchNormalization()(d)
d = Conv2DTranspose(64,(3, 3), strides=2, activation='relu', padding='same')(d)
d = BatchNormalization()(d)
d = Conv2DTranspose(32,(3, 3), activation='relu', padding='same')(d)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(d)

So, is there a more rigorous way of adding a dense layer after a 2d convolutional layer?

",34002,,2444,,12/2/2021 13:34,12/2/2021 13:34,How to add a dense layer after a 2d convolutional layer in a convolutional autoencoder?,,1,0,,,,CC BY-SA 4.0 19893,1,,,3/31/2020 11:00,,2,50,"

What is the difference between training a model with RGB images and using only the color channels separately (like only the red channel, green channel, etc.)? Would the model also learn patterns between the different colors in the first case?

If both the single-channel results and the patterns between the different channels are relevant to me, would it be beneficial to use them together?

I am asking this because I want to apply this to the signals of an accelerometer that has x, y, z-axis data, and I want to increase the resolution of the data. Will the model learn to combine the features from the different axes if I input a one-dimensional signal of shape (1024, 3) (length, channels) into my one-dimensional CNN?

",35615,,2444,,3/31/2020 11:11,3/31/2020 11:11,What is the difference between training a model with RGB images and using only the color channels separately?,,0,0,,,,CC BY-SA 4.0 19894,1,,,3/31/2020 11:21,,2,103,"

Are there any commonly used discontinuous activation functions (e.g. that take values in $(0,.5)\cup (.5,1)$)? Preferably for classification?

Why? I was looking for commonly used activation functions on Google, and I noticed that all activation functions are continuous. However, I believe this is not needed in Hornik's paper.

When I did a bit of testing myself, with a discontinuous activation function on the MNIST dataset, the results were good. So I was curious if anyone else used this kind of activation function.

",31649,,2444,,4/1/2020 0:53,4/1/2020 0:53,Are there any commonly used discontinuous activation functions?,,0,2,,,,CC BY-SA 4.0 19895,1,19898,,3/31/2020 11:59,,2,343,"

We need to find the gradient of the loss function (cost function) w.r.t. the weights in order to use optimization methods such as SGD or gradient descent. So far, I have come across two ways to compute the gradient:

  1. Backpropagation
  2. Calculating the gradient of the loss function by calculus

I found many resources for understanding backpropagation. The 2nd method I am referring to is shown in the image below (taken for a specific example; e is the error, i.e. the difference between target and prediction):

Also, the proof was mentioned in this paper: here

Moreover, I found this method while reading this blog. (You might have to scroll down to see the code: gradient = X.T.dot(error) / X.shape[0].)

My question is: are these two methods of finding the gradient of the cost function the same? They appear different, and, if they are indeed the same, which one is more efficient (though one can guess it is backpropagation)?

I would be grateful for any help. Thanks for being patient (it's my first time learning ML).

",35616,,,,,3/31/2020 15:44,Different methods of calculating gradients of cost function(loss function),,1,3,,,,CC BY-SA 4.0 19896,2,,19881,3/31/2020 12:34,,0,,"

I am training a model and i am getting test results greater than train results.

You don't give us too many details, but most probably it's underfitting.

What could be the reason behind this?

  • Underfitting is often a result of an excessively simple model.

  • Too many regularization techniques were used.

Is this acceptable to get the RMSE greater in test data?

Yes, but that indicates a problem with the model, so you should be aware of the consequences.

",19476,,,,,3/31/2020 12:34,,,,0,,,,CC BY-SA 4.0 19897,2,,19887,3/31/2020 14:30,,1,,"

Besides the last layer, the rest of the weights are shared among all classes. When an image is passed to the network, all weights are updated accordingly. The only weights that are directly responsible for one specific class are those of the final layer. The rest of the weights are updated to find the best values to minimize the average loss over all classes.

To rephrase: there aren't ""filters"" in the convolutional layers that are specific to a single class. They are used to extract features so that the final layer (which has weights for each specific class) can make the final prediction.

I'd suggest you look a bit into how gradient descent and backpropagation work.

",26652,,,,,3/31/2020 14:30,,,,0,,,,CC BY-SA 4.0 19898,2,,19895,3/31/2020 15:44,,1,,"

I'm pretty sure they're the same thing. In both cases, the essential idea is to calculate the partial derivative of the cost with respect to each weight and then subtract that partial derivative times the learning rate; backpropagation is just the chain rule applied layer by layer to obtain those partial derivatives efficiently.
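As a minimal illustration (my own sketch, not the code from the blog mentioned in the question), for a single linear layer with a mean-squared-error-style loss, the 'calculus' gradient is exactly the quantity that backpropagation computes, and gradient descent then uses it for the update:

import numpy as np

# Tiny linear model y_hat = X.dot(w) with loss 1/(2N) * sum(error^2)
X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
y = np.array([1.0, 2.0, 3.0])
w = np.zeros(2)

error = X.dot(w) - y                  # prediction minus target
grad = X.T.dot(error) / X.shape[0]    # analytic gradient of the loss w.r.t. w
w = w - 0.01 * grad                   # one gradient descent step

For a deep network, backpropagation applies the chain rule layer by layer to obtain the same kind of partial derivatives for every weight.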

",34505,,,,,3/31/2020 15:44,,,,1,,,,CC BY-SA 4.0 19899,1,,,3/31/2020 15:45,,0,98,"

It is well-known that deep feedforward networks can approximate any continuous function from $\mathbb{R}^k$ to $\mathbb{R}^l$ (uniformly on compacts).

However, in practice feature maps are typically used to improve the learning quality and likewise, readout maps are used to make neural networks suited for specific learning tasks.

For example:

  • Classification: networks are composed with the softmax (readout) function so they take values in $(0,1)^l$.

What are examples of commonly used feature and readout maps?

",31649,,2444,,4/1/2020 12:56,4/1/2020 12:56,What are examples of commonly used feature and readout maps?,,0,2,,,,CC BY-SA 4.0 19901,1,,,3/31/2020 19:32,,2,32,"

This may come across as an open, opinion-based question. I definitely want to hear expert opinions on the subject, but I am also looking for references to materials that I can read in depth.

One of the ways question answering systems can be classified is by the type of data source that they use:

  1. Structured knowledge bases with ontologies (DBPedia, WikiData, Yago, etc.).

  2. Unstructured text corpora that contain the answer in natural language (Wikipedia).

  3. Hybrid systems that search for candidate answers in both structured and unstructured data sources.

From my reading, it appears as though structured knowledge bases/knowledge graphs were much more popular back in the days of the semantic web and when the first personal assistants (Siri, Alexa, Google Assistant) came onto the scene.

Are they dying out now in favor of training a deep learning model over a vast text corpus, like BERT and/or Meena? Do they have a future in question answering?

",35587,,2444,,1/26/2021 17:27,1/26/2021 17:27,Will structured knowledge bases continue to be used in question answering with the likes of BERT gaining popularity?,,0,0,,,,CC BY-SA 4.0 19902,1,,,3/31/2020 19:58,,1,78,"

This is how they describe their infrastructure in https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf. I want to implement the game of Atari Breakout.

import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F

class DQN(nn.Module):
    def __init__(self, height, width):
        super(DQN, self).__init__()

        self.height = height
        self.width = width

        self.conv1 = nn.Conv2d(in_channels=4, out_channels=16, kernel_size=8, stride=4)
        self.conv2 = nn.Conv2d(in_channels=6, out_channels=32, kernel_size=4, stride=2)

        self.fc = nn.Linear(in_features=????, out_features=256)
        self.out = nn.Linear(in_features=256, out_features=4)

    def forward(self, state):

        # (1) Hidden Conv. Layer
        self.layer1 = F.relu(self.conv1(state))

        #(2) Hidden Conv. Layer
        self.layer2 = F.relu(self.conv2(self.layer1))

        #(3) Hidden Linear Layer
        self.layer3 = self.fc(self.layer2)

        #(4) Output
        actions = self.out(self.layer3)

        return actions

I will probably instantiate my policy network and my target network the following way :

policy_net = DQN(envmanager.get_height(), envmanager.get_width()).to(device)
target_net = DQN(envmanager.get_height(), envmanager.get_width()).to(device)

I am very new to the world of Reinforcement Learning. I would like to implement their architecture in DQN(), but I think I am wrong in several places. Am I good here? If not, how can I fix it so that it reflects the architecture from the above picture?

UPDATE

I know that the formula to calculate the output size is equal to

$O=\frac{W−K+2P}{S}+1$

where $O$ is the output height/length, $W$ is the input height/length, $K$ is the filter size, $P$ is the padding, and $S$ is the stride.

I obtained, for self.fc = nn.Linear(in_features=????, out_features=256), that in_features must be equal to $32*9*9$
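For example, assuming the standard $84 \times 84$ preprocessed frames from the paper: the first convolution gives $\frac{84-8}{4}+1=20$, the second gives $\frac{20-4}{2}+1=9$, so the flattened size is $32 \times 9 \times 9 = 2592$.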

",35626,,35626,,4/1/2020 15:24,4/1/2020 15:24,Atari Breakout Infrastructure,,0,0,,,,CC BY-SA 4.0 19905,1,20024,,3/31/2020 23:37,,2,286,"

The Deep Learning book mentions that it's been used for years but the oldest sources it mentions are from 2012:

A simple type of solution has been in use by practitioners for many years: clipping the gradient. There are different instances of this idea (Mikolov, 2012; Pascanu et al., 2013). One option is to clip the parameter gradient from a mini-batch element-wise (Mikolov, 2012), just before the parameter update. Another is to clip the norm $||g||$ of the gradient $g$ (Pascanu et al., 2013) just before the parameter update

But I find it hard to believe that the first uses and mentions of gradient clipping are from 2012. Does anyone know the origins of the solution?

",34395,,34395,,4/5/2020 17:32,4/5/2020 17:32,Which work originally introduced gradient clipping?,,1,0,,,,CC BY-SA 4.0 19906,1,19923,,4/1/2020 2:15,,1,156,"
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
import matplotlib.pyplot as plt
import numpy as np
x_data = torch.tensor([[0, 0], [0, 1], [1, 0], [1, 1]]).float()
y_data = torch.tensor([0, 1, 1, 0]).float()

class Model(nn.Module):
    def __init__(self, input_size, H1, output_size):
        super().__init__()
        self.linear_input = nn.Linear(input_size, 2)
        self.linear_output = nn.Linear(2, output_size)
        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, x):
        x = self.sigmoid(self.linear_input(x))
        x = self.sigmoid(self.linear_output(x))
        return x

    def predict(self, x):
        return (self.forward(x) >= 0.5).float()

model = Model(2,2,1)
lossfunc = nn.BCELoss()
optimizer = optim.Adam(model.parameters(), lr=0.01)
epochs = 2000
losses = []

for i in range(epochs):
    pred = model(x_data).view(-1)
    loss = lossfunc(pred, y_data)
    print(""epochs:"", i, ""loss:"", loss.item())
    losses.append(loss.item())
    optimizer.zero_grad()
    loss.backward() 
    optimizer.step()

def cal_score(X, y):
    y_pred = model.predict(X)
    score = float(torch.sum(y_pred.squeeze(-1) == y.byte().float())) / y.shape[0]
    return score

print('test score :', cal_score(x_data, y_data))
def plot_decision_boundray(X):
    x_span = np.linspace(min(X[:, 0]), max(X[:, 0]))
    y_span = np.linspace(min(X[:, 1]), max(X[:, 1]))
    xx, yy = np.meshgrid(x_span, y_span)
    grid = torch.tensor(np.c_[xx.ravel(), yy.ravel()]).float()
    pred_func = model.forward(grid)
    z = pred_func.view(xx.shape).detach().numpy()
    plt.contourf(xx, yy, z)
    plt.show()

plot_decision_boundray(x_data)

As you can see, it's a simple neural network which consists of one hidden layer using BCELoss and Adam.

Normally, it results in the correct decision boundary, like the one above.

However, it sometimes gets stuck in a local minimum and the decision boundary becomes awkward.

Because the input data is limited, I guess that preprocessing that data might not be possible and only the initial weights matter in this problem. I tried initializing them with a normal distribution, but it didn't work. How can I approach this problem?

",18427,,,,,4/1/2020 23:36,XOR-solving neural network is suffering from local minima,,1,1,,,,CC BY-SA 4.0 19908,1,19910,,4/1/2020 6:03,,1,113,"

Gradient descent is used to reduce the loss and regularization is used to fight over-fitting.

Is there any relation between gradient descent and regularization, or are the two independent of each other?

",9863,,2444,,4/2/2020 22:01,4/2/2020 22:40,What is relation between gradient descent and regularization in deep learning?,,1,0,,,,CC BY-SA 4.0 19909,1,,,4/1/2020 6:55,,3,100,"

Consider that my input is an RGB image. The size of my image is $N\times N$. I'm trying to implement the NICE algorithm presented by Dinh. The bijective function $f: \mathbb{R}^d \to \mathbb{R}^d$ maps $X$ to $Z$. So I have $p_Z(Z)=p_X(X)$.

What I can't understand is that $N$ is much bigger than $d$. Does this mean that I should downsample the inputs? Does the resulting loss function change if I add a downsampling layer at the beginning of the neural net and also add an upsampling layer at the end of the net?

",35633,,35633,,4/2/2020 14:22,4/2/2020 14:22,Do I have to downsample the input and upsample the output of the neural network when implementing the NICE algorithm?,,0,0,,,,CC BY-SA 4.0 19910,2,,19908,4/1/2020 7:44,,4,,"

Usually, when talking about regularization for neural networks there are 3 main types: L1, L2 and dropout. All affect the gradient descent procedure.

L1 and L2 regularization are implemented in the loss function, and are therefore part of gradient descent directly: they alter the derivatives of the loss function, thereby altering the weight update rules of the network during gradient descent.

For L1 you add a penalty based on the $\mathcal L^1$ norm of the weight vector, while for L2 you add a penalty based on the $\mathcal L^2$ norm.
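For example, with L2 regularization the total loss becomes $L_{total}(w) = L_{data}(w) + \lambda \lVert w \rVert_2^2$, so its gradient is $\nabla L_{data}(w) + 2\lambda w$, and every gradient descent update additionally shrinks each weight towards zero (which is why L2 regularization is often called weight decay).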

For dropout, there is no direct impact on the loss function, but you are still interfering in the gradient descent procedure indirectly by masking nodes to alter the forward and backward propagation.

",22906,,,user9947,4/2/2020 22:40,4/2/2020 22:40,,,,0,,,,CC BY-SA 4.0 19911,1,,,4/1/2020 9:23,,3,1470,"

I am trying to build a DQN model for the Atari Pong game, but I am not sure whether the model is learning at all.

I am using the architecture described in the paper Playing Atari with Deep Reinforcement Learning. I tested the model on a simpler environment (like CartPole), which worked great, but I am not seeing any progress at all with Pong: I have been training the model for 2-3 hours and its performance is no better than taking random actions.

Should I just keep waiting, or might there be something wrong with my code? Around how many episodes should it take before I see some positive results?

",26336,,2444,,4/1/2020 12:33,4/1/2020 12:33,How much time does it take to train DQN on Atari environment?,,0,4,,,,CC BY-SA 4.0 19914,1,,,4/1/2020 11:20,,1,108,"

I'm a big fan of animation and have kept an eye on deepfakes' ability to replicate full-body motion.

So I ask

  1. Is there deepfake software available that I can use to gather animation from a video?

  2. Are there any publicly released deepfake body-tracking tools out there, even for normal video?

",35639,,2444,,4/1/2020 18:12,4/1/2020 18:12,Can I use deepfake to rotoscope for animation?,,0,1,,,,CC BY-SA 4.0 19916,2,,10910,4/1/2020 12:43,,0,,"

Initial state: The monkey, the suspended bananas, and two crates in the room.

Goal test: Monkey has bananas.

Successor function: Jump on crate; jump off crate; push crate from one spot to another; walk from one spot to another; grab bananas (if standing on crate).

Cost function: Number of actions.

",35642,,,,,4/1/2020 12:43,,,,0,,,,CC BY-SA 4.0 19917,2,,18489,4/1/2020 17:15,,1,,"

Each node is a position in the arrays

  • values = value of the node

  • conn = indexes of connected nodes

If it's an undirected graph, each node must list all the nodes to which it is attached. In a directed graph, instead, only the start node holds the index. For your image:

values = ['A','B','C','D','E']
conn = [[1,2,3],[0,4],[0,3,4],[0,2,3],[1,3]]

Example: 'A' -> 1 ('B'), 2 ('C'), 3 ('D')

def averageGraph(conn):
    if conn is not None:
        average = 0
        for node in conn:
            average = average + len(node) #len(node) = nº nodes connected = degree
        return average / len(conn)
    else:
        return None
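Example usage on the lists above (it returns the average degree of the graph):

print(averageGraph(conn))  # 2.6 for the example lists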
",35656,,,,,4/1/2020 17:15,,,,2,,,,CC BY-SA 4.0 19920,2,,17629,4/1/2020 19:24,,1,,"

Full disclosure: I work at Dessa, the company that developed this tech.

We built a machine learning experiment management tool, called Atlas. The main feature is experiment management, allowing you to run thousands of experiments concurrently. This might help with your problem above https://github.com/dessa-oss

",35661,,,,,4/1/2020 19:24,,,,1,,,,CC BY-SA 4.0 19922,1,19937,,4/1/2020 22:48,,0,641,"

I have read a lot of information about several notions, like batch_size, epochs and iterations, but because the explanations came without numerical examples and I am not a native speaker, I still have some trouble understanding those terms, so I decided to work with data. Let us suppose we have the following data

Of course, it is just a subset of the original data. I want to build a neural network with three hidden layers: the first layer contains 500 nodes, takes three variables as input and applies a sigmoid activation function on each node; the next layer contains 100 nodes with sigmoid activation; the third one contains 50 nodes with sigmoid again; and finally we have one output with a sigmoid to convert the result into 0 or 1, which classifies whether a person with those attributes is female or male.

I trained the model using Keras (TensorFlow) with the following code

model.fit(X_train,y_train,epochs=30)

With this data, what does epochs=30 mean? Does it mean that all 177 rows (with 3 inputs at a time) will go through the model 30 times? What about batch_size=None in the model.fit parameters?

",35668,,2444,,4/2/2020 0:51,4/2/2020 14:19,What does it mean to have epochs=30 in Keras' fit method given certain data?,,2,4,,,,CC BY-SA 4.0 19923,2,,19906,4/1/2020 23:36,,0,,"

From the log you can see that the loss value is still large compared to the first image, but at least it decreases on each iteration (even if only very slightly). So I suggest that you:

  1. try a bigger learning rate; I think your model is underfitting.
  2. try another weight initialization method (see the sketch below).

I think the first solution will solve this problem as I see no problem in the code.
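As an illustration of the second suggestion, one possible way to re-initialize the weights of the Model class from the question is Xavier initialization (just a sketch; other schemes may work as well):

import torch.nn as nn

def init_weights(module):
    # Xavier/Glorot initialization often behaves better with sigmoid
    # activations than the default initialization.
    if isinstance(module, nn.Linear):
        nn.init.xavier_uniform_(module.weight)
        nn.init.zeros_(module.bias)

model = Model(2, 2, 1)      # the Model class defined in the question
model.apply(init_weights)   # applies init_weights to every submodule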

",16565,,,,,4/1/2020 23:36,,,,0,,,,CC BY-SA 4.0 19924,2,,19922,4/1/2020 23:55,,0,,"

OK, so let me explain in my own words how I understood this process: I know that one sample means one row, so if we have data of size (177, 3), that means we have 177 samples. Because we have divided X and y into training and test sets, we have the following pairs: (X_train, y_train) and (X_test, y_test).

Now, about batch size: if we have, let's say, 177 samples (177 rows) and a batch_size of 2, that means we have approximately $177/2$ batches, right? The update process goes like this:

Let us suppose the network takes 3 inputs and produces one output. From the first sample of data, three values will go to the network and an output will be generated; this output will be compared to the first value of y_train and a cost will be computed. Then the next sample (the next three values) will go through and be compared to the second value of y_train, and a second cost will be generated. The final cost for the first batch will be the sum of those costs, and the weights are updated using the gradient method. After that, the next batch will go through the network and, based on the updated weights, new weights are generated. When all $177/2$ batches are finished, that will be our 1 epoch, right? Is that correct?

",35668,,,,,4/1/2020 23:55,,,,0,,,,CC BY-SA 4.0 19927,1,19946,,4/2/2020 3:18,,2,160,"

Many experts seem to think that artificial general intelligence, or AGI, (on the level of humans) is possible and likely to emerge in the near-ish future. Some make the further step to say that superintelligence (much above the level of AGI) will appear soon after, through mechanisms like recursive self-improvement from AGI (from a survey).

However, other sources say that such superintelligence is unlikely or impossible (example, example).

What assumptions do those who believe in superintelligence make? The emergence of superintelligence has generally been regarded as something low-probability but possible (e.g. here). However, I can't seem to find an in-depth analysis of what assumptions are made when positing the emergence of superintelligence. What specific assumptions do those who believe in the emergence of superintelligence make that are unlikely, and what have those who believe in the guaranteed emergence of superintelligence gotten wrong?

If the emergence of superintelligence is to be seen as a low-probability event in the future (on par with asteroid strikes, etc.), which seems to be the dominant view and is the most plausible, what assumptions exactly makes it low-probability?

",35673,,35673,,4/2/2020 21:13,4/3/2020 0:45,What assumptions are made when positing the emergence of superintelligence?,,1,1,,,,CC BY-SA 4.0 19929,2,,19868,4/2/2020 5:49,,1,,"

Theoretically, video games and robotics problems are also about optimization (getting maximum reward). So, just like in other reinforcement learning problems, I would expect PPO to be the most efficient in your case too. I don't think a ""goal"" is necessary for RL; all you need are the rewards.

",35679,,35679,,4/2/2020 19:54,4/2/2020 19:54,,,,1,,,,CC BY-SA 4.0 19930,1,19931,,4/2/2020 6:38,,1,501,"

I am not using experience replay here. Can this be a possible deep Q-learning pseudocode?

s - state    
a - action    
r - reward
n_s - next state
q_net - neural network representing q

step()
{

    get s,a,r,n_s
    q_target[s,a]=r+gamma*max(q_net[n_s,:])
    loss=mse(q_target,q_net[s,a])
    loss.backprop()

}

while(!terminal)
{    
    totalReturn+=step();
}
",35679,,2444,,4/2/2020 16:01,4/2/2020 16:01,Can this be a possible deep q learning pseudocode?,,1,0,,,,CC BY-SA 4.0 19931,2,,19930,4/2/2020 8:05,,0,,"

It looks generally valid to me. There are a couple of things missing/implied that I'd like to give feedback on though:

I am not using replay here

Then it won't work, except for the most simple and trivial of problems (where you probably would not need a neural network anyway).

get s,a,r,n_s

I would make it more explicit where you get these values from, and split up the assignments.

# before step()
s = env.start_state()
a = behavior_policy(q_net[s,:])

  # inside step(), first action
  r, n_s = env.step(s, a)

  # ....rest of loop

  # inside step(), last actions
  s = n_s
  a = behavior_policy(q_net[n_s,:])

In the above code, env is the environment which takes state, action pairs as input and returns reward plus next state. It would be equally valid to have current state as a property of the environment, and query that when needed. The behaviour_policy is a function that selects an action based on current action values - typically this might use an $\epsilon$-greedy selection.
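For completeness, an $\epsilon$-greedy behaviour policy in the same pseudo-code style could look like this (epsilon, random() and random_action() are placeholders):

def behavior_policy(action_values):
    # with probability epsilon pick a random action (explore),
    # otherwise pick the current best action (exploit)
    if random() < epsilon:
        return random_action()
    return argmax(action_values)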

while(!terminal) {

You appear to run just one episode in your pseudo-code. You will want an outer loop to run many episodes. Also it is not clear how you are deciding the value of terminal - in practice you will need a handle to current environment state via some variable.

Without the outer loop to start new episodes, and some variables defined to communicate between sections of code, it is difficult to follow the code and decide where to add details (such as where env.start_state() call should go, or whether something like s = env.reset() would be more appropriate).

totalReturn+=step();

In your pseudocode you do not return anything from step and it is not clear what you hope to do with this totalReturn variable. Technically it won't equal the definition of return in RL for any state, not even the starting state if gamma < 1.0.

However, the sum of all rewards seen in an episode is a useful metric. In Deep RL it is OK to treat gamma as a solution hyperparameter, and your target metric can be expected undiscounted return from the start state.

",1847,,1847,,4/2/2020 8:11,4/2/2020 8:11,,,,0,,,,CC BY-SA 4.0 19937,2,,19922,4/2/2020 14:19,,1,,"

Batch size and epochs are independent parameters - they serve very different purposes. Your main question as I understand it (and for general, non-library specific consumption) is what is an epoch and how is the data used for each epoch?

Simply put, an epoch is a single iteration through the training data. Each and every sample from your training dataset will be used once per epoch, whether it is for training or validation. Therefore, the more epochs, the more the model is trained. The key is to identify the number of epochs that fits the model to the data without overfitting.
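Concretely, with the 177 training rows from the question, epochs=30 means every row is passed through the network 30 times. Leaving batch_size unspecified (None) makes Keras fall back to its default batch size of 32, which gives ceil(177/32) = 6 weight updates per epoch, i.e. 180 updates over the whole run.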

Your explanation of how batch size affects the training process is correct but not relevant to the question, since it has no relation to the epoch training iterations. That is not to say that these values should be considered independently, since they have similar effects on the model training process.

",31980,,,,,4/2/2020 14:19,,,,0,,,,CC BY-SA 4.0 19938,1,,,4/2/2020 14:20,,0,157,"

I'm doing a Deep Q-learning project. All of my rewards are positive and there are two terminal states. One of them has a zero reward and the other has a high positive reward.

The rewards are stochastic and my Q-network must generate non-zero Q-values for all states and actions. Based on my project, I must use these numbers to create a probability density. In other words, the normalized Q-values of each state generated by the network define a probability density for choosing an action.

How should I define my loss function? Is there a project or paper which I could look at and decide how to define the loss function? I am searching for similar projects and their proposed Deep Q-learning algorithms.

",35633,,2444,,4/2/2020 16:05,4/2/2020 16:05,How should I define the loss function when using DQN to estimate the probability density?,,0,5,,,,CC BY-SA 4.0 19940,1,,,4/2/2020 17:30,,1,26,"

I am currently working on a term paper on the topic of Narrative Similarity, based on Loizos Michael's work ""Similarity of Narratives"". I am trying to find the latest trends within this field of study for the literature overview in my assignment. However, up until now, I have not been able to find any new work on this particular subject.

I would appreciate any literature recommendation by anyone out there that has worked on this topic or is currently doing so.

Link to Michael's work: https://www.researchgate.net/publication/316280937_Similarity_of_Narratives

",31914,,,,,4/2/2020 17:30,What are the current research trends in recognizing narrative similarity?,,0,0,,,,CC BY-SA 4.0 19943,1,,,4/2/2020 18:49,,4,3816,"

Soon I will be working on biomedical image segmentation (microscopy images). There will be a small amount of data (a few dozen images at best).

Is there a neural network, that can compete with U-Net, in this case?

I've spent the last few hours searching through scientific articles dealing with this topic, but haven't found a clear answer, and I would like to know what the other possibilities are. The best answers I found are that I could consider using ResU-Net (R2U-Net), SegNet, X-Net and backing techniques (article).

Any ideas (with evidence, not necessarily)?

",35696,,2444,,6/13/2020 0:00,7/3/2022 3:09,What are some good alternatives to U-Net for biomedical image segmentation?,,1,2,,,,CC BY-SA 4.0 19944,1,,,4/2/2020 20:39,,3,278,"

According to Wikipedia

In computability theory, Rice's theorem states that all non-trivial, semantic properties of programs are undecidable. A semantic property is one about the program's behavior (for instance, does the program terminate for all inputs), unlike a syntactic property (for instance, does the program contain an if-then-else statement). A property is non-trivial if it is neither true for every computable function, nor false for every computable function.

A syntactic property asks a question about a computer program like ""is there a while loop?""

A semantic property asks a question about the behavior of the computer program. For example, does the program loop forever (which is the Halting problem, which is undecidable, i.e., in general, there's no algorithm that can tell you if an arbitrarily given program halts or terminates for a given input)?

So, Rice's theorem proves all non-trivial semantic properties are undecidable (including whether or not the program loops forever).

AI is a computer program (or computer programs). These program(s), like all computer programs, can be modeled by a Turing machine (Church-Turing thesis).

Is safety (for Turing machines, including AI) a non-trivial semantic question? If so, is AI safety undecidable? In other words, can we determine whether an AI program (or agent) is safe or not?

I believe that this doesn't require formally defining safety.

",27283,,2444,,4/3/2020 18:34,2/25/2021 4:10,Does Rice's theorem prove safe AI is undecidable?,,1,3,0,,,CC BY-SA 4.0 19946,2,,19927,4/2/2020 21:42,,0,,"

I am not an expert on the topic, but I will provide some information that could be useful or helpful.

I think that the first and maybe trivial assumption that people make when they say that AGI or SI can emerge is that general intelligence (whatever the definition is) is computable, i.e. there's a Turing machine or, in general, a mathematical model of computation that can simulate an algorithm that possibly represents general intelligence. I don't know how likely the mind of a human (or any other general intelligence) is computable (or to what extent), but there's the so-called computational theory of mind (CTM) that goes into this direction.

An assumption related to the emergence of SI is that recursive self-improvement is physically possible. I also don't know exactly how likely or unlikely recursive self-improvement is, but my limited knowledge of thermodynamics (and physics) suggests that it will not be possible (at least, at the pace or the way people claim or want to suggest). I am not saying that we will not see more progress in the next years (we will), but this doesn't mean that we will be able to create an AGI. People often overestimate themselves and their intelligence.

A third assumption is related to the current achievements in the artificial intelligence field. Many people claim that there has been a lot of progress in recent years and that there's no reason to believe that this will not continue. However, history tells us that the predictions of AI scientists about the future of the AI field were often wrong. There have been at least 2 AI winters because of these wrong predictions and unmet expectations.

Many people also don't exclude the possibility of the emergence of AGI or SI only because they have not yet been proven wrong.

Another assumption is that there won't be any catastrophe (like COVID-19 can potentially be) that will significantly slow down the general scientific progress.

",2444,,2444,,4/3/2020 0:45,4/3/2020 0:45,,,,6,,,,CC BY-SA 4.0 19947,1,,,4/3/2020 0:56,,1,126,"

I am thinking about developing a GAN.

What is the difference between using dense layers as opposed to convolutional layers in my networks when dealing with images?

",35700,,2444,,4/7/2020 1:25,4/7/2020 1:25,What is the difference between using dense layers as opposed to convolutional layers in my networks when dealing with images?,,0,3,,,,CC BY-SA 4.0 19948,2,,13432,4/3/2020 0:59,,0,,"

Although this does not strictly answer your question (but it is at least very related), Jürgen Schmidhuber has some interesting ideas about compression and how it relates to artificial intelligence, prediction, curiosity, etc. For example, have a look at this paper Simple Algorithmic Theory of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes, whose abstract states

In this summary of previous work, I argue that data becomes temporarily interesting by itself to some self-improving, but computationally limited, subjective observer once he learns to predict or compress the data in a better way, thus making it subjectively more ""beautiful"". Curiosity is the desire to create or discover more non-random, non-arbitrary, ""truly novel"", regular data that allows for compression progress because its regularity was not yet known. This drive maximizes ""interestingness"", the first derivative of subjective beauty or compressibility, that is, the steepness of the learning curve. It motivates exploring infants, pure mathematicians, composers, artists, dancers, comedians, yourself, and recent artificial systems.

See also his interesting talk Juergen Schmidhuber: The Algorithmic Principle Beyond Curiosity and Creativity on YouTube.

",2444,,,,,4/3/2020 0:59,,,,0,,,,CC BY-SA 4.0 19950,1,,,4/3/2020 6:09,,1,30,"

I am interested in doing model-free RL but not using the Markov assumptions typical for MDPs or POMDPs.

What are alternative paradigms that don't rely on the Markov assumptions? Are there any common approaches when this assumption is violated?

EDIT: I am asking for mathematical models that do not make the Markov assumption and so could be used for problems where the Markov assumption does not hold

",25337,,25337,,4/3/2020 14:46,4/3/2020 14:46,What are the most common non-Markov RL paradigms?,,0,0,,,,CC BY-SA 4.0 19952,1,,,4/3/2020 10:09,,1,61,"

Given a discrete, finite Markov Decision Process (MDP) with its usual parameters $(S, A, T, R, \gamma)$, it is possible to obtain the optimal policy $\pi^{*}$ and the optimal value function $V^{*}$ through one of many planning methods (policy iteration, value iteration or solving a linear program).

I am interested in obtaining a random near-optimal policy $\pi$, with the value function associated with the policy given by $V^{\pi}$, such that $$ \epsilon_1 < ||V^{*} - V^{\pi}||_{\infty} < \epsilon_2$$

I wish to know an efficient way of achieving this goal. A possible approach is to generate random policies and then to use the given MDP model to evaluate these policies and verify that they satisfy the criteria.

If only an upper bound were needed, the idea that near optimal value functions induce near optimal policies could be used, that is, we can show that, if $$||V - V^{*}||_{\infty} < \epsilon, \quad \epsilon > 0$$ and if $\pi$ is the policy that is greedy with respect to the value function $V$, then $$ ||V^{\pi} - V^{*}||_{\infty} < \frac{2\gamma\epsilon}{1 - \gamma}$$ So by picking a suitable $\epsilon$ for the given $\gamma$, we can be sure of any upper bound $\epsilon_2$.

However, I would also like that the policy $\pi$ not be ""too good"", hence the requirement for a lower bound.

Any inputs regarding an efficient solution or reasons for the lack thereof are welcome.

",28384,,,,,4/5/2020 10:17,Efficient algorithm to obtain near optimal policies for an MDP,,0,2,,,,CC BY-SA 4.0 19954,2,,18514,4/3/2020 11:31,,2,,"

I'm not sure it's possible to help much because this is an experimental question. I'm afraid the only answer comes with testing many different options.

I see a little thing that might be making your model a little worse, though:

  • You're concatenating ""relu"" with ""sigmoid"".

Placing values of two different natures in the same array may make it more difficult to update the weights properly.

A few independent suggestions:

  • Make the output of the images model have ""relu"" activation as well before the concatenation. Preferably, use a batch normalization before the image relu and before the number relu (this way you concatenate values that are in very similar ranges).
  • Instead of concatenating, you can try Multiply()([img_output, numeric_output]), in this case, both outputs must have the same size, one of them uses ""relu"" or ""linear"", and the other uses ""sigmoid"".

Now, something important when using AUC: you need big batch sizes, because AUC depends on the whole dataset; it's not like the usual metrics/losses, for which you can take the mean of the results of each batch.

",35709,,,,,4/3/2020 11:31,,,,2,,,,CC BY-SA 4.0 19955,1,,,4/3/2020 11:46,,2,133,"

I've written a Monte Carlo Tree Search player for the game of Castle (AKA Shithead, Shed, Palace...). I have set this MCTS player to play against a basic rule-based AI for ~30000 games and collected ~1.5 million game states (as of now) along with whether the MCTS player won that particular game in the end after being in that particular game state. The game has a large chance aspect, and, currently, the MCTS player is winning ~55% of games. I want to see how high I can push it. In order to do this, I aim to produce a NN that will act as a game state evaluation function to use within the MCTS.

With this information, I've already tried an SVM, but came to the conclusion that the game space is too large for the SVM to classify a given state accurately.

I hope to be able to train a NN to evaluate a given state and return how good that state is for the MCTS player. Either with a binary GOOD/BAD or I think it would be more helpful to return a value between 0-1.

The input to the NN is a $4 \times 41$ NumPy array of binary values (0, 1) representing the MCTS players hand, MCTS face-up cards, OP face-up cards, MCTS no. face-down cards, OP no. face-down cards. Shown below.

Describes the np.array:

The np.array is made from the database entries of game states. An example of this information is below. However, I am currently omitting the TOP & DECK_EMPTY columns in this model. WON (0, 1) is used as the label.

This is my keras code:

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(2, activation=tf.nn.softmax))

model.compile(optimizer='adam',
             loss='sparse_categorical_crossentropy',
             metrics=['accuracy'])

model.fit(X_train, y_train, epochs=3)

This model isn't performing well.

  • Do you think it is possible to obtain a useful NN with my current approach?

  • What layers should I look to add to the NN?

  • Can you recommend any training/learning material that I could use to try and get a better understanding?

",33966,,2444,,2/6/2021 21:59,2/6/2021 21:59,Is this a good approach to evaluate the game state with a neural network?,,0,0,,,,CC BY-SA 4.0 19956,1,19968,,4/3/2020 11:48,,2,1334,"

In the episodic training of an RL agent, should I always start from the same initial state, or can I start from several valid initial states?

For example, in a gym environment, should my env.reset() function always reset the environment to the same start state, or can it start from different states at each training episode?

",34341,,2444,,4/3/2020 12:13,4/4/2020 7:36,Should I always start from the same start state in reinforcement learning?,,2,0,,,,CC BY-SA 4.0 19957,1,,,4/3/2020 12:01,,2,59,"

I am wondering about the correlation between the loss and the derivative of the loss w.r.t. a single scalar parameter, for the same sample. That is: considering a machine learning model with parameters $\theta \in \mathbb{R}^n$, I want to figure out the relationship between $Loss(x)$ and $\frac{\partial Loss(x)}{\partial \theta_i}$, where $i \in \{1,2,3,...,n\}$.

Intuitively, I would expect them to be positively correlated. Is that right? If it is, how can I prove it mathematically?

",35710,,,user9947,4/7/2020 1:38,4/7/2020 1:38,Is the derivative of the loss wrt a single scalar parameter proportional to the loss?,,1,3,,,,CC BY-SA 4.0 19958,2,,19943,4/3/2020 12:14,,2,,"

Hey, I am working on my Bachelor thesis at the moment and use U-Net in combination with a GAN for image segmentation. I have spent the last 5 months on that, and in my tests the new approach from January 2020, called MultiRes-UNet, is quite a good choice for more texture-oriented segmentation. I use the current GitHub implementation. It's quite nice; you may notice that you can easily tweak the number of parameters with ""alpha"" in the implementation, to scale the multiple ResNets in the U-Net structure.

I also tried other segmentation networks, like Mask R-CNN with different backbones, and tried to construct various types of CAE on my own, but always had to come back to a U-Net-like structure. The same goes for ResU-Net and R2U; the MultiRes one worked better for my purposes because I didn't need any kind of LSTM modules.

Some examples which may clarify the difference in performance on a specific task:

Original Image:

Ground-Truth:

Classical UNet++ (U-Net with skip connections) (2.5 million parameters); more parameters (a wider network) didn't change the result.

MultiRes-UNet (alpha=1.67, I think it was about 7 million parameters)

Can you show some of your microscopy images? How complex is your task, and what do you want to segment?

",35557,,35557,,4/4/2020 13:27,4/4/2020 13:27,,,,4,,,,CC BY-SA 4.0 19962,2,,18820,4/3/2020 13:10,,3,,"

All right, I figured it out. Trajectories need not have the same starting state, because $s_0$ is drawn from a distribution D (mentioned in the paper). I had been confused because many of the code implementations on GitHub focus on trajectories starting from the same state.

Hope this helps everyone !

",32780,,32780,,4/3/2020 13:54,4/3/2020 13:54,,,,2,,,,CC BY-SA 4.0 19963,1,19966,,4/3/2020 15:01,,2,76,"

I have created a neural network that is able to recognize images with the numbers 1-5. The issue is that I have a database of 16x5 images which, unfortunately, is not proving to be enough, as the neural network fails on the test set. Are there ways to improve a neural network's performance without using more data? The ANN has approximately 90% accuracy on the training set and 50% accuracy on the test set.

Code:

clear
graphics_toolkit(""gnuplot"")
sigmoid = @(z) 1./(1 + exp(-z));
sig_der = @(y) sigmoid(y).*(1-sigmoid(y));


parse_image;   % This external f(x) loads the images so that they can be read. 
%13x14
num=0;
for i=1:166
  if mod(i-1,10)<=5 && mod(i-1,10) > 0
    num=num+1;
    data(:,num) = dlmread(strcat(""/tmp/"",num2str(i)))(:);
  end
end



function [cost, mid_layer, last_layer] = forward(w1,w2,data,sigmoid,i)
  mid_layer(:,1)=sum(w1.*data(:,i));
  mid_layer(:,2)=sigmoid(mid_layer(:,1));
  last_layer(:,1)=sum(mid_layer(:,2).*w2);
  last_layer(:,2)=sigmoid(last_layer(:,1));
  exp_res=rem(i,5);
  if exp_res==0
    exp_res=5;
  end
  exp_result=zeros(5,1); exp_result(exp_res)=1;
  cost = exp_result-last_layer(:,2);
end

function [w1, w2] = backprop(w1,w2,mid_layer,last_layer,data,cost,sig_der,sigmoid,i)
  delta(1:5) = cost;
  delta(6:20) = sum(cost' .* w2,2);
  w2 = w2 + 0.05 .* delta(1:5) .* mid_layer(:,2) .* sig_der(last_layer(:,1))';
  w1 = w1 + 0.05 .* delta(6:20) .* sig_der(mid_layer(:,1))' .* data(:,i);
end

w1=rand(182,15)./2.*(rand(182,15).*-2+1);
w2=rand(15,5)./2.*(rand(15,5).*-2+1);

for j=1:10000
  for i=[randperm(85)]
    [cost, mid_layer, last_layer] = forward(w1,w2,data,sigmoid,i);
    [w1, w2] = backprop(w1,w2,mid_layer,last_layer,data,cost,sig_der,sigmoid,i);
    cost_mem(j,i,:)=cost;
  end
end
",35660,,,,,4/3/2020 15:46,How can I train a neural network if I don't have enough data?,,2,0,,,,CC BY-SA 4.0 19964,1,,,4/3/2020 15:18,,1,47,"

Convex optimisation is defined as:

I have seen a lot of talk about convex loss functions in Neural Networks and how we are optimising rewards or penalty in AI/ML systems. But I have never seen any loss function formulated in the aforementioned way. So my question is:

Is there any role of convex optimization in AI? If so, in what algorithms or problem settings or systems?

",,user9947,2444,,4/3/2020 15:49,4/3/2020 15:54,What is the role of convex optimisation in AI systems?,,1,0,,,,CC BY-SA 4.0 19965,2,,19963,4/3/2020 15:44,,1,,"

In theory, yes, using synthetic data generation. This involves applying transformations to the original images to generate new 'unique' images. Some standard techniques include rotating, flipping, stretching, zooming or brightening. Obviously not all of these make sense depending on the data. In your problem, zooming, stretching and brightening could be used but flipping should not. Rotation could work but only for small angles.

Generally this is implemented by replacing the dataset for each epoch of training. Therefore, the number of images used in each training iteration is the same but the images themselves have been altered.
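If you are working in Python/Keras, a minimal sketch of this (assuming a compiled model and training arrays X_train/y_train; the parameter values are just plausible starting points for digit images, not tuned values) could be:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Small random rotations, shifts and zooms; no flipping, since a
# flipped digit is usually no longer a valid example.
datagen = ImageDataGenerator(rotation_range=10,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             zoom_range=0.1)

# X_train has shape (num_images, height, width, 1), y_train holds the labels
model.fit(datagen.flow(X_train, y_train, batch_size=32), epochs=20)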

In practice, it's not a magic bullet. The reason a larger dataset generally yields better models is because the probability of a new feature falling within the feature distribution of the training data is higher. With synthetic data generation the new features are only marginally different to the original so even if the number of images to train on is increased, the feature distributions are not that different. There is a lot of variation in handwritten numbers so it would be very hard to guess how effective this would be without trying it.

",31980,,,,,4/3/2020 15:44,,,,0,,,,CC BY-SA 4.0 19966,2,,19963,4/3/2020 15:46,,3,,"

You can synthetically increase the number of samples, for example with augmentation or unsupervised adaptation (self-training). With augmentation you grant the system much more robustness, so I would really recommend this; see, for example, this GitHub. The problem with such small database sizes is that your test set is also very small, so you cannot properly test whether your network generalizes well or just overfits.

You can try transfer learning with another, larger network to adapt those feature extractors and use them on your problem. That may work better than training a new one from scratch with so few labeled images. Hope I could help at least a little; stay tuned.

",35557,,,,,4/3/2020 15:46,,,,5,,,,CC BY-SA 4.0 19967,2,,19964,4/3/2020 15:48,,1,,"

Is there any role of convex optimization in AI?

Yes, of course!

If so, in what algorithms or problem settings or systems?

The problem of finding the parameters of a support vector machine can be formulated as a convex optimization problem. Another example is linear regression.
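For instance, ordinary least squares linear regression is the convex problem $\min_{w} \lVert Xw - y \rVert_2^2$, and the soft-margin SVM can be written as $\min_{w, b} \frac{1}{2}\lVert w \rVert_2^2 + C \sum_i \max(0, 1 - y_i(w^\top x_i + b))$, both of which are convex optimization problems of the form given in the question.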

See also the paper Convex Optimization: Algorithms and Complexity (2014) by Sébastien Bubeck, which also mentions SVM as a typical example.

",2444,,2444,,4/3/2020 15:54,4/3/2020 15:54,,,,2,,,,CC BY-SA 4.0 19968,2,,19956,4/3/2020 15:51,,2,,"

It depends on the task the agent is trying to learn and, of course, on the environment constraints.

In an Atari game, agents have a pre-fixed starting point because that's part of the game's rules, so I would say that this is enough of a justification to make each simulation start from that starting point. Moreover, you have to pay attention to the kind of reward function you're using. For example (a really dumb one just to give a grasp of the concept), if you're rewarding the agent depending on how close it gets to the ending point rather than how far it goes from the starting point, the agent might end up getting huge rewards only because it respawned close to the end point, which would be an artefact and not a fair reward for a good action choice. Other situations in which it does not really make sense to select a random starting point might be dialogue systems, in which you know that a conversation starts with greetings; therefore, it would not be logical to make an agent start with a random question or sentence (the space in this case would be made of different dialogue acts).

There are, anyway, situations in which the environment allows a random selection of the start location of the agent. In this paper, for example, an agent was trained to escape from a maze, and both the exit point and the initial location of the agent were randomly selected at each iteration. This was partially for research reasons: the authors were analysing the possibility of training policies more complex than simple 'space memorisation', but, conceptually, there's nothing wrong in this situation with selecting a random starting point. Other tasks could be following or escaping from a specific object; again, as long as the reward function is properly designed, even with a random starting location the agent would in the end learn to move faster when close to the target (toward it or in the opposite direction, depending on the task). Actually, in this situation I think that a random initial location would have potential benefits, like preventing the agent from learning biases due to external biases of the environment (like parts of the space with fewer random obstacles than others).

",34098,,,,,4/3/2020 15:51,,,,0,,,,CC BY-SA 4.0 19970,1,,,4/3/2020 16:09,,1,192,"

Let $\mathcal{S}$ be the state-space in a reinforcement learning problem where rewards are in $\mathbb{R}$, and let $V:\mathcal{S} \to \mathbb{R}$ be an approximate value function. Following the GAE paper, the TD-residual with discount $\gamma \in [0,1]$ is defined as $\delta_t^V = r_t + \gamma V(s_{t + 1}) - V(s_t)$.

I am confused by the formula for the GAE-$\lambda$ advantage estimator, which is $$ \hat{A}_t^{\text{GAE}(\gamma, \lambda)} = \sum_{l = 0}^\infty (\gamma \lambda)^l \delta_{t + l}^V. $$

This seems to imply that $\delta_t^V$ is defined for $t > N$, where $N$ is the length of the current trajectory/episode. It looks like in implementations of this advantage estimator, it is just assumed that $\delta_t^V = 0$ for $t > N$, since the sums are finite. Is there a justification for this assumption? Or am I missing something here?

",33762,,2444,,6/4/2020 17:12,6/4/2020 17:12,Is the TD-residual defined for timesteps $t$ past the length of the episode?,,0,0,,,,CC BY-SA 4.0 19971,1,,,4/3/2020 16:19,,1,38,"

I am wondering how I can find the appropriate reward value for each specific problem. I know this is a highly empirical process, but I am sure that the value is not set totally at random. I want to know what the general guidelines and practices are for finding the appropriate reward value for any reinforcement learning problem.

",34341,,2444,,4/3/2020 18:10,4/3/2020 18:10,How can I find the appropriate reward value for my reinforcement learning problem?,,0,2,,,,CC BY-SA 4.0 19974,1,19975,,4/3/2020 18:43,,2,130,"

In chapter 3.5 of Sutton's book, the value function is defined as:

Can someone give me some clarification about why there is an expectation sign in front of the entire equation? Given that the agent is following a fixed policy $\pi$, why should there be an expectation when the trajectory of the future possible states is fixed (or maybe I am getting it wrong and it's not)? Overall, if the expectation here means averaging over a series of trajectories, what are those trajectories and what are their weights when we want to compute the expected value over them, according to this Wikipedia definition of the expected value?

",35719,,2444,,4/3/2020 19:42,4/4/2020 4:34,Why is there an expectation sign in the Bellman equation?,,2,0,,,,CC BY-SA 4.0 19975,2,,19974,4/3/2020 19:19,,4,,"

There needs to be an $E_{\pi}$ over the infinite discounted return term for two reasons:

  1. The policy could be stochastic in nature. That is, for any given state $s_t$ at time $t$, the policy $\pi(s_t)$ does not provide a deterministic action $a$, but rather, it provides us with a distribution over the possible next states, that is the action at time $t$, $a_t$ is distributed as- $$a_t \sim \pi(s_t)$$
  2. Even if the policy $\pi$ being followed by an agent is deterministic, there still needs to be an expectation over the behavior of the underlying stochastic MDP environment. That is, any action $a_t$, in general, only provides us with a distribution over the possible next states of the agent. That is, $$P(s_{t + 1} = s') = P_{\pi}(s' | s_t) = \sum_{a \in A} T(s,a,s') \times P_{\pi}(a_t = a)$$ Here $T(s, a, s')$ is the transition function for the MDP and the above equation captures the stochasticity arising from both 1 and 2.

As you see the expectation does not have to do with averaging over a collection of trajectories. However, that idea is often used in Monte-Carlo estimation of value functions.

EDIT: As pointed out in the comments, it is not correct to say that the expectation is not over a collection of trajectories.

",28384,,28384,,4/4/2020 4:34,4/4/2020 4:34,,,,6,,,,CC BY-SA 4.0 19979,1,,,4/3/2020 19:46,,1,108,"

Why can an AI, like AlphaStar, work in StarCraft, although the environment is only partially observable? As far as I know, there are no theoretical results on RL in the POMDP environment, but it appears the core RL techniques are being used in partially observable domains.

",32390,,2444,,4/3/2020 20:22,4/3/2020 20:22,Why can the core reinforcement learning algorithms be applied to POMDPs?,,0,7,,,,CC BY-SA 4.0 19980,2,,19974,4/3/2020 20:03,,2,,"

In addition to this answer, I would like to note that, if the future trajectories were fixed (i.e. the environment and the policies were deterministic, and the agent always starts from the same state), the expectation of the sum (of the fixed rewards) would simply correspond to the actual sum, because the sum is a constant (i.e. the expectation of a constant is the constant itself), so the expectation operator also applies to the deterministic cases. Therefore, the expectation is a general way of expressing the value of a state in all possible cases (both when trajectories are fixed or not).

",2444,,2444,,4/3/2020 20:11,4/3/2020 20:11,,,,0,,,,CC BY-SA 4.0 19981,1,,,4/3/2020 20:52,,2,33,"

I'm creating an RL application for the game Connect Four. I've researched the different strategies for the game and which positions are more favourable to lead to a win.

Should I be assigning greater rewards when the application places a token in those particular positions? If so, that would mean the application/algorithm is a Connect Four specific application, and not generic?

",27629,,27629,,4/3/2020 21:01,4/3/2020 21:01,How should I define the reward function for the Connect Four game?,,0,0,,,,CC BY-SA 4.0 19982,1,,,4/3/2020 21:03,,1,51,"

I'm creating an RL application for the game Connect Four.

In general, should I be aiming to create an application that's more generic, which would 'learn' different games, or specific to a particular game (e.g. Connect Four, by assigning greater rewards to certain token positions in the C4 grid)?

Does the difference between the two approaches just come down to adapting their respective reward functions to reward specific achievements or positions (in a board game setting), or something else?

",27629,,2444,,4/4/2020 1:44,4/4/2020 18:11,Should I be trying to create a generic or specific (to particular game) reinforcement learning agent?,,1,0,,,,CC BY-SA 4.0 19983,1,37687,,4/3/2020 21:12,,0,133,"

Does this prove AI Safety is undecidable?

Proof:

Output here means the output of a computer program.

[A1] Assume we have a program that decides which outputs are “safe”.

[A2] Assume we have an example of an unsafe output: “unsafe_output”

[A3] Assume we have an example of safe output: “safe_output”.

[A4] Define a program to be safe if it always produces safe output.

[A5] Assume we have a second program (safety_program) that decides which programs are safe.

[A6] Write the following program:

def h()
   h_is_safe := safety_program(h)
   if (h_is_safe):
      print unsafe_output
   else:
      print safe_output

Clearly h halts.

If the safety_program said h was safe, then h prints out unsafe_output.

If the safety_program said h was not safe, then h prints out safe_output.

Therefore safety_program doesn’t decide h correctly.

This is a contradiction. Therefore we made a wrong assumption: Either safe output cannot be decided, or safe programs cannot be decided.

Therefore, in general, the safety of computer programs, including Artificial Intelligence, is undecidable.

Therefore AI Safety is undecidable.

",27283,,27283,,4/4/2020 3:12,10/29/2022 20:12,Does this prove AI Safety is undecidable?,,2,14,,,,CC BY-SA 4.0 19984,2,,19983,4/3/2020 21:50,,1,,"

In my opinion, there are several flaws in your proof and reasonings.

First, note that, in the case of Turing's proof, h will actually loop forever (i.e. not halt) when the oracle says that h halts. In this case, there's an actual contradiction, because h will do the opposite of what the oracle says.

So, to follow Turing's proof, you would need to make h behave unsafely if the oracle says h is safe. But how should we define a safe or unsafe program? There are many unsafe behaviors. For example, in a certain context, an insult could be unsafe, in other contexts, a certain limb movement could be unsafe, and so on. So, an agent is unsafe or behaves unsafely usually with respect to another agent (or itself) or environment. You probably need to keep this in mind if you want to prove anything about the safety of AI agents.

In your second assumption, you are implicitly saying that any machine that produces the output unsafe_output is unsafe, but, of course, this definition is not a realistic definition of an unsafe program.

To help you define safety in a more reasonable and natural way, I think it may be useful to reason first in terms of artificial agents, which are higher-level concepts than Turing machines. Then you could find a way of mapping agents to TMs and attempt to prove your conjectures by using the tools of the theory of computation.

",2444,,2444,,4/4/2020 2:25,4/4/2020 2:25,,,,2,,,,CC BY-SA 4.0 19985,1,,,4/3/2020 22:05,,3,830,"

When you play video games, sometimes there is an AI that attempts to predict what you are going to do.

For example, in the Candy Crush game, if you finish the level and you still have moves remaining, you get to see fish or other powers destroying the other candies. But, instead of making you watch 10 minutes of your combos without moving at all after completing a level (like this Longest video game combo ever, probably), it shows an alert that says tap to skip; so, basically, the AI is predicting all the possible combos that will keep proceeding automatically and calculating every automatic move.

How can artificial intelligence predict such a thing?

",35725,,2444,,4/4/2020 2:02,4/21/2020 10:22,How can artificial intelligence predict the next possible moves of the player?,,4,2,,,,CC BY-SA 4.0 19986,1,19988,,4/3/2020 22:06,,1,1127,"

To clarify it in my head, the value function calculates how 'good' it is to be in a certain state by summing all future (discounted) rewards, while the reward function is what the value function uses to 'generate' those rewards for it to use in the calculation of how 'good' it is to be in the state?

",27629,,2444,,10/8/2020 12:51,11/25/2022 23:22,What is the relationship between the reward function and the value function?,,2,0,,,,CC BY-SA 4.0 19987,1,20071,,4/3/2020 22:33,,2,467,"

I'm currently trying to code the NEAT algorithm by myself, but I got stuck with two questions. Here they are:

What happens if during crossover a node is removed (or disabled) and there's a connection that was previously connected to that specific node? Because, in that case, some connections are no longer useful. Do I keep the useless connections or do I prevent this from happening? Or maybe I'm missing something?

Someone on AI SE said that:

You could:

1.) Use only the connection genes in crossover, and derive your node genes from the connection genes

2.) Test if every node is in use, and delete the ones that are not

But the problem with that is that my genomes will lose some complexity. Maybe I can use the nodes during crossover, and then disable the connections that were using this node. That way, I'm keeping the genotype complex, but the phenotype is still working.

Is there another way to workaround this problem or this is the best way?

",35722,,-1,,6/17/2020 9:57,4/7/2020 9:32,"Do I have to crossover my node genes in NEAT, and how?",,1,1,,,,CC BY-SA 4.0 19988,2,,19986,4/3/2020 23:17,,1,,"

I think it is pedagogically useful to distinguish between the theory (equations) and the practice (algorithms).

If you're talking about the definition of the value function (the theory)

\begin{align} v_{\pi}(s) & \dot{=} \mathbb{E}_{\pi} \left[ G_t \mid S_t = s \right]\\ &= \mathbb{E}_{\pi} \left[ \sum_{k=0}^\infty \gamma^k R_{t+k+1} \bigl\vert S_t = s \right]\\ \end{align}

for all $s \in \mathcal{S}$, where $\dot{=}$ means "is defined as" and $\mathcal{S}$ is the state space, then the value function can be defined in terms of the reward, as can be clearly seen above. (Note that $R_{t+k+1}$, $G_t$ and $S_t$ are random variables, and, in fact, expectations are taken with respect to random variables).

The definition above can actually be expanded to be a Bellman equation (i.e. a recursive equation) defined in terms of the reward function $R(s, a)$ of the underlying MDP. However, often, rather than the notation $R(s, a)$, you will see $p(s', r \mid s, a)$ (which represents the combination of the transition probability function and the reward function). Consequently, the value is a function of the reward.

If you're estimating a value function (the practice), e.g. using Q-learning, you don't necessarily use the reward function of the Markov decision process. You can estimate the value function by just observing the rewards that you receive while exploring the environment, without really knowing the reward function. But, by exploring the environment, you can actually estimate the reward function. For example, if every time you're in state $s$ you take action $a$ and you receive reward $r$, then you already know something about the actual underlying reward function. If you explore the MDP enough, you could potentially learn the reward function too (unless it keeps changing, in which case it may be more difficult to learn it).
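
To make this concrete, here is a minimal tabular Q-learning sketch on a made-up 4-state chain MDP (the environment, states and hyperparameters are all assumptions chosen just for illustration); notice that the update uses only the observed rewards, never an explicit reward function:

import random
from collections import defaultdict

# A tiny made-up chain MDP: states 0..3, action 0 = left, 1 = right.
# Reaching state 3 gives reward +1 and ends the episode.
def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(3, state + 1)
    reward = 1.0 if next_state == 3 else 0.0
    done = next_state == 3
    return next_state, reward, done

Q = defaultdict(float)              # Q[(state, action)]: estimated action value
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(5000):
    s, done = 0, False
    while not done:
        # epsilon-greedy exploration
        a = random.randrange(2) if random.random() < epsilon \
            else max((0, 1), key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Q-learning update: uses only the observed reward r
        target = r + gamma * (0.0 if done else max(Q[(s2, 0)], Q[(s2, 1)]))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

print(sorted(Q.items()))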

To conclude, yes, value functions are certainly very related to reward functions and rewards, in ways that you immediately see from the equations that define the value functions.

",2444,,2444,,10/8/2020 13:00,10/8/2020 13:00,,,,0,,,,CC BY-SA 4.0 19989,2,,6429,4/4/2020 1:06,,0,,"

The answer is yes, an AI can be trained to write even a whole story. I just want to tell you right off the bat that an AI has already done something even more difficult than generating a story; I'm talking about the example at the end of my explanation.

All the links in my explanation lead to external sources that I found; you can go check them. Without further ado, here are the main reasons why I think AIs can generate the outline of a story:

  1. AIs are really good at recognizing patterns and at generating things that are similar to others. Surprisingly, there are a lot of patterns in stories. Stories are always structured, so this part isn't the real problem. There's a great Wiki about the seven basic plots.
  2. But even if an AI can generate a good story structure, can it make a story appealing? Well, it depends on how big the ""brain"" of the AI is, because it turns out that the more neurons and synapses an AI has, the more it can ""understand"" human language or emotions. So, if an AI has a big enough brain, it can generate content that makes sense. Here's the best example of an AI being able to generate human-like output: https://ai.googleblog.com/2020/01/towards-conversational-agent-that-can.html.

As for the how, I think the training data isn't insignificant. To be able to train an AI like that, we need a lot of examples. This is possible because movies' screenplays are public and can be downloaded by anyone, so an AI can easily learn from this huge amount of screenplays. Here are some examples of websites where we can get screenplays of movies: https://stephenfollows.com/resource/sites-to-find-movie-scripts/, https://www.simplyscripts.com/movie-screenplays.html.

After that, we just need to format the data, so we can give it to our AI. In my opinion, it's completely possible to make a good AI that writes good stories, because Google already did something similar. I think that the chatbot Meena, created by Google, is the proof that an AI can learn way more than just pattern recognition.

",35722,,35722,,4/4/2020 18:23,4/4/2020 18:23,,,,0,,,,CC BY-SA 4.0 19990,2,,19982,4/4/2020 1:39,,1,,"

If what you mean by a generic reinforcement-learning application is an application that can learn any game (or several games), then you can't do it. Why? Because the goal of each game isn't the same, so you have to adapt the rewards depending on the game. If you just want to make an AI for Connect Four, I suggest you make a specific RL application for that game.

I want to mention another thing: you shouldn't give a reward based on token positions, because it's hard to know which token position is the best. Instead, just assign a big reward to the winner. That way, you're generalizing your algorithm, and you're preventing your AI from focusing on the wrong goal.

You have to be careful: you don't know for sure which token position is the best. Try to give a reward ONLY when a player wins the game, because you can know for sure that winning is the best thing to do. It may sound silly put like that, but, that way, you're ensuring your AI learns by itself which token positions are the best.

",35722,,35722,,4/4/2020 18:11,4/4/2020 18:11,,,,10,,,,CC BY-SA 4.0 19991,2,,18595,4/4/2020 2:30,,1,,"

Firstly, the key to implementing a good genetic algorithm like NEAT is fitness. Fitness is everything: it basically tells your snakes what to learn. If you have a bad fitness function, your AIs will target the wrong goal. You shouldn't give fitness when a snake is aligning with food, because that's not what you want. What you really want your snake to do is eat food.

So, I suggest you try giving fitness only when your snake is eating food, and maybe remove some as time passes (because of hunger). Your fitness could even be just the time a snake survived, since, to survive, your snakes have to eat and avoid colliding with themselves! Definitely try this: just giving fitness as time passes.

Secondly, you have to give good inputs to your AIs, so that they have enough information about their environment to optimize their strategy. I suggest you try inputs that are relative to your snake's position and direction. The X and Y positions of the snake and the food are bad inputs, because they're not relative to the snake; they're relative to the origin of your game. So, every second, the position of the snake is changing, and this could lead to some distraction.

The snake doesn't need to know its position; it just needs to know the distance between itself and the food, and the angle between its direction of travel and the food.

Lastly, you may review your outputs, but this is a minor issue; you should definitely focus on the fitness of your snakes. If you want to go deeper, try having two outputs instead of four: speed and turning velocity. That way, it's easy for your snakes to go straight forward. When a snake has to go forward using left, right, up, down outputs, it's way harder.

",35722,,,,,4/4/2020 2:30,,,,0,,,,CC BY-SA 4.0 19992,2,,18208,4/4/2020 2:51,,0,,"

When it comes to genetic algorithms (neuroevolution), you can pretty much evolve any kind of parameter. You just need a fairly complex system where each parameter changes the way the inputs affect the outputs.

So, to answer your question, any activation function should work! It would be surprising if the activation function were your problem. But, if you want to test some activation functions, here's a list of activation functions. tanh should work perfectly fine.

As I already mentioned in another answer, the key to a great neuroevolution algorithm is the fitness function. Your rewards should reflect the goal you're aiming for. You didn't mention what fitness function you used, but here's what I suggest: reward a player ONLY if it progresses in the game. For example, your fitness function could simply be the distance traveled by your birds. This could maybe solve your problem...

",35722,,35722,,5/2/2020 15:32,5/2/2020 15:32,,,,0,,,,CC BY-SA 4.0 19993,2,,16051,4/4/2020 3:59,,1,,"

From what I can see, at each frame, you're giving 1 fitness if the snake gets closer to the food and removing 10 fitness if it moves away from the food. Am I right? Also, you're removing 50 fitness when a snake dies and giving 15 fitness when it eats food?

Try removing the -10 fitness at each frame where the snake moves away from the food; it's not necessary. Not getting fitness is already a penalty, and you don't need to exaggerate it, as it makes your -50 fitness negligible. Because of that, dying quickly is better than surviving for 6 frames without getting closer to the food. So, the first strategy your AIs are going to find is spinning around to die quickly (because surviving is bad for them). This is not what you want.

But these are only assumptions, try it, and tell me if it works!

",35722,,,,,4/4/2020 3:59,,,,1,,,,CC BY-SA 4.0 19994,1,,,4/4/2020 5:12,,3,55,"

From my understanding, maximum likelihood estimation chooses the set of parameters for the estimator that maximizes likelihood with the ground truth distribution.

I always interpreted it as the training set having a tendency to have most examples near the mean or the expected value of the true distribution. Since most training examples are close to the mean (since they have been sampled from this distribution) maximizing the estimator's chance of sampling these examples gets the estimated distribution close to the ground truth distribution.

This would mean that any MLE procedure on a dataset of outliers should fail miserably. Are this interpretation and conclusion correct? If not, what is wrong with the mentioned interpretation of maximizing likelihood for an estimator?

",25658,,2444,,4/4/2020 18:47,4/4/2020 18:47,Is maximum likelihood estimation meaningless for a dataset of only outliers?,,0,1,,,,CC BY-SA 4.0 19995,2,,19956,4/4/2020 7:36,,2,,"

It is your choice.

This can even be different between training and target system. The approach called ""exploring starts"" chooses a random start state (and action if you are assessing a deterministic policy for action values).

In general, if you don't have a reason to pick exploring starts, you should aim for your env.reset() function to put the environment into a state drawn from the distribution of start states that you expect the agent to encounter in production. This will help if you are using function approximation - it will mean that the distribution of training data will better match the distribution seen in production, and approximators can be sensitive to that.

In some cases, such as policy-gradient methods, your cost function will be defined in terms of expected return given a start state distribution, so at least during assessment you will want a env.reset() function that matches that target start distribution.

It is still OK to have different distributions for start states for training and assessment, and might be worth investigating as a hyperparameter for training. For instance if the training start state distribution can pick states that are hard to get to randomly otherwise, it may help the agent to find any of those states that are high value.
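
As a small illustrative sketch (a made-up toy environment, not any specific library), a reset() with a configurable start-state distribution could look like this:

import random

class ToyEnv:
    # Made-up toy environment whose reset() start distribution is configurable.
    def __init__(self, n_states=10, exploring_starts=False):
        self.n_states = n_states
        self.exploring_starts = exploring_starts
        self.state = 0

    def reset(self):
        if self.exploring_starts:
            # uniform over all states: useful during training to cover hard-to-reach states
            self.state = random.randrange(self.n_states)
        else:
            # match the start distribution you expect in production,
            # e.g. almost always state 0, occasionally state 1
            self.state = random.choices([0, 1], weights=[0.9, 0.1])[0]
        return self.state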

",1847,,,,,4/4/2020 7:36,,,,0,,,,CC BY-SA 4.0 19997,1,19999,,4/4/2020 10:19,,1,89,"

I was going through the paper on GAN by Ian Goodfellow. Under the related work section, there is an equation. I cannot decipher the equation. Can anyone help me understand the meaning of the equation?

$$\lim_{\sigma \to 0} \nabla_{\mathbf x} \mathbb E_{\epsilon \sim \mathcal N(0, \sigma^2 \mathbf I)} f(\mathbf x+\epsilon) = \nabla_x f(\mathbf x)$$

Also, any guide to understanding mathematical notation for reading research paper is highly appreciated.

",31749,,2444,,4/4/2020 13:42,4/4/2020 13:42,"What does equation in the ""related work"" section of the GAN paper mean?",,1,0,,,,CC BY-SA 4.0 19999,2,,19997,4/4/2020 11:13,,2,,"

In full:

The limit, as standard deviation $\sigma$ tends towards zero, of the gradient with respect to vector $\mathbf{x}$, of the expectation - where perturbation $\epsilon$ follows the normal distribution with mean 0 and variance $\sigma^2$ times identity vector $[1,1,1,1...]$ * - of any function $f$ of $\mathbf{x}$ plus $\epsilon$ is equal to the gradient with respect to $x$ of the same function of $\mathbf{x}$.

If we break that down:

$$\lim\limits_{\sigma \rightarrow 0}$$

The limit, as standard deviation $\sigma$ tends towards zero of

$$\nabla_\mathbf{x}$$

the gradient with respect to vector $\mathbf{x}$ of

$$\mathbb{E}$$

the expectation ...

$$\mathbb{E}_{\epsilon \sim \mathcal{N}(0, \sigma^2\mathbf{I})}$$

[the expectation] - where perturbation $\epsilon$ follows the normal distribution with mean 0 and variance $\sigma^2$ times identity vector $[1,1,1,1...]$ * - of

$$f(\mathbf{x} +\epsilon)$$

any function $f$ of $x$ plus $\epsilon$

$$ = \nabla_x f(\mathbf{x})$$

is equal to the gradient with respect to $\mathbf{x}$ of the same function $f(\mathbf{x})$.

Basically it says that taking small perturbations of a vector input to a function and measuring the gradient at those varied points can be used to give you a valid estimate of the true gradient at the point you are making variations of.
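
To make that statement concrete, here is a small numerical check (the function, the point x and the sample size are arbitrary choices for this demo): the Monte-Carlo estimate of the gradient of the expectation approaches the true gradient as sigma shrinks.

import numpy as np

rng = np.random.default_rng(0)

# arbitrary smooth function and its exact gradient (both chosen just for this demo)
f      = lambda x: np.sum(np.sin(x) ** 2)
grad_f = lambda x: 2.0 * np.sin(x) * np.cos(x)

x = np.array([0.3, -1.2, 2.0])

for sigma in (1.0, 0.1, 0.01):
    eps = rng.normal(0.0, sigma, size=(100000, x.size))
    # Monte-Carlo estimate of  grad_x E[f(x + eps)]  =  E[grad f(x + eps)]
    estimate = grad_f(x + eps).mean(axis=0)
    print(sigma, np.max(np.abs(estimate - grad_f(x))))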

In terms of understanding the equations, read introductory texts to the research area, and if like me your maths has been unused for many years before attempting this, expect to spend time and effort. Re-read the equations, memorise and write out the basic ones from the field, apply them to simple problems that might be presented in text books. Reading maths equations is not much different to reading music or reading another language - it takes concentration, practice, time and effort to become fluent enough to read an equation and comprehend it. The different research fields can be quite different too, some might be as similar enough to get by with what you know already, others may require learning all over again.


* I am not 100% certain of the interpretation of $\mathbf{I}$ as an identity vector - a matrix may be more appropriate, which depends on the form of $\mathcal{N}(\mu, \sigma^2)$ when handling vector distributions. A matrix form for the second argument would be more general and allow for covariance, although the use of $\mathbf{I}$ would then explicitly remove covariance and make each component of $\epsilon$ independent, which is required for this result.

",1847,,1847,,4/4/2020 13:18,4/4/2020 13:18,,,,4,,,,CC BY-SA 4.0 20000,1,,,4/4/2020 11:38,,1,25,"

I have an AI design for deciding the duration of the green and red lamps of traffic lights. In my design, every crossroads has its own agent. Each agent takes as input the number of vehicles on each road at a single junction. The AI then decides how long the red lamp and the green lamp stay on at each junction. The fitness function is the average commute time in the city. Each agent may communicate with the others and give rewards or punishments to other agents. What AI algorithm works like this?

",35738,,,,,4/4/2020 11:38,What kind of artificial intelligence is this? A decentralized swarm intelligence where the input and output is split among the agents,,0,0,,,,CC BY-SA 4.0 20001,1,20036,,4/4/2020 12:56,,2,137,"

I have learned so far how to linear regression with one or multiple features. So far, so good, everything seems to work fine, at least for my first simple examples.

However, I now need to normalise my features for training. I'm doing this by calculating the mean and the standard deviation per feature, and then calculating the normalised feature by subtracting the mean, taking the absolute value, and dividing by the standard deviation. Again, so far, so good, the resulting tensors which I use for training look good.

I understand why I need to normalise input data, and I also understand why one can do it like this (I know that there are other ways as well, e.g. to map values to a 0-1 interval).

Now I was wondering about two things:

  • First, after having trained my network, when I want to make a prediction for a specific input – do I need to normalise this as well, or do I use the un-normalised data? Does it make a difference? My gut feeling says, I should normalise it, as it should make a difference, but I'm not sure. What should I do here, and why?
  • Second, either way, I get a result. Now I was wondering whether I need to denormalise this? I mean, it should make a difference, shouldn't it? If so, how? How do I get from the normalised result value to a denormalised one? Do I just need to reverse the calculation with mean and standard deviation, to get the actual value?

It would be great if someone could shed some light on this.

",35739,,2444,user9947,12/21/2021 14:51,12/21/2021 14:51,Do I need to denormalise results in linear regression?,,1,0,,,,CC BY-SA 4.0 20004,1,,,4/4/2020 13:21,,1,113,"

I'm creating an RL application for the game Connect Four.

If I tell the algorithm which moves/token positions will receive greater rewards, surely it's not actually learning anything; it's just a basic lookup for the algorithm? ""Shall I place the token here, or here? Well, this one receives a greater reward, so I choose this one.""

For example, some pseudocode:

function get_reward(column)
    if placing a token in column makes 4 in a line
        return 10
    if it makes 3 in a line
        return 2
    if it makes 2 in a line
        return 1
    else
        return -1

# pick the column with the highest immediate reward
best_column = the column that maximises get_reward(column)
place_token(best_column)
",27629,,,,,4/4/2020 17:17,"In RL, if I assign the rewards for better positional play, the algorithm is learning nothing?",,1,2,,,,CC BY-SA 4.0 20005,1,25585,,4/4/2020 13:41,,6,141,"

Do I have to prevent nodes created from the same connection gene from having different IDs/innovation numbers? In this example, node 6 is created from the connection going from node 3 to node 4:

In the case where that specific node has already been created globally, is it useful to give it the same ID for crossover? After all, the goal of NEAT is to do meaningful crossover by using historical markings. The paper by Kenneth O. Stanley says on page 108:

[...] by keeping a list of the innovations that occurred in the current generation, it is possible to ensure that when the same structure arises more than once through independent mutations in the same generation, each identical mutation is assigned the same innovation number.

Why don't we do that for node genes too?

",35722,,35722,,4/4/2020 14:41,1/8/2021 17:03,"In NEAT, is it a good idea to give the same ID to node genes created from the same connection gene?",,1,0,,,,CC BY-SA 4.0 20006,1,,,4/4/2020 13:54,,2,36,"

Some people claim that DQN was used to play many Atari games. But what actually happened? Was DQN trained only once (with some data from all games) or was it trained separately for each game? What was common to all those games? Only the architecture of the RL agent? Did the reward function change for each game?

",27629,,2444,,4/4/2020 14:02,4/4/2020 14:02,How was the DQN trained to play many games?,,0,1,,,,CC BY-SA 4.0 20007,2,,20004,4/4/2020 15:16,,4,,"

What you are proposing is closer to a heuristic for searching than a reward for RL. This is a blurred line, but generally if you start analysing the problem yourself, breaking it down into components and feeding that knowledge into the algorithm, then you place more emphasis on your understanding of the problem, and less on any learning that an agent might do.

Typically, in an RL formulation of a simple board game, you would choose rewards of +1 for a win (the goal), 0 for a draw, and -1 for a loss. All non-terminal states would score 0 reward. The point of the RL learning algorithm is that the learning process assigns some nominal value to interim states from observing play. For value-based RL approaches, such as Q learning or Monte Carlo Control, the algorithm does this more or less directly by ""backing up"" rewards that it experiences in later states into average value estimates for earlier states.

Most game-playing agents will combine the learning process, which will be imperfect given the limited experience an agent can obtain compared to all possible board states, with a look-ahead search method. Your heuristic scores would also make a reasonable input to a search method too - the difference being you may need to search more deeply using your simple heuristic than if you used a learned heuristic. The simplest heuristic would just be +1 for a win, 0 for everything else, and is still reasonably effective for Connect 4 if you can make it search e.g. 10 moves ahead.

The combination of deep Q learning and negamax search is quite effective in Connect 4. It can make near-perfect agents. However, if you actually want a perfect agent, you are better off skipping the self-learning approach and working on optimised look-ahead search with some depth of opening moves stored as data (because search is too expensive in the early game, even for a simple game like Connect 4).

",1847,,1847,,4/4/2020 17:17,4/4/2020 17:17,,,,3,,,,CC BY-SA 4.0 20008,1,,,4/4/2020 19:10,,1,708,"

I think this model is underfitting. Is this correct?

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_1 (LSTM)                (50, 60, 100)             42400     
_________________________________________________________________
dropout_1 (Dropout)          (50, 60, 100)             0         
_________________________________________________________________
lstm_2 (LSTM)                (50, 60)                  38640     
_________________________________________________________________
dropout_2 (Dropout)          (50, 60)                  0         
_________________________________________________________________
dense_1 (Dense)              (50, 20)                  1220      
_________________________________________________________________
dense_2 (Dense)              (50, 1)                   21        
=================================================================

The above is a summary of the model.
Any advice on how the model could be improved?

",35748,,40434,,8/28/2021 23:32,8/28/2021 23:32,Is this LSTM model underfitting?,,1,1,,,,CC BY-SA 4.0 20009,1,20015,,4/4/2020 19:46,,2,96,"

So, Tay, the racist Twitter bot... one thing that could have prevented this would have been to have a list of watchwords not to respond to, with some logic similar to foreach (word in msg) {if (banned_words.has(word)) disregard()}.

Even if that wouldn't have, what I'm getting at is obvious: I am building a chatterbot that must be kid-friendly. For my sake and for the sake of whoever finds this question, is there a resource consisting of a .csv or .txt of such words that one might want to handle? I remember once using a site-blocking productivity extension that made its list of banned words visible; not just sexually charged words, but racial slurs, too.

",35622,,,,,4/5/2020 6:32,Does there exist a resource for vetting banned words for chatbots?,,1,3,,,,CC BY-SA 4.0 20011,2,,20008,4/4/2020 21:37,,1,,"

You need to include the optimizer you used to make sure it is correct. By the way, your dropout layers are not going to do anything, so you should remove them.

With time series, you likely don't have separate train and test data in the usual sense, because all the data points are connected; you just have a prediction and a ground truth for each period.

I recommend you use the whole dataset and iterate over the LSTM's hyperparameters to find the best model.

",35752,,,,,4/4/2020 21:37,,,,0,,,,CC BY-SA 4.0 20012,1,,,4/5/2020 1:41,,3,179,"

After spending some time reading about POMDP, I'm still having a hard time understanding how grid-based solutions work.

I understand the finite horizon brute-force solution, where you have your current belief distribution, enumerate every possible collection of action/observation combinations for a given depth and find the expected reward.

I have tried to read some sources about grid-based approximations, for example, these slides describe the grid-based approach.

However, it's not clear to me what exactly is going on. I'm not understanding how the value function is actually computed. After you take an action, how do you update your belief states to be consistent with the grid? Does the grid-based solution simply reduce the set of belief states? How does this reduce the complexity of the problem?

I'm not seeing how this reduces the number of action/observation combinations that need to be considered for a finite-horizon solution.

",32390,,2444,,4/5/2020 14:13,4/5/2020 15:06,What is the intuition behind grid-based solutions to POMDPs?,,1,0,,,,CC BY-SA 4.0 20013,1,,,4/5/2020 1:52,,1,57,"

I’m currently working on the Food-101 dataset. I want to train a model that achieves greater than 85% top-1 accuracy on the test set, using a ResNet50 or smaller network with a reasonable set of augmentations. I’m running 10 epochs using ResNet34 and I’m currently on the 8th epoch. This is how it’s doing:

epoch   train_loss  valid_loss  error_rate  time
0   2.526382    1.858536    0.465891    25:21
1   1.981913    1.566125    0.406881    27:21
2   1.748959    1.419548    0.372129    27:16
3   1.611638    1.315319    0.346980    25:16
4   1.568304    1.250232    0.328069    24:43
5   1.438499    1.193816    0.313762    24:26
6   1.378019    1.156924    0.307426    24:30
7   1.331075    1.131671    0.299010    24:26
8   1.314978    1.115857    0.297079    24:24

As you can see, it doesn’t seem like I’m going to do better than 71% accuracy at this point. The dataset size is 101,000. It has 101 different kinds of food and each kind has 1000 images. Training this definitely takes a long time, but what are some things I can do to improve its accuracy?

",35754,,,user9947,4/7/2020 2:30,4/7/2020 2:30,Running 10 epochs on the Food-101 dataset,,1,0,,,,CC BY-SA 4.0 20014,2,,20012,4/5/2020 3:23,,3,,"

I will attempt to provide an answer to your questions based on the information you can find in the papers A Heuristic Variable Grid Solution Method for POMDPs (1997) by Ronen I. Brafman and Point-based value iteration: An anytime algorithm for POMDPs (2003) by Joelle Pineau et al.

A grid-based approximate solution to a POMDP attempts to estimate a value function only at a subset of the number of belief states. Why? Because estimating the value function for all belief states is typically computationally infeasible for non-small POMDPs, given that the belief-space MDP (i.e. an MDP where the state space consists of probability distributions over the original states of the POMDP) of a POMDP with $n$ states has an uncountably large state space. Why? Because of the involved probability distributions.

How do we compute the value for the belief states that do not correspond to a point of the grid? We can use e.g. interpolation, i.e. the value of a belief state that does not correspond to a point of the grid is computed as a function of the value of the belief states that correspond to other grid points (typically, the neighboring grid points).
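
Just to make the interpolation idea concrete, here is a toy inverse-distance interpolation over a handful of grid beliefs (an illustrative scheme with made-up numbers, not the specific interpolation used in the papers cited above):

import numpy as np

# values are assumed known only at a few grid beliefs (over a 2-state POMDP)
grid_beliefs = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
grid_values  = np.array([3.0, 1.0, 2.0])   # made-up values

def interpolate_value(belief, k=2):
    # estimate the value of an off-grid belief from its k nearest grid points
    distances = np.linalg.norm(grid_beliefs - belief, axis=1)
    nearest = np.argsort(distances)[:k]
    weights = 1.0 / (distances[nearest] + 1e-9)
    return np.dot(weights, grid_values[nearest]) / weights.sum()

print(interpolate_value(np.array([0.8, 0.2])))   # value of a belief not on the grid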

Why is this approach feasible? The assumption is that interpolation is not as expensive as computing the value of a belief state. However, note that you may not need to interpolate at every step of your algorithm, i.e. interpolation could be performed only when the value of a certain belief state is required.

How do you compute the value of a belief state that corresponds to a grid point? It can be computed with a value iteration (dynamic programming) algorithm for POMDPs. An overview of a value iteration algorithm can be found in section 2 of the paper Point-based value iteration: An anytime algorithm for POMDPs. Here's an example of the application of the value iteration algorithm for POMDPs.

The grid-based approach, introduced in Computationally Feasible Bounds for Partially Observed Markov Decision Processes (1991) by William S. Lovejoy, is very similar to the point-based approach, which was introduced in Point-based value iteration: An anytime algorithm for POMDPs. The main differences between the two approaches can be found in section 3 of Point-based value iteration: An anytime algorithm for POMDPs.

The idea of discretizing your problem or simply computing the desired value at a subset of the domain has been applied in other contexts too. For example, in the context of computer vision, you can approximate the derivative (or gradient) of an image (which is thus considered a function) at discrete points of the domain (i.e. the pixels).

There's a Julia implementation of the first grid-based approximative solution to POMDP. There's also a Python implementation of the point-based approach. These implementations may help you to understand the details of these approaches.

",2444,,2444,,4/5/2020 15:06,4/5/2020 15:06,,,,9,,,,CC BY-SA 4.0 20015,2,,20009,4/5/2020 6:32,,1,,"

I have not found one. Other than scraping a few pages from Urban Dictionary, I built my list in a crowdsourced fashion and got a number of interesting words I had not considered.

Start with the worst words you can think of, then try slang and accidental or deliberate misspellings of them.

",34095,,,,,4/5/2020 6:32,,,,0,,,,CC BY-SA 4.0 20016,1,,,4/5/2020 8:02,,1,37,"

Currently, I am following the Dan Jurafsky NLP tutorial and Stanford CS 224 (2019). Can you list tutorials and blogs for beginners to master cross-lingual information retrieval?

",9863,,2444,,4/5/2020 15:23,4/5/2020 15:23,What are examples of tutorials and blogs for beginners to master the cross-lingual information retrieval?,,0,0,,,,CC BY-SA 4.0 20017,1,,,4/5/2020 9:30,,3,364,"

Is there any difference between the model distribution and data distribution, or are they the same?

",31749,,2444,,4/5/2020 12:56,4/5/2020 14:09,What is the difference between model and data distributions?,,1,0,,,,CC BY-SA 4.0 20018,2,,20017,4/5/2020 10:40,,3,,"

Yes. In Machine Learning we consider that the samples in your training set are sampled from an underlying distribution called the data generating distribution.

Generative models classify the samples by trying to learn the distribution of the data. In most cases, either the model is incapable of doing so, or the training samples aren't enough to properly describe the data-generating distribution, so the model learns an approximation of this. This is what you call the model's distribution.

You can find more info about these concepts in a more detailed answer I wrote. If you're familiar with GANs, you can also read this post, to see where these two concepts come into play when training the two networks.

",26652,,2444,,4/5/2020 14:09,4/5/2020 14:09,,,,2,,,,CC BY-SA 4.0 20019,1,,,4/5/2020 11:09,,2,152,"

I would like to develop an LSTM because I have a variable input matrix. I am zero-padding to a specific length of 800.

However, I am not sure of how to classify a certain situation when each input matrix has multiple labels inside, i.e. 0, 1 and 2. Do I need to use multi-label classification?

Data shape

(250,800,4)

x_train(150,800,4)
y_train(150,800,1)
x_test(100,800,4)
y_test(100,800,1)

Building LSTM

model = Sequential()
model.add(LSTM(100, input_shape=(800, 4)))  # (timesteps, features), inferred from x_train's shape (150, 800, 4)
model.add(Dropout(0.5))
model.add(Dense(100, activation='relu'))
model.add(Dense(800, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

I am not sure how to build the LSTM for training and testing. If I train my model with a 3D shape, it will mean that, for real-time predictions, I should also have a 3D shape, but the idea is to have a 2D matrix as input.

",35763,,35763,,4/9/2020 9:31,4/9/2020 9:31,How to implement a LSTM for multilabel classification problem?,,0,0,,,,CC BY-SA 4.0 20020,2,,20013,4/5/2020 14:05,,1,,"

Try using an adjustable learning rate. Keras has a number of callbacks that are useful for this purpose. The ReduceLROnPlateau callback can be used to monitor the validation loss and reduce the learning rate by a factor if the validation loss does not decrease after a user-specified number of epochs. The ModelCheckpoint callback is useful to monitor the validation loss and save the model with the lowest loss, which can then be used to make predictions. Documentation is here.
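
For example (a minimal sketch; the model, the training data and the file name are placeholders), the two callbacks can be wired into training like this:

from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint

callbacks = [
    # halve the learning rate if val_loss has not improved for 2 epochs
    ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=2, verbose=1),
    # save only the weights of the epoch with the lowest val_loss
    ModelCheckpoint('best_model.h5', monitor='val_loss', save_best_only=True, verbose=1),
]

# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=30, callbacks=callbacks)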

",33976,,,,,4/5/2020 14:05,,,,0,,,,CC BY-SA 4.0 20023,1,20025,,4/5/2020 17:00,,3,117,"

How could I use a 16x16 image as an input to an HMM? And, at the same time, how would I train it? Can I use backpropagation?

",35660,,2193,,4/5/2020 17:41,4/6/2020 9:58,How can I use a Hidden Markov Model to recognize images?,,1,0,,,,CC BY-SA 4.0 20024,2,,19905,4/5/2020 17:30,,2,,"

Short Answer

Tomas Mikolov's mention of gradient clipping in a single paragraph of his PhD thesis in 2012 is the first appearance in the literature.

Long Answer

The first source (Mikolov, 2012) in the Deep Learning book is Mikolov's PhD thesis and can be found here. The end of section 3.2.2 is where gradient clipping is discussed, only there it is called truncating.

... The exploding gradient problem has been described in [4].

A simple solution to the exploding gradient problem is to truncate values of the gradients. In my experiments, I did limit maximum size of gradients of errors that get accumulated in the hidden neurons to be in a range < −15; 15 >. This greatly increases stability of the training, and otherwise it would not be possible to train RNN LMs successfully on large data sets.
...
[4] Y. Bengio, P. Simard, P. Frasconi. Learning Long-Term Dependencies with Gradient Descent is Difficult. IEEE Transactions on Neural Networks, 5, 157-166, 1994.

A search of the referenced paper [4] shows that it does describe the problem as Mikolov said, but it does not present gradient clipping as a solution.

So I had a look at the second source Deep Learning mentioned: On the difficulty of training Recurrent Neural Networks. It directly cites Mikolov as having proposed clipping:

We would make a final note about the approach proposed by Tomas Mikolov in his PhD thesis (Mikolov, 2012) (and implicitly used in the state of the art results on language modelling (Mikolov et al., 2011)). It involves clipping the gradient’s temporal components element-wise (clipping an entry when it exceeds in absolute value a fixed threshold). Clipping has been shown to do well in practice and it forms the backbone of our approach.

I thought about emailing Mikolov to verify that his thesis was the origin of the idea. But then I noticed that he is a co-author of this paper which cites him as proposing it! Though I still wonder if it was commonly used in practice before even though it had not been published.

",34395,,,,,4/5/2020 17:30,,,,2,,,,CC BY-SA 4.0 20025,2,,20023,4/5/2020 17:46,,5,,"

You wouldn't, normally. An HMM is used to model sequences of observations, and it would not make sense to use it for image recognition, unless the images are sequential, such as strokes in handwriting.

HMMs are typically used in fields such as speech recognition and part-of-speech tagging. Here you observe a sequence of events that you want to fit to a model in order to classify the individual observations.

For training a HMM you would use something like the Baum-Welch Algorithm; for finding the most likely sequence (ie the recognition process) the Viterbi Algorithm is used.
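
For instance, with the hmmlearn library (assuming it is available; the data here is random and only for illustration), fit() runs Baum-Welch (EM) and decode() runs Viterbi:

import numpy as np
from hmmlearn import hmm

# two 1-D observation sequences, concatenated, with their lengths
X = np.concatenate([np.random.randn(100, 1), np.random.randn(80, 1) + 3.0])
lengths = [100, 80]

model = hmm.GaussianHMM(n_components=2, covariance_type='diag', n_iter=50)
model.fit(X, lengths)                        # Baum-Welch (EM) training
logprob, states = model.decode(X, lengths)   # Viterbi: most likely hidden state sequence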

",2193,,2193,,4/6/2020 9:58,4/6/2020 9:58,,,,1,,,,CC BY-SA 4.0 20026,1,,,4/5/2020 18:08,,1,146,"

After transforming timeseries into an image format, I get a width-height ratio of ~135. Typical image CNN applications involve either square or reasonably-rectangular proportions - whereas mine look nearly like lines:

Example dimensions: (16000, 120, 16) = (width, height, channels).

Are 2D CNNs expected to work well with such aspect ratios? What hyperparameters are appropriate - namely, in Keras/TF terms, strides, kernel_size (is 'unequal' preferred, e.g. strides=(16, 1))? Relevant publications would help.


Clarification: width == timesteps. The images are obtained via a transform of the timeseries, e.g. Short-time Fourier Transform. channels are the original channels. height is the result of the transform, e.g. frequency information. The task is binary classification of EEG data (w/ sigmoid output).

Relevant thread

",32165,,32165,,4/6/2020 12:08,12/22/2022 18:05,How to handle extremely 'long' images?,,1,7,,,,CC BY-SA 4.0 20027,1,,,4/5/2020 19:47,,2,31,"

To be clear, recursion in linguistics is here better called "nesting" in this CS context to avoid confusing it with the other recursion. How does one detect nesting? I am particularly interested in the example case of conjunctions. For example: say that I want to look for sentences that look like this:

Would you rather have ten goldfish or a raccoon?

Seems straightforward: a binary choice. However, how do you distinguish a binary choice with nesting from a ternary (or n-ary) choice?

Would you rather have (one or two dogs) or (a raccoon)?

Would you rather have (two dogs) or (ten goldfish) or (a raccoon)?

Ditto for implied uses of "or," which is more common than the latter of the above:

Would you rather have (one or two dogs),[nothing] (ten goldfish), or (a raccoon)?

Given the available tools for NLP (POS-taggers and the like), how do you count the number of conjunctions to say "there are n surface-level clauses in the sentence, with n-or-zero clauses nested within."?

",35622,,-1,,6/17/2020 9:57,4/5/2020 22:18,"How does one detect linguistic recursion so as to know how much nesting there is, if any?",,0,0,,,,CC BY-SA 4.0 20028,1,20068,,4/6/2020 3:42,,1,36,"

I'm trying to understand how Bidirectional RNNs work.

Specifically, I want to know whether a single cell is used with different states, or two different cells are used, each having independent parameters.

In pythonic pseudocode,

Implementation 1:

cell = rev_cell = RNNCell()
cell_state = cell.get_initial_state()
rev_cell_state = rev_cell.get_initial_state()
for i in range(len(series)):
    output, cell_state = cell(series[i], cell_state)
    rev_output, rev_cell_state = rev_cell(series[-i-1], rev_cell_state)
    final_output = concatenate([output, rev_output])

Implementation 2:

cell = RNNCell()
rev_cell = RNNCell()
cell_state = cell.get_initial_state()
rev_cell_state = rev_cell.get_initial_state()
for i in range(len(series)):
    output, cell_state = cell(series[i], cell_state)
    rev_output, rev_cell_state = rev_cell(series[-i-1], rev_cell_state)
    final_output = concatenate([output, rev_output])

Which of the above implementations is correct? Or is the working of Bidirectional RNNs completely different altogether?

",33346,,,,,4/7/2020 8:06,Inner working of Bidirectional RNNs,,1,0,,,,CC BY-SA 4.0 20029,2,,20026,4/6/2020 3:53,,0,,"

I had recently used a slightly unorthodox method to process such images, which involved using RNNs.

Assume the image dimensions to be (16000, 120, 16) = (width, height, channels), as in the question.

Apply a 2D convolution (or multiple such convolutions) of shape (1, k, c), such that the output of the convolutions becomes (16000, 1, c). So if you only use a single convolutional layer, k=120.

Then, squeeze the extra dimension, to get the shape (16000, c).

The problem has now been transformed back into a sequence problem! You can use RNN variants for further processing.
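
A minimal Keras sketch of these steps, treating the (16000, 120, 16) array from the question as (rows, columns, channels); the filter count, the GRU size and the binary output are arbitrary choices for illustration (in practice you would probably also downsample the 16000 steps):

import tensorflow as tf

inp = tf.keras.Input(shape=(16000, 120, 16))                 # (width, height, channels)

# one convolution that collapses the full height: kernel (1, 120) -> output (16000, 1, 64)
x = tf.keras.layers.Conv2D(filters=64, kernel_size=(1, 120), activation='relu')(inp)

# squeeze the singleton dimension -> a sequence of 16000 steps with 64 features each
x = tf.keras.layers.Reshape((16000, 64))(x)

# any recurrent layer can process the sequence from here; GRU is just an example
x = tf.keras.layers.GRU(128)(x)
out = tf.keras.layers.Dense(1, activation='sigmoid')(x)

model = tf.keras.Model(inp, out)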

",33346,,33346,,4/6/2020 4:00,4/6/2020 4:00,,,,1,,,,CC BY-SA 4.0 20030,1,,,4/6/2020 4:02,,1,39,"

A paper says

However, annotations used as inputs to C-GAN are typically based only on shape information, which can result in undesirable intensity distributions in the resulting artificially-created images.

What does ""shape information"" mean here?

I am aware of basic concept of GAN(generative adversarial networks), though I don't understand what ""shape information"" refers to.

I am aware the image above is an illustration of Image Segmentation. Would I think of the any one of the segmented area (red, green, blue) as a shape for a GAN?

Could someone please give a hint? Thanks in advance.

",35782,,35782,,4/6/2020 4:24,5/1/2021 6:06,"What does ""shape information"" mean in terms of GAN(generative adversarial networks)?",,0,1,,,,CC BY-SA 4.0 20033,2,,16260,4/6/2020 4:44,,1,,"
  • The number of dichotomies of 4 data points will clearly be $2^4 = 16$. According to these slides the definition of dichotomy in context of Statistical Learning is:

Different ‘hypotheses’ over the finite set of $N$ input points.

This basically means hypotheses with unique behaviours over the input points. Two or more different hypotheses can have the same behaviour on the data points (consider the case of a square covering the $4$ data points; an even larger square will also cover the $4$ data points, so they are different hypotheses but have the same behaviour), hence the emphasis on the term unique.

  • The proof of axis aligned squares having $\mathcal V \mathcal C$ dimension $3$ can be found here. It's pretty straightforward so I don't want to explain it here.
",,user9947,,user9947,4/6/2020 20:54,4/6/2020 20:54,,,,0,,,,CC BY-SA 4.0 20034,1,20049,,4/6/2020 7:16,,5,405,"

I know that this has been asked a hundred times before; however, I was not able to find a question (and an answer) that actually answered what I wanted to know, or that explained it in a way I was able to understand. So, I'm trying to rephrase the question…

When working with neural networks, you typically split your data set into three parts:

  • Training set
  • Validation set
  • Test set

I understand that you use the training set for, well, training the network, and that you use the test set to verify how well it has learned: by measuring how well the network performs on the test set, you know what to expect when actually using it later on. So far, so good.

Now, a model has hyper parameters, which – besides the weights – need to be tuned. If you change these, of course, you get different results. This is where in all explanations the validation set comes into play:

  • Train using the training set
  • Validate how well the model performs using the validation set
  • Repeat this for a number of variants which differ in their hyperparameters (or do it in parallel, right from the start)
  • Finally, select one and verify its performance using the test set

Now, my question is: why would I need steps 2 and 3? I could just as well train multiple versions of my model in parallel, then run all of them against the test set to see which performs best, and then use that one.

So, in other words: Why would I use the validation set for comparing the model variants, if I could directly use the test set to do so? I mean, I need to train multiple versions either way. What is the benefit of doing it like this?

Probably, there is some meaning to it, and probably I got something wrong, but I can't figure out what. Any hints?

",35739,,2444,,4/6/2020 20:06,4/6/2020 20:06,Why do we need both the validation set and test set?,,2,0,,,,CC BY-SA 4.0 20035,1,20039,,4/6/2020 7:58,,3,169,"

As per a post, image-to-image translation is a type of CV problem.

I guess I understand the concept of image-to-image translation.

I am aware that GANs (generative adversarial networks) are good at this kind of problem.

I just wondered: what were the commonly used techniques for this kind of problem before GANs?

Could someone please give a hint? Thanks in advance.

",35782,,35782,,4/6/2020 13:04,4/6/2020 13:04,"Before GAN, what are the commonly used techniques for image-to-image translation?",,1,0,,,,CC BY-SA 4.0 20036,2,,20001,4/6/2020 11:10,,0,,"

Okay, I figured it out by myself, simply by trial and error:

When you normalize your training and test data, you also need to normalize the input you want to have a prediction for. You will also need to denormalize the result, to get a reasonable prediction.

Specifically, in my case, this means:

  • First, I calculated the mean and the standard deviation, and then subtracted the mean and divided by the standard deviation, to normalize my training and test data.
  • So, to normalize the input I want to have a prediction for, I also subtract the mean and divide by the standard deviation.
  • Finally, when I get a result, I multiply by the standard deviation and add the mean.

This way, I get reasonable results 😊
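
In code, the procedure looks roughly like this (a minimal NumPy sketch with made-up data; the model itself is omitted and the prediction is a placeholder):

import numpy as np

# made-up data just for illustration
X_train = np.random.rand(100, 3) * 50.0
y_train = X_train @ np.array([2.0, -1.0, 0.5]) + 10.0

x_mean, x_std = X_train.mean(axis=0), X_train.std(axis=0)
y_mean, y_std = y_train.mean(), y_train.std()

X_norm = (X_train - x_mean) / x_std          # normalised features used for training
y_norm = (y_train - y_mean) / y_std          # normalised targets used for training

# ... train the model on (X_norm, y_norm) ...

x_new = np.array([[10.0, 20.0, 30.0]])
x_new_norm = (x_new - x_mean) / x_std        # normalise new inputs with the *training* statistics

y_pred_norm = 0.0                            # placeholder for the model's normalised prediction
y_pred = y_pred_norm * y_std + y_mean        # denormalise back to the original units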

",35739,,,,,4/6/2020 11:10,,,,5,,,,CC BY-SA 4.0 20037,1,,,4/6/2020 12:01,,1,96,"

I have been working on my own AI for a while now, trying to implement SGD with momentum from scratch in Python. After looking around and studying all the maths behind it, I finally managed to implement SGD in a neural network that I trained to recognize the classic MNIST digits dataset. As the activation function, I always used sigmoid for both hidden and output neurons, and everything seems to work more or less OK, but now I wanted to step it up a bit and try to let SGD operate with different activations, so I added 2 other functions to my code: relu and tanh. The behaviours I expected, based on articles, documentation and ""tutorials"" found online, were:
tanh: should be slightly better than sigmoid
relu: should be much better than sigmoid and tanh
(By better I mean faster convergence or at least higher accuracy at the end, or a mix of both.)

Using tanh, it looks like it's much slower at converging to a minimum compared to sigmoid.

Using relu... well, the results were very, VERY horrible. Here are the outputs with the different activations (learning rate: 0.1, epochs: 5, mini-batch size: 10, momentum: 0.9):

Sigmoid training

[Sigmoid for hidden layers, sigmoid for output layer]
Epoch: 1/5 (14.3271 s): Loss: 0.0685, Accuracy: 0.6231, Learning rate: 0.10000
Epoch: 2/5 (14.0060 s): Loss: 0.0503, Accuracy: 0.6281, Learning rate: 0.10000
Epoch: 3/5 (14.0081 s): Loss: 0.0482, Accuracy: 0.6382, Learning rate: 0.10000
Epoch: 4/5 (13.8516 s): Loss: 0.0471, Accuracy: 0.7085, Learning rate: 0.10000
Epoch: 5/5 (13.9411 s): Loss: 0.0374, Accuracy: 0.7990, Learning rate: 0.10000

Tanh training

[Tanh for hidden layers, sigmoid for output layer]
Epoch: 1/5 (13.7553 s): Loss: 0.3708, Accuracy: 0.4171, Learning rate: 0.10000
Epoch: 2/5 (13.7666 s): Loss: 0.2580, Accuracy: 0.4623, Learning rate: 0.10000
Epoch: 3/5 (13.5550 s): Loss: 0.2289, Accuracy: 0.4824, Learning rate: 0.10000
Epoch: 4/5 (13.7311 s): Loss: 0.2211, Accuracy: 0.5729, Learning rate: 0.10000
Epoch: 5/5 (13.6996 s): Loss: 0.2142, Accuracy: 0.5779, Learning rate: 0.10000

Relu training

[Relu for hidden layers, sigmoid for output layer]
Epoch: 1/5 (14.2100 s): Loss: 0.7725, Accuracy: 0.0854, Learning rate: 0.10000
Epoch: 2/5 (14.6218 s): Loss: 0.1000, Accuracy: 0.0854, Learning rate: 0.10000
Epoch: 3/5 (14.2116 s): Loss: 0.1000, Accuracy: 0.0854, Learning rate: 0.10000
Epoch: 4/5 (14.1657 s): Loss: 0.1000, Accuracy: 0.0854, Learning rate: 0.10000
Epoch: 5/5 (14.1427 s): Loss: 0.1000, Accuracy: 0.0854, Learning rate: 0.10000

Another run with relu

Epoch: 1/5 (14.7391 s): Loss: 15.4055, Accuracy: 0.1658, Learning rate: 0.10000
Epoch: 2/5 (14.8203 s): Loss: 59.2707, Accuracy: 0.1709, Learning rate: 0.10000
Epoch: 3/5 (15.3785 s): Loss: 166.1310, Accuracy: 0.1407, Learning rate: 0.10000
Epoch: 4/5 (14.9285 s): Loss: 109.9386, Accuracy: 0.1859, Learning rate: 0.10000
Epoch: 5/5 (15.1280 s): Loss: 158.9268, Accuracy: 0.1859, Learning rate: 0.10000

For these examples the number of epochs is just 5, but increasing the number of epochs doesn't change the results: tanh and relu, for me, perform worse than sigmoid.

Here is my python code reference for SGD:

SGD with momentum

This method was created to accept different activation functions to dynamically use them when creating the neural network object

The activation functions and their derivatives:

Activation functions and derivatives

The loss function i used is the mean squared error:

import numpy

# mean squared error, averaged over the samples in the batch
def mean_squared(output, expected_result):
    return numpy.sum((output - expected_result) ** 2) / expected_result.shape[0]

# error signal used for backpropagation (constant factors folded into the learning rate)
def mean_squared_derivative(output, expected_result):
    return output - expected_result

Is there some concept I am missing? Am I using the activation functions the wrong way? I really cannot find the answer to this, even after searching for a long time. I feel like the problem is somewhere in the backpropagation, but I can't find it. Any kind of help would be greatly appreciated.

PS: I hope I posted this in the right place; I am pretty new to asking questions here, so if there is any problem I will move the question somewhere else.

Edit:

I tried to implement this with TensorFlow, using relu for the hidden layers and sigmoid for the output. The results I get with this implementation are the same as the ones I mentioned in my question, so, unless I am doing something wrong in both situations, I am left to think I cannot use relu with sigmoid, which makes sense because relu can have very high values while sigmoid pushes them down between 0 and 1, therefore most of the time giving values very close to 1.
Code reference:
TensorFlow implementation

",35792,,35792,,4/9/2020 7:51,4/9/2020 7:51,"Stochastic gradient descent does not behave as expected, even with different activation functions",,1,2,,,,CC BY-SA 4.0 20039,2,,20035,4/6/2020 12:51,,2,,"

Image-to-image translation is the task of transferring an image's characteristics from one domain and representing it in another. GANs have provided an end-to-end method for this task. Prior to GANs, these tasks were done individually, mainly using classic image processing techniques, such as image denoising, finding edges in photos, or using web results to join various images.

As mentioned by P. Isola et al. in the CycleGAN paper, Hertzmann et al., in their paper Image Analogies, employed a non-parametric texture model on a single input-output training image pair.

In Image Quilting for Texture Synthesis and Transfer, the authors used existing patches of images and stitched them together.

In Data-driven Hallucination of Different Times of Day from a Single Outdoor Photo, here the authors compare the input image with an available dataset of time-lapse videos similar to the input and find the frame at the time of input image and frame for a target time. They use local affine transformation to change the scene of an input image into target image.

Recent works have focused on using a dataset of paired input-output examples to learn the translation function using CNNs for semantic segmentation, Fully convolutional networks for semantic segmentation

",35791,,,,,4/6/2020 12:51,,,,0,,,,CC BY-SA 4.0 20040,1,20043,,4/6/2020 12:58,,3,2234,"

The paper Assessment of Deep Generative Models for High-Resolution Synthetic Retinal Image Generation of Age-Related Macular Degeneration uses the term ""proxy data sets"" in this way

To develop DL techniques for synthesizing high-resolution realistic fundus images serving as proxy data sets for use by retinal specialists and DL machines.

I googled that term, but didn't find a definition of ""proxy data sets"". What are ""proxy data sets"" in machine learning?

The paper Analysis of Manufacturing Process Sequences, Using Machine Learning on Intermediate Product States (as Process Proxy Data) mentions a similar term

The advantage of the product state-based view is the focus on the product itself to structure the information and data involved throughout the process. Using the intermediate product states as proxy data for this purpose

Does ""proxy data"" mean the same thing as ""proxy data sets"" does?

",35782,,2444,,4/7/2020 19:28,4/7/2020 19:28,"What are ""proxy data sets"" in machine learning?",,1,0,,,,CC BY-SA 4.0 20042,1,,,4/6/2020 14:01,,1,44,"

How can I identify holes in a 3D CAD file? I want to identify different types of holes, e.g. counterbored or countersunk holes. My program lets me extract, for example, the faces and the adjacency of the faces. I am talking about Siemens NX, for example. You can see the different types of holes here: https://www.google.com/url?sa=i&url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D_VwwfDrdggc&psig=AOvVaw1kIYdOt2qxazSYFXwHuXpb&ust=1586329824363000&source=images&cd=vfe&ved=0CAIQjRxqFwoTCOjd1qHh1egCFQAAAAAdAAAAABAJ

",35799,,35799,,4/7/2020 7:11,4/7/2020 7:11,How I can identify holes in a 3D CAD file?,,0,1,,,,CC BY-SA 4.0 20043,2,,20040,4/6/2020 14:10,,5,,"

In computer science, if you say "A is a proxy for B", then it means that "A replaces B" (temporarily or not), or that "A is used as an intermediary for B".

The term "proxy" usually refers to a server, i.e. there are the so-called proxy servers, which intuitively do the same thing (i.e. they are used as intermediaries). The following picture is worth more than 1000 words.

Let's go back to your context. In the following paragraph

To develop DL techniques for synthesizing high-resolution realistic fundus images serving as proxy data sets for use by retinal specialists and DL machines.

The deep learning techniques will be used to generate data that will be used as a replacement for real data for use by retinal specialists.

In your second paragraph

The advantage of the product state-based view is the focus on the product itself to structure the information and data involved throughout the process. Using the intermediate product states as proxy data for this purpose

The word "intermediate" is even used, so there should not be any need for further explanation, although I admit that I have no idea of what "product states" are in this context, but they will be used as "proxy data" for other data.

Does "proxy data" mean the same thing as "proxy data sets" does, if yes, I will go through that paper.

I guess so, but bear in mind that "data" is more general than "data set".

I have found other slightly different uses of the expression "proxy data", not in the context of computer science, but in the context of paleoclimatology. For example, have a look at this web article What Are Proxy Data?.

Just for completeness, here are some dictionary definitions of proxy.

authority given to a person to act for someone else, such as by voting for them in an election

a person who you choose to do something officially for you

a situation, process, or activity to which another situation, etc. is compared

",2444,,-1,,6/17/2020 9:57,4/6/2020 14:10,,,,0,,,,CC BY-SA 4.0 20044,1,20046,,4/6/2020 14:50,,4,964,"

While looking at the mathematics of the back-propagation algorithm for a multi-layer perceptron, I noticed that in order to find the partial derivative of the cost function with respect to a weight (say $w$) from any of the hidden layers, we're just writing the error function from the final outputs in terms of the inputs and hidden layer weights and then canceling all the terms without $w$ in it as differentiating those terms with respect to $w$ would give zero.

Where is the back-propagation of error while doing this? This way, I can find the partial derivatives of the first hidden layer first and then go towards the other ones if I wanted to. Is there some other method of going about it so that the Back Propagation concept comes into play? Also, I'm looking for a general method/algorithm, not just for 1-2 hidden layers.

I'm fairly new to this and I'm just following what's being taught in class. Nothing I found on the internet seems to have proper notation so I can't understand what they're saying.

",35800,,2444,,4/6/2020 16:26,4/12/2022 8:36,Why is it called back-propagation?,,2,1,,,,CC BY-SA 4.0 20045,1,,,4/6/2020 14:59,,1,22,"

I am looking into exponentially damped signals that are stationary (according to the adfuller statistical test), and I would like to look into how I can extract meaningful features from the signal in order to do pattern recognition with machine learning. Can anyone guide me to articles/blogs on signal processing techniques and feature extraction for exponentially damped signals?

My situation:

I want to look into features that relate to the damping of the signal. I have already looked at the signal in the frequency domain and found that, across my datasets (considering the first 3 natural frequencies/modes), the peaks are almost the same (the frequency values deviate by only about [+] or [-] 0.5). Looking into the damping factor, I found that only the second damping ratio differed, and still by a small amount (around [+] or [-] 0.5). So I suspect it would be difficult for a machine learning model to distinguish between the cases. One of my ideas is to look into energy dissipation, as it might be related to damping, but I don't know how to approach it or in which domain I should work to obtain the features.

Side Question:

I have several questions regarding signal processing:

  • Say I have a signal and would like to extract features from it: what steps or points should I know in order to implement the signal processing? (I am using Python.)
  • When I used a signal-to-noise function I found online (Python) to compute the signal-to-noise ratio, I got a positive SNR. However, if I pass the signal through, for example, a band-pass filter to concentrate on a certain frequency band, I get a negative SNR. Why is that?
  • How can I extract features from the STFT? I also know about wavelets and the HHT; what are the uses of both algorithms, and how do I extract features from them?
",35795,,,,,4/6/2020 14:59,Feature extraction for exponentially damped signals,,0,2,,,,CC BY-SA 4.0 20046,2,,20044,4/6/2020 15:18,,2,,"

Have a look at the following article Principles of training multi-layer neural network using backpropagation. It was very useful to me.

You can also see here an example of backpropagation in Matlab. It effectively solves the XOR problem. You can also play around with the cost function or the learning rate. You may get surprising results! Does this answer your question?

",35660,,35660,,4/6/2020 16:34,4/6/2020 16:34,,,,2,,,,CC BY-SA 4.0 20048,2,,20044,4/6/2020 17:04,,8,,"

Why is it called back-propagation?

I don't think there is anything special here!

It's called back-propagation (BP) because, after the forward pass, you compute the partial derivative of the loss function with respect to the parameters of the network, which, in the usual diagrams of a neural network, are placed before the output of the network (i.e. to the left of the output if the output of the network is on the right, or to the right if the output of the network is on the left).

It's also called BP because it is just the application of the chain rule. Why is this interesting?

Let me answer this question with an example. Consider the function $y=e^{\sin(x^{2})}$. This is a composite function, i.e. a function composed of multiple simpler functions, which, in this case, are $e^x$, $\sin(x)$, $x^2$ and $x$. To compute the derivative of $y$ with respect to $x$, let's define the following variables

\begin{align} y &= f(u) = e^u,\\ u &= g(v) = \sin v = \sin(x^2),\\ v &= h(x) = x^2 \end{align}

The derivative of $y$ with respect to the variable $x$ is (according to the chain rule)

$$ \underset{\color{red}{\LARGE \rightarrow}}{ \frac{dy}{dx} = \frac{dy}{du} \color{green}{\cdot} \frac{du}{dv} \color{green}{\cdot} \frac{dv}{dx}} $$

If you read this equation from the left to the right, you can see that we are going backward (i.e. from the function $y$ to the function $v$). This is the same thing with BP!

Why is it called "chain rule"? Because you are chaining different partial derivatives. More specifically, you are multiplying them.

BP is also known as the reverse mode of automatic differentiation. Why? The automatic differentiation should be self-explanatory, given that the BP algorithm is just the computation of partial derivatives, and you do this automatically, i.e. with a program, rather than by hand. The expression "reverse mode" refers to the fact that we compute the derivatives from the outer function (which, in the example above, is $e^x$) to the inner function (which, in the example above, is $x$). The Wikipedia article related to automatic differentiation provides more details.
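To make this concrete, here is a minimal Python sketch that computes $\frac{dy}{dx}$ for $y = e^{\sin(x^2)}$ by multiplying the partial derivatives going backward, exactly as the chain rule above prescribes, and checks the result against a finite-difference approximation.

import numpy as np

def forward_and_backward(x):
    # forward pass through the composite function
    v = x ** 2
    u = np.sin(v)
    y = np.exp(u)
    # backward pass (reverse mode): start from the outer function
    dy_du = np.exp(u)   # derivative of e^u w.r.t. u
    du_dv = np.cos(v)   # derivative of sin(v) w.r.t. v
    dv_dx = 2 * x       # derivative of x^2 w.r.t. x
    dy_dx = dy_du * du_dv * dv_dx  # chain rule: multiply going backward
    return y, dy_dx

y, grad = forward_and_backward(1.5)

# sanity check with a numerical (finite-difference) approximation
eps = 1e-6
f = lambda x: np.exp(np.sin(x ** 2))
print(grad, (f(1.5 + eps) - f(1.5 - eps)) / (2 * eps))  # the two values should be very close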

What exactly are you back-propagating?

The partial derivative of the loss function $\mathcal{L}$ with respect to a parameter $w_i$, i.e. $\frac{\partial \mathcal{L}}{\partial w_i}$, intuitively, represents the "contribution" of the parameter $w_i$ to the loss. After having computed these partial derivatives (i.e. the gradient), you use gradient descent to update each parameter $w_i$ as follows

$$ w_i \leftarrow w_i - \gamma \frac{\partial \mathcal{L}}{\partial w_i} $$

where $\frac{\partial \mathcal{L}}{\partial w_i}$ represents what we propagated backward, namely the error (or loss) that the neural network makes.

This gradient descent step will hopefully make your network produce a smaller error next time.
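As a tiny illustration of this update rule, here is a sketch with a single made-up parameter and the loss $\mathcal{L}(w) = (w - 3)^2$ (so $\frac{\partial \mathcal{L}}{\partial w} = 2(w - 3)$); repeating the update drives the loss down.

w = 0.0       # initial parameter value
gamma = 0.1   # learning rate

for step in range(50):
    grad = 2 * (w - 3)    # partial derivative of the loss w.r.t. w
    w = w - gamma * grad  # the gradient descent update from the text

print(w)  # ends up close to 3, the minimiser of the loss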

The modern version of back-propagation was published (in 1970) by a Finnish master's student called Seppo Linnainmaa, but he didn't reference neural networks. This article by Jürgen Schmidhuber goes into the details of the history of BP.

",2444,,2444,user9947,4/12/2022 8:36,4/12/2022 8:36,,,,5,,,,CC BY-SA 4.0 20049,2,,20034,4/6/2020 17:58,,5,,"

The difference between the validation and test set in my opinion should be explained in this way:

  • the validation set is meant to be used multiple times.
  • the test set is meant to be used only once.

I think that the misunderstanding here arises because machine learning is mostly taught by focusing only on a specific part of a larger pipeline, namely model training. In every tutorial, standard datasets are used, so that you don't have to worry about data collection, data labelling (it's really sad to see that a lot of people have no clue about what inter-annotator agreement is) or data pre-processing; moreover, the part about the real application of the model is almost never mentioned.

The importance of having a set of instances that you can use for fine-tuning (validation) and a set of instances that your model never encountered, neither during training nor during fine-tuning (test), becomes particularly clear if you focus on the subsequent deployment of the model you trained. No one expects a model to have the same performance scores in training and when applied to some unknown data. The crucial point is that the performance of a model on the validation set is not representative of its behaviour on unknown data either, because the same validation data has been used to fine-tune the model! So here's why having a set of data completely new to the model is important: it gives you a much less biased view of the model's performance in a real use-case scenario.
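As a minimal sketch of this workflow (assuming scikit-learn and a generic labelled dataset), you could first set aside the test set, then split the remainder into training and validation sets, tune only on the validation set, and touch the test set once at the very end.

import numpy as np
from sklearn.model_selection import train_test_split

# dummy data standing in for a real labelled dataset
X = np.random.rand(1000, 10)
y = np.random.randint(0, 2, size=1000)

# carve out the test set first: it is meant to be used only once, at the end
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# split the rest into training and validation sets; the validation set
# can be reused many times while tuning hyperparameters
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

# tune/compare models on (X_val, y_val), pick one, and only then
# report its final score on (X_test, y_test)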

",34098,,,,,4/6/2020 17:58,,,,1,,,,CC BY-SA 4.0 20050,2,,20034,4/6/2020 18:18,,2,,"

Simply stated, you use your validation set to regularize your model for unseen data. Test data is completely unseen data, on which you evaluate your model.

Various validation strategies are used to improve your model's performance on unseen data, e.g. strategies like k-fold cross-validation. The validation set also helps you tune your hyperparameters, such as the learning rate, batch size, number of hidden units, number of layers, etc.

Train, Validation, Test sets help you in identifying whether you are underfitting or overfitting.

E.g. if the human error at a task is 1%, the training error is 8%, the validation error is 10%, and the test set error is 12%, then:

The difference between

  1. Human level and training set error tells you about ""Avoidable Bias""
  2. Training set error and Validation set error tells you about ""Variance and data mismatch""
  3. Validation set error and Test error tells you about ""degree of overfitting"" with the validation set.

Based on these metrics, you can apply appropriate strategies for better performance on validation or test sets.
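As a small worked example with the numbers above, the three gaps can be computed directly:

human_error = 1.0   # all values in %
train_error = 8.0
val_error = 10.0
test_error = 12.0

avoidable_bias = train_error - human_error  # 7.0 -> work on fitting the training set better
variance_gap = val_error - train_error      # 2.0 -> variance / data mismatch
val_overfit = test_error - val_error        # 2.0 -> degree of overfitting to the validation set

print(avoidable_bias, variance_gap, val_overfit)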

",35791,,,,,4/6/2020 18:18,,,,0,,,,CC BY-SA 4.0 20051,2,,19957,4/6/2020 18:45,,1,,"

The derivative $f'(x)$ is correlated with $f(x)$ in a certain sense. In fact, $f'(x)$ is a function of $f$, so we could even say that there's a cause-effect relationship.

The derivative at a specific point $c$ of the domain, i.e. $f'(c)$, can either be negative or positive. If $f'(c) > 0$, then $f$ is increasing at $c$ (with respect to an increase of $x$). If $f'(c) < 0$, then $f$ is decreasing at $c$ (with respect to an increase of $x$).

This can easily be seen from an example. Consider $f(x) = x^2$, then $f'(x) = 2x$. Let $c = 2$, then $f'(2) = 4 > 0$, so the function is increasing there. In fact, $f(1) = 1 \leq f(2) = 4 \leq f(3) = 9$. Similarly, let $c = -1$, then $f'(-1) = -2 < 0$, so the function is decreasing there. In fact, $f(-2) = 4 \geq f(-1) = 1 \geq f(0) = 0$ (note that the function is decreasing as we increase $x$!).

Consider a model with only one parameter, then the partial derivative of the loss function with respect to that parameter corresponds to the derivative of the loss function. So, the reasoning above applies to this model. What about a model with more than one parameter? The same thing happens.

If the function decreases, does its derivative also decrease? In general, no, and this can easily be seen from a plot of a function and its derivative. For example, consider a plot of a parabola and its derivative (which is a linear function).

On the left of the y-axis, the parabola is decreasing, but its derivative is increasing, while, on the right of the y-axis, the parabola is increasing and the linear function is still increasing.
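A minimal Python sketch that reproduces this kind of plot (using matplotlib, with $f(x) = x^2$ and $f'(x) = 2x$) is the following.

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-3, 3, 200)
f = x ** 2        # the parabola
f_prime = 2 * x   # its derivative, a linear function

plt.plot(x, f, label='f(x) = x^2')
plt.plot(x, f_prime, label='df/dx = 2x')
plt.axhline(0, color='gray', linewidth=0.5)
plt.axvline(0, color='gray', linewidth=0.5)
plt.legend()
plt.show()
# left of the y-axis: f decreases while its derivative increases (negative but growing)
# right of the y-axis: both f and its derivative increase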

This is the same thing with a loss function of an ML model and its partial derivative.

",2444,,,,,4/6/2020 18:45,,,,2,,,,CC BY-SA 4.0 20052,1,,,4/6/2020 19:42,,2,154,"

I'm watching the video Recurrent Neural Networks (RNN) | RNN LSTM | Deep Learning Tutorial | Tensorflow Tutorial | Edureka, where the author says that the LSTM and GRU architectures help to reduce the vanishing gradient problem. How do LSTM and GRU prevent the vanishing gradient problem?

",9863,,2444,,4/6/2020 19:45,10/17/2022 11:02,How do LSTM and GRU avoid to overcome the vanishing gradient problem?,,1,0,,,,CC BY-SA 4.0 20053,1,20108,,4/6/2020 20:20,,4,187,"

I'm developing my first neural network, using the well known MNIST database of handwritten digit. I want the NN to be able to classify a number from 0 to 9 given an image.

My neural network consists of three layers: the input layer (784 neurons, one for every pixel of the digit), a hidden layer of 30 neurons (it could also be 100 or 50, but I'm not too worried about hyperparameter tuning yet), and the output layer of 10 neurons, each one representing the activation for every digit. That gives me two weight matrices: one of 30x784 and a second one of 10x30.

I know and understand the theory behind back propagation, optimization and the mathematical formulas behind that, that's not a problem as such. I can optimize the weights for the second matrix of weights, and the cost is indeed being reduced over time. But I'm not able to keep propagating that back because of the matrix structure.

I know that I have to find the derivative of the cost w.r.t. the weights:

d(cost) / d(w) = d(cost) / d(f(z)) * d(f(z)) / d(z) * d(z) / d(w)

(Being f the activation function and z the dot product plus the bias of a neuron)

So I'm in the rightmost layer, with an output array of 10 elements. d(cost) / d(f(z)) is the difference between the observed and predicted values. I can multiply that by d(f(z)) / d(z), which is just f'(z) of the rightmost layer, also a one-dimensional vector of 10 elements, so now I have d(cost) / d(z) calculated. Then, d(z)/d(w) is just the input to that layer, i.e. the output of the previous one, which is a vector of 30 elements. I figured that I can transpose d(cost) / d(z) so that T( d(cost) / d(z) ) * d(z) / d(w) gives me a matrix of (10, 30), which makes sense because it matches the dimension of the rightmost weight matrix.

But then I get stuck. The dimension of d(cost) / d(f(z)) is (1, 10), for d(f(z)) / d(z) is (1, 30) and for d(z) / d(w) is (1, 784). I don't know how to come up with a result for this.

This is what I've coded so far. The incomplete part is the _propagate_back method. I'm not caring about the biases yet because I'm just stuck with the weights and first I want to figure this out.

import random
from typing import List, Tuple

import numpy as np
from matplotlib import pyplot as plt

import mnist_loader

np.random.seed(42)

NETWORK_LAYER_SIZES = [784, 30, 10]
LEARNING_RATE = 0.05
BATCH_SIZE = 20
NUMBER_OF_EPOCHS = 5000


def sigmoid(x):
    return 1 / (1 + np.exp(-x))


def sigmoid_der(x):
    return sigmoid(x) * (1 - sigmoid(x))


class Layer:

    def __init__(self, input_size: int, output_size: int):
        self.weights = np.random.uniform(-1, 1, [output_size, input_size])
        self.biases = np.random.uniform(-1, 1, [output_size])
        self.z = np.zeros(output_size)
        self.a = np.zeros(output_size)
        self.dz = np.zeros(output_size)

    def feed_forward(self, input_data: np.ndarray):
        input_data_t = np.atleast_2d(input_data).T
        dot_product = self.weights.dot(input_data_t).T[0]
        self.z = dot_product + self.biases
        self.a = sigmoid(self.z)
        self.dz = sigmoid_der(self.z)


class Network:

    def __init__(self, layer_sizes: List[int], X_train: np.ndarray, y_train: np.ndarray):
        self.layers = [
            Layer(input_size, output_size)
            for input_size, output_size
            in zip(layer_sizes[0:], layer_sizes[1:])
        ]
        self.X_train = X_train
        self.y_train = y_train

    @property
    def predicted(self) -> np.ndarray:
        return self.layers[-1].a

    def _normalize_y(self, y: int) -> np.ndarray:
        output_layer_size = len(self.predicted)
        normalized_y = np.zeros(output_layer_size)
        normalized_y[y] = 1.

        return normalized_y

    def _calculate_cost(self, y_observed: np.ndarray) -> int:
        y_observed = self._normalize_y(y_observed)
        y_predicted = self.layers[-1].a

        squared_difference = (y_predicted - y_observed) ** 2

        return np.sum(squared_difference)

    def _get_training_batches(self, X_train: np.ndarray, y_train: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
        train_batch_indexes = random.sample(range(len(X_train)), BATCH_SIZE)

        return X_train[train_batch_indexes], y_train[train_batch_indexes]

    def _feed_forward(self, input_data: np.ndarray):
        for layer in self.layers:
            layer.feed_forward(input_data)
            input_data = layer.a

    def _propagate_back(self, X: np.ndarray, y_observed: int):
        """"""
        der(cost) / der(weight) = der(cost) / der(predicted) * der(predicted) / der(z) * der(z) / der(weight)
        """"""
        y_observed = self._normalize_y(y_observed)
        d_cost_d_pred = self.predicted - y_observed

        hidden_layer = self.layers[0]
        output_layer = self.layers[1]

        # Output layer weights
        d_pred_d_z = output_layer.dz
        d_z_d_weight = hidden_layer.a  # Input to the current layer, i.e. the output from the previous one

        d_cost_d_z = d_cost_d_pred * d_pred_d_z
        d_cost_d_weight = np.atleast_2d(d_cost_d_z).T * np.atleast_2d(d_z_d_weight)

        output_layer.weights -= LEARNING_RATE * d_cost_d_weight

        # Hidden layer weights
        d_pred_d_z = hidden_layer.dz
        d_z_d_weight = X

        # ...

    def train(self, X_train: np.ndarray, y_train: np.ndarray):
        X_train_batch, y_train_batch = self._get_training_batches(X_train, y_train)
        cost_over_epoch = []

        for epoch_number in range(NUMBER_OF_EPOCHS):
            X_train_batch, y_train_batch = self._get_training_batches(X_train, y_train)

            cost = 0
            for X_sample, y_observed in zip(X_train_batch, y_train_batch):
                self._feed_forward(X_sample)
                cost += self._calculate_cost(y_observed)
                self._propagate_back(X_sample, y_observed)

            cost_over_epoch.append(cost / BATCH_SIZE)

        plt.plot(cost_over_epoch)
        plt.ylabel('Cost')
        plt.xlabel('Epoch')
        plt.savefig('cost_over_epoch.png')


training_data, validation_data, test_data = mnist_loader.load_data()
X_train, y_train = training_data[0], training_data[1]

network = Network(NETWORK_LAYER_SIZES, training_data[0], training_data[1])
network.train(X_train, y_train)

This is the code for mnist_loader, in case someone wanted to reproduce the example:

import pickle
import gzip


def load_data():
    f = gzip.open('data/mnist.pkl.gz', 'rb')
    training_data, validation_data, test_data = pickle.load(f, encoding='latin-1')
    f.close()

    return training_data, validation_data, test_data

",35806,,35806,,4/7/2020 17:06,4/9/2020 11:36,How to perform back propagation with different sized layers?,,1,0,,,,CC BY-SA 4.0 20054,1,,,4/6/2020 21:39,,2,170,"

From what I've figured

(a) converting mathematical theorems and proofs from English to formal logic is a straightforward job for mathematicians with sufficient background, except that it takes time.

(b) once converted to formal logic, computer verification of the proof becomes straightforward.

If we can automate (a), a lot of time and intellectual labour (that could be dedicated elsewhere) is saved in doing (b) on published research papers.

Note that if solving (a) in its entirety is hard, we could expect the mathematicians to meet the computer system halfway and avoid writing lengthy English paras that are hard to convert. If it becomes doable enough, submitting a formal logical version of your paper could even become a standard procedure that is expected.

An additional benefit of solving (a) would be the ability to run the process in reverse: mathematicians could delegate smaller tasks and lemmas (both trivial and non-trivial) to an automated theorem prover (ATP). Assisted theorem proving would become more popular and boost productivity, maybe even surprising us once in a while by coming up with proofs that the paper's author couldn't. This is even more valuable if we expect a sharp upward trajectory in the capability of ATPs in the future. If anything, this could be self-fulfilling, as the demonstration of the potential for good ATPs, combined with a large corpus of proofs and problems in formal logical format, could drive an increase in research on ATPs.

Forgive me if I sound like a salesman, but how doable is this? What will be the main challenges faced in developing NLP-based AI to convert papers, and how tractable are these challenges given today's state of the field?

P.S. I understand that proofs generated by ATPs are often hard to understand intuitively and can end up proving results without clearly exposing the underlying proof method used. But it is still a benefit to be able to use the final results.

",35810,,35810,,4/7/2020 8:13,4/19/2020 17:59,What are the challenges faced by using NLP to convert mathematical texts into formal logic?,,1,0,,,,CC BY-SA 4.0 20055,1,20064,,4/6/2020 21:46,,2,79,"

This is a picture of a recurrent neural network (RNN) found on a udemy course (Deep Learning A-Z). The axis at the bottom is ""time"".

In a time series problem, each yellow row from left to right would represent a sequence of a feature. In this picture, then, there are 6 sequences from 6 different features that are being fed to the network.

I am wondering if the arrows in this picture are completely accurate in an RNN. Shouldn't every yellow node also connect to every other blue node along its depth dimension? By depth dimension here I mean the third dimensional axis of the input tensor.

For example, the yellow node at the bottom left of this picture, which is closest to the viewer, should have an arrow pointing to all the blue nodes in the array of blue nodes that is at the very left, and not just to the blue node directly above it.

",35809,,35809,,4/7/2020 0:31,4/7/2020 4:49,Is this a correct visual representation of a recurrent neural network (RNN)?,,1,4,,,,CC BY-SA 4.0 20056,1,,,4/6/2020 21:56,,1,43,"

I am trying to perform binary classification of search results based on their relevance to the query. I followed this tutorial on how to make an SVM, and I got it to work with a small iris dataset. Now, I am attempting to use the LETOR 4.0 MQ2007 dataset by Microsoft for classification. Each sample has 21 input features as well as a score from 0 to 2. I mapped the score 0 to the class -1, and the scores 1 and 2 to the class 1. My algorithm reaches 57.4% accuracy after 1000 epochs with 500 samples of each class. My learning rate is 0.0001. Here is my code.

from tqdm import tqdm
import numpy as np
from sklearn.metrics import accuracy_score


print(""-------------------------------------"")
choice = input(""Train or Test: "")
print(""-------------------------------------"")

# HYPERPARAMETERS
feature_num = 21
epochs = 1000
sample_size = 500
learning_rate = 0.0001

if choice == ""Train"":

    out_file = open('weights.txt', 'w')
    out_file.close()

    print(""Serializing Train Data..."")

    # SERIALIZE DATA
    file = open('train.txt')
    train_set = file.read().splitlines()
    positive = []
    negative = []

    # GRAB TRAINING SAMPLES
    for i in train_set:
        if (i[0] == '1' or i[0] == '2') and len(positive) < sample_size:
            positive.append(i)
        if (i[0] == '0') and len(negative) < sample_size:
            negative.append(i)

    train_set = positive+negative
    file.close()

    features = []
    query = []

    # CREATE TRAINING VECTORS
    alpha = np.full(feature_num, learning_rate)
    weights = np.zeros((len(train_set), feature_num))
    output = np.zeros((len(train_set), feature_num))
    score = np.zeros((len(train_set), feature_num))

    for i in tqdm(range(len(train_set))):
        elements = train_set[i].split(' ')
        if int(elements[0]) == 0:
            score[i] = [-1] * feature_num
        else:
            score[i] = [1] * feature_num

        query.append(int(elements[1].split(':')[1]))
        tmp = []
        for feature in elements[2:2+feature_num]:
            if feature.split(':')[1] == 'NULL':
                tmp.append(0.0)
            else:
                tmp.append(float(feature.split(':')[1]))
        features.append(tmp)

    features = np.asarray(features)

    print(""-------------------------------------"")
    print(""Training Initialized..."")

    # TRAIN MODEL
    for i in tqdm(range(epochs)):

        # FORWARD y = sum(wx)
        for sample in range(len(train_set)):
            output[sample] = weights[sample]*features[sample]
            output[sample] = np.full((feature_num), np.sum(output[sample]))

        # NORMALIZE NEGATIVE SIGNS
        output = output*score
        # UPDATE WEIGHTS
        count = 0
        for val in output:
            if(val[0] >= 1):
                cost = 0
                weights = weights - alpha * (2 * 1/epochs * weights)
            else:
                cost = 1 - val[0]
                # WEIGHTS = WEIGHTS + LEARNING RATE * [X] * [Y]
                weights = weights + alpha * (features[count] * score[count] - 2 * 1/epochs * weights)

            count += 1

    # EXPORT WEIGHTS
    out_file = open('weights.txt', 'a+')
    for i in weights[0]:
        out_file.write(str(i)+'\n')
    out_file.close()

elif choice == ""Test"":

    print(""Serializing Test Data..."")

    # SERIALIZE DATA
    file = open('train.txt')
    train_set = file.read().splitlines()
    positive = []
    negative = []
    for i in train_set:
        if (i[0] == '1' or i[0] == '2') and len(positive) < sample_size:
            positive.append(i)
        if (i[0] == '0') and len(negative) < sample_size:
            negative.append(i)

    test_set = positive+negative

    file = open('weights.txt', 'r').read().splitlines()
    weights = np.zeros((len(test_set), feature_num))

    # CREATE TEST SET
    for i in range(len(weights)):
        weights[i] = file
    features = []
    query = []
    output = np.zeros((len(test_set), feature_num))
    score = np.zeros((len(test_set)))

    for i in tqdm(range(len(test_set))):
        elements = test_set[i].split(' ')
        if int(elements[0]) == 0:
            score[i] = -1
        else:
            score[i] = 1

        query.append(int(elements[1].split(':')[1]))
        tmp = []
        for feature in elements[2:2+feature_num]:
            if feature.split(':')[1] == 'NULL':
                tmp.append(0.0)
            else:
                tmp.append(float(feature.split(':')[1]))
        features.append(tmp)

    features = np.asarray(features)

    for sample in range(len(test_set)):
        output[sample] = weights[sample]*features[sample]
        output[sample] = np.full((feature_num), np.sum(output[sample]))

    predictions = []
    for val in output:
        if(val[0] > 1):
            predictions.append(1)
        else:
            predictions.append(-1)

    print(""-------------------------------------"")
    print(""Predicting..."")
    print(""-------------------------------------"")
    print(""Prediction finished with ""+str(accuracy_score(score, predictions)*100)+""% accuracy."")

My training algorithm

if(val[0] >= 1):
    cost = 0
    weights = weights - alpha * (2 * 1/epochs * weights)
else:
    cost = 1 - val[0]
    # WEIGHTS = WEIGHTS + LEARNING RATE * [X] * [Y]
    weights = weights + alpha * (features[count] * score[count] - 2 * 1/epochs * weights)

What could I do to help the model train? Am I not giving it enough time? Is the algorithm wrong? Are the hyperparameters ok? Thanks for all your help.

",4744,,2444,,4/7/2020 1:23,4/7/2020 1:23,Why is my SVM not reaching good accuracy when trained to perform binary classification of search results?,,0,2,,,,CC BY-SA 4.0 20057,1,,,4/6/2020 22:47,,1,63,"

Let's say we use an MLE estimator (implementation doesn't matter) and we have a training set. We assume that we have sampled the training set from a Gaussian distribution $\mathcal N(\mu, \sigma^2)$.

Now, we split the dataset into training, validation and test sets. The result will be that each will have maximum likelihoods for the following Gaussian distributions $\mathcal N(\mu_{training}, \sigma^2_{training}), \mathcal N(\mu_{validation}, \sigma^2_{validation}), \mathcal N(\mu_{test}, \sigma^2_{test})$.
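A quick NumPy sketch illustrates this premise: sampling from a single Gaussian and splitting the sample gives three different maximum-likelihood means (for a Gaussian, the MLE of the mean is just the sample mean), all of which generally differ from the true $\mu$.

import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0                       # the true distribution N(mu, sigma^2)
data = rng.normal(mu, sigma, size=300)

rng.shuffle(data)                          # a random 60/20/20 split
train, val, test = data[:180], data[180:240], data[240:]

# the three MLE means generally differ from mu and from each other
print(train.mean(), val.mean(), test.mean())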

Now, let's assume the case where $\mu_{validation}<\mu_{training}<\mu_{test}$ and $\mu_{training}<\mu<\mu_{test}$.

Clearly, if we perform validation using this split, then the model that gets selected will be closer to $\mu_{validation}$, which will worsen the performance on actual data, whereas if we only used the training set, the performance could actually be better (this is the simplest case without taking into account the effect of variance).

So, we will have $4!$ possible orderings of the means, and each one might improve or worsen the performance (in probably $50\%$ of the cases the performance will be worsened, assuming symmetry).

So, what am I missing here? Were my aforementioned assumptions wrong? Or does the validation set have a completely different purpose?

",,user9947,2444,user9947,12/12/2021 13:15,12/12/2021 13:15,What is the theoretical basis for the use of a validation set?,,1,0,,,,CC BY-SA 4.0 20059,1,20088,,4/7/2020 0:59,,1,591,"

Background:

I have a 2D CNN model that I am applying to a regression task with some uniquely extracted spectrograms. The specifics of the data set are mostly irrelevant and very domain-specific, so I won't go into detail, but it is essentially just image classification with an MSE loss function for each label and a unique image of 100x4000. When I re-train the model from scratch multiple times and then provide it with my testing data set, its predictions vary significantly across iterations, i.e. they have a high variance. Supposedly, the only differences between one trained model and another are the random initialization of the weights and the random train/validation split. I feel that the train/validation split has been ruled out, because when I've done k-fold cross-validation my model has done very well for all segments of my train/validation splits and acquired good results for the validation set in each split. But these same models continue to have high variance on the test data set.

Question:

If I am seeing a high variance for the predictions from my trained model across multiple different runs of re-training, what do I attack first to reduce my variance on my predictions for my test data set?

I've found many articles talking about bias and variance in the data set but not as much criticism directed towards model design. What things can I explore in my dataset or model design, and/or tools I can use to strengthen my model? Does my model need to be bigger/smaller?

Ideas/Solutions: A few Ideas I'd like to acquire some criticism for.

  1. Regularization applied to model such as L1/L2 Regularization, dropout, or early stopping.
  2. Data augmentation applied to dataset (inconveniently not an option right now, but in a more general scenario it could be).
  3. Bigger or smaller model?
  4. Is the random initialization of the weights actually very important? Maybe train multiple models and take the average of their collective answers to get the best prediction on real-world data (the test set); see the sketch after this list.
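A minimal sketch of idea 4 (assuming a list of already trained Keras-style models exposing a .predict() method) would simply average the per-model predictions:

import numpy as np

def ensemble_predict(models, X_test):
    # stack the predictions of each independently trained model ...
    predictions = np.stack([m.predict(X_test) for m in models], axis=0)
    # ... and average across models, which tends to reduce the across-model variance
    return predictions.mean(axis=0)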

Personal Note: I have had experience with all these items before on other projects and with personal projects and have some moderate confidence justifying regularization and data augmentation. However, I lack some perspective as to any other tools that might be useful to explore the cause of model variance. I wanted to ask this question here to start a discussion in a general sense of this problem.

Cheers

EDIT: CLARIFICATION. When I say 'variance' I mean specifically variance across models, not the variance of the predictions of one trained model across the test set. Example: let's say I am trying to predict a value somewhere between 1 and 4 (expected_val=3). I train 10 models to do this, and 4 of the models accurately predict 3 with a VERY low standard deviation across all the test set samples, thus a low variance and high accuracy/precision for these 4 models. But the other 6 models predict wildly: some confidently predict 1 every time, and others could predict 4. I've even had models that predicted negative values, even though I have NO training or testing samples with negative labels.

",33189,,,,,4/7/2020 17:20,"CNN High Variance across multiple trained models, what does it mean?",,1,0,,,,CC BY-SA 4.0 20061,1,,,4/7/2020 1:29,,2,290,"

I recently came across symbol-to-symbol and symbol-to-number differentiation, out of which symbol to symbol seemed fairly straightforward - the computational graph is extended to include gradient calculations and relationships between gradients.

I have a problem in understanding what exactly symbol-to-number differentiation is. Does it map directly every variable in the backprop to its relevant gradient? If yes, how does it do this without knowing about the rest of the computational graph?

If the question is unclear, to increase context - TensorFlow uses symbol to symbol differentiation whereas torch uses symbol to number (apparently).

Came across this in section 6.5.5 of Deep Learning, the book. The material mentioned has a convincing explanation for symbol-to-symbol differentiation but could not say the same for symbol-to-number differentiation.

",25658,,25658,,4/8/2020 8:12,4/8/2020 8:12,What is symbol-to-number differentiation?,,0,3,,,,CC BY-SA 4.0 20062,2,,19879,4/7/2020 3:13,,4,,"

Convolution is a pretty misused term in recent times with the advent of CNNs.

Short answer: convolution is a linear operator (check here), but what you are defining in the context of CNNs is not convolution; it is cross-correlation, which is also linear in the case of images (a dot product).

Convolution:

Computers work only on discrete values, so the discrete-time convolution is defined similarly as: $$ y[n] = \sum_{k=-\infty}^{\infty} x[k] h[n-k] $$

which has the nice property of

$$Y(e^{j\omega}) = X(e^{j \omega})H(e^{j \omega})$$

where each $A(e^{j\omega})$ is the Fourier Transform of $a(t)$ So this is the basic idea of discrete convolution. The point to illustrate here is that convolution operation has some nice properties and very helpful.

Now there are 2 main types of convolution (as far as I know). One is linear convolution, the other is circular convolution. In terms of a $2D$ signal, linear convolution is defined in the following way: $$y[m,n] = \sum_{i=-\infty}^{\infty}\sum_{j=-\infty}^{\infty} x[i,j]h[m-i,n-j]$$

Circular convolution is the same except that the input sequence is finite from $0$ to $N-1$ giving rise to periodicity in frequency domain. The convolution operation is a little bit different due to this periodicity.

So these are the actual definitions of convolution. It has the linearity property (clearly, since it is used to calculate the output of an LTI system), and it can also be expressed in terms of matrix multiplications, since we want the computer to do these calculations for us. There are many clever manipulations, e.g. the FFT algorithm, an indispensable tool in signal processing (used to convert signals to the frequency domain for a certain sampling rate). Similarly, you can define convolutions in terms of Hermitian matrices and the like (only if you want to process in the $n$ domain; it is much easier to process in the frequency domain).

For example, a $1D$ circular convolution between 2 signals in the $n$ domain, $y_n=h_n*x_n$, can be written in matrix form as $$\begin{bmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} h_0 & h_3 & h_2 & h_1\\ h_1 & h_0 & h_3 & h_2\\ h_2 & h_1 & h_0 & h_3 \\ h_3 & h_2 & h_1 & h_0 \end{bmatrix} \begin{bmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \end{bmatrix}$$

The same can be done quite easily by converting the functions into frequency domain, multiplying and converting back to time domain.
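A small NumPy sketch can verify both routes for the 4x4 example above: building the circulant matrix of $h$ and multiplying, versus multiplying in the frequency domain with the FFT and transforming back.

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([0.5, 1.0, 0.0, 2.0])

# circulant matrix of h, matching the 4x4 matrix shown above
H = np.array([[h[(i - j) % 4] for j in range(4)] for i in range(4)])
y_matrix = H @ x

# the same circular convolution via the convolution theorem (frequency domain)
y_fft = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))

print(y_matrix)
print(y_fft)  # both results should match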

When we move to the $2D$ domain, a similar equation is formed, except that (in the $n$ domain) instead of $h_0$ we will have a Hermitian matrix $H_0$ and we will perform the Kronecker product (I don't know the justification or proofs of these equations; they probably satisfy the convolution theorem and are fast when run on computers). Again, it is much easier to do in the frequency domain.

When we move to multiple dimensions, it is called multidimensional discrete convolution, as per this Wikipedia article. As the article suggests, the frequency-domain property $$Y(k_1,k_2,...,k_M)=H(k_1,k_2,...,k_M)X(k_1,k_2,...,k_M)$$ still holds. When we do convolutions in the $n$ domain, things get tricky and we have to do clever matrix manipulations, as shown in the example above, whereas, as stated above, things get much easier in the frequency domain.

It's counter-intuitive that a picture has a frequency domain, and in general it's actually not frequency. But in DSP we use a lot of filters whose mathematical properties are similar to those of filters in the traditional frequency sense, and hence they involve the same calculations as in a frequency setting, as shown in the first example with the $1D$ signal.

The point is that convolution, by definition, is a linear operation. Check here if the explanations are unconvincing. There are linear time-varying systems whose output may be determined by convolution, but then the equation is used at a certain point in time, i.e.: $$Y(e^{j\omega}, t) = H(e^{j\omega}, t)X(e^{j\omega} )$$

Whether it can be represented by matrix products in the $n$ domain or not, I cannot say, but, generalising, it should be possible; it will just involve increasingly complex matrix properties and operations.

Cross Correlation:

Cross-correlation is nothing like convolution; it has a bunch of interesting properties of its own, hence I do not like the two being mixed together. In signal processing, it is mainly related to finding energy (auto-correlation) and has other applications in communication channels.

In statistics, it is related to finding the correlation (a misused term here, since it can take any value, whereas the correlation coefficient takes values between -1 and 1) between 2 stochastic processes at different time instants, where we have multiple samplings, or ensembles, of the signal, over which the expectation is taken (see here). In CNNs, maybe a way to see this would be: the image dataset is an ensemble of a stochastic process, while the filter is another, fixed, process, and moving the filter over the image is the time delay.

In the context of digital image processing, cross-correlation is just a dot product, and hence it can be represented by matrices. Whether cross-correlation is linear or not in general is difficult to tell (at least I don't know), but in the context of image processing it seems linear. The bottom line is that it can definitely be implemented with matrices, as it is a dot product; whether it is linear in the true mathematical sense is doubtful.
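A minimal NumPy sketch of this sliding dot product (the operation CNN layers actually compute, with no kernel flipping) looks like this:

import numpy as np

def cross_correlate2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # each output value is the dot product between the kernel and
            # the image patch it currently overlaps (no flipping of the kernel)
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0  # a simple averaging filter
print(cross_correlate2d(image, kernel))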

The more intuitive way, in the context of CNNs, would be to view filters as just a bunch of shared weights for better regularisation, rather than as cross-correlation or convolution.

",,user9947,,user9947,4/13/2020 1:28,4/13/2020 1:28,,,,7,,,,CC BY-SA 4.0 20063,1,,,4/7/2020 3:39,,1,926,"

I think I understand the concept of face detection: a technique that specifies the locations of multiple objects in an image and draws bounding boxes around the targets.

The question is related to the concept of a landmark. For example, the bottom guy in the image above, pointed out by the red arrow, has 18 green dots on his face. Is any one of the dots a landmark?

What is the size of a landmark? What is the acceptable error in its position? For example, the landmark in the middle of his nose has to be within what kind of range?

Could someone please give a hint?

",35782,,2444,,4/7/2020 12:57,4/7/2020 15:48,What is a landmark in computer vision?,,1,0,,,,CC BY-SA 4.0 20064,2,,20055,4/7/2020 4:49,,0,,"

I will answer this question, leaving it open to challenge by anyone more knowledgeable.

The equations to update each layer of an RNN are

$h_t = \sigma(W_h x_t + U_h h_{t-1} + b_h)$

and

$y_t = \sigma(W_y h_t + b_y)$

where $h_t$ is the hidden layer (in blue in the picture) and $y_t$ is the output layer (red in the picture). The first equation says that every single component of the hidden layer vector, i.e. every unit in the hidden layer, is a function of a linear combination of the whole $x_t$ vector, which is the first yellow row along the depth axis. In other words, it depends on all the yellow input nodes on the bottom left in the picture.
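A minimal NumPy sketch of a single time step (the layer sizes and random values here are just for illustration) shows that each hidden unit mixes all the components of $x_t$:

import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_hidden = 6, 4   # e.g. 6 yellow input nodes and 4 blue hidden units

W_h = rng.normal(size=(n_hidden, n_inputs))   # input-to-hidden weights
U_h = rng.normal(size=(n_hidden, n_hidden))   # hidden-to-hidden (recurrent) weights
b_h = np.zeros(n_hidden)

x_t = rng.normal(size=n_inputs)   # the whole input column at time t
h_prev = np.zeros(n_hidden)

# W_h @ x_t mixes ALL input components into every hidden unit, hence every
# yellow node should be connected to every blue node at the same time step
h_t = np.tanh(W_h @ x_t + U_h @ h_prev + b_h)
print(h_t)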

Thus, technically, this picture is not correct: all the yellow nodes should have arrows pointing to all the blue nodes. Also, by similar reasoning, all the blue nodes in each subsequent step of the hidden layer should be connected to all the blue nodes of the previous step.

Of course, that would make for a much uglier/harder picture to make, so I don't blame the authors, although this has given me a few hours of confused research, which I guess still meets their educational goal.

",35809,,,,,4/7/2020 4:49,,,,0,,,,CC BY-SA 4.0 20065,1,20070,,4/7/2020 5:53,,1,45,"

My question is about when to balance training data for sentiment analysis.

Upon evaluating my training dataset, which has 3 labels (good, bad, neutral), I noticed there were twice as many neutral labels as the other 2 combined, so I used a function to drop neutral labels randomly.

However, I wasn't sure if I should do this before or after creating the vocab2index mappings.

To explain, I am numericising my text data by creating a vocabulary of the words in the training data and linking them to numbers using enumerate. I then use that dictionary of vocab2index values to numericise the training data. I also use that same dictionary to numericise the testing data, dropping any words that do not exist in the dictionary.

When I took a class on this, they balanced the training data AFTER creating the vocab2index dictionary. However, when I thought about this in my own implementation, it did not make sense. What if some words from the original vocabulary are gone completely? Then we aren't training the machine learning classifier on those words, but they would not be dropped from the testing data either (since words are dropped from X_test based on whether they are in the vocab2index dictionary).

So should I be balancing the data BEFORE creating the vocab2index dictionary?

I linked the code to create X_train and X_test below in case it helps.

def create_X_train(training_data='Sentences_75Agree_csv.csv'):
    data_csv = pd.read_csv(filepath_or_buffer=training_data, sep='.@', header=None, names=['sentence','sentiment'], engine='python')
    list_data = []
    for index, row in data_csv.iterrows():
        dictionary_data = {}
        dictionary_data['message_body'] = row['sentence']
        if row['sentiment'] == 'positive':
             dictionary_data['sentiment'] = 2
        elif row['sentiment'] == 'negative':
             dictionary_data['sentiment'] = 0
        else:
             dictionary_data['sentiment'] = 1 # For neutral sentiment
        list_data.append(dictionary_data)
    dictionary_data = {}
    dictionary_data['data'] = list_data
    messages = [sentence['message_body'] for sentence in dictionary_data['data']]
    sentiments = [sentence['sentiment'] for sentence in dictionary_data['data']]

    tokenized = [preprocess(sentence) for sentence in messages]
    bow = Counter([word for sentence in tokenized for word in sentence]) 
    freqs = {key: value/len(tokenized) for key, value in bow.items()} #keys are the words in the vocab, values are the count of those words

    # Removing 5 most common words from data
    high_cutoff = 5
    K_most_common = [x[0] for x in bow.most_common(high_cutoff)] 
    filtered_words = [word for word in freqs if word not in K_most_common]

    # Create vocab2index dictionary:
    vocab = {word: i for i, word in enumerate(filtered_words, 1)}
    id2vocab = {i: word for word, i in vocab.items()}
    filtered = [[word for word in sentence if word in vocab] for sentence in tokenized] 

    # Balancing training data due to large number of neutral sentences
    balanced = {'messages': [], 'sentiments':[]}
    n_neutral = sum(1 for each in sentiments if each == 1)
    N_examples = len(sentiments)
    # print(n_neutral/N_examples)
    keep_prob = (N_examples - n_neutral)/2/n_neutral
    # print(keep_prob)
    for idx, sentiment in enumerate(sentiments):
        message = filtered[idx]
        if len(message) == 0:
            # skip this sentence because it has length 0
            continue
        elif sentiment != 1 or random.random() < keep_prob:
            balanced['messages'].append(message)
            balanced['sentiments'].append(sentiment)

    token_ids = [[vocab[word] for word in message] for message in balanced['messages']]
    sentiments_balanced = balanced['sentiments']

    # Unit test:
    unique, counts = np.unique(sentiments_balanced, return_counts=True)
    print(np.asarray((unique, counts)).T)
    print(np.mean(sentiments_balanced))
    ##################

    # Left padding and truncating to the same length 
    X_train = token_ids
    for i, sentence in enumerate(X_train):
        if len(sentence) <=30:
            X_train[i] = ((30-len(sentence)) * [0] + sentence)
        elif len(sentence) > 30:
            X_train[i] = sentence[:30]
    return vocab, X_train, sentiments_balanced
def create_X_test(test_sentences, vocab):
    tokenized = [preprocess(sentence) for sentence in test_sentences]
    filtered = [[word for word in sentence if word in vocab] for sentence in tokenized] # X_test filtered to only words in training vocab
    # Alternate method with functional programming:
    # filtered = [list(filter(lambda a: a in vocab, sentence)) for sentence in tokenized]
    token_ids = [[vocab[word] for word in sentence] for sentence in filtered] # Numericise data

    # Remove short sentences in X_test
    token_ids_filtered = [sentence for sentence in token_ids if len(sentence)>10]
    X_test = token_ids_filtered
    for i, sentence in enumerate(X_test):
        if len(sentence) <=30:
            X_test[i] = ((30-len(sentence)) * [0] + sentence)
        elif len(sentence) > 30:
            X_test[i] = sentence[:30]
    return X_test
",35816,,2444,,4/7/2020 13:05,4/7/2020 13:05,Should I be balancing the data before creating the vocab-to-index dictionary?,,1,0,,,,CC BY-SA 4.0 20066,1,,,4/7/2020 6:59,,2,56,"

I've tried to do my research on Bayesian neural networks online, but I find most of them are used for image classification. This is probably due to the nature of Bayesian neural networks, which may be significantly slower than traditional artificial neural networks, so people don't use them for text (or document) classification. Am I right? Or is there a more specific reason for that?

Are bayesian neural networks suited for text (or document) classification?

",35817,,2444,,4/7/2020 13:13,4/7/2020 13:13,Are bayesian neural networks suited for text (or document) classification?,,0,2,,,,CC BY-SA 4.0 20067,2,,12390,4/7/2020 8:06,,2,,"

What you are trying to achieve is a game that learns to play Flappy Bird. For doing this, you need a neural network AND a genetic algorithm; those two things work together.
About your concern on the output: you don't have to know whether the action will be beneficial or not, and I will soon explain why.

The neural network part

So, what you need is to know how to build a neural network. I don't know your level of knowledge about it, but I suggest starting from the basics. In this scenario, you need a feed-forward neural network, because you just take the inputs from the current Flappy Bird scene/frame (such as the y position of the bird, the distance from the closest pipe, etc.) and feed them through a network that outputs either 1 or 0 (jump or don't jump) in the only output neuron we just decided it has.

In Python you can implement a neural network from scratch, or use a neural network framework that does all the dirty work for you.

  • From scratch, you would need to use numpy for matrix calculations, and you would need to learn matrix multiplication, dot products and all that fancy stuff (you can just let numpy take care of the matrix calculations, but understanding how it works behind the scenes always helps you understand new problems that you might come across when doing more advanced stuff)
  • Using a framework like Tensorflow for Python, the only thing you need to do is find the right structure for the network you want to use. You will not have to worry about how activations work, or how the feed forward is performed (but again, it's a good thing to know when working with neural nets)

The genetic algorithm part or """"learning""""

I say """"learning"""" because at first sight it might look like learning, but really it is not. The genetic algorithm works like ""the survival of the fittest"", where the ""smarter"" birds, which are the ones that reached the higher score on the current generation, will have a chance to have their child little birds, that have the same brain as their parent, with either some minimal modifications, or a mix of their parent brains.
The process of this """"learning"""", so the genetic algorithm, works like so:

  1. Create a generation of let's say 200 birds, every bird has a brain with random weights, so at the first run, they are all very...not smart
  2. The game starts, and on every frame of the game, the brain of the bird receives as input some data that is taken from the current frame (y position of the bird, distance from the pipe, ...)
  3. The brain ( neural network ) of each bird, performs a feed forward with that data, and outputs what at the beginning is a very random result, let's say 0.75 for one bird
  4. At this point you decided that 0.75 is greater than 0.5, so you take that as a 1, which stands for ""jump"", while if it was 0.3, so 0, the bird does nothing and keeps falling
  5. Shortly the bird will die cause he has no idea of what he is doing, so he most likely collides with a pipe or the ground.
  6. After all birds met their fate, you see that some birds reached further than others, so you choose, for example, 5 of the best performing ones.
  7. Now you try to create a new generation of 200 birds using only the brains of those 5 that were choosen, by mixing and modifying theyr brains
  8. Now the new birds have a brand new brain, that in some cases might be better than the previous one, so chances are that some of those birds will reach a higher score, therefore flap further into the level.
  9. Repeat from point 6

In practice your ""perform_genetic_algorithm"" function in python, will have to choose the birds with the highest score, and as wild as it sounds, mix their brains and modify them, hoping that some modifications will improve the performance of the bird.

I can't think of output since you don't really know if the action of flapping will benefit you or not

The mechanism above explains why you basically do not care at all about the output, except for saying to the game engine: ""hey the bird decided to flap, do it"". Whether it's the right action or not doesn't matter, as the smarter birds are naturally going to get further and so be more likely to be chosen for the next generation.
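A minimal Python sketch of these ideas (the feature names, the population size of 200 and the choice of keeping the 5 best birds are just the assumptions used above; any reasonable variant works) could look like this:

import numpy as np

rng = np.random.default_rng(0)

# a brain is a tiny feed-forward net: 4 inputs -> 1 output (flap if > 0.5);
# hypothetical inputs: bird y, bird velocity, distance to pipe, pipe gap y
def make_brain():
    return {'w': rng.normal(size=4), 'b': rng.normal()}

def decide(brain, inputs):
    out = 1.0 / (1.0 + np.exp(-(brain['w'] @ inputs + brain['b'])))  # sigmoid
    return out > 0.5  # True means flap

def mutate(brain, rate=0.1):
    # copy the parent brain and add small random changes to its weights
    return {'w': brain['w'] + rate * rng.normal(size=4),
            'b': brain['b'] + rate * rng.normal()}

def next_generation(population, scores, size=200):
    # keep the 5 best-scoring birds and refill the population with mutated copies
    best = [population[i] for i in np.argsort(scores)[-5:]]
    return [mutate(best[i % 5]) for i in range(size)]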

Hopefully it's all clearer now.
Here are some useful links for building a neural network and for understanding the genetic algorithm:

  • How to build a neural network: I am linking this because it contains all useful information about how to build a very basic neural network in python. In your case, you would have to ignore all the part about backpropagation, loss & error calculation and SGD, and just look at the feed forward part.
  • How to build a neural network - 2: This is another example of building a neural network that i found really useful, probably it's simpler and more straight forward than the previous link, but again, the backpropagation part is not needed for this genetic based learning.
  • Video tutorials on genetic algorithm: This is a very long but very explanatory playlist of videos that dives into the nature of genetic algorithms and how to implement one
  • Genetic algorithm optimization: Other source about genetic algorithms
",35792,,35792,,4/7/2020 9:03,4/7/2020 9:03,,,,0,,,,CC BY-SA 4.0 20068,2,,20028,4/7/2020 8:06,,2,,"

The second implementation looks more correct and in line with how Bidirectional is defined. Specifically, bidirectionality doesn't change the forward/backward logic of either direction, and just merges (concat/sum/...) the outputs of forward/backward at a matching timestep t.

You can check how Keras implements it here. There are distinct self.forward_layer and self.backward_layer that are initialized separately.

Your for loop doesn't look ok though. To calculate the output at time step 0, you'd have to calculate forward(0) and backward(n), which means you have to run the backward direction on all the samples first. In practice, each direction is calculated separately and the results are merged afterwards. Check the implementation in Keras here.
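A minimal NumPy sketch of that idea (the step functions here are made-up placeholders standing in for real forward/backward RNN cells) runs each direction over the whole sequence first and only then merges the outputs at matching time steps:

import numpy as np

def rnn_pass(x_seq, step_fn, hidden_size):
    h = np.zeros(hidden_size)
    outputs = []
    for x in x_seq:          # process the sequence in the order it is given
        h = step_fn(x, h)
        outputs.append(h)
    return outputs

# placeholder cells with their own (implicit) parameters, one per direction
def forward_step(x, h):
    return np.tanh(x + h)

def backward_step(x, h):
    return np.tanh(x - h)

x_seq = [np.ones(3) * t for t in range(5)]

fwd = rnn_pass(x_seq, forward_step, 3)                # timesteps 0 .. n-1
bwd = rnn_pass(x_seq[::-1], backward_step, 3)[::-1]   # run on the reversed input, then re-reverse

# merge the two directions at matching timesteps (here by concatenation)
merged = [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]
print(merged[0].shape)  # (6,) = 3 forward + 3 backward features for timestep 0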

",34315,,,,,4/7/2020 8:06,,,,0,,,,CC BY-SA 4.0 20069,1,,,4/7/2020 8:24,,2,20,"

How to work with GCN when the features of each node is not a 1D vector? For example, if the graph has N nodes and each node has features of the form $C \times D \times E$.

Also, is there an open-source implementation for such a case?

",35819,,2444,,4/7/2020 13:15,4/7/2020 13:15,How should I deal with multi-dimensional tensors for nodes in a graph convolution network?,,0,1,,,,CC BY-SA 4.0 20070,2,,20065,4/7/2020 8:26,,0,,"

If you look at the words in your dictionary (vocab) before/after pruning, most likely you'd see there isn't a lot of difference, not so much to affect your model performance.

In fact, creating a dictionary and model training are two more or less indpendent processes. To make your life easier, you could find the largest dev set you can find for building your vocab (excluding test set), and freeze it for all subsequent ETL/modeling. This way you don't have to deal with dictionary versioning, for example, after choosing different subsets of your training data.

Also, if you have the compute capacity, I'd suggest upsampling the positive/negative classes instead, because the neutral samples you're dropping do carry signal about the use of language and about which borderline ambiguous samples should be treated as neutral.

",34315,,,,,4/7/2020 8:26,,,,7,,,,CC BY-SA 4.0 20071,2,,19987,4/7/2020 9:32,,1,,"

Okay, I'm first going to review how NEAT works. I hope this helps you model NEAT successfully as a whole, not just limited to your question.

We use neuro-evolution to create a specific behavior that solves a given task. The behavior can be simple and complex.

Now let's focus on behavior... Different neural networks can create the same behavior (A.K.A the competing conventions problem).

We also want to end up with a neural network that is very efficient.

So we want to solve two problems with one algorithm (NEAT): find the behavior that solves a task and find the most efficient neural network that creates the behavior.

There's a simple way to search for the most efficient neural net: start with the simplest neural net and slowly build the neural net up.

The hard part is defining behavior (how do we define behavior in neural networks?). NEAT introduces their very interesting definition of behavior: an atomic unit of neural network behavior is the connection of the neural network: which node connects to which and with what weight.

Now I'm approaching your question(concern): you want to make sure your genomes can grow complex, which means that you want to preserve the capacity for your genome to express complex behavior. As the paper states, behavior does not have anything to do with nodes, let alone node genes. Behavior is a set of neural network connections.

So here's my answer to your question: care about crossing over the connections, not the node genes. Find an efficient way to calculate the node genes. For instance, you'll consume a lot of memory if you just concatenate the node genes from the two parents without handling overlaps. There is a chance that some of the node genes might not be in use (e.g. disabled connections, or connection genes not inherited due to probability).

Hope this helps :)

",35823,,,,,4/7/2020 9:32,,,,3,,,,CC BY-SA 4.0 20072,1,,,4/7/2020 9:36,,1,57,"

GANs have shown good progress across a wide variety of domains ranging from image translation, image generation, text to image synthesis, audio/video generation, image super-resolution and many more.

Although these concepts have great research potential, what are some real-world products or applications that can be developed using GANs?

A few I know are drug discovery and product customization. What else can you suggest?

",35791,,2444,,4/8/2020 13:26,4/8/2020 13:26,What are some real-world products or applications that can be developed using GANs?,,0,3,,,,CC BY-SA 4.0 20073,1,20097,,4/7/2020 10:36,,2,46,"

I am having a hard time understanding the proof of theorem 1 presented in the "Off-Policy Temporal-Difference Learning with Function Approximation" paper.

Let $\Delta \theta$ and $\Delta \bar{\theta}$ be the sum of the parameter increments over an episode under on-policy $T D(\lambda)$ and importance sampled $T D(\lambda)$ respectively, assuming that the starting weight vector is $\theta$ in both cases. Then

$E_{b}\left\{\Delta \bar{\theta} | s_{0}, a_{0}\right\}=E_{\pi}\left\{\Delta \theta | s_{0}, a_{0}\right\}, \quad \forall s_{0} \in \mathcal{S}, a_{0} \in \mathcal{A}$

We know that: $$ \begin{aligned} &\Delta \theta_{t}=\alpha\left(R_{t}^{\lambda}-\theta^{T} \phi_{t}\right) \phi_{t}\\ &R_{t}^{\lambda}=(1-\lambda) \sum_{n=1}^{\infty} \lambda^{n-1} R_{t}^{(n)}\\ &R_{t}^{(n)}=r_{t+1}+\gamma r_{t+2}+\cdots+\gamma^{n-1} r_{t+n}+\gamma^{n} \theta^{T} \phi_{t+n} \end{aligned} $$

and $$\Delta \bar{\theta_{t}}=\alpha\left(\bar{R}_{t}^{\lambda}-\theta^{T} \phi_{t}\right) \phi_{t} \rho_{1} \rho_{2} \cdots \rho_{t}$$ $$ \begin{aligned} \bar{R}_{t}^{(n)}=& r_{t+1}+\gamma r_{t+2} \rho_{t+1}+\cdots \\ &+\gamma^{n-1} r_{t+n} \rho_{t+1} \cdots \rho_{t+n-1} \\ &+\gamma^{n} \rho_{t+1} \cdots \rho_{t+n} \theta^{T} \phi_{t+n} \end{aligned} $$

And it is proven that: $$ E_{b}\left\{\bar{R}_{t}^{\lambda} | s_{t}, a_{t}\right\}=E_{\pi}\left\{R_{t}^{\lambda} | s_{t}, a_{t}\right\} $$

Here is the proof, it begins with:

$E_{b}\{\Delta \bar{\theta}\}=E_{b}\left\{\sum_{t=0}^{\infty} \alpha\left(\bar{R}_{t}^{\lambda}-\theta^{T} \phi_{t}\right) \phi_{t} \rho_{1} \rho_{2} \cdots \rho_{t}\right\}$ $=E_{b}\left\{\sum_{t=0}^{\infty} \sum_{n=1}^{\infty} \alpha(1-\lambda) \lambda^{n-1}\left(\bar{R}_{t}^{(n)}-\theta^{T} \phi_{t}\right) \phi_{t} \rho_{1} \rho_{2} \cdots \rho_{t}\right\}$.

which I believe is incorrect since,

$E_{b}\{\Delta \bar{\theta}\}=E_{b}\left\{\sum_{t=0}^{\infty} \alpha\left(\bar{R}_{t}^{\lambda}-\theta^{T} \phi_{t}\right) \phi_{t} \rho_{1} \rho_{2} \cdots \rho_{t}\right\}$ $=E_{b}\left\{\sum_{t=0}^{\infty} \alpha \left(\sum_{n=1}^{\infty}(1-\lambda) \lambda^{n-1}\bar{R}_{t}^{(n)}-\theta^{T} \phi_{t}\right) \phi_{t} \rho_{1} \rho_{2} \cdots \rho_{t}\right\}$.

and taking out the second sigma will lead to a sum over constant terms.

Furthermore, it is claimed that in order to prove the equivalence above, it is enough to prove the equivalence below: $$ \begin{array}{c} E_{b}\left\{\sum_{t=0}^{\infty}\left(\bar{R}_{t}^{(n)}-\theta^{T} \phi_{t}\right) \phi_{t} \rho_{1} \rho_{2} \cdots \rho_{t}\right\} \\ =E_{\pi}\left\{\sum_{t=0}^{\infty}\left(R_{t}^{(n)}-\theta^{T} \phi_{t}\right) \phi_{t}\right\} \end{array} $$

I don't understand why this is the case, and, even if it is, there are more ambiguities in the proof:

$E_{b}\left\{\sum_{t=0}^{\infty}\left(\bar{R}_{t}^{(n)}-\theta^{T} \phi_{t}\right) \phi_{t} \rho_{1} \rho_{2} \cdots \rho_{t}\right\}$ $$=\sum_{t=0}^{\infty} \sum_{\omega \in \Omega_{t}} p_{b}(\omega) \phi_{t} \prod_{k=1}^{t} \rho_{k} E_{b}\left\{\bar{R}_{t}^{(n)}-\theta^{T} \phi_{t} | s_{t}, a_{t}\right\}$$ (given the Markov property, and I don't understand why Markovian property leads to conditional independence !) $$=\sum_{t=0}^{\infty} \sum_{\omega \in \Omega_{t}} \prod_{j=1}^{t} p_{s_{j-1}, s_{j}}^{a_{j-1}} b\left(s_{j}, a_{j}\right) \phi_{t} \prod_{k=1}^{t} \frac{\pi\left(s_{k}, a_{k}\right)}{b\left(s_{k}, a_{k}\right)} \cdot \left(E_{b}\left\{\bar{R}_{t}^{(n)} | s_{t}, a_{t}\right\}-\theta^{T} \phi_{t}\right)$$

$$= \sum_{t=0}^{\infty} \sum_{\omega \in \Omega_{t}} \prod_{j=1}^{t} p_{s_{j-1}, s_{j}}^{a_{j-1}} \pi\left(s_{j}, a_{j}\right) \phi_{t} \cdot\left(E_{b}\left\{\bar{R}_{t}^{(n)} | s_{t}, a_{t}\right\}-\theta^{T} \phi_{t}\right)$$

$$=\sum_{t=0}^{\infty} \sum_{\omega \in \Omega_{t}} p_{\pi}(\omega) \phi_{t}\left(E_{\pi}\left\{R^{(n)} | s_{t}, a_{t}\right\}-\theta^{T} \phi_{t}\right)$$ (using our previous result) $$=E_{\pi}\left\{\sum_{t=0}^{\infty}\left(R_{t}^{(n)}-\theta^{T} \phi_{t}\right) \phi_{t}\right\} . \diamond$$

I'd be grateful if anyone could shed some light on this.

",35827,,-1,,6/17/2020 9:57,4/8/2020 11:03,"Equivalence between expected parameter increments in ""Off-Policy Temporal-Difference Learning with Function Approximation""",,1,0,,,,CC BY-SA 4.0 20074,2,,20037,4/7/2020 11:56,,1,,"

Could you post the pseudocode of your backpropagation algorithm? I recommend you start off as simple as possible (this includes your cost f(x); I would simply use Y_expected - Y_output), see if it works, and then continue adding things. If it's your first time with neural networks, I recommend you check this link out, and you could also try practising the algorithms in a programming language like Octave/MATLAB (it can be very efficient speed-wise). Also check this question out (link); at the bottom there is a code example for the XOR problem. Please post the pseudocode of your code instead of just dumping it there. Finally, don't just copy-paste algorithms into your code; you need to understand them.

",35660,,,,,4/7/2020 11:56,,,,6,,,,CC BY-SA 4.0 20075,1,20084,,4/7/2020 12:05,,53,61486,"

I am reading the article How Transformers Work where the author writes

Another problem with RNNs, and LSTMs, is that it’s hard to parallelize the work for processing sentences, since you have to process word by word. Not only that but there is no model of long and short-range dependencies.

Why exactly does the transformer do better than RNN and LSTM in long-range context dependencies?

",9863,,2444,,4/7/2020 16:08,1/26/2022 11:24,Why does the transformer do better than RNN and LSTM in long-range context dependencies?,,4,1,,,,CC BY-SA 4.0 20076,1,,,4/7/2020 12:16,,3,249,"

I am trying to understand the formulation of the maximum entropy inverse RL method by Brian Ziebart. In particular, I am stuck on how to understand the computation of state-visitation frequencies.

In order to do so, they utilize a dynamic programming approach in which the visitation frequency at the next time step is calculated from the state-visitation frequencies at the previous time step.

This is the algorithm below, where $D_{s_i,t}$ is the probability of state $s_i$ being visited at time step $t$.
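
To make sure we are talking about the same computation, here is roughly how I understand that forward pass (my own notation and variable names, not the paper's pseudocode):

import numpy as np

def expected_state_visitation(P, policy, p0, T):
    '''P[s, a, k]: transition probabilities, policy[s, a]: stochastic policy,
    p0[s]: initial state distribution, T: time horizon.'''
    n_states = P.shape[0]
    D = np.zeros((n_states, T))
    D[:, 0] = p0
    for t in range(T - 1):
        # propagate the probability mass one step forward under the policy
        D[:, t + 1] = np.einsum('s,sa,sak->k', D[:, t], policy, P)
    return D.sum(axis=1)  # expected visitation counts over the horizon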

What is the difference between this way of computing the state-visitation frequency and the naive method of counting the total number of times state $s_i$ appears in the trajectory, divided by the trajectory length?

",32780,,2444,,4/7/2020 13:25,10/27/2022 12:03,"How is the state-visitation frequency computed in ""Maximum Entropy Inverse Reinforcement Learning""?",,1,0,,,,CC BY-SA 4.0 20081,1,,,4/7/2020 14:34,,1,136,"

What is the benefit of a test data set, especially for naive bayes estimator or decision tree construction?

When using a naive Bayes classifier, the probabilities are fully determined by the training data. As far as I know, there is nothing one could tune (like the weights in a neural net). So what is the purpose of the test data set? Simply to know whether one can apply naive Bayes or not?

Similarly, what is the benefit of the test data set when constructing a decision tree? We already use the Gini impurity to construct the best possible decision tree, and there is nothing we could do when we get bad results on the test data set.

",27777,,27777,,4/24/2020 23:20,4/24/2020 23:20,What is the meaning of test data set in naive bayes classifier or decision trees?,,6,1,,,,CC BY-SA 4.0 20082,2,,20063,4/7/2020 15:33,,2,,"

The paper A Brief Introduction to Statistical Shape Analysis (2002) by M. B. Stegmann and D. D. Gomez provides a definition of a landmark in the context of statistical shape analysis, which I will report below.

Definition 1: Shape is all the geometrical information that remains when location, scale and rotational effects are filtered out from an object.

For example, in the following diagram, the shape of a hand, after several transformations (i.e. translation, scaling and rotation), is illustrated. This definition captures the intuitive notion of the shape of an object. In fact, as expected, after these transformations, all of the objects below correspond to a hand.

Definition 2: A landmark is a point of correspondence on each object that matches between and within populations.

In your picture, all of the faces (apart from the occluded one) have 18 green points. Each of these points roughly corresponds to the same position in the face. For example, there's a green point on the tip of the nose of each face. In your picture, the populations refer to the faces.

You can divide landmarks into three groups

  • Anatomical landmarks Points assigned by an expert that corresponds between organisms in some biologically meaningful way.

  • Mathematical landmarks Points located on an object according to some mathematical or geometrical property, i.e. high curvature or an extremum point.

  • Pseudo-landmarks Constructed points on an object either on the outline or between landmarks.

The landmarks in your picture could be considered anatomical landmarks, so they might have been assigned by an expert (a human), which should explain why they do not exactly correspond. Moreover, note that e.g. noses have different shapes across humans, so correspondences are unlikely to be exact in most cases.

What is the size of a landmark?

This is just an educated guess, but a landmark is probably just a point (i.e. a coordinate of a pixel in the image), although in your picture they look bigger, but this is probably to make them more noticeable.

What is the acceptable error of its position?

This is also a guess, but this might depend on the application and the used algorithms, i.e., in certain cases, your algorithms may be very sensitive to the position of the landmarks, in other cases, that might not be a big problem.

",2444,,-1,,6/17/2020 9:57,4/7/2020 15:48,,,,0,,,,CC BY-SA 4.0 20084,2,,20075,4/7/2020 16:12,,44,,"

I'll list some bullet points of the main innovations introduced by transformers, followed by bullet points of the main characteristics of the other architectures you mentioned, so we can then compare them.

Transformers

Transformers (Attention is all you need) were introduced in the context of machine translation with the purpose of avoiding recursion, in order to allow parallel computation (to reduce training time) and also to reduce drops in performance due to long dependencies. The main characteristics are:

  • Non-sequential: sentences are processed as a whole rather than word by word.
  • Self-attention: this is the newly introduced 'unit' used to compute similarity scores between words in a sentence.
  • Positional embeddings: another innovation introduced to replace recurrence. The idea is to use fixed or learned weights which encode information related to a specific position of a token in a sentence.

The first point is the main reason why transformers do not suffer from long dependency issues. The original transformers do not rely on past hidden states to capture dependencies with previous words; they instead process a sentence as a whole, which is why there is no risk of losing (or "forgetting") past information. Moreover, multi-head attention and positional embeddings both provide information about the relationship between different words.
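
As a rough illustration (a sketch of the idea, not the exact implementation in the paper), the core of self-attention is just a scaled dot-product between query, key and value projections of the whole sentence at once:

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    '''X: (seq_len, d_model) token embeddings (plus positional embeddings).
    Wq, Wk, Wv: (d_model, d_k) learned projection matrices.'''
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # (seq_len, seq_len): every token attends to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the whole sequence
    return weights @ V                               # each output mixes information from all positions

Since the attention weights connect every position to every other position in a single step, no information has to survive a long chain of recurrent updates.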

RNN / LSTM

Recurrent neural networks and Long-short term memory models, for what concerns this question, are almost identical in their core properties:

  • Sequential processing: sentences must be processed word by word.
  • Past information retained through past hidden states: sequence-to-sequence models follow the Markov property: each state is assumed to depend only on the previously seen state.

The first property is the reason why RNNs and LSTMs can't be trained in parallel: in order to encode the second word in a sentence, I need the previously computed hidden state of the first word, therefore I need to compute that first. The second property is a bit more subtle, but not hard to grasp conceptually. Information in RNNs and LSTMs is retained thanks to previously computed hidden states. The point is that the encoding of a specific word is retained only for the next time step, which means that the encoding of a word strongly affects only the representation of the next word, so its influence is quickly lost after a few time steps. LSTMs (and also GRUs) can boost the dependency range they can learn a bit, thanks to deeper processing of the hidden states through specific units (which comes with an increased number of parameters to train), but the problem is nevertheless inherent to recursion. Another way in which people have mitigated this problem is to use bi-directional models, which encode the same sentence both from start to end and from end to start, allowing words at the end of a sentence to have a stronger influence on the hidden representation; however, this is just a workaround rather than a real solution for very long dependencies.

CNN

Convolutional neural networks are also widely used in NLP, since they are quite fast to train and effective with short texts. The way they tackle dependencies is by applying different kernels to the same sentence, and indeed since their first application to text (Convolutional Neural Networks for Sentence Classification) they have been implemented as multichannel CNNs. Why do different kernels allow learning dependencies? Because a kernel of size 2, for example, learns relationships between pairs of words, a kernel of size 3 captures relationships between triplets of words, and so on. The evident problem here is that the number of different kernels required to capture dependencies among all possible combinations of words in a sentence would be enormous and impractical, because the number of combinations grows exponentially with the maximum length of the input sentences.

To summarize, transformers are better than all the other architectures because they totally avoid recursion, by processing sentences as a whole and by learning relationships between words thanks to multi-head attention mechanisms and positional embeddings. Nevertheless, it must be pointed out that transformers, too, can only capture dependencies within the fixed input size used to train them, i.e. if I use a maximum sentence size of 50, the model will not be able to capture dependencies between the first word of a sentence and words that occur more than 50 words later, such as in another paragraph. Newer transformers like Transformer-XL try to overcome exactly this issue by re-introducing a form of recursion: storing the hidden states of already encoded sentences so that they can be leveraged in the subsequent encoding of the next sentences.

",34098,,52242,,1/26/2022 9:18,1/26/2022 9:18,,,,1,,,,CC BY-SA 4.0 20085,2,,20075,4/7/2020 16:20,,9,,"

Let's start with RNNs. A well-known problem is vanishing/exploding gradients, which means that the model is biased toward the most recent inputs in the sequence; in other words, older inputs have practically no effect on the output at the current step.

LSTMs/GRUs mainly try to solve this problem by including a separate memory (cell) and/or extra gates to learn when to let go of past/current information. Check this series of lectures for a more in-depth discussion. Also check the interactive parts of this article for some intuitive understanding of the dependency on past elements.

Now, given all this, information from past steps still goes through a sequence of computations and we're relying on these new gate/memory mechanisms to pass information from old steps to the current one.

One major advantage of the transformer architecture is that at each step we have direct access to all the other steps (self-attention), which practically leaves no room for information loss, as far as message passing is concerned. On top of that, we can look at both future and past elements at the same time, which also brings the benefit of bidirectional RNNs, without the 2x computation needed. And of course, all this happens in parallel (non-recurrent), which makes both training and inference much faster.

The self-attention with every other token in the input means that the processing will be in the order of $\mathcal{O}(N^2)$ (glossing over details), which means that it's going to be costly to apply transformers on long sequences, compared to RNNs. That's probably one area where RNNs still have an advantage over transformers.

",34315,,2444,,4/7/2020 16:25,4/7/2020 16:25,,,,2,,,,CC BY-SA 4.0 20086,1,,,4/7/2020 16:58,,1,204,"

I've been trying to hyper-tune my KNNBasic algorithm with the help of grid search, for a recommendation system on movie review data. The problem is that both my KNNBasicTuned and KNNBasicUntuned show the same result. Here is my code for the KNN tuning. I have tried tuning the SVD algorithm and it worked perfectly, so all my libraries are working. All my libraries are in my GitHub link: https://github.com/iSarcastic99/KNNBasicTuning

Code of KNNBasicTuning:

# -*- coding: utf-8 -*-
""""""
Created on Sat Apr  4 01:25:40 2020

@author: rahulss
""""""
#My libraries

from MovieLens import MovieLens    
from surprise import KNNBasic
from surprise import NormalPredictor
from Evaluator import Evaluator
from surprise.model_selection import GridSearchCV

import random
import numpy as np

#loading my working data
def LoadMovieLensData():
    ml = MovieLens()
    print(""Loading movie ratings..."")
    data = ml.loadMovieLensLatestSmall()
    print(""\nComputing movie popularity ranks so we can measure novelty later..."")
    rankings = ml.getPopularityRanks()
    return (ml, data, rankings)

np.random.seed(0)
random.seed(0)

# Load up common data set for the recommender algorithms
(ml, evaluationData, rankings) = LoadMovieLensData()

print(""Searching for best parameters..."")
param_grid = {'n_epochs': [10, 30], 'lr_all': [0.005, 0.010],
          'n_factors': [50, 90]}
gs = GridSearchCV(KNNBasic, param_grid, measures=['rmse', 'mae'], cv=3)

gs.fit(evaluationData)

# best RMSE score
print(""Best RMSE score attained: "", gs.best_score['rmse'])

# combination of parameters that gave the best RMSE score
print(gs.best_params['rmse'])

# Construct an Evaluator to, you know, evaluate them
evaluator = Evaluator(evaluationData, rankings)

params = gs.best_params['rmse']
KNNBasictuned = KNNBasic(n_epochs = params['n_epochs'], lr_all =  params['lr_all'], n_factors = params['n_factors'])
evaluator.AddAlgorithm(KNNBasictuned, ""KNN - Tuned"")

KNNBasicUntuned = KNNBasic()
evaluator.AddAlgorithm(KNNBasicUntuned, ""KNN - Untuned"")


# Evaluating all algorithms
evaluator.Evaluate(False)

evaluator.SampleTopNRecs(ml, testSubject=85, k=10)
",25685,,,,,4/7/2020 16:58,Why can't I Hyper tune my KNNBasic Algorithm?,,0,6,,,,CC BY-SA 4.0 20087,2,,20081,4/7/2020 17:09,,1,,"

Your assumption about the test data is not completely correct. You might use the test data to tune your learning algorithm to work better on it, but that's not the whole story. Sometimes you need to know whether the ML method is working or not, and get a sense of how well it works!

There are other scenarios in which you want to evaluate your method:

  1. Comparing the result of the learner with other techniques. For example, you are considering a DT versus an SVM classifier over a data set. If you want to compare them, you need a value on held-out data on which to base the comparison.

  2. Sometimes you are using an ensemble method and you want to tune some parameters to balance between the different ML methods. Hence, you need to evaluate these learning methods (DT, naive Bayes) to improve the ensemble method.

",4446,,,,,4/7/2020 17:09,,,,0,,,,CC BY-SA 4.0 20088,2,,20059,4/7/2020 17:20,,0,,"

Firstly, if you're doing spectrogram classification, you'd probably want to use a loss function like cross-entropy. That would give you less ""high-variance""-looking results when you're calculating your evaluation metrics.

About your ideas:

  • Dropout and early stopping would be enough

  • Data augmentation with speech is quite helpful, and there are a few libraries to help you with volume/speed/noise augmentation (a minimal sketch is included after this list). Volume/speed augmentation is actually really important and you'd want to do it anyway for decent results in production. Speech processing is sometimes a bit tricky, in the sense that what we hear may not be what ends up in the spectrogram, and parameters like noise, recording setup, encoding and accent end up causing a domain mismatch (covariate shift).

  • In general, a smaller model (fewer parameters) helps with high variance. Start with small and simple models first, make sure the eval metrics look OK, and then try larger models.
  • Random initialization doesn't really affect the results in a systematic way. You can indeed ensemble predictions of models trained with different initializations; that does help with variance, but don't expect a lot of difference. The gain is not worth the computation cost for models applied in practice (i.e. not Kaggle).
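
A minimal sketch of the kind of waveform augmentation mentioned above (assuming a 1-D numpy waveform; a real pipeline would use a dedicated audio library):

import numpy as np

def augment_waveform(x, rng):
    '''x: 1-D float waveform in [-1, 1]. Returns a randomly perturbed copy.'''
    x = x * rng.uniform(0.5, 1.5)                    # random gain (volume)
    x = x + rng.normal(0.0, 0.005, size=x.shape)     # additive background noise
    speed = rng.uniform(0.9, 1.1)                    # crude speed change via resampling
    idx = np.arange(0, len(x) - 1, speed)
    x = np.interp(idx, np.arange(len(x)), x)
    return np.clip(x, -1.0, 1.0)

rng = np.random.default_rng(0)
dummy = np.sin(np.linspace(0, 100, 16000))           # stand-in for one second of 16 kHz audio
augmented = augment_waveform(dummy, rng)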

Other ideas:

  • Check that you shuffle data properly in train/test, and between folds
  • Do some error analysis on the mispredicted test samples: do they have something in common? Check the label distribution in your dev/test set.
  • If the test set was captured in a process different from your dev set (totally blind), don't expect robust performance. In general, speech recognition is really data-hungry, with a decent ""generic"" model needing hundreds of hours of recordings from different domains (TV, news, movies, telephone, lectures, echo, etc.).
",34315,,,,,4/7/2020 17:20,,,,0,,,,CC BY-SA 4.0 20089,2,,20081,4/7/2020 17:38,,0,,"

In the field of ML and AI, you should always remember that before choosing any algorithm you should know the data. One should always start with data analysis, which is itself a field dedicated to this critical job. A decision tree can never work at its best without tuning on the dataset. Here is a great article that you can refer to: Tuning Decision Tree

Purpose of test data in a naive Bayes classifier: 1) It is necessary for checking metrics such as accuracy, hits, hit rate, coverage, diversity, novelty, etc.

2) It also lets you tune against held-out data (as an anti-train set) using statistics such as the mean, standard deviation and variance.

I really think that you should also try other algorithms on your dataset; I can't name all of them, but in the neural network family, RNNs, CNNs and RBMs are some great algorithms to work with.

Please always remember that machine learning is like an art, where the datasets (test, train, evaluation) are colors and it's up to us to use the right amount of each.

",25685,,,,,4/7/2020 17:38,,,,0,,,,CC BY-SA 4.0 20092,1,,,4/7/2020 21:53,,1,41,"

I save the trained model after a certain number of episodes with the special save() function of the DDPG class (the network is saved when the reward reaches zero), but when I restore the model again using saver.restore(), the network gives a reward of approximately -1800. Why is this happening? Maybe I'm doing something wrong. My network:

import tensorflow as tf
import numpy as np
import gym

epsiode_steps = 500

# learning rate for actor
lr_a = 0.001

# learning rate for critic
lr_c = 0.002
gamma = 0.9
alpha = 0.01
memory = 10000
batch_size = 32
render = True


class DDPG(object):
    def __init__(self, no_of_actions, no_of_states, a_bound, ):
        self.memory = np.zeros((memory, no_of_states * 2 + no_of_actions + 1), dtype=np.float32)

        # initialize pointer to point to our experience buffer
        self.pointer = 0

        self.sess = tf.Session()

        self.noise_variance = 3.0

        self.no_of_actions, self.no_of_states, self.a_bound = no_of_actions, no_of_states, a_bound,

        self.state = tf.placeholder(tf.float32, [None, no_of_states], 's')
        self.next_state = tf.placeholder(tf.float32, [None, no_of_states], 's_')
        self.reward = tf.placeholder(tf.float32, [None, 1], 'r')

        with tf.variable_scope('Actor'):
            self.a = self.build_actor_network(self.state, scope='eval', trainable=True)
            a_ = self.build_actor_network(self.next_state, scope='target', trainable=False)

        with tf.variable_scope('Critic'):
            q = self.build_crtic_network(self.state, self.a, scope='eval', trainable=True)
            q_ = self.build_crtic_network(self.next_state, a_, scope='target', trainable=False)

        self.ae_params = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='Actor/eval')
        self.at_params = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='Actor/target')

        self.ce_params = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='Critic/eval')
        self.ct_params = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='Critic/target')

        # update target value
        self.soft_replace = [
            [tf.assign(at, (1 - alpha) * at + alpha * ae), tf.assign(ct, (1 - alpha) * ct + alpha * ce)]
            for at, ae, ct, ce in zip(self.at_params, self.ae_params, self.ct_params, self.ce_params)]

        q_target = self.reward + gamma * q_

        td_error = tf.losses.mean_squared_error(labels=(self.reward + gamma * q_), predictions=q)

        self.ctrain = tf.train.AdamOptimizer(lr_c).minimize(td_error, name=""adam-ink"", var_list=self.ce_params)

        a_loss = - tf.reduce_mean(q)

        # train the actor network with adam optimizer for minimizing the loss
        self.atrain = tf.train.AdamOptimizer(lr_a).minimize(a_loss, var_list=self.ae_params)

        tf.summary.FileWriter(""logs2"", self.sess.graph)

        # initialize all variables

        self.sess.run(tf.global_variables_initializer())
        self.saver = tf.train.Saver()
        self.saver.restore(self.sess, ""Pendulum/nn.ckpt"")


    def choose_action(self, s):
        a = self.sess.run(self.a, {self.state: s[np.newaxis, :]})[0]
        a = np.clip(np.random.normal(a, self.noise_variance), -2, 2)

        return a

    def learn(self):
        # soft target replacement
        self.sess.run(self.soft_replace)

        indices = np.random.choice(memory, size=batch_size)
        batch_transition = self.memory[indices, :]
        batch_states = batch_transition[:, :self.no_of_states]
        batch_actions = batch_transition[:, self.no_of_states: self.no_of_states + self.no_of_actions]
        batch_rewards = batch_transition[:, -self.no_of_states - 1: -self.no_of_states]
        batch_next_state = batch_transition[:, -self.no_of_states:]

        self.sess.run(self.atrain, {self.state: batch_states})
        self.sess.run(self.ctrain, {self.state: batch_states, self.a: batch_actions, self.reward: batch_rewards,
                                    self.next_state: batch_next_state})

    # we define a function store_transition which stores all the transition information in the buffer
    def store_transition(self, s, a, r, s_):
        trans = np.hstack((s, a, [r], s_))

        index = self.pointer % memory
        self.memory[index, :] = trans
        self.pointer += 1

        if self.pointer > memory:
            self.noise_variance *= 0.99995
            self.learn()

    # we define the function build_actor_network for building our actor network, followed by the critic network
    def build_actor_network(self, s, scope, trainable):
        with tf.variable_scope(scope):
            l1 = tf.layers.dense(s, 30, activation=tf.nn.tanh, name='l1', trainable=trainable)
            a = tf.layers.dense(l1, self.no_of_actions, activation=tf.nn.tanh, name='a', trainable=trainable)
            return tf.multiply(a, self.a_bound, name=""scaled_a"")               

    def build_crtic_network(self, s, a, scope, trainable):
        with tf.variable_scope(scope):
            n_l1 = 30
            w1_s = tf.get_variable('w1_s', [self.no_of_states, n_l1], trainable=trainable)
            w1_a = tf.get_variable('w1_a', [self.no_of_actions, n_l1], trainable=trainable)
            b1 = tf.get_variable('b1', [1, n_l1], trainable=trainable)
            net = tf.nn.tanh(tf.matmul(s, w1_s) + tf.matmul(a, w1_a) + b1)

            q = tf.layers.dense(net, 1, trainable=trainable)
            return q

    def save(self):
        self.saver.save(self.sess, ""Pendulum/nn.ckpt"")

env = gym.make(""Pendulum-v0"")
env = env.unwrapped
env.seed(1)

no_of_states = env.observation_space.shape[0]
no_of_actions = env.action_space.shape[0]

a_bound = env.action_space.high
ddpg = DDPG(no_of_actions, no_of_states, a_bound)

total_reward = []

no_of_episodes = 300
# for each episodes
for i in range(no_of_episodes):
    # initialize the environment
    s = env.reset()

    # episodic reward
    ep_reward = 0

    for j in range(epsiode_steps):

        env.render()

        # select action by adding noise through OU process
        a = ddpg.choose_action(s)

        # peform the action and move to the next state s
        s_, r, done, info = env.step(a)

        # store the the transition to our experience buffer
        # sample some minibatch of experience and train the network
        ddpg.store_transition(s, a, r, s_)

        # update current state as next state
        s = s_

        # add episodic rewards
        ep_reward += r

        if int(ep_reward) == 0 and i > 200:
            ddpg.save()
            print(""save"")
            quit()

        if j == epsiode_steps - 1:
            total_reward.append(ep_reward)
            print('Episode:', i, ' Reward: %i' % int(ep_reward))

            break
",35842,,,,,4/7/2020 21:53,Why does the result when restoring a saved DDPG model differ significantly from the result when saving it?,,0,0,,,,CC BY-SA 4.0 20093,2,,7926,4/7/2020 23:30,,0,,"

I was very bored.

The short answer is yes and the long answer is basically yes.

I'm going to gloss over what you could possibly mean by AI and instead focus on what AI programs pretty much are:

A mish-mash of algorithms and methods borrowed from mathematics, perhaps more specifically statistics, inserted into a good old-fashioned program.

Let's start from the very beginning. The very beginning.

For the sake of this argument, everything is a language and there exists an omnipotent alphabet which contains all possible symbols you can come up with to convey a message. Such an alphabet would contain everything you've ever known, all their permutations, and then some. It would contain your clothes as well. Forget countable infinity or all those concepts, those come way later.

It is important to realise that right now I'm communicating with you not just through the power of a string of characters. You're also experiencing other stimuli that you, or whatever part of ""you"", is/are capable of interpreting into another language. To be crude, everything you decide to classify as a 'standalone thing' is a compiler that runs in perpetual execution, translating all the 'things' it 'receives' into other 'things' it 'spits out' for other 'things' to 'receive'.

Think about a modern day computer. You write a program in your fancy little language. You hit compile. A compiler goes through your code, and spits out more code. Except this time this code is written in another language, sometimes ""closer to the metal"", sometimes ""just about the same abstract level"" and sometimes ""even more abstract"", and this process repeats itself until somewhere along the line that code you started with, has been interpreted to mere electrical signals, which then themselves are being 'compiled' by an entity we will call 'the universe' and that's up to empirical observation to determine what and what's not going on. (Except ""the universe"" was always responsible. But we will partition things for the sake of ..partitioning things. You get what I mean, I know you do.)

Now let's jump back to languages. Mathematics is a language in the sense that:

  • It is built of symbols contained in an alphabet we will call X
  • The several fields specify their own grammatical structure through which we can decide whether or not a statement is well-formed. This encompasses everything from where you can put the + operator in high school algebra, to how you can write a proof in formal logic. There needs to be no justification for how you build a grammar. You can always just make a new grammar and use it instead. Of course, it might not be capable of forming statements which are compatible with other grammars.

What's interesting about X is that its definition is not fixed. Throughout time, we've introduced new symbols into the mathematical alphabet to be able to express more concepts while keeping things separate. (Or rather some people have had the sense to keep it this way.) For example, whenever you see Leibniz's integration symbol, you know you're probably dealing with some kind of integration and not something novel that you've never heard of before.

Now here's where I actually answer your question:

  • I assume that by ""program"" you are referring to the mathematical construct as defined in theoretical computer science: A string of characters from an alphabet.
  • This string is then fed to a compiler (lexer|parser|semantic analyser) which spits out another string (mainly the job of the semantic analyser). This string usually is built from characters of a different alphabet. That is to say, the compiler is a function which maps a well formed string of language A to a string of language B
  • The end goal of compiling a program is to execute it, which basically means a succession of compilers will take the output of the previous compiler and spit back yet another string, until the string is essentially electrons moving about the circuitry of the computer in your bedroom and producing fancy lights on your monitor

So whenever you write an ""AI program"", you're just writing a ""program"" that contains some ""AI algorithms"" which are really just applications of things we've known in mathematics for 100s of years, which again, are really just a string of characters that are about to be translated by a compiler.

In other words, nothing you can ever write is not deterministic, provided you look at the bigger picture.

A common argument I see is that since AI programs usually ""adapt"" and ""self-optimise"" when solving the problem, they're not quite deterministic, in the sense that feeding the program the same input twice will (hopefully) yield better results the second time. Except what really happened is that you had an input string that you partitioned into inputs A and B, and fed them to the algorithm in succession. Had you fed AB initially, you would've obtained the same results.

",35843,,,,,4/7/2020 23:30,,,,0,,,,CC BY-SA 4.0 20094,1,20113,,4/8/2020 3:39,,7,401,"

For people who have experience in the field, why is creating AI that has the ability to write programs (that are syntactically correct and useful) a hard task?

What are the barriers/problems we have to solve before we can solve this problem? If you are in the camp that this isn't that hard, why hasn't it become mainstream?

",22840,,2444,,12/7/2020 14:32,3/17/2022 22:05,Why is creating an AI that can code a hard task?,,2,0,,,,CC BY-SA 4.0 20096,1,20098,,4/8/2020 6:49,,4,2369,"

I encountered the phrase/concept off-the-shelf CNN in this paper in which authors used off-the-shelf CNN representation, OverFeat, with simple classifiers to address different recognition tasks.

If I understand it correctly, it literally means something is ready to be used for a task without alteration.

Can somebody explain in simple words what off-the-shelf CNN technically means in the context of AI and convolutional neural networks?

",31312,,2444,,4/8/2020 13:37,5/13/2020 16:42,"What does ""off-the-shelf"" mean?",,1,1,,,,CC BY-SA 4.0 20097,2,,20073,4/8/2020 11:03,,1,,"

The first part is correct: \begin{align} &\sum_{n=1}^{\infty} \alpha(1-\lambda)\lambda^{n-1} (\bar R_t^{(n)} - \theta^T \phi_t)\\ =& \alpha[\sum_{n=1}^{\infty} (1-\lambda)\lambda^{n-1} \bar R_t^{(n)} - \sum_{n=1}^{\infty} (1-\lambda)\lambda^{n-1} \theta^T \phi_t] \end{align} $\sum_{n=1}^{\infty} (1-\lambda)\lambda^{n-1}$ sums to $1$, so we have \begin{equation} \alpha[\sum_{n=1}^{\infty} (1-\lambda)\lambda^{n-1} \bar R_t^{(n)} - \theta^T \phi_t] \end{equation} For the second part, it's enough to prove the equivalence for any $n$, because the result contains a sum over $n$: if you have two sums $\sum x_n$ and $\sum y_n$, then the sums will be equal if $x_n = y_n$ for every $n$.

For the third part, we are in state $s_t$ and we have already taken action $a_t$, so we have \begin{align} &E_b \{ \sum_{t=0}^{\infty} (\bar R_t^{(n)} - \theta^T\phi_t)\phi_t \rho_1\rho_2\cdots\rho_t\}\\ =& \sum_{t=0}^{\infty} E_b \{(\bar R_t^{(n)} - \theta^T\phi_t)\phi_t \rho_1\rho_2\cdots\rho_t\}\\ =& \sum_{t=0}^{\infty} E_b \{\phi_t \rho_1\rho_2\cdots \rho_t\} E_b \{(\bar R_t^{(n)} - \theta^T\phi_t)|s_t, a_t\} \end{align} That is because each $\rho_i$, $i = 1, \ldots, t-1$, depends on $s_i, a_i$. Because of the Markov property, the expectation over $\bar R_t^{(n)}$ doesn't depend on those states; it only depends on $s_t, a_t$, so the two factors are independent. We don't need to consider $\phi_t$ and $\rho_t$ in the expectation over $\bar R_t^{(n)}$ either because, as I said, we are in state $s_t$ and we took $a_t$, so they are already decided and can be treated as constants. We can then split the total expectation into the part $E_b \{\phi_t \rho_1\rho_2\cdots \rho_t\}$ for getting to state $s_t$ and taking action $a_t$, and the part $E_b \{(\bar R_t^{(n)} - \theta^T\phi_t)|s_t, a_t\}$ for the expectation over $\bar R_t^{(n)}$ after we got to state $s_t$ and took action $a_t$.

",20339,,,,,4/8/2020 11:03,,,,0,,,,CC BY-SA 4.0 20098,2,,20096,4/8/2020 13:47,,1,,"

The dictionary's definition of off-the-shelf is

used to describe a product that is available immediately and does not need to be specially made to suit a particular purpose

The same dictionary provides several examples

You can purchase off-the-shelf software or have it customized to suit your needs.

If you have complex needs, we don't recommend that you buy software off the shelf.

For this, off-the-shelf algorithms included in the robot's programming libraries are used.

So, your intuition is correct! An off-the-shelf model, software, product, etc., is any model, software or, respectively, product that would be easily (or immediately) available or applicable to the specific context, but, at the same time, it may also be applicable to many other contexts or problems.

An off-the-shelf convolutional neural network is thus a typical or standard CNN that can be applied immediately in that context (but that is potentially applicable to many other contexts or problems). Examples of CNNs that could be used as off-the-shelf models are AlexNet or LeNet-5, but the actual choice depends on the context and needs.
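
For instance, a common way to use a CNN off-the-shelf (a sketch of the general idea, not the exact setup of the linked paper) is to take a pre-trained network, discard its classification head and feed the extracted features to a simple classifier:

import torch
import torchvision

# Pre-trained CNN used as a fixed (off-the-shelf) feature extractor.
model = torchvision.models.resnet18(pretrained=True)
model.fc = torch.nn.Identity()    # drop the ImageNet classification head
model.eval()

with torch.no_grad():
    images = torch.randn(8, 3, 224, 224)   # dummy batch; real code would load your own dataset
    features = model(images)               # (8, 512) generic image descriptors

# These features can then be fed to any simple classifier (e.g. a linear SVM)
# trained on the target task, without fine-tuning the CNN itself.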

An off-the-shelf model can also be a baseline model (i.e. a very simple model that is used in experiments as the model that every other model should outperform), but not necessarily.

Alex Graves uses this term/expression in his paper Practical Variational Inference for Neural Networks.

",2444,,-1,,6/17/2020 9:57,5/13/2020 16:42,,,,2,,,,CC BY-SA 4.0 20100,1,20101,,4/8/2020 15:59,,4,98,"

For my master's thesis, I am working on a dialogue system that should be deployed in hospitals to administer simple questionnaires to patients. I have already done the literature research and I'm fine with what I found, since I don't have to replicate something that has already been done, but I noticed that there are really few papers regarding this specific 'robot interviewer' topic.

Let me explain the task in a bit more detail: in a real interview, a human interviewer usually starts with greetings and an explanation of the questionnaire to administer, and then asks some more or less structured questions to the person being interviewed. The idea here is to replace the human interviewer with a dialogue system.

Now, at first glance it seems like a task that can easily be hand-coded, and indeed lots of real applications simply use systems in which specific questions are stored in memory along with some ready-made answers to choose from (here's an example); the system simply shows them (or reads them, in the case of humanoid robots) to the people being interviewed, waits for the answer and then moves on to the next question.

The point is that in real interviews the conversation flow is obviously much smoother and more natural. A human being can detect doubt in the voice of the interviewed person (who can also explicitly ask for explanations), can understand when an answer comes with emotional implications ('yes, I feel sad every day'), and can automatically react to these hidden implications with emotional fillers ('I'm sorry to hear that'). All these aspects require training some natural language understanding module in order to be replicated in an artificial agent (and this is actually what I'm currently working on), so I thought I would have found more papers on this.

Now, despite having found tons of papers related to open-domain dialogue systems, affective and attentive systems, and even systems able to reply with irony, I did not find many papers about dialogue systems for smooth interviews or questionnaire administration, which in my opinion sounds like a much easier task to tackle (especially compared to open-domain conversations). The only two papers that I found which truly focus on interviewer systems are:

So my question is: did I miss something, like some specific keywords? Or is there actually a gap in the literature with regard to the design of dialogue systems for interviews and questionnaire (or survey) administration? I'm interested in any link or hint from anyone working on similar applications. Thank you in advance!

",34098,,2444,,4/8/2020 18:01,4/8/2020 18:01,Is there any literature on the design of dialogue systems for interviews and questionnaire administration?,,1,2,,,,CC BY-SA 4.0 20101,2,,20100,4/8/2020 16:49,,4,,"

[Disclaimer: I work for a company that provides a platform for developing conversational AI systems]

The platform used by the company I work for has a sentiment analysis component, so you can recognise if the user input expresses certain emotions. The dialogues are encoded in 'flows', which are graphs with an initial trigger consisting of output nodes and transitions that correspond to user input. In order to react to emotions expressed by the user you'd have a trigger that does not react to the user input text, but instead to a setting of the sentiment flags.

For example, in a call centre bot, if the user repeatedly uses angry sentiment in their utterances, the bot could detect this and trigger a flow that says ""It seems you are not happy with my responses -- please wait while I transfer you to a human agent"".
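
To make the mechanics concrete, the logic is roughly of this kind (a generic sketch, not the platform's actual API):

def next_action(user_turns, sentiment_of, max_angry_turns=3):
    '''Trigger a handover flow if the user has been angry for several turns in a row.'''
    recent = [sentiment_of(turn) for turn in user_turns[-max_angry_turns:]]
    if len(recent) == max_angry_turns and all(s == 'angry' for s in recent):
        return 'flow:handover_to_human'
    return 'flow:answer_normally'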

This all depends on the mechanics of organising the dialogues. If you have an ML model, this will probably not be as easy as if you have a manually designed graph structure (which gives you more control over such matters).

Being a commercial system, there are obviously no academic papers on this. But as it's so easy to do within the platform, it doesn't seem too much of a problem in the first place.

",2193,,,,,4/8/2020 16:49,,,,4,,,,CC BY-SA 4.0 20103,1,,,4/8/2020 21:00,,2,86,"

When using k-fold cross-validation in a deep learning problem, after you have computed your hyper-parameters, how do you decide how long to train your final model? My understanding is that, after the hyperparameters are selected, you train your model one more time on the entire set of data, but it's not clear to me when you decide to stop training.

",32390,,2444,,4/8/2020 21:52,4/8/2020 23:08,"After having selected the best model with cross-validation, for how long should I train it?",,1,0,,,,CC BY-SA 4.0 20104,1,20105,,4/8/2020 21:57,,1,302,"

Can a variational auto-encoder (VAE) learn images whose pixels have been generated from a Gaussian distribution (e.g. $N(0, 1)$), i.e. each pixel is a sample from $N(0, 1)$?

My gut feeling says no, because the VAE adds additional noise $\epsilon$ to the original image in the latent space, and if all images and pixels are random samples from the same distribution, it would be impossible to decode/reconstruct the particular input image. However, VAEs are a bit of a mystery to me internally. Any help would be appreciated!

",35867,,2444,,4/8/2020 23:43,4/8/2020 23:43,Can a variational auto-encoder learn images composed of random noise at each pixel (each drawn from the same distribution)?,,1,10,,,,CC BY-SA 4.0 20105,2,,20104,4/8/2020 23:05,,1,,"

VAEs try to model the distribution of your data, so a VAE is not going to learn ""images composed of random noise at each pixel"" per se (though, if overfitting, it could memorize them). But it would be very capable of learning the simple noise distribution from which you sampled your random pixels.

",11351,,,,,4/8/2020 23:05,,,,10,,,,CC BY-SA 4.0 20106,2,,20103,4/8/2020 23:08,,2,,"

Short answer: training ""duration"" or number of epochs/updates should be cross-validated too: you want to early-stop your training to prevent overfitting.

Longer answer:

Think of accuracy on the validation set as an estimate of accuracy on future data, given the value of some hyperparameter. In this case, the hyperparameter of interest is the number of training epochs. So: for each CV fold, train the network (e.g. up to some maximum number of epochs). After each epoch, record accuracy on the validation set. Compute the average validation set accuracy (across CV folds) for each number of epochs. Choose the number of epochs that maximizes this value.
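
A minimal sketch of that procedure (build_model, train_one_epoch and evaluate are placeholders for whatever framework you use):

import numpy as np
from sklearn.model_selection import KFold

def pick_num_epochs(X, y, build_model, train_one_epoch, evaluate, max_epochs=100, n_splits=5):
    '''Average validation accuracy per epoch across folds, then pick the best epoch count.'''
    val_acc = np.zeros((n_splits, max_epochs))
    splitter = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for fold, (tr, va) in enumerate(splitter.split(X)):
        model = build_model()
        for epoch in range(max_epochs):
            train_one_epoch(model, X[tr], y[tr])
            val_acc[fold, epoch] = evaluate(model, X[va], y[va])
    return int(np.argmax(val_acc.mean(axis=0))) + 1   # retrain on all data for this many epochs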

https://stats.stackexchange.com/questions/298084/k-fold-cross-validation-for-choosing-number-of-epochs

",11351,,,,,4/8/2020 23:08,,,,1,,,,CC BY-SA 4.0 20107,2,,13120,4/8/2020 23:40,,2,,"

GANs are notably hard to train, and it is not uncommon to have large bumps in the losses. The learning rate is a good place to start, but the instability may come from a wide variety of sources. I'm assuming that you have no bug in your code or data.

For one, gradient descent is not well suited to the 2-player game we're playing. I've personally found ExtraAdam to yield much more stable training (code, paper).

It could also come from the loss: many tricks have been developed, and one of the most popular ones is enforcing smoothness in the gradient (see W-GAN, W-GAN-GP, etc.). SpectralNorm (code, paper) is a very popular and recent normalization technique for the discriminator.

There are a number of additional tricks to make GANs work, like label smoothing and flipping, or different update rates for the discriminator and the generator (as in BigGAN, for instance). I suggest you have a look at this nice repo of (somewhat seasoned) tricks: ganhacks.
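
For example, one-sided label smoothing with occasional label flipping for the discriminator targets can be as simple as this (a sketch with made-up hyperparameters):

import numpy as np

def discriminator_targets(batch_size, real, smooth=0.9, flip_prob=0.05, rng=None):
    '''Targets for the discriminator: smoothed real labels, occasionally flipped.'''
    rng = rng or np.random.default_rng()
    targets = np.full(batch_size, smooth if real else 0.0)
    flip = rng.random(batch_size) < flip_prob
    targets[flip] = 1.0 - targets[flip]   # flip a small fraction of the labels
    return targets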

",11351,,,,,4/8/2020 23:40,,,,0,,,,CC BY-SA 4.0 20108,2,,20053,4/8/2020 23:53,,2,,"

I'm not familiar with Python I'm afraid, but I'll have a go at presenting some of the maths...

In order to perform the back propagation using matrices, the matrix transpose is used for different-sized layers. Also note that when using matrices for this, we need to distinguish between different types of matrix multiplication, namely the matrix product and the Hadamard product, which operates differently (the latter is denoted by a circle with a dot in the centre).

Note the back propagation formulas:

(EQ1) \begin{equation*}\delta ^{l} = (w^{l+1})^{T} \delta ^{l+1} \odot \sigma {}' (z ^{l})\end{equation*}

(EQ2) \begin{equation*}\frac{\partial E}{\partial w}=\delta ^{l}a^{l-1}\end{equation*}

As you can see the transpose of the weight matrix is used to multiply against the delta of the layer.

As an example, consider a simple network with an input layer of 3 neurons, and an output layer of 2 neurons.

The unactivated feed forward output is given by...

\begin{equation*}z_{1} = h_{1} w_{1} + h_{2} w_{3} + h_{3} w_{5}\end{equation*} \begin{equation*}z_{2} = h_{1} w_{2} + h_{2} w_{4} + h_{3} w_{6}\end{equation*}

Which can be represented in matrix form as...

\begin{equation*} z = \begin{bmatrix}z_{1}\\z_{2}\end{bmatrix} = \begin{bmatrix}h_{1} w_{1} + h_{2} w_{3} + h_{3} w_{5}\\h_{1} w_{2} + h_{2} w_{4} + h_{3} w_{6}\end{bmatrix} \end{equation*}

(EQ3) \begin{equation*} = \begin{bmatrix}w_{1}&w_{3}&w_{5}\\w_{2}&w_{4}&w_{6}\end{bmatrix} \begin{bmatrix}h_{1}\\h_{2}\\h_{3}\end{bmatrix} \end{equation*}

Which allows us to forward propagate from a layer of 3 neurons, to a layer of 2 neurons (via matrix multiplication).

For the back propagation, we need the error of each neuron...

N.B. Cost is a metric used to denote the error of the entire network and depends on your cost function. It is usually the mean of the errors of the individual neurons, but of course depends on the cost function used.

e.g for MSE is... \begin{equation*} C_{MSE}=\frac{1}{N}\sum (o_{n}-t_{n})^{2} \end{equation*}

We are interested in the derivative of the error for each neuron (not the cost), which by the chain rule is...

\begin{equation*} \frac{\partial E}{\partial z} = \frac{\partial E}{\partial o} \frac{\partial o}{\partial z} \end{equation*}

Expressed in matrix form...

\begin{equation*} \frac{\partial E}{\partial z} = \begin{bmatrix} \frac{\partial E}{\partial z_{1}}\\ \frac{\partial E}{\partial z_{2}} \end{bmatrix} = \begin{bmatrix} \frac{\partial E}{\partial o_{1}} \frac{\partial o_{1}}{\partial z_{1}}\\ \frac{\partial E}{\partial o_{2}} \frac{\partial o_{2}}{\partial z_{2}} \end{bmatrix} \end{equation*}

(EQ4) \begin{equation*} = \begin{bmatrix} \frac{\partial E}{\partial o_{1}} \\ \frac{\partial E}{\partial o_{2}} \end{bmatrix} \odot \begin{bmatrix} \frac{\partial o_{1}}{\partial z_{1}} \\ \frac{\partial o_{2}}{\partial z_{2}} \end{bmatrix} \end{equation*}

Note the use of the Hadamard product here. In fact, given these are simply vectors, this is just an element-wise product, but the Hadamard notation is used because it becomes important later, when using this expression within the matrix equations, to distinguish the Hadamard product from the matrix product.

We start off with our first delta error, which for the first layer back propagation is...

\begin{equation*} \delta ^{L} = \frac{\partial E}{\partial o} \end{equation*}

And then we want to calculate the next delta error using the formula (EQ1)...

\begin{equation*} \delta ^{l} = (w^{l+1})^{T} \delta ^{l+1} \odot \sigma {}' (z ^{l}) = \frac{\partial E}{\partial h} \end{equation*}

Explicitly, the equations are...

\begin{equation*}\frac{\partial E}{\partial h_{1}}=\frac{\partial E}{\partial z_{1}} w_{1} + \frac{\partial E}{\partial z_{2}} w_{2}\end{equation*} \begin{equation*}\frac{\partial E}{\partial h_{2}}=\frac{\partial E}{\partial z_{1}} w_{3} + \frac{\partial E}{\partial z_{2}} w_{4}\end{equation*} \begin{equation*}\frac{\partial E}{\partial h_{3}}=\frac{\partial E}{\partial z_{1}} w_{5} + \frac{\partial E}{\partial z_{2}} w_{6}\end{equation*}

Also note the transpose of the weight matrix, which allows us to back propagate from a layer of 2 neurons to a layer of 3 neurons (via matrix multiplication)...

\begin{equation*} \left (w \right )^{T} = \left ( \begin{bmatrix} w_{1} & w_{3} & w_{5}\\ w_{2} & w_{4} & w_{6} \end{bmatrix} \right )^{T} = \begin{bmatrix} w_{1} & w_{2}\\ w_{3} & w_{4}\\ w_{5} & w_{6} \end{bmatrix} \end{equation*}

So in a similar way to how we represented the forward pass (EQ3), this can be represented in matrix form...

\begin{equation*} \frac{\partial E}{\partial h} = \begin{bmatrix} \frac{\partial E}{\partial h_{1}}\\ \frac{\partial E}{\partial h_{2}}\\ \frac{\partial E}{\partial h_{3}} \end{bmatrix} = \begin{bmatrix} \frac{\partial E}{\partial z_{1}} w_{1} + \frac{\partial E}{\partial z_{2}} w_{2} \\ \frac{\partial E}{\partial z_{1}} w_{3} + \frac{\partial E}{\partial z_{2}} w_{4} \\ \frac{\partial E}{\partial z_{1}} w_{5} + \frac{\partial E}{\partial z_{2}} w_{6} \end{bmatrix} \end{equation*}

(EQ5) \begin{equation*} = \begin{bmatrix} w_{1} & w_{2}\\ w_{3} & w_{4}\\ w_{5} & w_{6} \end{bmatrix} \begin{bmatrix} \frac{\partial E}{\partial z_{1}} \\ \frac{\partial E}{\partial z_{2}} \end{bmatrix} \end{equation*}

And then plugging the hadamard version of the delta (EQ4) into this, we get...

\begin{equation*} \begin{bmatrix} w_{1} & w_{2}\\ w_{3} & w_{4}\\ w_{5} & w_{6} \end{bmatrix} \begin{bmatrix} \frac{\partial E}{\partial o_{1}} \\ \frac{\partial E}{\partial o_{2}} \end{bmatrix} \odot \begin{bmatrix} \frac{\partial o_{1}}{\partial z_{1}} \\ \frac{\partial o_{2}}{\partial z_{2}} \end{bmatrix} \end{equation*}

aka (EQ1) ...

\begin{equation*}(w^{l+1})^{T} \delta ^{l+1} \odot \sigma {}' (z ^{l})\end{equation*}

And thus we have back propagated from a layer of 2 neurons, to a layer of 3 neurons (via matrix multiplication) thanks to transpose.

For completeness... the other aspect of the back propagation to uses matrices, is the delta weight matrix....

\begin{equation*} \frac{\partial E}{\partial w} = \begin{bmatrix} \frac{\partial E}{\partial w_{1}} & \frac{\partial E}{\partial w_{3}} & \frac{\partial E}{\partial w_{5}}\\ \frac{\partial E}{\partial w_{2}} & \frac{\partial E}{\partial w_{4}} & \frac{\partial E}{\partial w_{6}} \end{bmatrix} \end{equation*}

As mentioned before, you need to cache weight matrix and the activated layer output of a forward pass of the network.

In a similar vein to (EQ3)...

We explicitly have the equations...

\begin{equation*}\frac{\partial E}{\partial w_{1}}=\frac{\partial E}{\partial z_{1}}h_{1}\end{equation*} \begin{equation*}\frac{\partial E}{\partial w_{2}}=\frac{\partial E}{\partial z_{2}}h_{1}\end{equation*} \begin{equation*}\frac{\partial E}{\partial w_{3}}=\frac{\partial E}{\partial z_{1}}h_{2}\end{equation*} \begin{equation*}\frac{\partial E}{\partial w_{4}}=\frac{\partial E}{\partial z_{2}}h_{2}\end{equation*} \begin{equation*}\frac{\partial E}{\partial w_{5}}=\frac{\partial E}{\partial z_{1}}h_{3}\end{equation*} \begin{equation*}\frac{\partial E}{\partial w_{6}}=\frac{\partial E}{\partial z_{2}}h_{3}\end{equation*}

Note the use of h1, h2 and h3, which are the activated outputs of the previous layer (or, in the case of our example, the inputs).

Which we represent in matrix form...

\begin{equation*} \frac{\partial E}{\partial w} = \begin{bmatrix} \frac{\partial E}{\partial z_{1}}h_{1} & \frac{\partial E}{\partial z_{1}}h_{2} & \frac{\partial E}{\partial z_{1}}h_{3}\\ \frac{\partial E}{\partial z_{2}}h_{1} & \frac{\partial E}{\partial z_{2}}h_{2} & \frac{\partial E}{\partial z_{2}}h_{3} \end{bmatrix} \end{equation*} (EQ6) \begin{equation*} = \begin{bmatrix} \frac{\partial E}{\partial z_{1}} \\ \frac{\partial E}{\partial z_{2}} \end{bmatrix} \begin{bmatrix} h_{1} & h_{2} & h_{3} \end{bmatrix} \end{equation*}

Which just so happens to be (EQ2) ...

\begin{equation*}\frac{\partial E}{\partial w}=\delta ^{l}a^{l-1}\end{equation*}

:)

Since the delta weight matrix and the original weight matrix have the same dimensions, it is trivial to apply the learning rate...

\begin{equation*} w = \begin{bmatrix} w_{1} - \alpha \frac{\partial E}{\partial w_{1}} & w_{3} - \alpha \frac{\partial E}{\partial w_{3}} & w_{5} - \alpha \frac{\partial E}{\partial w_{5}}\\ w_{2} - \alpha \frac{\partial E}{\partial w_{2}} & w_{4} - \alpha \frac{\partial E}{\partial w_{4}} & w_{6} - \alpha \frac{\partial E}{\partial w_{6}} \end{bmatrix} \end{equation*}

\begin{equation*} = \begin{bmatrix}w_{1}&w_{3}&w_{5}\\w_{2}&w_{4}&w_{6}\end{bmatrix} - \alpha \begin{bmatrix} \frac{\partial E}{\partial w_{1}} & \frac{\partial E}{\partial w_{3}} & \frac{\partial E}{\partial w_{5}}\\ \frac{\partial E}{\partial w_{2}} & \frac{\partial E}{\partial w_{4}} & \frac{\partial E}{\partial w_{6}} \end{bmatrix} \end{equation*}

\begin{equation*} = w - \alpha \frac{\partial E}{\partial w}\end{equation*}

I'll omit the equations for the bias, as these are easy to add and don't require any equations other than the ones presented here. So, in summary, the equations you want in order to use matrices are (EQ3), (EQ5) and (EQ6), and the ability to move between layers with differing numbers of neurons comes from the matrix transpose. Hope this helps, and let me know if you want me to expand on anything.
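
If it helps to see the same thing numerically, here is the 3-to-2 example above as numpy-style pseudocode (I'll stress it's only illustrative; variable names follow the equations rather than any library convention):

import numpy as np

# Shapes follow the example: w is (2, 3), h is (3, 1), so z and delta are (2, 1).
rng = np.random.default_rng(0)
w = rng.normal(size=(2, 3))
b = np.zeros((2, 1))
h = rng.normal(size=(3, 1))             # activated output of the previous layer (the inputs here)
target = np.array([[0.0], [1.0]])
alpha = 0.1

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Forward pass (EQ3)
z = w @ h + b
o = sigmoid(z)

# Error term: delta = dE/do multiplied element-wise by do/dz, as in (EQ4), with a squared-error cost
delta = (o - target) * o * (1.0 - o)

dE_dw = delta @ h.T                     # (EQ6): (2,1) times (1,3) gives the (2,3) weight gradients
dE_dh = w.T @ delta                     # (EQ5): back propagate to the previous layer via the transpose

# Gradient step
w = w - alpha * dE_dw
b = b - alpha * delta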

It might also be important to note that you appear to be using MSE as the cost. This is perhaps not optimal for the MNIST data set, which is a classification task. A better cost function to use would be cross-entropy.

Happy propagating!

",32400,,32400,,4/9/2020 11:36,4/9/2020 11:36,,,,0,,,,CC BY-SA 4.0 20113,2,,20094,4/9/2020 4:38,,2,,"

AI has been applied to programming (check out TabNine, my favorite autocomplete engine) although not in as robust a fashion as you describe.

Programming requires a high level of abstraction, while AI is typically trained to solve a very specific task. Given thousands of examples of insertion sort in Python, I think a model could be trained (perhaps after autocomplete and syntax correction) to figure it out. However, at this point the field has not developed a more general intelligence that can apply the ideas of the algorithm to other problems.

Addition based on comments:

Big picture, training an algorithm to solve a general class of problems (say, web dev) requires a huge number of examples or an immense number of trials. Further, as the complexity of the problem grows the number of parameters necessary to build the model grows. Writing code is a very complex problem and would thus require a huge amount of data and a huge number of parameters making it totally infeasible with today's math and (because of how the math is solved) hardware.

Modern AI has a very simple goal: find the model that solves a problem optimally. If we could quickly search every possible model, this would be simple. Fields like machine (deep) learning and reinforcement learning are concerned with finding a good solution in a reasonable amount of time. At this point, no such solution exists for a problem of such complexity.

",26481,,26481,,4/13/2020 2:39,4/13/2020 2:39,,,,2,,,,CC BY-SA 4.0 20114,2,,17609,4/9/2020 5:26,,4,,"

Even the first artificial neural network - Rosenblatt's perceptron [1] had a discontinuous activation function. That network is in introductory chapters of many textbooks about AI. For example, Michael Negnevitsky. Artificial intelligence: a guide to intelligent systems. Second Edition shows how to train such networks on pages 170-174.

The error backpropagation algorithm can be modified to accommodate discontinuous activation functions. The details are in paper [2]. That paper points out a possible application: training a neural network on microcontrollers. As the multiplication of the output of the previous layer $x_j$ by the weight $w_{ij}$ is expensive, the author suggested approximating it with a left shift by $n$ bits (multiplication by $2^n$) for the corresponding $n$, in which case the activation function is discontinuous (a staircase).

An example of a neural network with discontinuous activation functions applied to a restoration of degraded images is in Ref. [3]. Applications of recurrent neural networks with discontinuous activation functions to convex optimization problems are in Ref. [4]. Probably more examples can be found in the literature.

References

  1. Rosenblatt, F. The perceptron: a probabilistic model for information storage and organization in the brain. Psychol Rev. 1958 Nov; 65(6):386-408. PMID: 13602029 DOI: 10.1037/h0042519
  2. Findlay, D.A. Training networks with discontinuous activation functions. 1989 First IEE International Conference on Artificial Neural Networks, (Conf. Publ. No. 313), London, UK, 1989, pp. 361-363.
  3. Ferreira, L. V.; Kaszkurewicz, E.; Bhaya, A. Image restoration using L1-norm regularization and a gradient-based neural network with discontinuous activation functions. 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), Hong Kong, 2008, pp. 2512-2519. DOI: 10.1109/IJCNN.2008.4634149
  4. Liu, Q.; Wang, J. Recurrent Neural Networks with Discontinuous Activation Functions for Convex Optimization. Integration of Swarm Intelligence and Artificial Neural Network, pp. 95-119 (2011) DOI: 10.1142/9789814280150_0004
",15524,,,,,4/9/2020 5:26,,,,2,,,,CC BY-SA 4.0 20116,1,,,4/9/2020 9:41,,2,51,"

In ML, we often have to store a huge amount of values ranging from 0 to 1, mostly probabilities. The most common data type used to store them seems to be the floating point. Indeed, the range of floating points is huge. This makes them imprecise in the desired interval and inefficient, right?

This question suggests using the biggest integer value to represent a 1 and the smallest for 0. Also, it points to the Q number format, where all bits can be chosen as fractional, which sounds very efficient.
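
For illustration, a minimal sketch of such a fixed-point scheme in numpy (my own example; 16 bits is an arbitrary choice):

import numpy as np

def to_fixed(p, bits=16):
    # Map [0, 1] to unsigned integers, with the largest value representing 1.
    scale = (1 << bits) - 1
    return np.round(np.clip(p, 0.0, 1.0) * scale).astype(np.uint16)

def to_float(q, bits=16):
    return q.astype(np.float64) / ((1 << bits) - 1)

p = np.array([0.0, 0.25, 0.999])
q = to_fixed(p)
print(q, to_float(q))  # worst-case rounding error is about 1 / (2 * 65535)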

Why have these data types still not found their way into numpy, tensorflow, etc.? Am I missing something?

",35821,,,,,4/9/2020 9:41,What is the most efficient data type to store probabilities?,,0,0,,,,CC BY-SA 4.0 20118,1,,,4/9/2020 10:47,,5,290,"

Most reinforcement learning agents are trained in simulated environments. The goal is to maximize performance in (often) the same environment, preferably with a minimum amount of interactions. Having a good model of the environment allows to use planning and thus drastically improves the sample efficiency!

Why is the simulation not used for planning in these cases? It is a sampling model of the environment, right? Can't we try multiple actions at each or some states, follow the current policy to look several steps ahead and finally choose the action with the best outcome? Shouldn't this allow us to find better actions more quickly compared to policy gradient updates?
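
To make the idea concrete, here is a minimal sketch of what I have in mind (all names are mine; it assumes the simulator can be reset to an arbitrary state):

def plan_one_step(sim, state, actions, policy, horizon=10, gamma=0.99):
    # Try every action once, roll out the current policy for a few steps,
    # and return the action with the best discounted outcome.
    best_action, best_return = None, float('-inf')
    for a in actions:
        sim.set_state(state)            # assumed capability of the simulator
        s, r, done = sim.step(a)
        total = r
        for t in range(1, horizon):
            if done:
                break
            s, r, done = sim.step(policy(s))
            total += gamma ** t * r
        if total > best_return:
            best_action, best_return = a, total
    return best_action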

In this case, our environment and the model are kind of identical, and this seems to be the problem. Or is the good old curse of dimensionality to blame again? Please help me figure out what I'm missing.

",35821,,,,,12/28/2020 5:33,Isn't a simulation a great model for model-based reinforcement learning?,,3,2,,,,CC BY-SA 4.0 20120,1,,,4/9/2020 12:25,,1,95,"

The definition of MDL according to these slides is:

The minimum description length (MDL) criteria in machine learning says that the best description of the data is given by the model which compresses it the best. Put another way, learning a model for the data or predicting it is about capturing the regularities in the data and any regularity in the data can be used to compress it. Thus, the more we can compress a data, the more we have learnt about it and the better we can predict it.

MDL is also connected to Occam’s Razor used in machine learning which states that ""other things being equal, a simpler explanation is better than a more complex one."" In MDL, the simplicity (or rather complexity) of a model is interpreted as the length of the code obtained when that model is used to compress the data.

In short, according to the MDL principle, we prefer predictors with a relatively smaller description length (i.e. ones that can be described within a certain length) for a given description language (this definition does not delve into the exact technical details, as they are not necessary for the question).

Since MDL is dependent on the description language we use, can we say feature engineering can cause a change in the selection of the predictor? For example as this picture shows:

To me, it seems that, in the first picture, we will require a predictor with a longer description length in Cartesian coordinates, as compared to a predictor in polar coordinates (just a single discerning radius needs to be specified). So, to me, it seems feature engineering changed the selection of the predictor to a relatively simple one (in the sense that it will have a shorter description length). Thus, feature engineering has changed the description length required for our predictor. Did I make any wrong assumptions? If so, why?
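
To make my point concrete, here is a small sketch of the feature engineering step I have in mind (variable names and the radius threshold are mine):

import numpy as np

# In Cartesian coordinates the boundary of the inner cluster is non-linear,
# but after mapping to polar coordinates a single radius threshold suffices,
# i.e. the predictor has a much shorter description.
x, y = np.random.uniform(-1.0, 1.0, size=(2, 1000))
r = np.sqrt(x ** 2 + y ** 2)
theta = np.arctan2(y, x)
predicted_inner = r < 0.5  # one-parameter predictor in the new coordinates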

",,user9947,,user9947,4/9/2020 16:25,4/13/2020 22:44,Can feature engineering change the selection of the model according to the minimum description length?,,1,1,,,,CC BY-SA 4.0 20121,1,,,4/9/2020 13:46,,4,472,"

Imagine a set of simple (non-self-intersecting) polygons given by the coordinate pairs of their vertices $[(x_1, y_1), (x_2, y_2), \dots,(x_n, y_n)]$. The polygons in the set have a different number of vertices.

How can I use machine learning to solve various supervised regression and classification problems for these polygons such as prediction of their areas, perimeters, coordinates of their centroids, whether a polygon is convex, whether its centroid is inside or outside, etc?

Most machine learning algorithms require inputs of the same size but my inputs have a different number of coordinates. This may probably be handled by recurrent neural networks. However, the coordinates of my input vectors can be circularly shifted without changing the meaning of the input. For example, $$[(x_1, y_1), (x_2, y_2),...,(x_n, y_n)]$$ and $$[(x_n, y_n), (x_1, y_1),...,(x_{n-1}, y_{n-1})]$$ represent the same polygon where a starting vertex is chosen differently.

Which machine learning algorithm is both invariant to a circular shifting of its input coordinates and can work with inputs of different sizes?

Intuitively, an algorithm could learn to split each polygon into non-overlapping triangles, calculate areas or perimeters of each triangle, and then aggregate these computations somewhere in the output layer. However, the labels (areas or perimeters) are given only for the whole polygons, not for the triangles. Also, the perimeter of the polygon is not the sum of the perimeters of the triangles. Is thinking about this problem in terms of triangles misleading?

Could you please provide references on machine learning algorithms that solve such tasks? Or any advice, how to approach this task? It does not have to be neural network and does not have to learn exact analytic formulas. Approximate results would be enough.

",15524,,2444,,4/9/2020 14:40,4/9/2020 15:11,How can I use machine learning to predict properties (such as the area) of simple polygons?,,1,0,,,,CC BY-SA 4.0 20125,2,,20121,4/9/2020 15:11,,3,,"

You can split each polygon into a collection of triangles and sum up the areas. Not really sure why you would bother with ML.
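
For example, a minimal non-ML sketch using the shoelace formula (which is just the sum of signed triangle areas):

def polygon_area(vertices):
    # Shoelace formula for a simple polygon given as [(x1, y1), ..., (xn, yn)].
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

print(polygon_area([(0, 0), (4, 0), (4, 3), (0, 3)]))  # 12.0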

Anyway, if you approximate these polygons as images, you could maybe train a CNN. Look at the image classification networks which provide bounding boxes.

",32390,,,,,4/9/2020 15:11,,,,2,,,,CC BY-SA 4.0 20126,2,,20094,4/9/2020 15:26,,1,,"

I am not an expert on this specific topic, but I can say a few words. I will use the term "programming" to refer to software development (of any kind).

If you are in the camp that this isn't that hard, why hasn't it become mainstream?

It's definitely hard, otherwise, we would have already some useful artificial programmers.

Why is creating AI that can code a hard task?

Programming is actually a hard task because it often requires creativity and a deep understanding of the context(s), goal(s), programming languages, etc. In other words, it's a very complex task (even for humans), apart from the exceptions where you can copy and paste.

Programming can probably be considered an AI-complete problem, i.e. a problem that probably requires an AGI to be solved. In other words, if an AI was as capable as humans in terms of programming, then that probably means it is an AGI (but this is not guaranteed), i.e. programming is a task that probably requires general intelligence. This is why I say that programming is an AI-complete problem. However, note that being able to program is just a necessary (but not sufficient) ability that an AI needs to possess in order for it to be an AGI (although not all general intelligences, e.g. animals, may be able to develop software, but the definition of general intelligence is also fuzzy).

AFAIK, no AGI has yet been created, and I think we are still very far away from that goal. Currently, most AI systems are only able to tackle a specific problem (i.e. we only have narrow AIs, such as AlphaGo). You could say that programming is a very specific problem too, but this is misleading or wrong, because, unless you just want to develop very specific programs in a very limited context (and there are already machine learning models and approaches, such as neural programmer-interpreters and genetic programming respectively, that can do this to some extent; see the answers to this question for other examples), then you will need to know a lot about other contexts too. For example, consider the task of developing a program that can detect signs of cancer in images. To develop this program, the AI would need to have the knowledge of an AI engineer, doctor, etc.

Furthermore, programming often requires common-sense knowledge. For example, while reading the software specifications, the AI needs to interpret them in the way that they were originally meant to be interpreted. This also suggests that programming requires an AGI (or human-level AI) to be solved.

(Finally, to address a comment, note that writing a 4-line program is not equivalent to writing a 10-line program. Also, the length of the program often doesn't correspond to its difficulty or complexity, so that alone is not a good measure of the ability to program.)

What are the barriers/problems we have to solve before we can solve this problem?

I think that the answer to this question is also the answer to the question "How can we create an AGI?". However, to be more concrete, I think that, in order to be able to create an AI that is able to program as well as humans, we will need to be able to create an AI that is able to think about low- and high-level concepts, compose them and it will probably require common-sense knowledge (so knowledge representation). A typical supervised learning solution will not be enough to solve this task. See the paper Making AI Meaningful Again, which also suggests that ML-based solutions may not be enough to solve many tasks.

",2444,,2444,,12/7/2020 14:26,12/7/2020 14:26,,,,0,,,,CC BY-SA 4.0 20127,1,20135,,4/9/2020 15:52,,5,2663,"

So this is my current result (loss and score per episode) of my RL model in a simple two-player game:

I use DQN with CNN as a policy and target networks. I train my model using Adam optimizer and calculate the loss using Smooth L1 Loss.

In a normal "Supervised Learning" situation, I can deduce that my model is overfitting. And I can imagine some methods to tackle this problem (e.g. Dropout layer, Regularization, Smaller Learning Rate, Early Stopping).

  • But would that solution will also work in RL problem?
  • Or are there any better solutions to handle overfitting in RL?
",16565,,48391,,11/22/2021 13:37,11/28/2022 14:50,How can I handle overfitting in reinforcement learning problems?,,2,0,,,,CC BY-SA 4.0 20128,1,,,4/9/2020 16:00,,0,43,"

A neural network is trained to learn a non-linear function: the more layers it has, the better the quality of the prediction and the ability to match the real-world function correctly (let's leave overfitting aside for now).

Now, given that the derivative of a function (if it is not a line) is only defined at a single point, and, worse than that, it is defined as a tangent line (so it is linear in essence), and backpropagation uses this tangent line to project weight updates, does this mean that gradient descent is linear at its fundamental level?

Then, if gradient descent is using linearity to propagate weight updates back, imagine how big the error would be for a non-linear function. For example, you have calculated that weight number 129 of your network has to be decreased by 0.003412, but at that point, with this new weight, the function might have already reversed its direction, and the real update for this weight must be a negative number! Isn't this the reason that our deep fully connected networks have such difficulty learning: with more layers stacked up, the model becomes more non-linear, and thus the weight updates we propagate back to the lower layers could be treated as "best guess" values instead of something that can be fully trusted?

Am I correct in assuming that gradient descent is not calculating correct weight updates on each backwards step, and that the only reason the network eventually converges to the required model is that these imprecisions are fixed in a loop (called an epoch)? So, if we use an analogy, gradient descent would be like navigating the world on a boat with maps developed under the assumption that the Earth is flat. With such maps, you can sail in nearby areas, but if you were to travel around the world without knowing that the Earth is round, you would never arrive at your destination, and that's exactly what we are experiencing when we train deep fully connected networks without making them converge.

This means that, if gradient descent is broken in such a way, a correct gradient descent algorithm would only have to do a SINGLE backwards step and update all the weights in one pass, giving the minimum error that is theoretically possible in 1 epoch. Am I right?

So, my question basically is: is Gradient Descent a really broken algorithm or I am missing something?

",34084,,,,,4/9/2020 16:00,What is the degree of linearity in the error propagated by Gradient Descent?,,0,10,,,,CC BY-SA 4.0 20135,2,,20127,4/9/2020 16:18,,3,,"

Overfitting refers to a model being stuck in a local minimum while trying to minimise a loss function. In Reinforcement Learning the aim is to learn an optimal policy by maximising or minimising a non-stationary objective-function which depends on the action policy, so overfitting is not exactly like in the supervised scenario, but you can definitely talk about sub-optimal policies.

If we think of a specific task like avoiding stationary objects, a simple sub-optimal policy would be to just stay still without moving at all, or moving in circles if the reward function was designed to penalise lack of movements.

The way to prevent an agent from learning sub-optimal policies is to find a good compromise between exploitation, i.e. constantly selecting the next action based on the maximum expected reward, and exploration, i.e. selecting the next action at random, regardless of the rewards. Here's a link to an introduction to the topic: Exploration and Exploitation in Reinforcement Learning
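
As a minimal illustration of that compromise (my own sketch; the decay schedule is an arbitrary example, not a recommendation), epsilon-greedy action selection looks like this:

import random

def select_action(q_values, epsilon):
    # With probability epsilon explore (random action), otherwise exploit.
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# Typical usage: start fully exploratory and slowly shift towards exploitation.
epsilon = 1.0
q_values = [0.0, 0.0, 0.0]
for step in range(10000):
    action = select_action(q_values, epsilon)
    # ... interact with the environment and update q_values here ...
    epsilon = max(0.05, epsilon * 0.999)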

It is worth mentioning that sometimes an agent can actually outsmart humans, though; some examples are reported in the paper The Surprising Creativity of Digital Evolution. I particularly like the story of the insect agent trained to learn to walk while minimising the contact with the floor surface. The agent surprisingly managed to learn to walk without touching the ground at all. When the authors checked what was going on, they discovered that the insect learned to flip itself and then walk using its fake 'elbows' (fig. 7 in the linked paper). I add this story just to point out that most of the time the design of the reward function is itself even more important than exploration and exploitation tuning.

",34098,,47256,,11/28/2022 14:50,11/28/2022 14:50,,,,8,,,,CC BY-SA 4.0 20136,1,,,4/9/2020 17:09,,1,4933,"

I am trying to train a LSTM, but I have some problems regarding the data representation and feeding it into the model.

My data is a numpy array of three dimensions: one sample consists of a 2D matrix of size (600, 5), i.e. 600 timesteps and 5 features. However, I have 160 samples or files that represent the behavior of a user over multiple days. Altogether, my data has a dimension of (160, 600, 5).

The label set is an array of 600 elements which describes certain patterns of each 2D matrix. The shape of the output should be (600,1).

My question is: how can I train the LSTM on the corresponding label set? What would be the best approach to handle this problem? The idea is that the output should be an array of (600, 1) with 3 possible labels inside.

Multiple_outputs {0,1,2}
      Output:    0000000001111111110000022222220000000000000
                 -------------600 samples ------------------

Input: (1, 600, 5) 
Output: (600, 1) 
Training: (160,600,5)

I look forward for some ideas!

dataset(160,600,5)

X_train, X_test, y_train, y_test = train_test_split(dataset[:,:,0:4], dataset[:,:,4:5],test_size = 0.30)

model = Sequential()
model.add(InputLayer(batch_input_shape = (92,600,5 )))
model.add(Embedding(600, 128))
#model.add(Bidirectional(LSTM(256, return_sequences=True)))
model.add(TimeDistributed(Dense(2)))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer=Adam(0.001),
              metrics=['accuracy'])

model.summary()


model.fit(X_train,y_train, batch_size=92, epochs=40, validation_split=0.2)
",35763,,35763,,4/10/2020 12:08,4/10/2020 12:08,How to train a LSTM with multidimensional data,,1,0,,,,CC BY-SA 4.0 20137,2,,18773,4/9/2020 17:10,,1,,"

I assume you're referring to this paper, which basically "reuses" the same architecture as the original "Attention is all you need" paper for machine translation; the difference being that source and target are noisy and original sentences.

I can't figure out the for-loop logic inside your train function; but, reading your comment, you don't have to traverse the target sequentially. Instead, we rely on masks which exclude paddings and, most importantly, help with implementing teacher forcing for the decoder stack. Check this post and how it creates the three types of masks needed (src, target, no_peek).
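
For illustration only (the shapes and dtypes below are my assumptions; the exact format depends on the implementation you follow), a "no-peek" mask combined with a padding mask could be built like this:

import numpy as np

seq_len = 5
no_peek = np.tril(np.ones((seq_len, seq_len), dtype=bool))  # position i attends to positions <= i
pad_mask = np.array([1, 1, 1, 0, 0], dtype=bool)            # example: last two tokens are padding
tgt_mask = no_peek & pad_mask[None, :]                      # combine both constraints
print(tgt_mask.astype(int))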

Having said that, you will need a for loop when serving your model (inference), because you will have to decode autoregressively. That is, you'll have to decode a token first, and feed it back to the decoder to generate the next token (until you reach EOS). Check the "Testing the model" section in this article.

",34315,,,,,4/9/2020 17:10,,,,0,,,,CC BY-SA 4.0 20138,1,,,4/9/2020 17:51,,3,333,"

I have a vectorized implementation of a neural network in C++. I successfully solved the classification problems of Fashion-MNIST and CIFAR.

Now I am modifying my code to do linear regression. I am stuck at a point: I have to use the MSE loss function here instead of the squared error.

My questions are: 1) In linear regression tasks, is there a difference between MSE and squared error? (There is only a difference of the mean value, which means dividing by the mini-batch size, according to my understanding.) 2) My C++ implementation of this network is given below; to implement MSE, shall I modify my loss function line and divide it by the batch size?
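
(For reference on question 1, in my own notation: over a mini-batch of size $m$, the two losses differ only by a constant factor, which also just rescales the gradient.)

\begin{equation*} \text{SSE} = \sum_{i=1}^{m} (x_i - y_i)^2, \qquad \text{MSE} = \frac{1}{m} \sum_{i=1}^{m} (x_i - y_i)^2 = \frac{\text{SSE}}{m}, \qquad \frac{\partial\, \text{MSE}}{\partial w} = \frac{1}{m} \frac{\partial\, \text{SSE}}{\partial w} \end{equation*}
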

__global__ void loss(double* X, double* Y, double *Z, size_t n) {

    size_t index = blockIdx.x * blockDim.x + threadIdx.x;

    if (index < n) {
        Z[index] = ((X[index] - Y[index]));
    }
} 
void forward_prop(){
    L_F( b1, b_x, w1, a1, BATCH_SIZE, layer_0_nodes, layer_1_nodes );
    tan_h(a1, BATCH_SIZE, layer_1_nodes);

    L_F( b2, a1, w2, a2, BATCH_SIZE, layer_1_nodes, layer_2_nodes );
    tan_h(a2, BATCH_SIZE, layer_2_nodes);

    L_F( b3, a2, w3, a3, BATCH_SIZE, layer_2_nodes, layer_3_nodes );
}
void backward_prop(){

    cuda_loss(a3, b_y, loss_m, BATCH_SIZE, layer_3_nodes);
    L_B(a2, loss_m, dw3, layer_2_nodes, BATCH_SIZE, layer_3_nodes, true);
    tan_h_B(a2, BATCH_SIZE, layer_2_nodes);

    L_B(loss_m, w3, dz2, BATCH_SIZE, layer_3_nodes, layer_2_nodes, false);
    cuda_simple_dot_ab(dz2, a2, BATCH_SIZE, layer_2_nodes);
    L_B(a1, dz2, dw2, layer_1_nodes, BATCH_SIZE, layer_2_nodes, true);
    tan_h_B(a1, BATCH_SIZE, layer_1_nodes);

    L_B(dz2, w2, dz1, BATCH_SIZE, layer_2_nodes, layer_1_nodes, false);
    cuda_simple_dot_ab(dz1, a1, BATCH_SIZE, layer_1_nodes);
    L_B(b_x, dz1, dw1, layer_0_nodes, BATCH_SIZE, layer_1_nodes, true);
}
__global__ void linearLayerForward( double *b, double* W, double* A, double* Z, size_t W_x_dim, size_t W_y_dim, size_t A_x_dim) {

    size_t row = blockIdx.y * blockDim.y + threadIdx.y;
    size_t col = blockIdx.x * blockDim.x + threadIdx.x;

    size_t Z_x_dim = A_x_dim;
    size_t Z_y_dim = W_y_dim;

    double Z_value = 0;

    if (row < Z_y_dim && col < Z_x_dim) {
        for (size_t i = 0; i < W_x_dim; i++) {
            Z_value += W[row * W_x_dim + i] * A[i * A_x_dim + col];
        }
        Z[row * Z_x_dim + col] = Z_value + b[col];
    }
}

__global__ void linearLayerBackprop(double* W, double* dZ, double *dA,
                                    size_t W_x_dim, size_t W_y_dim,
                                    size_t dZ_x_dim) {

    size_t col = blockIdx.x * blockDim.x + threadIdx.x;
    size_t row = blockIdx.y * blockDim.y + threadIdx.y;

    // W is treated as transposed
    size_t dA_x_dim = dZ_x_dim;
    size_t dA_y_dim = W_x_dim;

    double dA_value = 0.0f;

    if (row < dA_y_dim && col < dA_x_dim) {
        for (size_t i = 0; i < W_y_dim; i++) {
            dA_value += W[i * W_x_dim + row] * dZ[i * dZ_x_dim + col];
        }
        dA[row * dA_x_dim + col] = dA_value;
    }
}


__global__ void tanhActivationForward(double* Z, size_t n) {

    size_t index = blockIdx.x * blockDim.x + threadIdx.x;

    if (index < n) {
            Z[index] = tanh(Z[index]);
    }
}

__global__ void tanhActivationBackward(double* Z, size_t n) {

    size_t index = blockIdx.x * blockDim.x + threadIdx.x;

    if (index < n) {
            Z[index] = 1-(tanh(Z[index]) * tanh(Z[index]));
    }
}
",29724,,,,,4/9/2020 17:51,How to implement Mean square error loss function in mini batch GD,,0,0,,,,CC BY-SA 4.0 20139,1,,,4/9/2020 18:41,,1,28,"

I am on Lecture 2 of Stanford CS330 Multi-Task and Meta-learning, and on slide 10, the professor describes using a one-hot input vector to represent the task, and she also explained that there would be independent weight matrices for each task

How is the input to a multi-task network encoded to allow the features of all the tasks to be associated with different weights?

Would you have an input vector containing all the features for every task, and then multiply the input vectors by the task ID vector? Is there a more efficient way to approach this problem?

In other terms, here’s what I’m thinking:

network_input[i] = features[i] * task[i]

where features is a 2d matrix of feature vectors for every task, and task is a one-hot vector corresponding to the task number. Is that multiplicative conditioning?
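
To illustrate what I mean (all shapes and names below are my own assumptions), the two options I can think of are:

import numpy as np

n_tasks, n_features = 3, 4
task_id = np.array([0.0, 1.0, 0.0])            # one-hot vector for task 2 of 3

# (a) Concatenation: the network receives the features plus the task identity.
features = np.random.randn(n_features)
net_in_concat = np.concatenate([features, task_id])

# (b) Multiplicative conditioning: stack the per-task features and let the
# one-hot vector zero out every task except the active one.
all_task_features = np.random.randn(n_tasks, n_features)
net_in_mult = (all_task_features * task_id[:, None]).reshape(-1)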

",4744,,4744,,4/10/2020 16:15,4/10/2020 16:15,How do I format task features with a one-hot task identification vector to ensure separate weight matrices for each task in multi-task RL?,,0,2,,,,CC BY-SA 4.0 20141,1,,,4/9/2020 21:33,,2,1024,"

I keep looking through the literature, but can't seem to find any information regarding the time complexity of the forward pass and back-propagation of the sequence-to-sequence RNN encoder-decoder model, with and without attention.

The paper Attention is All You Need by Vaswani et al. in 2017 states that the forward pass cost is $O(n^3)$, which makes sense to me (with 1 hidden layer). In terms of $X$, the input length, and $Y$, the output length, it then looks like $O(X^3 + Y^3)$, which I understand.

However, for training, it seems to me like one back-propagation is at worst $O(X^3 + Y^3)$, and we do $Y$ of them, so $O(Y(X^3 + Y^3))$.

This is the following diagram, where the green blocks are the hidden states, the red ones are the input text and the blue ones are output text.

If I were to add global attention, as introduced by Luong et al. in 2015, the attention adds an extra $X^2$ in there due to the attention multiplication, making the overall inference cost $O(X^3 + X^2 Y^3)$, and training even worse at $O(XY(X^3 + X^2 Y^3))$, since it needs to learn the attention weights too.

The following diagram shows the sequence-to-sequence model with attention, where $h$'s are the hidden states, $c$ is the context vector and $y$ is the output word, $Y$ of such output words and $X$ such inputs. This setup is described in the paper Effective Approaches to Attention-based Neural Machine Translation, by Luong et al. in 2015.

Is my intuition correct?

",35867,,2444,,10/10/2020 22:09,10/10/2020 22:09,What is the time complexity of the forward pass and back-propagation of the sequence-to-sequence model with and without attention?,,0,1,,,,CC BY-SA 4.0 20142,1,,,4/9/2020 23:21,,4,608,"

I am trying to do classification using NEAT-Python for the first time, and I am having difficulty getting a good accuracy rate. I tried the same problem with an ANN and was able to get a good accuracy rate (96%+), but NEAT-Python gives barely 40%.

Here's how I set up:

  • Problem: Train 100 probability values to predict classification (1-10)

  • Input and output setup: the inputs are the 100 probability values, and the output is 10 probability values associated with the 10 classes.

  • Activation: I applied ReLU for feedforward, then applied softmax

    • Fitness function: I used the log-likelihood (a rough sketch is shown right after this list). I was unsure about how to set up the fitness function. I also used the mean accuracy rate in the genome. Both gave similar results.
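
The rough sketch of the fitness function I have in mind is below; the dataset variables (X, y_onehot) are placeholders, and the neat-python calls follow its standard feed-forward examples (please double-check them against the library docs).

import numpy as np
import neat  # neat-python

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def eval_genomes(genomes, config):
    # X: (n_samples, 100) probability inputs, y_onehot: (n_samples, 10) targets (placeholders)
    for genome_id, genome in genomes:
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        log_lik = 0.0
        for xi, yi in zip(X, y_onehot):
            probs = softmax(np.array(net.activate(xi)))
            log_lik += np.log(probs[np.argmax(yi)] + 1e-12)
        genome.fitness = log_lik / len(X)  # higher (less negative) is better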

In terms of hyperparameters, I am trying various values and haven't had any luck with it. Today I am trying with an increase in population size and generations. I have another feature input that can be used.

Are there any resources that discuss how to handle mixed data for NEAT?

Any help is greatly appreciated.

",35845,,35845,,4/13/2020 16:52,4/13/2020 16:52,How to perform classification with NEAT-Python?,,0,5,,,,CC BY-SA 4.0 20143,1,,,4/10/2020 0:15,,1,49,"

I am a records manager and I am being asked if I recommend Office 365. I'm having a hard time making a recommendation because I am missing an essential piece of information: can Office 365 replace the manual process of placing records into categories based on organizational function? It is important that this is done accurately, because the category determines how long the records should be kept before they are irretrievably destroyed. James Lappin seems to say that yes, Office 365 is underwritten by Project Cortex, and it is capable of doing this.

My sense is that artificial intelligence is not yet capable of determining a conceptual category for records. For this to be true, a machine would have to replicate a complex human process: reading a free-text, free-form document; identifying the relevant pieces of information in the document, while ignoring others, to determine what the document is "about"; then taking the answer of what the document is "about" and matching it to a predefined set of major organizational activities.

Are there any AI experts who can comment on how realistic it is to expect Project Cortex to do this?

",35892,,,,,4/11/2020 0:38,Can artificial intelligence classify textual records?,,1,0,,,,CC BY-SA 4.0 20144,2,,20136,4/10/2020 0:18,,1,,"

I don't see any special characteristic in the problem you're posing. Any LSTM can handle multidimensional inputs (i.e. multiple features). You just need to prepare your data such that they have the shape [batch_size, time_steps, n_features], which is the format required by all main DL libraries (PyTorch, Keras and TensorFlow).

I linked below 2 tutorials that show how to implement an LSTM for part-of-speech tagging in Keras and PyTorch. This task is conceptually identical to what you want to achieve: use 2D inputs (i.e. embeddings) to predict the class (i.e. the POS tags) of each element of a sequence (i.e. every single word). For example:

# Input sample, 4 steps, encoded in 2D embeddings matrix
['we', 'had', 'crispy', 'fries']

# Output predictions, 4 labels
['PRP', 'VBD', 'JJ', 'NNS']

In your case, the predicted label would be 0, 1 or 2, and you would have to encode your data in a matrix of shape [n_batch, 600, 5]. The only thing you definitely want to pay attention to, since you're using temporal data, is to not shuffle your data at all before the training.
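
For example, a minimal Keras sketch under your shapes (160 samples, 600 steps, 5 features, 3 classes); the layer sizes and other hyper-parameters are arbitrary placeholders, not recommendations:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, TimeDistributed, Dense

model = Sequential([
    LSTM(64, return_sequences=True, input_shape=(600, 5)),
    TimeDistributed(Dense(3, activation='softmax')),  # one class per time step
])
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
# X_train: (n_samples, 600, 5), y_train: (n_samples, 600, 1) with integer labels 0/1/2
# model.fit(X_train, y_train, epochs=10, batch_size=16, shuffle=False)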

Keras tutorial: Build a POS tagger with an LSTM using Keras

Pytorch tutorial: LSTM’s in Pytorch

",34098,,,,,4/10/2020 0:18,,,,3,,,,CC BY-SA 4.0 20145,1,,,4/10/2020 0:19,,1,282,"

Lately, I have implemented DQN for Atari Breakout. Here is the code:

https://github.com/JeremieGauthier/AI_Exercices/blob/master/Atari_Breakout/DQN_Breakout.py

I have trained the agent for over 1500 episodes, but the training leveled off at around 5 as score. Could someone look at the code and point out a few things that should be corrected?

Actually, the average score is not going above 5 during training. Is there a way to improve the performance?

",35626,,35626,,4/10/2020 13:53,4/10/2020 13:53,Why isn't my DQN agent improving when trained on Atari Breakout?,,0,7,,,,CC BY-SA 4.0 20146,1,20147,,4/10/2020 2:36,,1,575,"

I see some papers use the term "natural image domain". I googled that but didn't find any explanation of it.

I guess I understand the normal meaning of "natural image", such as the images people take with their phones. The images in the ImageNet database are all natural images.

Is ""natural image domain"" a subfield of computer vision?

",35896,,2444,,4/10/2020 3:38,4/10/2020 12:54,"What is ""natural image domain""?",,1,1,,,,CC BY-SA 4.0 20147,2,,20146,4/10/2020 7:13,,0,,"

An image of a natural domain refers to any kind of image that has some variance in its structure, so it does not always present the same peculiarities.

For example, in a digit classification task, you can train your network on the MNIST dataset, which is a simple black & white set of images that are always well defined: the digit is in darker pixels, the background is always white, and the digits just have different shapes.

For the same task but in the natural image domain, the digits may vary in color, noise, and even position inside a single image. As you may imagine, this does not apply to digits only, but includes any image that was not specifically created or modified to be fed into a neural network.

So, to put it simply, a neural network trained for natural images is able to perform classification in any real-life situation or scenario, without the need to edit the image to adapt it to what the network was trained for (for example, changing the color or cropping so the object you want to recognize is exactly in the center).

Here a similar question that may fill some doubts you may have.

",35792,,2444,,4/10/2020 12:54,4/10/2020 12:54,,,,2,,,,CC BY-SA 4.0 20149,1,,,4/10/2020 9:39,,4,102,"

I implemented an LSTM with Keras to perform a word ordering task (given a syntactically unordered sentence, the goal is to label each word of the sentence with its right position in it). So, my dataset is composed of numerical vectors, and each numerical vector represents a word.

I train my model trying to learn the local order of a syntactic subtree composed of words that have syntactic relationships (for example, a subtree could be a set of three words in which the root is the verb and the children are the subject and object relations).

I padded each subtree to a length of 20, which is the maximum subtree length that I found in my dataset. With padding introduction, I inserted a lot of vectors composed of only zeros.

My initial dataset shape is (700000, 837), but knowing that Keras wants a 3D dataset, I reshaped it to (35000, 20, 837) and the same for my labels (from 700000 to (35000, 20)).

As loss function, I'm using the ListNet algorithm loss function, which takes a list of words and for each computes the probability of the element to be ranked in the first position (then ranking these scores, I obtain the predicted labels of each word).

The current implementation is the following:

model = tf.keras.Sequential()
model.add(LSTM(units=100, activation='tanh', return_sequences=True, input_shape=(timesteps, features)))
model.add(Dense(1, activation='sigmoid'))

model.summary()

model.compile(loss=listnet_loss, optimizer=keras.optimizers.Adam(learning_rate=0.00005, beta_1=0.9, beta_2=0.999, amsgrad=True), metrics=["accuracy"])

model.fit(training_dataset, training_dataset_labels, batch_size=1, epochs=number_of_epochs, workers=10, verbose=1, callbacks=[SaveModelCallback()])

And SaveModelCallback simply saves each model during training.

At the moment I obtain, at each epoch, very very similar results:

Epoch 21/50
39200/39200 [==============================] - 363s 9ms/step - loss: 2.5483 - accuracy: 0.8246
Epoch 22/50
39200/39200 [==============================] - 359s 9ms/step - loss: 2.5480 - accuracy: 0.8245
Epoch 23/50
39200/39200 [==============================] - 360s 9ms/step - loss: 2.5478 - accuracy: 0.8246

I have to questions:

  1. Could zero-padding affect learning in a negative way? And, if yes, how could we make the model not consider this padding?

  2. Is it a good model for what I have to do?

",33440,,2444,,4/10/2020 12:51,4/10/2020 12:51,Could zero-padding affect learning in a negative way?,,0,0,,,,CC BY-SA 4.0 20150,1,20155,,4/10/2020 9:46,,5,276,"

I know the most basic rudimentary theory on AI, and I want to delve into actual practical coding with AI and machine learning. I already know a decent bit of coding in C++ and I'm learning Python syntax now.

I think I want to start implementing artificial intelligence techniques for simple games (like snake or maybe chess, which isn't really a simple game, but I know a lot about it), and then move on to more complex methods and algorithms.

So, what are some resources (e.g. tutorials, guides, books, etc.) for coding some artificial intelligence techniques in the context of games?

",31156,,2444,,5/22/2020 18:08,5/22/2020 18:08,What are some resources for coding some artificial intelligence techniques in the context of games?,,1,2,,,,CC BY-SA 4.0 20155,2,,20150,4/10/2020 15:21,,2,,"

One of the simplest games you could solve with an AI technique is tic-tac-toe. To solve it, you could use minimax or alpha-beta pruning (an extension of minimax), which are basic but fundamental search techniques for two-player games, so minimax can be applied not only to tic-tac-toe but to any two-player game.

The notes CS 161 Recitation Notes - Minimax with Alpha-Beta Pruning provide a decent overview (with a concrete example) of alpha-beta pruning, which may be a little bit confusing at the beginning, but it is a relatively simple search technique that you can grasp with a few reading iterations (but take this with a grain of salt because the time and effort that it takes to learn something strongly depends on your knowledge and experience). There are several tutorials online that show how to implement minimax and alpha-beta pruning for tic-tac-toe. I could list them all, but I think it's better you look for them and choose your favorite. Maybe have a look at this implementation.
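
To give you an idea of how small the core of minimax is, here is a hedged sketch (the game object is a hypothetical interface with the methods used below, not code from the linked notes):

def minimax(game, state, maximizing):
    # Returns the value of a state assuming both players play optimally.
    if game.is_terminal(state):
        return game.utility(state)          # e.g. +1 win, -1 loss, 0 draw
    values = [minimax(game, game.result(state, a), not maximizing)
              for a in game.actions(state)]
    return max(values) if maximizing else min(values)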

Of course, when it comes to games, one cannot forget about reinforcement learning. I cannot currently list any good tutorials, but there are many resources online. You can even apply RL to tic-tac-toe, so this may be your next step. As someone mentioned in the comments, OpenAI's gym is an RL library for implementing RL agents, which can be used to solve many games, so you should definitely have a look at it. OpenAI's gym comes with a nice introduction, especially if you are already familiar with RL, which, of course, you probably should be before trying to use it. In principle, you can apply RL to many games, including chess, snake, etc.

There's also a book entitled Clever Algorithms: Nature-Inspired Programming Recipes by Jason Brownlee that describes numerous AI techniques, some of them could, in principle, be applied to games too. The book is concise and relatively clear. It also comes with the implementation (in Ruby, which is a language similar to Python) of all of the algorithms presented in it.

To conclude, although this site is not appropriate for recommendations, I recommend you first get familiar with fundamental search techniques, such as minimax and alpha-beta pruning, then you could start learning RL, which can be applied to many games. There are many resources online for both of them.

",2444,,2444,,4/10/2020 19:13,4/10/2020 19:13,,,,0,,,,CC BY-SA 4.0 20158,1,,,4/10/2020 22:07,,4,611,"

In chapter 4.1 of Sutton's book, the Bellman equation is turned into an update rule by simply changing its indices. How is this mathematically justified? I didn't quite get the intuition of why we are allowed to do that.

$$v_{\pi}(s) = \mathbb E_{\pi}[G_t|S_t=s]$$

$$ = \mathbb E_{\pi}[R_{t+1} + \gamma G_{t+1}|S_t=s]$$

$$= \mathbb E_{\pi}[R_{t+1} + \gamma v_{\pi}(S_{t+1})|S_t=s]$$

$$ = \sum_a \pi(a|s)\sum_{s',r} p(s',r|s,a)[r+ \gamma v_{\pi}(s')]$$

from which it goes to the update equation:

$$v_{k+1}(s) = \mathbb E_{\pi}[R_{t+1} + \gamma v_{k}(S_{t+1})|S_t=s]$$

$$=\sum_a \pi(a|s)\sum_{s',r} p(s',r|s,a)[r+ \gamma v_{k}(s')]$$

",34341,,2444,,12/10/2020 1:20,5/19/2022 18:26,Why can the Bellman equation be turned into an update rule?,,3,1,0,,,CC BY-SA 4.0 20159,1,,,4/10/2020 22:11,,1,48,"

The following image is a screenshot from a video tutorial that illustrates the concept of gradient descent algorithm with a 3D animation.

Do the numbers on the top of the balls pointed out by the red arrows represent the gradient?

",35896,,2444,,4/11/2020 23:36,4/11/2020 23:36,What do these numbers represent in this picture of a surface?,,1,9,,,,CC BY-SA 4.0 20160,1,20179,,4/10/2020 23:22,,2,457,"

Recently, I have completed Atari Breakout (https://arxiv.org/pdf/1312.5602.pdf) with DQN.

Similar to DQN, what are the most common deep reinforcement learning algorithms and models in 2020? It seems that DQN is outdated and policy gradients are preferred.

",35626,,35626,,4/11/2020 14:39,11/20/2020 18:57,What are the most common deep reinforcement learning algorithms and models apart from DQN?,,1,0,0,,,CC BY-SA 4.0 20161,1,,,4/10/2020 23:58,,1,28,"

Are there any training datasets using standard font text rather than handwritten ones?

I tried using the MNIST handwritten one on font based chars, but it didn't work well.

",35924,,2444,,4/11/2020 0:37,4/11/2020 0:37,Are there any training datasets using standard font text rather than hand written ones?,,0,2,,,,CC BY-SA 4.0 20162,2,,20143,4/11/2020 0:33,,1,,"

AI can categorize documents very accurately. It is not a new application but in the last year the accuracy of the underlying algorithms such as text classifiers, and language models in general, has significantly improved. There are applications of language models which now surpass human performance. Microsoft is one of the leaders in this area. For example, see their Turing Natural Language Generation (T-NLG) language model: Turing-NLG: A 17-billion-parameter language model by Microsoft. The T-NLG was developed in a research group called Project Turing which works on AI tools for Microsoft's products including Office-365. My guess is this same research group makes the AI for Project Cortex.

",5763,,5763,,4/11/2020 0:38,4/11/2020 0:38,,,,1,,,,CC BY-SA 4.0 20163,2,,20158,4/11/2020 1:10,,5,,"

Why are we allowed to convert the Bellman equations into update rules?

There is a simple reason for this: convergence. Chapter 4 of the same book mentions it. For example, in the case of policy evaluation, the produced sequence of estimates $\{v_k\}$ is guaranteed to converge to $v_\pi$ as $k$ (i.e. the number of iterations) goes to infinity. There are other RL algorithms that are also guaranteed to converge (e.g. tabular Q-learning).

To conclude, in many cases, the update rules of simple reinforcement learning (or dynamic programming) algorithms are very similar to their mathematical formalization because algorithms based on those update rules are often guaranteed to converge. However, note that many more advanced reinforcement learning algorithms (especially, the ones that use function approximators, such as neural networks, to represent the value functions or policies) are not guaranteed or known to converge.
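
For concreteness, here is a minimal sketch of iterative policy evaluation (the data structures are my own assumptions: P[s][a] is a list of (probability, next_state, reward) tuples and pi[s][a] the policy probabilities):

import numpy as np

def policy_evaluation(P, pi, n_states, gamma=0.9, theta=1e-6):
    V = np.zeros(n_states)
    while True:
        delta = 0.0
        for s in range(n_states):
            new_v = sum(pi[s][a] * sum(p * (r + gamma * V[s2])
                                       for p, s2, r in outcomes)
                        for a, outcomes in P[s].items())
            delta = max(delta, abs(new_v - V[s]))
            V[s] = new_v
        if delta < theta:   # the sequence of estimates has (numerically) converged
            return V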

",2444,,2444,,4/11/2020 3:50,4/11/2020 3:50,,,,3,,,,CC BY-SA 4.0 20166,1,20170,,4/11/2020 6:42,,1,265,"

I came across the formula for Upper Confidence Bound Action Selection (while studying multi-armed bandit problem), which looks like:

$$ A_t \dot{=} \operatorname{argmax}_a \left[ Q_t(a) + c \sqrt{ \frac{\ln t}{N_t(a)} } \right] $$

Although I understand what the second term in the summation actually means, I am not able to figure out how and from where the exact expression came: what is the log doing there? What effect does $c$ have? And why a square root?

",35926,,2444,,6/16/2020 11:46,10/23/2020 10:08,How do we reach at the formula for UCB action-selection in multi-armed bandit problem?,,1,0,,,,CC BY-SA 4.0 20167,1,,,4/11/2020 7:22,,4,236,"

I am trying to understand the results of the paper Visualizing and Understanding Convolutional Networks, in particular the following image:

What are these 3x3 blocks and their 9 cells representing?

From my understanding, each 3x3 block of the i-th layer corresponds to a randomly chosen feature map in that layer (e.g. for layer 1 they randomly chose 9 feature maps, for layer 2 16 feature maps, etc.). On the left part (grayish images), the j-th 3x3 block shows 9 visualizations obtained by mapping the top-9 activations (single values) of that particular feature map to the "pixel space" (using a deconvolutional network). On the right part, the j-th block shows the 9 patches of input images corresponding to the top-9 activations (e.g. in the first layer and i-th feature map, the j-th image patch is the local region of the input image which is seen by the j-th neuron of that feature map). Is my understanding correct?

However, it's not entirely clear to me how the top-9 activations are chosen. It seems that, for each layer and each feature map, an activation is picked from a different input image (that's why we see e.g. different persons in layer 3, row 1, col 1, and different cars in layer 3, row 2, col 2). So, within each block, the top-9 activations are obtained from 9 different images (but images of the same class) of the entire dataset (though, in principle, more than one activation could come from the same image).

",31657,,31657,,4/15/2020 20:17,12/19/2022 12:00,"Understanding the results of ""Visualizing and Understanding Convolutional Networks""",,0,0,,,,CC BY-SA 4.0 20168,1,,,4/11/2020 8:54,,3,134,"

What does the normalization of the inputs mean in the context of PPO? At each time step of an episode, I only know the values of this time step and of the previous ones, if I keep track of them. This means that, for each observation and for each reward at each time step, I will do:

value = (value - mean) / std

before passing them to the NN, right? Specifically, I compute the mean and std by keeping track of the values for the whole episode, and at each time step I add the new values to an array. Is this a valid approach?
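
For what it's worth, here is a minimal sketch of the kind of running normalisation I describe (Welford-style running mean/variance, so the whole episode does not need to be stored; names are mine):

import numpy as np

class RunningNorm:
    def __init__(self, eps=1e-8):
        self.n, self.mean, self.m2, self.eps = 0, 0.0, 0.0, eps

    def update(self, x):
        # Welford's online update of the mean and (unnormalised) variance.
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)

    def normalize(self, x):
        std = np.sqrt(self.m2 / max(self.n, 1)) + self.eps
        return (x - self.mean) / std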

Also, how can I handle negative rewards, so that they become positive?

",35930,,2444,,4/11/2020 12:40,4/11/2020 12:40,How does normalization of the inputs work in the context of PPO?,,0,0,,,,CC BY-SA 4.0 20169,1,,,4/11/2020 8:57,,1,89,"

The objective function of an SVM is the following:

$$J(\mathbf{w}, b)=C \sum_{i=1}^{m} \max \left(0,1-y^{(i)}\left(\mathbf{w}^{t} \cdot \mathbf{x}^{(i)}+b\right)\right)+\frac{1}{2} \mathbf{w}^{t} \cdot \mathbf{w}$$ where

  • $\mathbf{w}$ is the model's feature weights and $b$ is its bias parameter
  • $\mathbf{x}^{(i)}$ is the $i^\text{th}$ training instance's feature vector
  • $y^{(i)}$ is the target class ($-1$ or $1$) for the $i^\text{th}$ instance
  • $m$ is the number of training instances
  • $C$ is the regularisation hyper-parameter

And if I was to use a kernel, this would become:

$$J(\mathbf{w}, b)=C \sum_{i=1}^{m} \max \left(0,1-y^{(i)}\left(\mathbf{u}^{t} \cdot \mathbf{K}^{(i)}+b\right)\right)+\frac{1}{2} \mathbf{u}^{t} \cdot \mathbf{K} \cdot \mathbf{u}$$

where the kernel can be the Gaussian kernel:

$$K(\mathbf{u}, \mathbf{v})=e^{-\gamma\|\mathbf{u}-\mathbf{v}\|^{2}}$$

How would I go about finding its gradient with respect to the input?

I need to know this as to then apply this to a larger problem of a CNN with its last layer being this SVM, so I can then find the gradient of this output wrt the input of the CNN.

",29877,,2444,,6/25/2020 12:05,6/25/2020 12:05,What is the gradient of a non-linear SVM with respect to the input?,,0,5,,,,CC BY-SA 4.0 20170,2,,20166,4/11/2020 9:15,,3,,"

Here is an intuitive description/explanation.

$c$ is there for a trade-off between exploration and exploitation. If $c=0$, then you only consider $Q_t(a)$ (no exploration). If $c \rightarrow \infty$, then you only consider the exploration term.
$\frac{\ln t}{N_t(a)}$ is there to balance out the exploration term. If you consider a simple case where you only have one action (then it wouldn't make sense to explore, you could always pick that action, but let's pretend it does), then as $t \rightarrow \infty$, because $\ln t$ has sublinear growth, \begin{equation} \frac{\ln t}{N_t} \rightarrow 0 \end{equation} So, after you have picked an action infinitely many times, the exploration term will completely diminish, i.e. you already know a lot about what that action does. If you picked a numerator that doesn't have sublinear growth, then as $t \rightarrow \infty$ the exploration term would not diminish, so you would always have a chance to explore, and the exploration term could "overpower" the action-value term if $Q_t$ is very small, even after you have picked an action infinitely many times, which is not desired.

A similar thing happens with multiple actions: $\ln t$ will make sure the exploration term $\rightarrow 0$ if you have picked an action many times, but it's still better than a constant term $K/N_t(a)$, where $K$ is some constant, because a constant numerator can make the term diminish too fast.

With $\ln t$ you will also not stop exploring completely if you haven't picked some action in a long time, because $\ln t$ will keep growing and $N_t(a)$ will remain the same, so their fraction will increase with time, which is useful in non-stationary environments.

The square root is also there probably to balance out the magnitude of exploration term.
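
A minimal sketch of the resulting action selection (my own code; Q and N are the running action-value estimates and pull counts):

import numpy as np

def ucb_action(Q, N, t, c=2.0):
    Q, N = np.asarray(Q, dtype=float), np.asarray(N, dtype=float)
    # Untried actions get an infinite bonus, so each arm is pulled at least once.
    bonus = np.where(N > 0, c * np.sqrt(np.log(t) / np.maximum(N, 1)), np.inf)
    return int(np.argmax(Q + bonus))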

You can also see this answer. It has couple of links to some papers for a more mathematical description.

",20339,,2444,,10/23/2020 10:08,10/23/2020 10:08,,,,0,,,,CC BY-SA 4.0 20172,1,,,4/11/2020 9:39,,3,197,"

My problem is that every time I am trying to train my PPO agent I get NaN values after a while. The diagnostic that I get is the following:

ep=   3| t=  144/450000|  0.8 sec| rew=31.3005| ploss=-2.5e-02| vfloss=4.7e+01| kl=1.3e-03| ent=8.5e+00| clipfrac=0.0e+00
ep=   6| t=  288/450000|  1.1 sec| rew=31.2144| ploss=-2.2e-02| vfloss=4.1e+01| kl=1.3e-03| ent=8.5e+00| clipfrac=0.0e+00
ep=   9| t=  432/450000|  1.4 sec| rew=28.2668| ploss=-2.9e-02| vfloss=3.5e+01| kl=1.6e-03| ent=8.5e+00| clipfrac=0.0e+00
ep=  12| t=  576/450000|  1.7 sec| rew=28.2910| ploss=-2.7e-02| vfloss=3.6e+01| kl=1.7e-03| ent=8.5e+00| clipfrac=0.0e+00
ep=  15| t=  720/450000|  2.0 sec| rew=27.4817| ploss=-2.3e-02| vfloss=3.0e+01| kl=1.8e-03| ent=8.5e+00| clipfrac=0.0e+00
ep=  18| t=  864/450000|  2.3 sec| rew=29.8415| ploss=-4.5e-02| vfloss=3.4e+01| kl=4.0e-03| ent=8.5e+00| clipfrac=2.8e-02
ep=  21| t= 1008/450000|  2.6 sec| rew=29.1447| ploss=-2.7e-02| vfloss=2.7e+01| kl=2.0e-03| ent=8.5e+00| clipfrac=6.9e-03
ep=  24| t= 1152/450000|  3.0 sec| rew=30.2001| ploss=-3.5e-02| vfloss=2.8e+01| kl=1.7e-03| ent=8.5e+00| clipfrac=6.9e-03
ep=  27| t= 1296/450000|  3.3 sec| rew=31.4069| ploss=-2.9e-02| vfloss=3.7e+01| kl=3.0e-03| ent=8.5e+00| clipfrac=2.1e-02
ep=  30| t= 1440/450000|  3.6 sec| rew=27.7963| ploss=-4.6e-02| vfloss=2.3e+01| kl=7.3e-03| ent=8.5e+00| clipfrac=1.7e-01
ep=  33| t= 1584/450000|  3.9 sec| rew=30.8561| ploss=-5.9e-02| vfloss=2.5e+01| kl=9.6e-03| ent=8.5e+00| clipfrac=2.6e-01
ep=  36| t= 1728/450000|  4.2 sec| rew=27.3002| ploss=-6.9e-02| vfloss=2.2e+01| kl=1.3e-02| ent=8.5e+00| clipfrac=3.1e-01
ep=  39| t= 1872/450000|  4.5 sec| rew=28.0270| ploss=-5.6e-02| vfloss=2.1e+01| kl=8.9e-03| ent=8.5e+00| clipfrac=2.0e-01
ep=  42| t= 2016/450000|  4.9 sec| rew=28.0624| ploss=-5.8e-02| vfloss=2.0e+01| kl=7.5e-03| ent=8.5e+00| clipfrac=2.4e-01
ep=  45| t= 2160/450000|  5.2 sec| rew=28.6224| ploss=-8.4e-02| vfloss=2.3e+01| kl=7.2e-03| ent=8.5e+00| clipfrac=2.0e-01
ep=  48| t= 2304/450000|  5.5 sec| rew=32.3889| ploss=-4.3e-02| vfloss=2.6e+01| kl=7.1e-03| ent=8.5e+00| clipfrac=2.9e-01
ep=  51| t= 2448/450000|  5.8 sec| rew=31.4241| ploss=-1.0e-01| vfloss=2.7e+01| kl=7.0e-03| ent=8.5e+00| clipfrac=2.1e-01
ep=  54| t= 2592/450000|  6.1 sec| rew=33.4760| ploss=-5.1e-02| vfloss=2.5e+01| kl=7.3e-03| ent=8.5e+00| clipfrac=2.4e-01
ep=  57| t= 2736/450000|  6.4 sec| rew=31.0780| ploss=-8.8e-02| vfloss=2.3e+01| kl=6.9e-03| ent=8.5e+00| clipfrac=3.0e-01
ep=  60| t= 2880/450000|  6.7 sec| rew=34.1286| ploss=-6.9e-02| vfloss=2.7e+01| kl=7.6e-03| ent=8.5e+00| clipfrac=3.1e-01
ep=  63| t= 3024/450000|  7.1 sec| rew=31.0017| ploss=-6.1e-02| vfloss=2.5e+01| kl=1.4e-02| ent=8.5e+00| clipfrac=3.7e-01
ep=  66| t= 3168/450000|  7.4 sec| rew=32.3697| ploss=-1.1e-01| vfloss=2.2e+01| kl=1.2e-02| ent=8.5e+00| clipfrac=4.6e-01
ep=  69| t= 3312/450000|  7.7 sec| rew=31.4455| ploss=-7.4e-02| vfloss=2.4e+01| kl=8.4e-03| ent=8.5e+00| clipfrac=3.7e-01
ep=  72| t= 3456/450000|  8.0 sec| rew=32.1896| ploss=-9.2e-02| vfloss=2.0e+01| kl=1.3e-02| ent=8.4e+00| clipfrac=4.1e-01
ep=  75| t= 3600/450000|  8.3 sec| rew=31.3721| ploss=-9.4e-02| vfloss=2.4e+01| kl=1.4e-02| ent=8.4e+00| clipfrac=4.2e-01
ep=  78| t= 3744/450000|  8.6 sec| rew=35.5718| ploss=-1.0e-01| vfloss=3.0e+01| kl=1.0e-02| ent=8.4e+00| clipfrac=4.6e-01
ep=  81| t= 3888/450000|  9.0 sec| rew=32.2289| ploss=-1.3e-01| vfloss=2.9e+01| kl=1.6e-02| ent=8.4e+00| clipfrac=5.5e-01
ep=  84| t= 4032/450000|  9.3 sec| rew=31.7656| ploss=-1.0e-01| vfloss=2.3e+01| kl=1.3e-02| ent=8.4e+00| clipfrac=4.4e-01
ep=  87| t= 4176/450000|  9.6 sec| rew=35.4555| ploss=-8.8e-02| vfloss=3.3e+01| kl=5.8e-03| ent=8.4e+00| clipfrac=3.1e-01
ep=  90| t= 4320/450000|  9.9 sec| rew=33.2766| ploss=-1.2e-01| vfloss=2.4e+01| kl=1.5e-02| ent=8.4e+00| clipfrac=5.7e-01
ep=  93| t= 4464/450000| 10.2 sec| rew=32.5218| ploss=-1.1e-01| vfloss=2.4e+01| kl=1.8e-02| ent=8.4e+00| clipfrac=5.6e-01
ep=  96| t= 4608/450000| 10.5 sec| rew=34.7137| ploss=-9.7e-02| vfloss=2.5e+01| kl=1.5e-02| ent=8.4e+00| clipfrac=4.3e-01
ep=  99| t= 4752/450000| 10.9 sec| rew=35.3797| ploss=-8.2e-02| vfloss=2.5e+01| kl=1.6e-02| ent=8.4e+00| clipfrac=4.9e-01
ep= 102| t= 4896/450000| 11.2 sec| rew=nan| ploss=nan| vfloss=nan| kl=nan| ent=nan| clipfrac=0.0e+00
C:\Users\ppo.py:154: RuntimeWarning: invalid value encountered in greater
  adv_nrm = (adv_nrm - adv_nrm.mean()) / max(1.e-8,adv_nrm.std()) # standardized advantage function estimate
ep= 105| t= 5040/450000| 11.5 sec| rew=nan| ploss=nan| vfloss=nan| kl=nan| ent=nan| clipfrac=0.0e+00

Any ideas why this arises?

",35930,,,,,4/11/2020 9:39,NaNs after a while in training of PPO,,0,3,,,,CC BY-SA 4.0 20174,2,,20159,4/11/2020 10:57,,1,,"

It represents the value of the loss function $J(x_1, x_2; \theta)$; the valley has value 0 in the video. You can see that the lowest ball, with value 3.13, is on a steep point with a high gradient, so it's not the gradient.

",34315,,,,,4/11/2020 10:57,,,,0,,,,CC BY-SA 4.0 20175,1,,,4/11/2020 11:31,,2,38,"

I am trying to perform a white-box attack on a model.

Would it be possible to simply use the numerical gradient of the output wrt input directly rather than computing each subgradient of the network analytically? Would this (1) work and (2) actually be a white box attack?

As I would not be using a different model to 'mimic' the results, but instead be using the same model to get the outputs, am I right in thinking that this would still be a white-box attack?

",29877,,2444,,12/9/2020 10:56,12/9/2020 10:56,"To perform a white-box adversarial attack, would the use of a numerical gradient suffice?",,0,1,,,,CC BY-SA 4.0 20176,1,,,4/11/2020 12:53,,15,5651,"

I am watching the video Attention Is All You Need by Yannic Kilcher.

My question is: what is the intuition behind the dot product attention?

$$A(q,K, V) = \sum_i\frac{e^{q.k_i}}{\sum_j e^{q.k_j}} v_i$$

becomes:

$$A(Q,K, V) = \text{softmax}(QK^T)V$$

",9863,,2444,user9947,11/30/2021 15:39,12/5/2021 8:34,What is the intuition behind the dot product attention?,,1,0,,,,CC BY-SA 4.0 20177,2,,20158,4/11/2020 13:55,,0,,"

You're asking why the finite-horizon policy evaluation converges to the infinite-horizon one, right?

Since the total reward is bounded (by the discount factor), you know that you can make your finite-horizon policy evaluation get arbitrarily close to it in a finite number of steps.

People praise Barto's book, but I find it annoying to read, as he's not formal enough with the mathematics.

",32390,,,,,4/11/2020 13:55,,,,1,,,,CC BY-SA 4.0 20178,2,,20176,4/11/2020 14:49,,21,,"

Let's start with a bit of notation and a couple of important clarifications.

$\mathbf{Q}$ refers to the query vectors matrix, $q_i$ being a single query vector associated with a single input word.

$\mathbf{V}$ refers to the values vectors matrix, $v_i$ being a single value vector associated with a single input word.

$\mathbf{K}$ refers to the keys vectors matrix, $k_i$ being a single key vector associated with a single input word.

Where do these matrices come from? Something that is not stressed enough in a lot of tutorials is that these matrices are the result of a matrix product between the input embeddings and 3 matrices of trained weights: $\mathbf{W_q}$, $\mathbf{W_v}$, $\mathbf{W_k}$.

The fact that these three matrices are learned during training explains why the query, value and key vectors end up being different despite the identical input sequence of embeddings. It also explains why it makes sense to talk about multi-head attention. Performing multiple attention steps on the same sentence produces different results, because, for each attention 'head', new $\mathbf{W_q}$, $\mathbf{W_v}$, $\mathbf{W_k}$ are randomly initialised.

Another important aspect that is not stressed enough is that, for the encoder's and decoder's first attention layers, all three matrices come from the previous layer (either the input or the previous attention layer), but for the encoder/decoder attention layer, the $\mathbf{Q}$ matrix comes from the previous decoder layer, whereas the $\mathbf{V}$ and $\mathbf{K}$ matrices come from the encoder. And this is a crucial step to explain how the representations of the two languages in an encoder get mixed together.

Once the three matrices are computed, the transformer moves on to the calculation of the dot product between query and key vectors. The dot product is used to compute a sort of similarity score between the query and key vectors. Indeed, the authors used the names query, key and value to indicate that what they propose is similar to what is done in information retrieval. For example, in question answering, usually, given a query, you want to retrieve the closest sentence in meaning among all possible answers, and this is done by computing the similarity between sentences (question vs possible answers).

Of course, here, the situation is not exactly the same, but the guy who did the video you linked did a great job in explaining what happens during the attention computation (the two equations you wrote are exactly the same in vector and matrix notation and represent these passages; a small numerical sketch also follows the list below):

  • closer query and key vectors will have higher dot products.
  • applying the softmax will normalise the dot product scores between 0 and 1.
  • multiplying the softmax results to the value vectors will push down close to zero all value vectors for words that had a low dot product score between query and key vector.
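
Here is a tiny numerical sketch of those three steps (random toy matrices; no learned $\mathbf{W_q}$, $\mathbf{W_v}$, $\mathbf{W_k}$ and no $1/\sqrt{d_k}$ scaling, just the core operation):

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

n, d = 4, 8                    # 4 tokens, vectors of dimension 8
Q, K, V = (np.random.randn(n, d) for _ in range(3))

scores = softmax(Q @ K.T)      # normalised query-key dot products, each row sums to 1
out = scores @ V               # value vectors weighted by those scores
print(scores.round(2), out.shape)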

In the paper, the authors explain the attention mechanisms saying that the purpose is to determine which words of a sentence the transformer should focus on. I personally prefer to think of attention as a sort of coreference resolution step. The reason why I think so is the following image (taken from this presentation by the original authors).

This image basically shows the result of the attention computation (at a specific layer that they don't mention). Thicker lines connecting words mean bigger values in the dot product between those words' query and key vectors, which basically means that only those words' value vectors will be passed on for further processing to the next attention layer. But, please, note that some words are actually related even if not similar at all; for example, 'Law' and 'The' are not similar, they are simply related to each other in these specific sentences (that's why I like to think of attention as a kind of coreference resolution). Computing similarities between embeddings would never provide information about this relationship in a sentence; the only reason why transformers learn these relationships is the presence of the trained matrices $\mathbf{W_q}$, $\mathbf{W_v}$, $\mathbf{W_k}$ (plus the presence of positional embeddings).

",34098,,2444,,12/5/2021 8:34,12/5/2021 8:34,,,,7,,,,CC BY-SA 4.0 20179,2,,20160,4/11/2020 14:59,,4,,"

There are several common deep reinforcement algorithms and models apart from deep Q networks (or deep Q learning). I will list some of them below (along with a link to the paper that introduced them), but note that some of these may not be state-of-the-art (at least, not anymore, and it's likely that all of these will be replaced in the future).

For an exhaustive overview of deep RL algorithms and models, maybe take a look at this pre-print Deep Reinforcement Learning (2018) by Yuxi Li.

",2444,,2444,,11/20/2020 18:57,11/20/2020 18:57,,,,0,,,,CC BY-SA 4.0 20180,1,20181,,4/12/2020 0:02,,2,493,"

Although no artificial general intelligence (AGI) has yet been created, there are probably already some courses on the topic. So, what are some online (preferably free) courses on AGI?

",2444,,2444,,5/22/2020 18:06,7/16/2021 12:56,What are some online courses on artificial general intelligence?,,1,0,,,,CC BY-SA 4.0 20181,2,,20180,4/12/2020 0:02,,10,,"

As far as I know, no AGI system has yet been created, so that's why there aren't yet many courses on AGI. However, there are a few courses that attempt to address AGI as the main topic but from different perspectives. Below, I will mention the ones that I found and partially followed, and give some info about them.

MIT 6.S099: Artificial General Intelligence

This is organized by Lex Fridman. It is a series of lessons and talks primarily given by a diverse set of guest speakers, such as

  • Josh Tenenbaum (researcher and professor in computational cognitive science),
  • Nate Derbinsky (who gives a lesson on cognitive architectures, Soar, etc.),
  • Stephen Wolfram (creator of Mathematica, Wolfram Alpha and the Wolfram Language; he talks about his work throughout the years, especially, the development of Wolfram Alpha and Language),
  • Marc Raibert (CEO of Boston Dynamics, who gives a lesson on his work at Boston Dynamics and the robots they have developed),
  • Lisa Feldman Barrett (professor of psychology, who gives a very insightful lesson on emotions and feelings, with possibly different perspectives),
  • Rosalind Picard (a professor at MIT, director of the Affective Computing Research Group at the MIT Media Lab, and co-founder of two companies, Affectiva and Empatica; in her talk with Lex, she talks about affective computing and other interesting concepts and issues)
  • Marcus Hutter (inventor of a mathematical theory of general intelligence, AIXI; in his talk with Lex, he talks about Occam's razor, Solomonoff induction, Kolmogorov complexity, the definition of intelligence, AIXI, rewards, bounded rationality and consciousness)

Knowledge-Based AI: Cognitive Systems

It's taught by Ashok Goel and David Joyner. As the name of the course suggests, it focuses on knowledge-based AI.

The Society of Mind

This course is taught by Marvin Minsky, who had also written a book and developed a theory of natural intelligence with the same name.

CS 294-149: Safety and Control for Artificial General Intelligence (Fall 2018)

I don't think that there are (free) recorded video lectures, but, in the lecture schedule section of the course, you have links to references that they use during the course (e.g. this paper), which seems to focus on safety and control aspects of future AGI systems. See also these notes.

General Theory of General Intelligence

This is not really a course, but it's a series of videos by Ben Goertzel, who summarises some of the topics presented in a paper that the same Ben Goertzel wrote about AGI.

",2444,,2444,,7/16/2021 12:56,7/16/2021 12:56,,,,0,,,,CC BY-SA 4.0 20183,1,20205,,4/12/2020 3:23,,1,48,"

Simplified: What is it called in AI when a program is designed to make ""x in the style of y"", i.e. when it trains off of two types of sources in order to make a thing from source one, informed by features from source two? For example, if a network made up of two smaller networks were to take sheet music of a specific compositional style in network A and audio samples from a certain genre in network B and, through an interface, creates music from a certain genre in a certain compositional style; the ""is"" comes from source One, the ""seems"" comes from source Two.

For more coarse and obvious examples:

  • ""Compose synthpop in the style of Beethoven""
  • ""Draw impressionism in the style of Mondrian""
  • ""Generate casserole recipes using only ingredients most likely to fluctuate in price given current market data""
  • ""Sketch baseballs that look like they're made of espresso foam""
",35622,,2444,,4/12/2020 20:02,4/12/2020 20:02,"What is it called in AI when a program is designed to make ""x in the style of y""?",,1,1,,,,CC BY-SA 4.0 20184,1,,,4/12/2020 3:26,,3,56,"

I have an image of some nanoparticles that was taken with a Scanning Electron Microscope (SEM), attached here. I want to obtain the center point coordinates (x, y) for each particle. Doing it by hand is very tedious. Since I just started to learn machine learning, got introduced to artificial neural networks, and roughly understand that they are helpful with image classification, I am curious if I can use these tools to achieve my goal.

I found this article where they discuss a similar kind of work, but I am curious if you have seen anything practical, or if you can give me some steps on where and how to start; that would be really helpful.
Any guidance is appreciated.

",35951,,,,,4/12/2020 6:29,Can neural network help me with detecting center coordinates of particles in an image?,,1,0,,,,CC BY-SA 4.0 20185,1,,,4/12/2020 3:41,,3,303,"

How can I cluster the data frame below with several features and observations? And how would I go about determining the quality of those clusters? Is k-NN appropriate for this?

id     Name             Gender   Dob    Age  Address
1   MUHAMMAD JALIL      Male    1987    33   Chittagong
1   MUHAMMAD JALIL      Male    1987    33   Chittagong
2   MUHAMMAD JALIL      Female  1996    24   Rangpur
2   MRS. JEBA           Female  1996    24   Rangpur
3   MR. A. JALIL        Male    1987    33   Sirajganj
3   MR. A. JALIL        Male    1987    33   Sirajganj
3   MD. A. JALIL        Male    1987    33   Sirajganj
4   MISS. JEBA          Female  1996    24   Rangpur
4   PROF. JEBA          Female  1996    24   Rangpur
1   MD. A. JALIL        Male    1987    33   Chittagong
1   MUHAMMAD A. JALIL   Male    1987    33   Chittagong
",35941,,2444,,4/12/2020 15:53,4/21/2020 15:26,How can I cluster this data frame with several features and observations?,,6,1,,,,CC BY-SA 4.0 20186,1,,,4/12/2020 3:44,,1,243,"

Let us confine ourselves to the case where we have a $n$ dimensional input and a $+1$ or $-1$ output. It can be shown that:

For every $n$, there exists a dense NN of depth 2, such that it contains all functions from $\{\pm 1\}^n$ to $\{\pm 1\}$ (given sign activation functions and some other very simple assumptions).

Check section 20.3 for the proof.

So, if a neural net can approximate any function, then it has an infinite $\mathcal{VC}$ dimension (considering the $n$-dimensional set of points as our universe).

Thus, it can realize all types of functions (or its hypothesis set contains all functions), and hence cannot have prior knowledge (in the sense used in the No Free Lunch theorem).

Are my deductions correct? Or did I make wrong assumptions? Are there actually any prior beliefs in a neural network that I am missing?

A detailed explanation would be nice.

",,user9947,,user9947,1/22/2021 15:44,1/22/2021 15:44,"If a neural network is a universal function approximator, can it have any prior beliefs?",,1,0,,,,CC BY-SA 4.0 20187,2,,20184,4/12/2020 6:29,,1,,"

This is a very hard problem: you have many overlapping points with objects that aren't completely round. I'm not very knowledgeable on CV, but I suspect you will find it very challenging.

I would say a handcrafted detection algorithm would probably be easier, something like an edge detector that fits circles to arcs and labels the points. But it's still going to be nontrivial, and maybe impossible, to get it working with high accuracy.

",32390,,,,,4/12/2020 6:29,,,,0,,,,CC BY-SA 4.0 20188,1,,,4/12/2020 7:20,,0,391,"

I am a novice in AI, and I would like to build a chatbot to predict diseases using patient narration as input. Initially, I simply want to train my chatbot on 1 disease only, and, once this initial milestone is accomplished, I want to train it on other diseases.

I want to know whether I should build and train my model first and then move towards building a chatbot, or whether I should create a chatbot first and then train it on a disease.

Also, please justify which approach is better and why.

",34306,,,,,4/12/2020 12:11,What is the best approach to build a self-learning AI chatbot?,,1,0,,,,CC BY-SA 4.0 20189,1,20197,,4/12/2020 10:08,,2,119,"

I believe I understand the reason why on-policy methods cannot reuse trajectories collected from earlier policies: the trajectory distribution changes with the policy, and the policy gradient is derived as an expectation over these trajectories.

Doesn't the following intuition from the OpenAI Vanilla Policy Gradient description indeed propose that learning from prior experience should still be possible?

The key idea underlying policy gradients is to push up the probabilities of actions that lead to higher return, and push down the probabilities of actions that lead to lower return.

The goal is to change the probabilities of actions. Actions sampled from previous policies are still possible under the current one.

I see that we cannot reuse the previous actions to estimate the policy gradient. But couldn't we update the policy network with previous trajectories using supervised learning? The labels for the actions would be between 0 and 1 based on how good an action was. In the simplest case, just 1 for good actions and 0 for bad ones. The loss could be a simple sum of squared differences with a regularization term.

Why is that not used/possible? What am I missing?

",35821,,35821,,4/15/2020 14:08,4/15/2020 14:08,Could we update the policy network with previous trajectories using supervised learning?,,1,0,,,,CC BY-SA 4.0 20190,1,,,4/12/2020 10:12,,2,149,"

I am looking for datasets that are used as a testing standard for fully connected neural networks (FCNNs). For example, in image recognition with CNNs, the CIFAR datasets are used in most of the papers, but I can't find anything like that for FCNNs.

",31324,,2444,,4/12/2020 14:43,4/14/2020 16:52,What are standard datasets for fully connected neural networks?,,2,0,,,,CC BY-SA 4.0 20191,1,20242,,4/12/2020 11:05,,3,59,"

I watched Andrew Ng's Deep Learning course, and he said that we should initialise the parameter w with small values, like:

parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1]) ** 0.001 

But in the last application assignment, they chose another way:

layers_dims = [12288, 20, 7, 5, 1]
def initialize_parameters_deep(layer_dims):
    np.random.seed(3)
    parameters = {}
    L = len(layer_dims)

    for l in range(1, L):
        parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1]) / np.sqrt(layer_dims[l - 1])
        parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
        assert (parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l - 1]))
        assert (parameters['b' + str(l)].shape == (layer_dims[l], 1))
    return parameters

The result of this approach is very good, but if I initialise w the first way shown above, I only get 34% accuracy!

Can you explain why?

",35958,,,,,4/14/2020 3:34,How to choosing the random value for parameter w in deep learning network?,,1,1,,,,CC BY-SA 4.0 20192,2,,18590,4/12/2020 11:41,,3,,"

Whenever you are building an ML model, don't take accuracy too seriously (a mistake made by Netflix that cost them a lot); you should try to get hit scores, as they will tell you how many times your model worked for real-world users. However, if your model must measure accuracy, try it with the RMSE score, as it will penalise you more for being further off the line. Here is the link for more information on it: RMSE. It's hard to tell whether it's overfitting or underfitting, as your graph is vague (for example, what do the graph lines represent?). However, you can address underfitting with the following steps:

  1. Increase the size or number of parameters in the ML model.
  2. Increase the complexity or type of the model.
  3. Increase the training time until the cost function is minimised.

For overfitting, you can try regularization methods: weight decay provides an easy way to control overfitting for large neural network models. A modern recommendation for regularization is to use early stopping with dropout and a weight constraint.

",25685,,,,,4/12/2020 11:41,,,,1,,,,CC BY-SA 4.0 20193,1,20240,,4/12/2020 12:04,,3,1207,"

I have a labeled dataset composed of 3000 data points. Its single feature is the price of the house, and its label is the number of bedrooms.

Which classifier would be a good choice to classify these data?

",26472,,26472,,4/14/2020 7:18,4/21/2020 11:29,Which classifier should I use for a dataset with one feature?,,5,2,,,,CC BY-SA 4.0 20194,1,,,4/12/2020 12:07,,1,68,"

I am trying to optimize the cost function calculation in regression analysis using a non-matrix multiplication based approach.

More specifically, I have a point $x = (1, 1, 2, 3)$, to which I want to apply a linear transformation $n$ times. If the transformation is denoted by a $4 \times 4$ matrix $A$, then the final transformation would be given by $A^n * x$.

Given that matrix multiplication can be computationally expensive, is there a way we can speed up the computation, assuming we would need to run multiple iterations of this simulation?

",5634,,2444,,4/13/2020 13:02,9/10/2020 14:01,Is there any way to apply linear transformations on a vector other than matrix multiplication?,,1,1,,,,CC BY-SA 4.0 20195,2,,20188,4/12/2020 12:11,,2,,"

The former: build and train a model first, and then think about the user interface.

Effectively, a chatbot is a user interface to your model. If you run it 'off-line' on input text and it works, then you have achieved your goal without the added complexity of driving a conversation (which is harder than one might think).

Also, building an 'abstract' chatbot devoid of any content is going to be harder. What should that chatbot talk about?

From my own experience (I work in conversational AI), you might not succeed in building a decent chatbot, especially as a novice. But you might be able to train a model to identify diseases from textual input. So if you do that first, you have at least got something! Especially if that is your main reason for the project in the first place.

",2193,,,,,4/12/2020 12:11,,,,2,,,,CC BY-SA 4.0 20197,2,,20189,4/12/2020 12:27,,2,,"

You cannot really do that, because you have no way of knowing how good the action really is in order to make reasonable labels for supervised learning (that's the whole reason why we need reinforcement learning). The only way to possibly know that is to make labels based on the return that you got from that action, but the return is based on an old trajectory with the old policy. The return for that specific action depends on actions that happened after it in the trajectory, and the returns for those actions change with time.

To make things clearer, consider a simple case. Let's say you take action $a_1$ and you end up in state $s_1$ with reward $0$. Then you have two possibilities: you take action $a_2$ and end up in terminal state $s_2$ with reward $-10$, or you take action $a_2'$ and end up in terminal state $s_2'$ with reward $10$. Let's say you use the trajectory $a_1 \rightarrow s_1 \rightarrow a_2 \rightarrow s_2$, with return $-10$, to learn about action $a_1$. Then your label for that action would probably be that the action is bad, but it actually isn't: if you took action $a_2'$ after $a_1$, your return for action $a_1$ would be $10$. So you learned that your action is bad even though it might not be. Now, if you later learn that taking action $a_2'$ after $a_1$ is good, then you would also learn that $a_1$ might be good, but if you keep using that old data with return $-10$, you will keep learning that $a_1$ is bad.

You can only use data gathered from the current policy to learn about it because older data might be outdated.

",20339,,2444,,4/12/2020 13:34,4/12/2020 13:34,,,,2,,,,CC BY-SA 4.0 20198,1,,,4/12/2020 14:11,,4,162,"

I live in a rural area where there is a growing need for people with the knowledge to prune pear trees. This process is crucial for the industry, but, as people move to the big cities, this skill is being lost, and in a few years there will be no one to do it.

I want to know if it is possible to train a robot using AI to do this, and what it would take to make this work.

Keep in mind that this would be viable only in an ""industrial"" setting. The trees are all approximately the same size and are arranged in a certain preset way (distance between each other, height, etc.).

",35964,,2444,,4/12/2020 14:42,4/13/2020 1:48,Is possible to train a robot or AI to prune fruit trees?,,1,0,,,,CC BY-SA 4.0 20199,1,,,4/12/2020 14:47,,2,65,"

Suppose I am utilising a neural network to predict the next state, $s'$ based on the current $(s, a)$ pairs.

All my neural network inputs are between 0 and 1, and the loss function for this network is defined as the mean squared error of the difference between the current state and the next state. Because the variables are all between 0 and 1, the squared error between the actual difference and the predicted difference is smaller than the actual difference.

Suppose the difference between the next state and the current state for $s \in \mathbb{R}^2$ is $[0.4, 0.5]$ and the neural network outputs a difference of $[0.2, 0.4]$. The squared loss is therefore $0.2^2 + 0.1^2 = 0.05$, whereas the neural network does not really predict the next state very well, given an error of $(0.2, 0.1)$.

Although it may not matter which loss function is used, it is deceiving that the loss function outputs low values mainly because the squared term keeps them small.

Is the mean squared error still a good loss function to use here?

",32780,,,,,4/12/2020 14:47,Is Mean Squared Error Loss function a good loss function for continuous variables $0 < x < 1$,,0,1,,,,CC BY-SA 4.0 20201,2,,11994,4/12/2020 15:15,,-1,,"

You can try working with Gated Recurrent Units (GRUs). This will address your problem with the long latency that LSTMs require. LSTMs also don't give preference to newer data. For more information, you can follow this great article:

GRU Cells

",25685,,,,,4/12/2020 15:15,,,,1,,,,CC BY-SA 4.0 20202,2,,20185,4/12/2020 16:50,,2,,"

A typical clustering algorithm is k-means (and not k-NN, i.e. k-nearest neighbours, which is primarily used for classification). There are other clustering algorithms, such as hierarchical clustering algorithms. sklearn provides functions that implement k-means (and an example), hierarchical clustering algorithms, and other clustering algorithms.

To assess the quality of the produced clusters, you could use the silhouette method (sklearn provides a function that can be used to compute the silhouette score).

Regarding your specific data frame, note that it contains repetitions, so you may want to remove them before starting the clustering procedure. Also, the IDs are not unique, but you probably don't need the IDs for clustering.
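
To put these pieces together, here is a minimal sklearn sketch (the rows below are just a few examples from your data frame, and one-hot encoding the categorical columns is an assumption on my part, since k-means needs numeric features):

import pandas as pd
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# a few example rows from the data frame in the question
df = pd.DataFrame({
    'Name': ['MUHAMMAD JALIL', 'MRS. JEBA', 'MR. A. JALIL', 'MISS. JEBA'],
    'Gender': ['Male', 'Female', 'Male', 'Female'],
    'Dob': [1987, 1996, 1987, 1996],
    'Age': [33, 24, 33, 24],
    'Address': ['Chittagong', 'Rangpur', 'Sirajganj', 'Rangpur'],
})

df = df.drop_duplicates()          # remove repeated rows
X = pd.get_dummies(df)             # one-hot encode the categorical columns

labels = KMeans(n_clusters=2, random_state=0).fit_predict(X)

# silhouette score in [-1, 1]: higher means better-separated clusters
print(silhouette_score(X, labels))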

",2444,,,,,4/12/2020 16:50,,,,3,,,,CC BY-SA 4.0 20204,2,,3910,4/12/2020 19:48,,0,,"

Your first link is from 2011, which essentially predates the current deep learning explosion. In the many years that have since passed (AlexNet 2012, ResNet 2015), we have found that if you keep adding layers, we generally do see improved performance.

This is due to improved training techniques and optimization breakthroughs (residual connections, ReLU, dropout, etc.). But do note that the returns can be diminishing. In particular, take a look at Deep Equilibrium Models, which essentially allow us to train (in a limit-equivalence sense) infinite-depth neural networks.

",18086,,,,,4/12/2020 19:48,,,,0,,,,CC BY-SA 4.0 20205,2,,20183,4/12/2020 19:50,,2,,"

This problem is known as ""style transfer"". The field was started by Gatys et al. in 2016, and has seen a lot of work over the last few years, including conditional translation from Isola et al. in 2017 and unpaired translation from Zhu et al. in 2017.

This picture (Figure 2 from Gatys et al.) shows the idea behind style transfer, and illustrates the dramatic results that are possible with these models.

",15176,,2444,,4/12/2020 20:00,4/12/2020 20:00,,,,0,,,,CC BY-SA 4.0 20206,1,,,4/12/2020 20:21,,2,34,"

I have an extremely unbalanced video dataset for a two-class video classification problem. All videos in my current dataset are $40$ seconds long with $900p$ resolution. However, the dataset is highly unbalanced, with $3000$ samples for class A vs $300$ samples for class B. Due to the high imbalance, I added the following class weight implementation to my deep learning model.

https://scikit-learn.org/stable/modules/generated/sklearn.utils.class_weight.compute_class_weight.html

However, my model was still heavily biased due to the high imbalance in the data. I am considering the following options:

  • Adding more video data to balance the dataset: my only concern here is that my current dataset is uniform, with the same video duration and a similar resolution of around $900p$. Will it matter if I add very low-resolution videos to balance the dataset?

  • Adding video augmentation to the current dataset.

I am looking for any other recommendations that I could make use of. Are there any pros or cons of these methods that I should consider to prevent bias?

",35979,,,user9947,4/13/2020 1:46,4/13/2020 1:46,Possible approaches to dealing with unbalanced dataset and highly biased deep learning algorithm,,0,1,,,,CC BY-SA 4.0 20208,1,,,4/13/2020 0:39,,2,280,"

Neural networks typically have $\mathcal{VC}$ dimension that is proportional to their number of parameters and inputs. For example, see the papers Vapnik-Chervonenkis dimension of recurrent neural networks (1998) by Pascal Koirana and Eduardo D. Sontag and VC Dimension of Neural Networks (1998) by Eduardo D. Sontag for more details.

On the other hand, the universal approximation theorem (UAT) tells us that neural networks can approximate any continuous function. See Approximation by Superpositions of a Sigmoidal Function (1989) by G. Cybenko for more details.

Although I realize that the typical UAT only applies to continuous functions, the UAT and the results about the $\mathcal{VC}$ dimension of neural networks seem to be a little bit contradictory, but this is only if you don't know the definition of $\mathcal{VC}$ dimension and the implications of the UAT.

So, how come that neural networks approximate any continuous function, but, at the same time, they usually have a $\mathcal{VC}$ dimension that is only proportional to their number of parameters? What is the relationship between the two?

",2444,,,,,11/14/2022 1:01,How can neural networks approximate any continuous function but have $\mathcal{VC}$ dimension only proportional to their number of parameters?,,1,1,,,,CC BY-SA 4.0 20209,2,,20198,4/13/2020 1:42,,3,,"

I am not an expert in robotics (definitely not in pear tree pruning either), but I will try to give some hints to partially answer and also to help reframe the problem a bit. Overall, I'll give an answer right away: it is most likely possible, but also most likely not convenient.

Problem statement

First things first: in general, the rule is that machine learning should be applied when a task can't be automatised otherwise. Before asking if an AI can be trained to solve a task, one should always think about how to automatise the task using a rule-based system. This is especially important because, during the process of thinking about how to automatise the task, you will realise that some steps can be performed without an expert, while others can't be performed without any supervision. Let's break down your task into subtasks: a system that prunes trees should be (at least) capable of:

  1. Moving between trees
  2. Selecting branches to cut
  3. Cut the selected branches

Selecting the branches to cut is probably the step that requires most of the know-how and expert supervision, and for which a machine learning component might be suitable. Moving, instead, is a perfect example of a subtask that could be tackled at different levels. Creating a machine able to anticipate other objects' movements and avoid them in real time definitely requires training an AI component, but when you say that the environment is highly structured (trees arranged in a grid), this makes me think that maybe some hand-coded rules would do the trick without bothering with machine learning.

Theoretical tools

Once you have understood which subtasks your machine should be capable of solving, you can start digging into their theoretical feasibility. Following the same order as in the previous paragraph:

  1. Self-driving is a widely studied topic; the algorithms used to train robotic vacuum cleaners to move automatically in a house could be applied straight away to the problem of training an agent to move between trees.

  2. Selecting the right branches mostly involves computer vision. This sub-task should actually be divided again into sub-tasks: detecting branches (distinguishing them from other objects) and selecting which ones should be cut. Nevertheless, the field is quite huge, and training two models able to perform both actions is, in my opinion, doable.

  3. Cutting a branch is probably harder than driving in this situation, because of the small movements that might be required to reach branches positioned in difficult spots. Anyway, it is possible to train robots to perform fine-grained movements (for a funny example, see: Robot learn to flip pancakes).

Again, it also depends on how high your expectations are for the final machine/system. Should the system have surgical precision, or could it risk breaking a few extra branches in hard situations? Obviously, the higher the expectations, the harder it would be to make everything work harmoniously.

Resources required

Last but not least, you also need to understand if what you're trying to do is feasible in reality and not just in theory. A big problem when it comes to using machine learning is that these models can be trained only with huge amounts of data.

  1. Training an artificial agent to move in an environment can be done by reproducing the environment in an artificial simulator, which is good news. A single person with a laptop could potentially do the job.

  2. Collecting data to train a model able to detect branches in photos and then select which ones should be cut will be highly tedious, and also expensive, because the data also needs to be labelled by experts. This means that some experienced people will have to take photos, on the order of tens of thousands at least, and annotate, for each photo (using software), which branches they would cut if they were working for real on that tree. I strongly doubt that a dataset like this already exists.

  3. Training a robotic arm to cut branches will also be challenging. Despite the fact that simulators could be leveraged in this case as well, the task is inherently harder, and this comes with bigger difficulties (for example, in designing a proper reward function if using reinforcement learning). Concretely, this means more time spent on research and testing.

Consider also that the success of the final model trained for each subtask would not be guaranteed at all, which is why I said at the beginning that training a system with AI modules would probably not be convenient, and that the best thing is always to try to create a rule-based system first.

",34098,,-1,,6/17/2020 9:57,4/13/2020 1:48,,,,0,,,,CC BY-SA 4.0 20210,1,,,4/13/2020 2:30,,4,280,"

I am working on the following problem to gain an understanding of Bayesian networks and I need help drawing it:

Birds frequently appear in the tree outside of your window in the morning and evening; these include finches, cardinals and robins. Finches appear more frequently than robins, and robins appear more frequently than cardinals (the ratio is 7:4:1). The finches will sing a song when they appear 7 out of every 10 times in the morning, but never in the evening. The cardinals rarely sing songs and only in the evenings (in the evening, they sing 1 of every 10 times they appear). Robins sing once every five times they appear regardless of the time of day. Every tenth cardinal and robin will stay in the tree longer than five minutes. Every fourth finch will stay in the tree longer than five minutes.

I have tried drawing two versions of the network and would love some feedback. Currently, I am leaning more towards the right side network.

",35982,,2444,,4/13/2020 2:53,4/13/2020 8:16,How can I draw a Bayesian network for this problem with birds?,,0,2,,,,CC BY-SA 4.0 20211,1,,,4/13/2020 3:46,,4,206,"

I am confused about the Q values of a duelling deep Q network (DQN). As far as I know, duelling DQNs have 2 outputs

  1. Value: how good it is to be in a particular state $s$

  2. Advantage: how much better it is to choose a particular action $a$ in that state, compared to the other actions

We can make these two outputs into Q values (the value of choosing a particular action $a$ when in state $s$) by adding them together.

However, in a DQN, we get Q values from the single output layer of the network.

Now, suppose that I use the same DQN model with the very same weights in my input and hidden layers, but change the output layer, which gives us Q values, to advantage and value outputs. Then, during training, if I add them together, will it give me the same Q values for a particular state, supposing all the parameters of both my algorithms are the same except for the output layers?

",31380,,2444,,4/13/2020 14:11,4/14/2020 12:59,Are Q values estimated from a DQN different from a duelling DQN with the same number of layers and filters?,,1,0,,,,CC BY-SA 4.0 20213,1,,,4/13/2020 6:47,,1,108,"

Could someone please explain to me why in VC theory, specifically, when calculating the VC dimension, the growth function needs to be polynomial in order for the learning algorithm to be consistent? Why polynomial, and where does the name growth function come from exactly?

",35990,,2444,,4/13/2020 12:04,4/13/2020 12:04,Why does the growth function need to be polynomial in order for the learning algorithm to be consistent?,,0,7,,,,CC BY-SA 4.0 20214,1,20222,,4/13/2020 7:38,,5,1164,"

The beginner colab example for tensorflow states:

Note: It is possible to bake this tf.nn.softmax in as the activation function for the last layer of the network. While this can make the model output more directly interpretable, this approach is discouraged as it's impossible to provide an exact and numerically stable loss calculation for all models when using a softmax output.

My question is, then, why? What do they mean by impossible to provide an exact and numerically stable loss calculation?

",35992,,2444,,4/14/2020 18:10,12/17/2020 12:09,Why does TensorFlow docs discourage using softmax as activation for the last layer?,,2,1,,,,CC BY-SA 4.0 20218,1,,,4/13/2020 12:31,,2,50,"


I want to develop a CNN model to identify 24 hand signs in American Sign Language. I created a custom dataset that contains 3000 images for each hand sign i.e. 72000 images in the entire dataset.

For training the model, I would be using an 80-20 dataset split (2400 images/hand sign in the training set and 600 images/hand sign in the validation set). My question is:
Should I randomly shuffle the images when creating the dataset? And why?


PS: Based on my previous experience, it led to the validation loss being lower than the training loss and the validation accuracy being higher than the training accuracy.

",33467,,33467,,4/14/2020 19:30,4/14/2020 19:30,Creating Dataset for Image Classification,,0,3,,,,CC BY-SA 4.0 20219,1,,,4/13/2020 12:46,,1,60,"

Is there an online tool that can predict accuracy given only the dataset as input (i.e. without the compiled model)?

That would help to understand how data augmentation/distribution standardization, etc., is likely to change the accuracy.

",36000,,36000,,4/13/2020 17:09,4/13/2020 17:26,Is there an online tool that can predict accuracy given only the dataset?,,2,0,,12/5/2020 14:37,,CC BY-SA 4.0 20220,1,20233,,4/13/2020 13:01,,2,63,"

I'm following Stanford reinforcement learning videos on youtube. One of the assignments asks to write code for policy evaluation for Gym's FrozenLake-v0 environment.

In the course (and books I have seen), they define policy evaluation as

$$V^\pi_k(s)=r(s,\pi(s))+\gamma\sum_{s'}p(s'|s,\pi(s))V^\pi_{k-1}(s')$$

My confusion is that, in the frozen lake example, the reward is tied to the result of the action. So, for each state-action pair, I have a list that contains a possible next state, the probability of getting to that next state, and the reward. For example, being in the target state and performing any action brings a reward of $0$, but taking an action that brings me to the target state gives me a reward of $1$.

Does this mean that, for this example, I need to rewrite $V^\pi_k(s)$ as something like this:

$$V^\pi_k(s)= \sum_{s'} p(s'|s,\pi(s)) [r(s,\pi(s), s')+ \gamma V^\pi_{k-1}(s')]$$

",35999,,2444,,4/16/2020 19:24,4/16/2020 19:24,How can I implement policy evaluation when reward is tied to an action outcome?,,1,0,0,,,CC BY-SA 4.0 20221,2,,20194,4/13/2020 13:15,,1,,"

You can sometimes exploit the structure of your matrix to perform faster matrix multiplication. For example, if your matrix is sparse (or dense), there are algorithms that exploit this fact.

In your case, you can actually compute $A^n$ in less time than $\mathcal{O}(n^3)$. For example, have a look at this question at CS SE and this one at Stack Overflow (SO). Note that the provided solutions may not be numerically stable, so I am not suggesting you use them.
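
For instance, one standard option (a sketch of my own, not necessarily what the linked answers propose) is exponentiation by squaring, which reduces the number of matrix products from $n - 1$ to $\mathcal{O}(\log n)$; NumPy already provides it as np.linalg.matrix_power:

import numpy as np

def matrix_power(A, n):
    # exponentiation by squaring: O(log n) matrix products instead of n - 1
    result = np.eye(A.shape[0])
    while n > 0:
        if n % 2 == 1:        # if the current bit of n is set, multiply it in
            result = result @ A
        A = A @ A             # square the base
        n //= 2
    return result

A = np.random.rand(4, 4)
x = np.array([1, 1, 2, 3])
print(matrix_power(A, 10) @ x)             # same as applying A ten times to x
print(np.linalg.matrix_power(A, 10) @ x)   # built-in equivalent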

Moreover, if you perform your operations on the GPU, they could be faster in practice. See e.g. this question at SO and this one at SciComp SE.

",2444,,,,,4/13/2020 13:15,,,,0,,,,CC BY-SA 4.0 20222,2,,20214,4/13/2020 14:23,,1,,"

It's because of gradient computations: automatic differentiation will compute the gradient for each module and if you have a standalone crossentropy module the over all loss will be unstable (~1/x so it will diverge for small input values) whereas if you use a softmax + crossentropy module all-in-one, then it becomes numerically stable (y-p)

Slides from DeepMind's Simon Osindero lecture at UCL in 2016:

",11351,,11351,,4/14/2020 1:51,4/14/2020 1:51,,,,3,,,,CC BY-SA 4.0 20223,2,,20190,4/13/2020 14:29,,1,,"

You can use MNIST obviously but I'd also suggest you have a look at UC Irvine's datasets: https://archive.ics.uci.edu/ml/datasets.php

",11351,,,,,4/13/2020 14:29,,,,0,,,,CC BY-SA 4.0 20224,2,,16172,4/13/2020 14:32,,1,,"

Below is a listing of Keras application models that can easily be used for transfer learning. Note that VGG has on the order of 140 million parameters, which is why it is slow.



Model               Size     Top-1 Accuracy  Top-5 Accuracy  Parameters    Depth
Xception             88 MB      0.790           0.945         22,910,480    126
VGG16               528 MB      0.713           0.901        138,357,544    23
VGG19               549 MB      0.713           0.900        143,667,240    26
ResNet50             98 MB      0.749           0.921         25,636,712    -
ResNet101           171 MB      0.764           0.928         44,707,176    -
ResNet152           232 MB      0.766           0.931         60,419,944    -
ResNet50V2           98 MB      0.760           0.930         25,613,800    -
ResNet101V2         171 MB      0.772           0.938         44,675,560    -
ResNet152V2         232 MB      0.780           0.942         60,380,648    -
InceptionV3          92 MB      0.779           0.937         23,851,784    159
InceptionResNetV2   215 MB      0.803           0.953         55,873,736    572
MobileNet            16 MB      0.704           0.895          4,253,864    88
MobileNetV2          14 MB      0.713           0.901          3,538,984    88
DenseNet121          33 MB      0.750           0.923          8,062,504    121
DenseNet169          57 MB      0.762           0.932         14,307,880    169
DenseNet201          80 MB      0.773           0.936         20,242,984    201
NASNetMobile         23 MB      0.744           0.919          5,326,716    -
NASNetLarge         343 MB      0.825           0.960         88,949,818    -

I tend to use the MobileNet model for transfer learning because it has about 4 million parameters, so it is much faster than most models. It should perform as well as VGG on your data set; if it does not, tuning the hyperparameters may be required. I find that using an adjustable learning rate, such as with the Keras ReduceLROnPlateau callback, along with the ModelCheckpoint callback, both monitoring validation loss, works very well. Documentation is [here][1].

You might also try the EfficientNet model, which comes in various sizes and has high accuracy. Documentation is [here][2].

  [1]: https://keras.io/callbacks/
  [2]: https://github.com/Tony607/efficientnet_keras_transfer_learning
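
As an illustration (a sketch of my own, not a prescribed recipe; the number of classes, file names and data generators are placeholders), such a transfer-learning setup might look like this:

from tensorflow.keras.applications import MobileNet
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint

num_classes = 5                                   # placeholder

base = MobileNet(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base.trainable = False                            # freeze the pre-trained convolutional base

x = GlobalAveragePooling2D()(base.output)
outputs = Dense(num_classes, activation='softmax')(x)
model = Model(base.input, outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

callbacks = [
    ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=2),
    ModelCheckpoint('best_model.h5', monitor='val_loss', save_best_only=True),
]

# model.fit(train_generator, validation_data=val_generator, epochs=20, callbacks=callbacks)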
",33976,,,,,4/13/2020 14:32,,,,0,,,,CC BY-SA 4.0 20225,2,,17522,4/13/2020 14:49,,1,,"

I think I found out how that works, so I made a short article about it: https://medium.com/@kourloskostas/python-spam-filter-86b21d7d1564. I hope it helps!

",32762,,,,,4/13/2020 14:49,,,,3,,,,CC BY-SA 4.0 20227,2,,20219,4/13/2020 15:21,,1,,"

If I understand correctly what you're asking, the answer is no: there is no way of knowing in advance how well a model would perform on a dataset without training a model on it. That's the whole point of data science: you try, you analyse the results, you try again using the knowledge you got from your previous attempts. It would be nice to hack the whole field and know in advance what to do to get the perfect model, but that is rather unrealistic.

Anyway, there are some standard steps that usually help you understand if you're going in the right direction, for example, creating a random benchmark to see how much better your model is than a random one. To create such a benchmark, you definitely don't need a specific tool; all main programming languages provide built-in functions to generate random numbers, and that's basically all you need. For example, in Python, you could do something like:

import numpy as np

# create 30 true labels (3 classes: 1, 2, 3)
y_true = np.array([1, 2, 3] * 10)

# generate 30 random labels from the same label set {1, 2, 3}
y_random = np.random.randint(1, 4, size=30)

# calculate the accuracy of the random model
acc_random = np.sum(y_true == y_random) / len(y_true)

Another thing worth mentioning is that, in academia, people are pushing more and more to use the same datasets in order to have comparable results. For example, if you're trying to train an architecture for image classification, some golden datasets you should test your models on are the MNIST ones. It is well known that, on these datasets, a good model should achieve more than 99% accuracy, therefore this is an established lower bound for these datasets.

",34098,,,,,4/13/2020 15:21,,,,0,,,,CC BY-SA 4.0 20228,2,,12878,4/13/2020 16:59,,1,,"

It's important to remember what exactly the loss is measuring, and have some typical values in mind.

The cross-entropy loss is $-\mathbb{E}_{x,y\sim p}\left[\log q(y|x)\right]$, where $p$ is the data distribution and $q$ is the model distribution. A couple of points about the loss:

  • It's nonnegative, in the range $[0, \infty)$.
  • Predicting randomly (assuming balanced classes) gives loss equal to $\log k$.

In your case with four classes, the loss for a random classifier is $\log 4 \approx 1.39$. So the story for what happened with your model is probably that initially (due to initialization, etc) it predicted high but wrong confidence, such as giving 99% probabilities to certain classes. This gives very high loss, but then after training for a while it reduces its loss to just under the random loss by predicting 25% on all examples.
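
As a quick sanity check (my own illustration, not part of the original answer), you can reproduce these reference values numerically:

import numpy as np

n = 1000
# predicting 25% for the true class on every example gives loss log(4)
print(-np.mean(np.log(np.full(n, 0.25))))    # ~1.386

# confidently wrong predictions (1% on the true class) give a much higher loss
print(-np.mean(np.log(np.full(n, 0.01))))    # ~4.6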

",15176,,15176,,4/13/2020 19:27,4/13/2020 19:27,,,,2,,,,CC BY-SA 4.0 20229,2,,15743,4/13/2020 17:09,,0,,"

As I understand Resnet has some identity mapping layers that their task is to create the output as the same as the input of the layer. the resnet solved the problem of accuracy degrading. But what is the benefit of adding identity mapping layers in intermediate layers?

This is applicable to deep/very deep networks. We decide to add layers when the model output is not converging to the expected output (due to very slow convergence). With this mapping, the authors suggest that a portion of the model's complexity can be carried directly by the input value, leaving only a residual value for the layers to adjust. The output is mapped to the input by the identity function, which is why it is called identity mapping. So, the shortcut identity mapping does the job that some layers would have to do in a plain neural network.

The identity mapping is applicable only if the output and the input have the same shape; otherwise, a linear projection is required.
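
As an illustration (a sketch of my own, loosely following the basic ResNet block, not the authors' exact code), the identity vs. projection shortcut looks like this in Keras:

from tensorflow.keras import layers

def residual_block(x, filters, stride=1):
    shortcut = x
    y = layers.Conv2D(filters, 3, strides=stride, padding='same', activation='relu')(x)
    y = layers.Conv2D(filters, 3, padding='same')(y)

    # if shapes differ, replace the identity shortcut with a 1x1 linear projection
    if stride != 1 or x.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, strides=stride)(x)

    return layers.Activation('relu')(layers.Add()([y, shortcut]))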

",36009,,2444,,4/13/2020 17:20,4/13/2020 17:20,,,,0,,,,CC BY-SA 4.0 20230,2,,20219,4/13/2020 17:26,,1,,"

There is no tool to achieve what you desire. A ""rough"" way to get a guess is to ""compare"" your data set to other known data sets for which there are benchmark accuracy models. Compare is a vague definition at best but things you might compare are

  1. number of your classes versus the number of classes in the reference data set
  2. the number of train, validation and test samples
  3. the relative similarity of the data - that is, are they images? If so, compare image size, whether the images are cropped to the region of interest, etc.
  4. similarity of class features - that is, are your classes very similar to each other? For example, classifying dogs by breed can be difficult because some dog breeds look almost identical, versus a situation like classifying different types of animals, which is easier. Estimate the similarity of your classes with respect to the similarity of the classes in the reference data set.

So you might be able to generate a ""rough"" estimate of what you might expect your model to achieve particularly if you use transfer learning of benchmark models.

",33976,,,,,4/13/2020 17:26,,,,0,,,,CC BY-SA 4.0 20231,1,,,4/13/2020 17:49,,2,446,"

Similarly to the question Who first coined the term Artificial Intelligence?, who first coined the term ""artificial general intelligence""?

",2444,,,,,4/21/2020 12:04,"Who first coined the term ""artificial general intelligence""?",,2,0,,,,CC BY-SA 4.0 20232,2,,20231,4/13/2020 17:49,,4,,"

According to Ben Goertzel, the first person that probably used the term ""artificial general intelligence"" (in an article related to artificial intelligence) was Mark Avrum Gubrud in the 1997 article Nanotechnology and International Security. Here's an excerpt from the article.

By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed. Such systems may be modeled on the human brain, but they do not necessarily have to be, and they do not have to be ""conscious"" or possess any other competence that is not strictly relevant to their application. What matters is that such systems can be used to replace human brains in tasks ranging from organizing and running a mine or a factory to piloting an airplane, analyzing intelligence data or planning a battle.

Note that the term ""AGI"" could have been used even before that Mark A. Gubrud's article (as Goertzel also suggested).

In any case, Ben Goertzel helped to popularise this term, especially with the book Artificial General Intelligence. AGI was previously known as ""strong AI"", which goes back to John Searle's Chinese Room argument, although strong AI often refers to an AGI with consciousness, and the definition of AGI doesn't necessarily imply consciousness.

You can read more about the history of the term ""AGI"" in this Goertzel's blog post.

",2444,,2444,,4/20/2020 13:50,4/20/2020 13:50,,,,0,,,,CC BY-SA 4.0 20233,2,,20220,4/13/2020 18:21,,1,,"

The renowned book Reinforcement Learning: An Introduction (2nd edition), by Sutton and Barto, provides a different update rule than your first update rule for policy evaluation. Their update rule is more similar to your second update rule. See section 4.1. They also provide the pseudocode for policy evaluation on page 75 of the book. You can also find the pseudocode here.

Moreover, note that the update rule doesn't need to change only because the rewards are tied to the outcome of an action. This information is encoded in the functions $p$ (the transition function) and $r$ (the reward function) of the Markov decision process, which is incorporated in the update rule. If you want to understand the update rule, you should read the relevant pages (especially, chapter 4) of the cited book.
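
For concreteness, here is a minimal sketch of iterative policy evaluation (following the pseudocode in the book) for FrozenLake-v0; the all-zeros policy is just a placeholder, and, depending on the gym version, you may need env.unwrapped.P instead of env.P:

import numpy as np
import gym

env = gym.make('FrozenLake-v0')
n_states = env.observation_space.n
policy = np.zeros(n_states, dtype=int)      # placeholder policy: always take action 0
gamma, theta = 0.99, 1e-8

V = np.zeros(n_states)
while True:
    delta = 0.0
    for s in range(n_states):
        # env.P[s][a] is a list of (probability, next_state, reward, done) tuples
        v_new = sum(p * (r + gamma * V[s_next] * (not done))
                    for p, s_next, r, done in env.P[s][policy[s]])
        delta = max(delta, abs(v_new - V[s]))
        V[s] = v_new
    if delta < theta:
        break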

",2444,,2444,,4/13/2020 18:28,4/13/2020 18:28,,,,0,,,,CC BY-SA 4.0 20234,2,,20190,4/13/2020 21:17,,1,,"

This answer doesn't provide a declaration as to which dataset(s) are used quasi-ubiquitously in research/literature. It simply provides a frame-of-reference for where to look for structured datasets and examples of two structured datasets that could be used in general.


You want to look for structured datasets.

Good examples of this are things like housing price datasets.

Check out Google Datasets (specifically the datasets hosted by Kaggle). Many of these are structured data datasets.

As an answer directly though, try a housing price dataset like the Boston Housing Price Dataset.

You could also use the famous Titanic Dataset on Kaggle.

",36013,,2444,,4/14/2020 16:52,4/14/2020 16:52,,,,0,,,,CC BY-SA 4.0 20235,1,,,4/13/2020 22:16,,2,47,"

I coded a non-zero-sum game of $N$ agents in a discrete dynamic environment to RL with Q-learning and DQN agents.

It's like a marathon. Only two actions are available per agent: $\{ G \text{ (move forward)}, S \text{ (stay in its position)} \}$. Every agent has $m$ possible individual positions, and the other agents cannot interfere with its path to the terminal position. Only when a single agent reaches its terminal position does it get the full reward. When everyone reaches a terminal state, all get $0$ rewards. If more than $1$ but fewer than $N$ reach their terminals, they get a small reward.

Now, I try to formalize it as a Markov Game (MG), but I don't have a solid mathematical background.

My first question is:

  1. When we model a problem as an RL problem, the transition probability (TP) distribution is not required, while an MDP and an MG require the TP. But then how are all RL problems first modelled as MDPs or MGs?

As I have read in literature, I understand that I will treat the action sets of all other players as a "team" joint set of actions.

Second question:

  1. How can I specialize the TP function to the specific problem I want to model? Should I just mention the general function equation?

What I have tried so far is to explicitly describe it, but I think I am not getting something:

  1. The probability of the transition from $s$ to $s'$, where in $s'$ a number $k$ of agents move a step forward, is equal to $1$, given that they all chose action $G \in A$ and that the rest $n-k$, if any, all chose the action $S \in A$, where $k$ is an integer with $1 \leq k \leq n$.

  2. The probability of the transition from $s$ to $s'$, where in $s'$ a number $k$ of agents earn the high payoff, is equal to $1$, given that $k=1$, its position is equal to $m-1$, it chooses action $G \in A$, and the rest $n-k$ all chose the action $S \in A$, where $m$ is the max possible position for each agent.

  3. The probability of the transition from $s$ to $s'$, where in $s'$ a number $k$ of agents earn a low payoff, is equal to $1$, given that $k>1$, their positions are equal to $m-1$, they all chose action $G \in A$, and the rest $n-k$ all chose the action $S \in A$, where $k$ is an integer with $1 < k \leq n$ and $m$ is the max possible position for each agent.

",36018,,-1,,6/17/2020 9:57,4/14/2020 3:33,How can I formalise a non-zero-sum game of $N$ agent as Markov game?,,0,1,,,,CC BY-SA 4.0 20236,2,,20186,4/13/2020 22:36,,1,,"

I think your deduction is mostly correct.

Neural networks of depth 2 (with unbounded width) are universal function approximators. This means that, in principle, for any function of the form you describe, there's a NN that approximates it.

However, a particular NN architecture of fixed width and depth, with fixed connections is not a universal approximator for all functions. Only an infinitely wide NN is, and that's a theoretical construct, not something you can make in practice.

Typically, a practitioner using a NN injects their prior beliefs about the problem by selecting an architecture. For example, it is common to use architectures such as ResNet (typically pre-trained on ImageNet) for image processing tasks. Those architectures are less effective on other types of tasks. For instance, it should be clear that they are ineffective on, say, a time-series analysis task.

",16909,,,,,4/13/2020 22:36,,,,3,,,,CC BY-SA 4.0 20237,2,,20120,4/13/2020 22:44,,1,,"

I think the wrong assumption here is that you've forgotten the cost of encoding the new features!

MDL should be considered relative to the original or raw dataset. The idea is that you want to find an expression you could send to someone else that encodes the structure of the dataset in terms of the original variables. If you define new features, you need to send a description of those features along with your model.

To make this clearer, imagine you call me up on the phone, and we're both looking at your left hand image. You say to me 'Yeah, you just draw a line through it at $p=a$'. The natural question for me to ask is 'What's p?'. If you have to tell me what $p$ is, then it's part of the description length.

A circular model for your left-hand image does have an MDL for its decision boundary of something like $(x-a)^2+(y-b)^2=c$. However, the feature transformation you've selected has a description length of $p=(x-a)^2 + (y-b)^2$. It should be clear that the description lengths for the linear model through $p$ and the circular model through $x$ and $y$ are identical.

",16909,,,,,4/13/2020 22:44,,,,4,,,,CC BY-SA 4.0 20238,1,20334,,4/13/2020 23:15,,2,86,"

I'm experimenting with training a feedforward neural network using a genetic algorithm and I've done a few tests using both the mean squared error and classification error functions as fitness heuristic in the GA.

When I use MSE as error function, my GA tends to converge around an MSE of 0.1 (initial conditions have an MSE of around 0.9). Testing system accuracy with this network gives me 95%+ for both training and testing data.

But, when I use classification error as my heuristic, my GA tends to converge around when the MSE is about 0.3. System accuracy is still around the same at 95%+.

My question is, if you had two networks, one showing an MSE of 0.1 and one an MSE of 0.3, but both perform approximately the same in terms of accuracy, what can I deduce from the differences in MSE?

In other words: which network is ""better"", even if the accuracy is the same? Does a lower MSE mean anything below a certain amount? I could train my network for 100x as many generations and get a better MSE but not necessarily a better accuracy. Why?

For some context:

When the MSE is approximately 1.5 (epoch 250), the accuracy seems to match when the MSE is approximately 2.0 (epoch 50). Why does the accuracy not increase despite MSE decreasing?

",31112,,1671,,4/17/2020 21:26,4/17/2020 21:26,What does it mean if classification error is equal between two networks but the MSE is different?,,2,0,,,,CC BY-SA 4.0 20239,2,,20057,4/13/2020 23:19,,2,,"

I think Cross-Validation serves a completely different purpose.

From your post, it looks like you think we would use CV to get a better estimate of the parameters of our model (i.e. the model parameters after cross validation are closer to the parameters of the test data).

In fact, we use CV to get an estimate of generalization error while keeping our test set outside the training process. That is, we use it to answer the question ""What is the size of the difference between my training and testing performance likely to be?"". If you have an estimate of this that you are confident in, you can be confident that when you deploy a model to your customers, the model will actually work as you expect.

If you're only going to build a single model, then you don't need cross validation. You just train the model on the training data, and test it on the test data. Then you have an unbiased estimate of generalization error.

However, we might want to try out many different kinds of models, and many different parameters (broadly, we might want to do hyperparameter tuning). To do this, we need to understand how generalization error changes as we change our hyperparameters, and then use this information to pick hyperparameter values that we think will minimize the actual error when we deploy the model.

You could do this by training different models on the training set, and then testing them on the test set, recording the difference in model performance on the two sets. If you use this as a basis to pick a model though, you have effectively pulled the test set inside your training process (model parameters were implicitly selected using the test set, since you picked the parameters with the lowest test error). This bias will make your true generalization error much larger than what you observed.

As a stop gap, you could split your training set into a 'real training' set and a validation set. You could train models on the 'real training' set, and then measure their performance on the validation set. The difference would be a biased (but hopefully still useful) estimate of generalization error. You could then test against the test set just once (at the end) to get an unbiased estimate that you can use to decide whether or not to deploy the model.

A better workflow is to use CV on the training set to get an estimate of generalization error during hyperparameter optimization. You get K samples for k-fold cross-validation, so you can do statistical testing to see whether one model truly has better generalization error than another, or whether it's just a fluke. This decreases the degree of bias in your estimates of generalization error. Then, once you've completed hyperparameter optimization, you can run your final model against the test set once to obtain a truly unbiased estimate of your final generalization error.
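
As an illustration of this workflow (a sketch with a stand-in dataset and model, not the only way to do it), in scikit-learn it might look like this:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score

X, y = load_breast_cancer(return_X_y=True)        # stand-in dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# compare candidate hyperparameters using k-fold CV on the training set only
for n_estimators in (10, 100):
    model = RandomForestClassifier(n_estimators=n_estimators, random_state=0)
    scores = cross_val_score(model, X_train, y_train, cv=5)
    print(n_estimators, scores.mean(), scores.std())

# only after picking the final model, evaluate once on the held-out test set
final_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(final_model.score(X_test, y_test))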

",16909,,,,,4/13/2020 23:19,,,,2,,,,CC BY-SA 4.0 20240,2,,20193,4/14/2020 1:31,,2,,"

It is not really a matter of which model, but whether it is possible at all to predict what you're trying to predict. Let's take a similar dataset from Kaggle: California Housing Prices

This dataset contains house prices and other information, among which the number of bedrooms per house. As suggested by Oliver in the comments, we can compute the Pearson coefficient to estimate the correlation between the two variables.

import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv('housing.csv')
df = df[df['total_bedrooms'] <= 20] # select a subset of the dataframe for the sake of clarity
df.dropna(inplace=True)

x = df['median_house_value'] # our single feature  
y = df['total_bedrooms'] # target labels

print('Correlation: \n', pearsonr(x,y))

Out:

>>Correlation: 
>>(-0.14015312664251944, 0.12362969210761204)

The correlation is pretty low, which means that the price and number of bedrooms are basically not related. We can also plot the points to check that indeed there is no correlation at all.

df.plot(x='total_bedrooms',y='median_house_value',kind='scatter')

Out:

Training a model to predict the number of bedrooms from the price alone would mean finding a function that can interpolate all those points, which is an impossible task, since we have several different prices for houses with the same number of bedrooms.

The only way to tackle a problem like this would be to expand the dimensionality of the data, for example by using a Support Vector Machine with a non-linear kernel. But even with non-linear kernels you can't do miracles, so if your dataset looks like this one, the only solution would be to expand your dataset to include extra features.

",34098,,,,,4/14/2020 1:31,,,,1,,,,CC BY-SA 4.0 20241,1,,,4/14/2020 2:25,,6,713,"

Would it be ethical to allow an AI to make life-or-death medical decisions?

For instance, where there is an insufficient number of ventilators during a respiratory pandemic, not every patient can have one. It seems like a straightforward question, but before you answer, consider:

  1. Human decision-making in this regard is a form of algorithm.

(For instance, the statistics and rules that determine who gets kidney transplants.)

  2. Even if the basis for the decision is statistical, the ultimate decision-making process could be heuristic, so at least the bias could be identified.

In other words, the goal of this process, specifically, is favoring one patient over another, but doing so in the way that has the greatest utility.

  3. Statistical bias is a core problem of Machine Learning, but human decision-making is also subject to this condition.

One of the arguments in favor might be that at least the algorithm would be impartial, here in relation to human bias.

Finally, where there is scarcity, utilitarianism becomes more imperative. (Part of the trolley problem is you only have two tracks.) But the trolley problem is also relevant because it can be a commentary on the burden of responsibility.

",1671,,2444,,4/14/2020 12:48,7/1/2022 8:11,Would it be ethical to allow an AI to make life-or-death medical decisions?,,4,0,,,,CC BY-SA 4.0 20242,2,,20191,4/14/2020 3:34,,3,,"

Weight initialisation is closely related to the vanishing/exploding gradient problem. For a complete explanation, please check this awesome page (also from deeplearning.ai). Here I'll summarise the main concepts:

  • initialising all weights to zero will cause all weights to have the same derivative with respect to the loss function, hence the network would be incapable of learning anything.
  • initialising the biases to zero has no drawback, since they are constants (a zero bias does not cause the symmetry problem that zero weights do).

  • initialising weights with values that are too large or too small will lead to an exploding gradient (oscillating gradient values without convergence) or a vanishing gradient (gradient values so small that training effectively stalls before reaching the global minimum of the loss).

In order to avoid these problems, a method called Xavier initialisation has been proposed: the weights should be initialised in such a way that they will generate activation scores with a distribution that has:

  • Mean 0
  • Constant variance across layers (i.e. no vanishing/exploding)

The value ""np.sqrt(layer_dims[l - 1])"" pup up when imposing the second constrain. For a formal prove check the page I linked. To grasp the concept just focus on the fact that the variance of the weights of a layer depends on the amount of nodes of the previous layer. This mean that for layers preceded by layers with a big amount of hidden nodes, the weights will be initialised with a small variance, and this is ok cause we don't want a small bunch nodes to have stronger influence on the subsequent activations. But in layers which instead are preceded by layers with a small amount of nodes, it's ok to allow the weights to vary more.

",34098,,,,,4/14/2020 3:34,,,,0,,,,CC BY-SA 4.0 20243,2,,20241,4/14/2020 4:07,,5,,"

I disagree with the idea that a trained Machine Learning model would be impartial. Models are trained on data sets that contain features. Humans prepare those data sets and decide what features are included in the data set. The model only knows what it is trained on. Human bias is still there, just less blatantly obvious.

To address your question directly, I believe the answer is that it is no more or less ethical than having humans make such decisions, since in the end humans created the AI model.

My concern, however, is simply this:

Once we offload this to AI, we will no longer feel responsible for the results. The mentality of ""the machine made the choice"" will make it very easy for us to abdicate responsibility. This is especially true if the people implementing the AI decisions are not the ones who developed the AI. Of course, humans having to repeatedly make life-and-death decisions will suffer a serious and potentially devastating toll. So, in the end, it is a trade-off, but, from my perspective, I think the risk of abdication and the consequences thereof carry a heavier weight. But then I am not the one faced with making life-and-death choices.

",33976,,2444,,4/15/2020 12:03,4/15/2020 12:03,,,,1,,,,CC BY-SA 4.0 20244,1,20248,,4/14/2020 6:15,,4,656,"

Sutton and Barto define the state–action–next-state reward function, $r(s, a, s')$, as follows (equation 3.6, p. 49)

$$ r(s, a, s^{\prime}) \doteq \mathbb{E}\left[R_{t} \mid S_{t-1}=s, A_{t-1}=a, S_{t}=s^{\prime}\right]=\sum_{r \in \mathcal{R}} r \frac{p(s^{\prime}, r \mid s, a )}{\color{red}{p(s^{\prime} \mid s, a)}} $$

Why is the term $p(s' \mid s, a)$ required in this definition? Shouldn't the correct formula be $\sum_{r \in \mathcal{R}} r p(s^{\prime}, r \mid s, a )$?

",35926,,2444,,4/18/2022 9:16,4/18/2022 9:18,"Why does the definition of the reward function $r(s, a, s')$ involve the term $p(s' \mid s, a)$?",,2,0,,,,CC BY-SA 4.0 20245,2,,20244,4/14/2020 6:51,,2,,"

$\frac{p(s', r \mid s, a)}{p(s' \mid s, a)}$ represents the probability of observing reward $r$, given that state $s'$ is the next state transitioned to. The equation assumes a probability distribution of rewards $r$ for each next state $s'$, meaning that a different reward might be observed each time the state transitions from $s$ to $s'$. In particular, if the reward is a deterministic function of $(s, a, s')$, then $p(s', r \mid s, a) = p(s' \mid s, a)$ for that particular reward value $r$ (and $0$ for every other value).

",32780,,2444,,4/14/2020 12:32,4/14/2020 12:32,,,,5,,,,CC BY-SA 4.0 20247,2,,20241,4/14/2020 9:33,,4,,"

I will interpret the questions as being about triage. This is particularly important in crisis situations, where a lot of such life-or-death decisions have to be taken.

In the START system there are four different categories:

  • the deceased, who are beyond help
  • the injured who could be helped by immediate transportation
  • the injured with less severe injuries whose transport can be delayed
  • those with minor injuries not requiring urgent care.

According to the category assigned to a patient, you then decide what to do. Other systems might be more fine-grained, but the principle is the same: effectively the human decision is in the classification, which then guides the assignment of resources. In the above list, the second category would probably be the highest priority for treatment. But once the category has been decided, the course of action has been determined, though the actual treatment options will then be considered for each case.

The ethics are thus in the judgment of the survival chances: if a nurse decides patient X is too far gone to warrant treatment, that's it. So this is the hard part, the life-or-death decision.

There are then two aspects to consider:

  1. The accuracy of the diagnosis
  2. The predicted likelihood of treatment being effective

The diagnosis should be (I'm not a medic!) fairly neutral. There are of course mis-diagnoses, but there is no value judgment involved. There might be some 'leakage' between step 1 and step 2, in that a human being might be influenced by the prospects of treatment of a diagnosis when deciding what the problem is. As in: this is a nice person, I don't want them to die, so I (subconsciously) exclude the diagnosis X which would mean certain death.

In this case a computer system that had sufficient accuracy when making a diagnosis would IMHO actually be more ethical than a human being. Provided, of course, that there is no bias in the diagnosis. Which I think is a practical problem, but in theory could be dealt with.

Once the diagnosis has been determined, the estimation of treatment success is the next decision. This also can be subject to human error, and a computer system with access to statistical information (again, unbiased) of issues and their likelihood of survival could be more independent of other factors which a human being might use, but which are unrelated to the case.

So basically, I would say that a computer system could be more ethical than a human being in such a situation, because you can defend the way it reached its decision, and it has done so without taking into account factors unrelated to the problem at hand.

However, it's not always that easy. There are plenty of cases where other factors influence the decision. As long as there are fixed rules, that might not be a problem, though. Some issues would be (in each case the patients would have the same diagnosis and projected survival chance):

  • One ventilator, but an 8-year-old child and a 85-year-old man
  • A 30-year-old pregnant woman and the Prime Minister of the country
  • A one-month-old baby and a 25-year-old student
  • A homeless person and a billionaire

As a given, the system would have to be agnostic of general features such as gender, race, religion, etc. But how do you take into account age, social status, and a whole host of other factors?

If there is no difference in the situation, the fairest way would be to toss a coin. That would surely upset a lot of people (""How can you justify leaving our president to die in this time of crisis!""), but if you had explicit rules (""If one patient is in a higher tax bracket, they get priority"") you might upset even more people. The advantage of having such rules would be to make the bias explicit, and in a way protect a nurse or paramedic from having to decide between a rich senator and a homeless person — whose family is more likely to go after you if you decided against them? And if you have explicit, unambiguous rules, why not use them to guide an AI system?

Every human being has their own preferences, and I am glad that I don't have to make those kinds of decisions — it sounds like a horrific task to me. How can you even sleep in peace after basically having condemned one each of the above cases to death?

That is another factor about the ethics of AI: by relieving humans from having to make such decisions, it would also have a beneficial effect. If the final decision is the same that the human would come to, then it's a win/win situation. But that is probably unlikely, due to a whole range of subconscious biases.

The prime issue to me seems the lack of recourse in the case of ""computer said no"". When a human makes a decision (like a referee in a game), there will always be an argument if people are unhappy with it. But in this case there could be none. The oracle has spoken, your father will be left to die. Sorry, no other outcome possible. It would probably be the same with a human decision, but it wouldn't feel as 'cold': you can observe that the person making the decision has to struggle with it. And you might understand that it was not an easy choice. With a computer, that element is missing.

Anyway, to summarise: given various caveats,

  • sufficiently accurate and unbiased diagnosis,
  • unbiased prediction of treatment outcome,
  • transparent handling of non-medical factors,

I would say that an AI system would be more ethical, as it has a principled way of reaching a decision which does not disadvantage any group of patients and would always reach the same decision for the same patient; furthermore, it takes a heavy burden off triage staff who otherwise have to make such decisions.

Which does not mean that I would be happy with my loved-ones' survival being decided by such a system :)

",2193,,,,,4/14/2020 9:33,,,,1,,,,CC BY-SA 4.0 20248,2,,20244,4/14/2020 9:45,,6,,"

Expectation of reward after taking action $a$ in state $s$ and ending up in state $s'$ would simply be

\begin{equation} r(s, a, s') = \sum_{r \in R} r \cdot p(r|s, a, s') \end{equation}

The problem with this is that they do not define a probability distribution for rewards separately; they use the joint distribution $p(s', r|s, a)$, which represents the probability of ending up in state $s'$ with reward $r$ after taking action $a$ in state $s$. This probability can be separated into 2 parts using the product rule

\begin{equation} p(s', r|s, a) = p(s'|s, a)\cdot p(r|s', s, a) \end{equation}

which represents the probability of getting to state $s'$ from $(s, a)$, and then the probability of getting reward $r$ after ending up in $s'$.

If we define reward expectation through the joint distribution, we would have

\begin{align} r(s, a, s') &= \sum_{r \in R} r \cdot p(s', r|s, a)\\ &= \sum_{r \in R} r \cdot p(s'|s, a) \cdot p(r|s', s, a) \end{align}

but this would not be correct, since we have this extra $p(s'|s, a)$, so we divide everything by it to get an expression with only $p(r|s', s, a)$.

So, in the end we have

\begin{equation} r(s, a, s') = \sum_{r \in R} r \frac{p(r, s'|s, a)}{p(s'|s, a)} \end{equation}
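
A small numerical sanity check of this identity (with a made-up joint distribution over two next states and two reward values for one fixed state-action pair, purely for illustration):

import numpy as np

# joint probabilities p(s', r | s, a) for a fixed (s, a):
# rows are the next states s'_0 and s'_1, columns are the rewards r = 0 and r = 1
p_joint = np.array([[0.1, 0.3],
                    [0.2, 0.4]])
rewards = np.array([0.0, 1.0])

p_next = p_joint.sum(axis=1)                      # p(s' | s, a)
r_sas = (rewards * p_joint).sum(axis=1) / p_next  # sum_r r * p(s', r | s, a) / p(s' | s, a)

print(r_sas)  # expected reward given each next state: [0.75, 0.666...]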

",20339,,2444,,4/18/2022 9:18,4/18/2022 9:18,,,,0,,,,CC BY-SA 4.0 20249,1,,,4/14/2020 10:28,,2,867,"

I am trying to build a classifier which should be trained with the cross entropy loss. The training data is highly class-imbalanced. To tackle this, I've gone through the advice of the tensorflow docs

and now I am using a weighted cross entropy loss where the weights are calculated as

weight_for_class_a = (1 / samples_for_class_a) * total_number_of_samples/number_of_classes

following the mentioned tutorial.

It works perfectly, but why is there this factor total_number_of_samples/number_of_classes? The mentioned tutorial says this

[...] helps keep the loss to a similar magnitude.

But I do not understand why. Can someone clarify?

",36032,,,,,12/30/2022 19:04,How are weights for weighted x-entropy loss on imbalanced data calculated?,,1,0,,,,CC BY-SA 4.0 20252,1,,,4/14/2020 11:52,,2,38,"

I am performing a regression task on sparse images. The images are a result of a physical process with meaningful parameters (actually, they are a superposition of cone-like shapes), and I am trying to train a regressor for these parameters.

Here, sparse images mean that the data, and thus the expected output, is made of 2D square tensors, with only one channel, and that it is expected that roughly 90% of the image is equal to zero. However, in my system, the data is represented as dense tensors.

I built a neural network with an encoder mapping the image on an output for which I have chosen activation and shape such that it corresponds with those meaningful parameters.

I then use custom layers to build an image from these parameters in a way that matches closely the physical process, and train the network by using the L2 distance between the input image and the output image.

However, for a large set of parameters, the output image will be equal to zero, since these are sparse images. This is the case in general for the initial network.

Is it possible that, through training, the neural network will learn its way out of this all-zero parameterization ?

My intuition is that, in the beginning, the loss will be equal to the L2 norm of the input image, and the gradient will be uniformly zero, hence, no learning.

Can anyone confirm ?

",36040,,36040,,4/14/2020 12:58,4/14/2020 12:58,Can a neural network whose output is uniformly equal to zero learn its way out of it?,,0,4,,,,CC BY-SA 4.0 20253,2,,20211,4/14/2020 12:52,,4,,"

Dueling DQN has a different network architecture compared to vanilla DQN, so I don't think your version will work as well as the dueling architecture.

From Wang et al., 2016, Dueling Network Architectures for Deep Reinforcement Learning

On the other hand, since we only have the target Q-value, separating the Q-value into a state value and an advantage results in an identifiability issue. That is, the network might simply learn $V(s)=0$ and $A(s,a)=Q(s,a)$ for every state.

To tackle this issue, we should impose an additional constraint on the advantage estimate. We can simply use the equation below, as proposed in the paper, that is, subtract the mean of the advantages across actions before combining them with the state value.

$$Q(s,a;\theta,\alpha,\beta)=V(s;\theta,\beta)+(A(s,a;\theta,\alpha)-\frac{1}{|A|}\sum\limits_{a'}A(s,a';\theta,\alpha))$$
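
A minimal sketch of that aggregation step (plain NumPy, with made-up value and advantage outputs, just to show the mean subtraction):

import numpy as np

def dueling_q(value, advantages):
    # value: scalar V(s); advantages: vector of A(s, a) for each action
    return value + (advantages - advantages.mean())

v = 2.0
adv = np.array([1.0, 0.0, -1.0])
print(dueling_q(v, adv))  # [3. 2. 1.], and the mean advantage is forced to zero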

",32173,,32173,,4/14/2020 12:59,4/14/2020 12:59,,,,0,,,,CC BY-SA 4.0 20254,1,,,4/14/2020 12:56,,4,3304,"

Could machine learning be used to measure the distance between two objects from a picture or live camera?

An example of this is measuring the distance between the centres of the two eye pupils.

This area is all new to me, so any advice and suggestions would be greatly appreciated.

",36041,,2444,,4/14/2020 13:40,4/22/2020 14:54,Could machine learning be used to measure the distance between two objects from a picture or live camera?,,2,1,,,,CC BY-SA 4.0 20255,2,,20249,4/14/2020 13:10,,0,,"

This comes from the fact that you want the loss to keep the same magnitude. Think of it this way: a non-weighted loss function effectively has all its weights set to 1, so over the whole data set each sample is weighted with 1 and the sum of all weights is therefore $N$, where $N$ is the total number of samples.

Now in the case of a weighted loss, we want the weights to also sum to $N$ so that the loss's magnitude is comparable ($i = 1..C$ are your classes, $N_i$ is the number of samples for class $i$):

$$S = \sum_{i=1}^{C} \sum_{s_i=1}^{N_i} w_{i} = \sum_{i=1}^{C}\sum_{s_i=1}^{N_i}\frac{1}{N_i} \frac{N}{C} = \frac{N}{C} \sum_{i=1}^{C}\sum_{s_i=1}^{N_i}\frac{1}{N_i} = \frac{N}{C} \sum_{i=1}^{C}N_i\frac{1}{N_i} = \frac{N}{C} \sum_{i=1}^{C}1 = \frac{N}{C} C = N$$
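
A quick numerical check of this (with made-up class counts, just to see that the weights sum back to $N$):

import numpy as np

samples_per_class = np.array([900, 90, 10])   # highly imbalanced, N = 1000
N = samples_per_class.sum()
C = len(samples_per_class)

weights = (1. / samples_per_class) * N / C    # weight for one sample of each class
total = (weights * samples_per_class).sum()   # sum of the weights over the whole data set

print(weights)  # approximately [0.37, 3.70, 33.33]
print(total)    # 1000.0, i.e. equal to N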

",11351,,,,,4/14/2020 13:10,,,,0,,,,CC BY-SA 4.0 20256,2,,20254,4/14/2020 13:19,,3,,"

The short answer is: yes, it could. In what you are describing, there's nothing very new or specific conceptually; it sounds like a standard regression task. Now the problem that you're actually facing is: do you have the data?

Algorithms won't be able to learn the distance between eyes if you don't have the data that it takes. It could be supervised labels (1 distance per image which would be your regression target), reconstruction from depth maps, multi-view estimation etc. There's a number of ways you could do that given the appropriate data.

People focus on algorithms a lot, and that's good. But taking a good look at your data is often as important (if not more).

Now a good example would be in the self-driving car literature. You could start with this blog-post and go through the papers they reference: https://towardsdatascience.com/vehicle-detection-and-distance-estimation-7acde48256e1

There also seems to be some literature about your eye example (https://arxiv.org/pdf/1806.10890.pdf, https://www.sciencedirect.com/science/article/pii/S0165027019301578), so skimming through these papers and the datasets they use could guide you towards answering the question above: is there data for this task?

",11351,,11351,,4/14/2020 13:25,4/14/2020 13:25,,,,2,,,,CC BY-SA 4.0 20259,1,20297,,4/14/2020 15:36,,1,49,"

Which simulation platform is used by DeepMind and others to handle inverse kinematics, musculoskeletal simulation, etc., for reinforcement learning simulations and agents?

I thought they use Unity or Unreal but I assume that would be resource-heavy.

",36047,,2444,,4/14/2020 17:50,4/15/2020 21:17,Which simulation platform is used by DeepMind (and others) to handle inverse kinematics musculoskeletal?,,1,6,,,,CC BY-SA 4.0 20260,2,,20241,4/14/2020 15:41,,3,,"

Oliver's answer is interesting and it provides valuable information (such as a brief description of the triage process, which I was not aware of), but I disagree with his conclusion or, at least, I think it can be misleading, because he implies it's "more ethical" on the grounds that the AI will behave in a "more principled way". It depends on your definition of "ethical" (and I will recall one below) and the implications of behaving in a "more principled way".

First of all, we should emphasize that current AI systems can be and usually are biased because they are mainly trained with data associated with humans and their actions (as pointed out by Gerry in his answer). Furthermore, currently, AI systems (including the ones for healthcare) are only designed by humans, who can automatically and often inadvertently introduce bias, for example, by choosing the specific AI model over another, the data, how to process or acquire the data. (Maybe, in the future, AI systems will design other AI systems, but can this really reduce bias? Given that humans will probably design the first AI system that is able to design other AI systems, would the bias introduced in this first AI system also be propagated to the other AI systems?)

In principle, an AI could make more rational decisions, especially if it is not affected by human limitations (which often are not really limitations, such as feelings; e.g., if you didn't feel pain when hitting a chair, you would start bleeding without even noticing it) that make humans sometimes take irrational actions.

However, is the rational action also the most appropriate one? It depends on your definition of rational action and what we mean by "appropriate one".

Here's a definition of "rational" from the dictionary

based on or in accordance with reason or logic

Although the AI system takes actions systematically by following the rules of logic, those actions are still based on some principles or axioms, which will bias the system. So, a rational agent can still be biased, but this bias will be systematic.

In general, every decision can potentially be biased because it's based on some principles and taken by a "subject".

Now, let's address your question more directly

Would it be ethical to allow an AI to make life-or-death medical decisions?

First, let me report two definitions of "ethical" from the dictionary

relating to moral principles or the branch of knowledge dealing with these:

morally good or correct

The original question can thus be rephrased as

Would it be morally good to allow an AI to make life-or-death medical decisions?

Of course, it's difficult to argue what is morally good or not, because this is often subjective. It's morally good for me to help my friends, but it isn't necessarily morally good to help other people. We have different friends, so this automatically implies that morally good is subjective.

The answer to this question ultimately boils down to the philosophical issue of good vs bad, which is naturally subjective. So, the answer to this question will depend on the philosophical ideas of each person. Some people will say "yes" and some people will say "no".

I think it's more productive to answer the question

What are the advantages and disadvantages of allowing an AI to make life-or-death medical decisions?

This question can be answered more objectively. For example, we could say that this would free humans from doing this job, which, in certain scenarios (as Oliver points out in his answer), can be "inconvenient". However, we could also say that current AI systems are still not compatible with human values and they do not think in the way humans do, so they could unexpectedly take "wrong" actions, which can also be difficult to explain (especially if your AI system is or uses a black box system, such as a neural network).

So, should AI systems be used to make life-or-death medical decisions?

I think that people should decide "democratically", and there should be a great majority of acceptance, e.g. not just 51%, but rather 95-99% of people should agree with the idea of letting an artificial system take a life-or-death decision. To take a reasonable vote, people should be aware of the consequences of such a vote, which means that people should be aware of the inner workings of the AI system and what it can or cannot do (which is often not possible when the AI system is also composed of black-box models, such as neural networks). Alternatively, allowing or not allowing an AI to make such a decision can also be decided on a case-by-case basis.

All these issues are related to "explainable artificial intelligence", "accountability", "transparency", which have been increasingly debated in the last years.

",2444,,-1,,6/17/2020 9:57,4/14/2020 21:22,,,,0,,,,CC BY-SA 4.0 20261,1,37551,,4/14/2020 16:02,,2,58,"

I am working on a classification problem into progressive classes. In other words, there is some hierarchy of categories in such a way that A < B < C, e.g. low, medium, high, very high. What loss function and activation function for the output layer should I use to take advantage of the class hierarchy, so that true A and predicted C is penalized more than true A and predicted B?

My ideas are:

1) Assign some value to each category and use one output unit with the sigmoid activation and an RMS loss function. Then assign each class to an interval, e.g. 0-0.33 - class A, 0.33-0.66 - class B, 0.66-1 - class C. It seems to do the trick, but can favor the extreme categories over the middle ones.

2) Use K softmax output units, integer labels instead of one-hot encoded ones, and the sparse categorical crossentropy loss function. In this case, I am not sure how exactly sparse categorical crossentropy works and whether it really takes the hierarchy into account.

",36031,,,,,10/21/2022 2:23,Single label classification into hierarchical categories using a neural network,,1,2,,,,CC BY-SA 4.0 20262,2,,7832,4/14/2020 16:31,,1,,"

I think @16Aghnar explains the concept quite well. However, clipping the surrogate objective alone doesn't ensure the trust region, as shown in the following paper:

Engstrom et al., 2020, Implementation Matters in Deep RL: A Case Study on PPO and TRPO.

The authors inspected OpenAI's implementation of PPO and found many code-level optimizations. I'll list the most important ones below:

  1. Clipped surrogate objective
  2. Value function clipping
  3. Reward scaling
  4. Orthogonal initialization and layer scaling
  5. Adam learning rate and annealing

They find that:

  • PPO-M (2.-5.) alone can maintain the trust region.

    (PPO-M: PPO without Clipped surrogate objective, but with code-level optimizations)

  • PPO-Clip (1. only) cannot maintain the trust region.

    (PPO-Clip: PPO without code-level optimizations, but with Clipped surrogate objective)

  • TRPO+ has better performance comparing to TRPO, and with similar performance comparing to PPO

    (TRPO+: TRPO with code-level optimizations used in PPO OpenAI implementation)

An intuitive thought on why the clipped surrogate objective alone does not work is: the first step we take is unclipped.

As a result, since we initialize $\pi_\theta$ as $\pi$ (and thus the ratios start all equal to one) the first step we take is identical to a maximization step over the unclipped surrogate reward. Therefore, the size of step we take is determined solely by the steepness of the surrogate landscape (i.e. Lipschitz constant of the optimization problem we solve), and we can end up moving arbitrarily far from the trust region. -- Engstrom et al., 2020
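
For reference, the clipped surrogate term itself is simple to write down (a minimal NumPy sketch with made-up probability ratios and advantages; a real implementation would of course use the ops of an autodiff framework):

import numpy as np

def clipped_surrogate(ratio, advantage, eps=0.2):
    # ratio = pi_theta(a|s) / pi_theta_old(a|s) for each sampled (s, a)
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantage
    return np.minimum(unclipped, clipped).mean()

ratios = np.array([1.0, 1.5, 0.6])
advantages = np.array([1.0, 2.0, -1.0])
print(clipped_surrogate(ratios, advantages))

Note that when all the ratios are equal to one (which is exactly the situation right after syncing $\pi_\theta$ with $\pi$), the clipping has no effect, which is the point the quote above makes.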

",32173,,,,,4/14/2020 16:31,,,,0,,,,CC BY-SA 4.0 20263,1,20265,,4/14/2020 18:15,,1,208,"

I am new to the field of Machine Learning, so I wanted to start off by reading more about the mathematics and history behind it.

I am currently reading, in my opinion, a very good and descriptive paper on Statistical Learning Theory - ""Statistical Learning Theory: Models, Concepts, and Results"". In section 5.5 Generalization bounds, it states that:

It is sometimes useful to rewrite (17) ""the other way round"". That is, instead of fixing $\epsilon$ and then computing the probability that the empirical risk deviates from the true risk by more than $\epsilon$, we specify the probability with which we want the bound to hold, and then get a statement which tells us how close we can expect the risk to be to the empirical risk. This can be achieved by setting the right-hand side of (17) equal to some $\delta > 0$, and then solving for $\epsilon$. As a result, we get the statement that with a probability at least $1−\delta$, any function $f \in F$ satisfies

Equation (17) is the VC symmetrization lemma to which we applied the union bound and then the Chernoff bound:

What I fail to understand is the part where we are rewriting (17) ""the other way around"". I fail to grasp an intuitive understanding of the relation between (17) and (18), as well as of generalization bounds in general.

Could anyone help me with understanding these concepts or at least provide me with additional resources (papers, blog posts, etc.) that can help?

",35990,,2444,,4/14/2020 20:09,4/14/2020 23:31,Understanding relation between VC Symmetrization Lemma and Generalization Bounds,,1,2,,,,CC BY-SA 4.0 20264,1,20269,,4/14/2020 18:53,,2,749,"

I've built a neural network from scratch, choosing arbitrary numbers for the hyperparameters: learning rate, number of hidden layers and neurons in them, number of epochs and size of the mini-batches. Now that I've been able to build something potentially useful (~93% accuracy on test data, unseen by the model before), I want to focus on hyperparameter tuning.

The conceptual difference between training and validation sets is clear and makes a lot of sense. It's obvious that the model is biased towards the training set, so it wouldn't make sense to use it to tune the hyperparameters, nor for evaluating its performance.

But how can I use the validation set for this, if changing any of the hyperparameters forces me to rebuild the model again? The final prediction depends on the values of X MxN matrices (weights) and X N-dimensional vectors (biases), whose values depend on the learning rate, batch size and number of epochs, and whose dimensions depend on the number and size of the hidden layers. If I change any of these, I'd need to rebuild my model again. So I'd be using this validation set for training different models, ending up as in the first step: fitting a model from scratch.

To sum up: I fall into a recursive problem in which I need to fine-tune the hyperparameters of my model with unseen data, but changing any of these hyperparameters implies rebuilding the model.

",35806,,,,,4/15/2020 5:25,"How is a validation set used to tune the hyperparameters in a non-biased way, if the new models depends on the values of these?",,2,0,,,,CC BY-SA 4.0 20265,2,,20263,4/14/2020 18:59,,2,,"

Let $\varepsilon$ in (17) be equal to $\sqrt{\frac{4}{n}\left(\log{(2\mathcal{N}(\mathcal{F},n))}-\log{\delta}\right)}$. We have:

$$ P\left(\sup_{f\in\mathcal{F}}|R(f)-R_{emp}(f)| > \sqrt{\frac{4}{n}\left(\log{(2\mathcal{N}(\mathcal{F},n))}-\log{\delta}\right)}\right) \leqslant 2\mathcal{N}(\mathcal{F},n) e^{\frac{-n}{4}\left(\frac{4}{n}\left(\log{(2\mathcal{N}(\mathcal{F},n))}-\log{\delta}\right)\right)} = 2\mathcal{N}(\mathcal{F},n) e^{\log{\delta} - \log{(2\mathcal{N}(\mathcal{F},n))}} $$

Since $e^{\log{n}} = n$ (assuming the base of $\log$ here is $e$), we can write:

$$ P\left(\sup_{f\in\mathcal{F}}|R(f)-R_{emp}(f)| > \sqrt{\frac{4}{n}\left(\log{(2\mathcal{N}(\mathcal{F},n))}-\log{\delta}\right)}\right) \leqslant 2\mathcal{N}(\mathcal{F},n) \left(\frac{\delta}{2\mathcal{N}(\mathcal{F},n)}\right) $$

Hence:

$$ P\left(\sup_{f\in\mathcal{F}}|R(f)-R_{emp}(f)| > \sqrt{\frac{4}{n}\left(\log{(2\mathcal{N}(\mathcal{F},n))}-\log{\delta}\right)}\right) \leqslant \delta $$ As we know that $P(x > a) \leqslant c$ implies $P(x < a) \geqslant 1-c$, we will have:

$$ P\left(\sup_{f\in\mathcal{F}}|R(f)-R_{emp}(f)| < \sqrt{\frac{4}{n}\left(\log{(2\mathcal{N}(\mathcal{F},n))}-\log{\delta}\right)}\right) \geqslant 1- \delta $$

Now, as this inequality holds for the supremum over $\mathcal{F}$, and the event $|R(f) -R_{emp}(f)| \leqslant \varepsilon$ is a subset of the event $R(f) -R_{emp}(f) \leqslant \varepsilon$ (in terms of the probability space), we can say the following inequality holds for any function $f$ with probability at least $1-\delta$: $$ R(f) \leqslant R_{emp}(f) + \sqrt{\frac{4}{n}\left(\log{(2\mathcal{N}(\mathcal{F},n))}-\log{\delta}\right)} $$

",4446,,4446,,4/14/2020 23:31,4/14/2020 23:31,,,,2,,,,CC BY-SA 4.0 20266,1,20338,,4/14/2020 19:24,,4,96,"

I am trying to create a multiclass product-rating network based on product reviews and other input features. Two of the other input features are ""product category"" and ""gender"". However, I want to avoid unfair bias in the classification task between male/female. Since some product categories are more likely to be reviewed by males or females (hence, not balanced), I am seeking an approach to solve this ""imbalance""-like issue.

The options and things that I consider at the moment are:

  1. Downsample the training examples in each product category to balance for gender
  2. Add weights to the training examples for gender, or
  3. Add weights to the loss function (either log-likelihood or cross-entropy)

Even though downsampling might be the easiest option, I would like to explore the option of adding weights in the network in some way. However, most of the literature only discusses adding weights to the loss function in order to deal with imbalanced data related to the target value (which is not the issue that I am addressing).

Can someone help me or point me in the right direction to solve this challenge?

",36053,,,,,4/16/2020 23:10,How to add weights to one specific input feature to ensure fair training in the network?,,1,4,,,,CC BY-SA 4.0 20267,1,20270,,4/14/2020 20:40,,1,292,"

The agent is trying to master the Atari Breakout game.

Here is my code

Is it normal that reward_100 decreases that much after it hits 4.5? Is there a way to avoid that behavior?

Be aware that reward_100 is simply mean_reward = np.mean(self.total_rewards[-100:]). In other words, it is the mean over the last 100 rewards. On the graph, reward_100 is on the y-axis and the number of episodes is on the x-axis.

",35626,,2444,,4/14/2020 21:02,4/14/2020 22:45,Why are the rewards of my RL agent for the Atari Breakout game decreasing after a certain number of episodes?,,1,6,,,,CC BY-SA 4.0 20268,2,,20241,4/14/2020 22:00,,3,,"

At face value, this sounds monstrous--a measure to offload responsibility to a non-conscious mechanism that cannot be meaningfully punished for mistakes.

However, I will argue:

  • There is humane benefit in taking this decision out of the hands of doctors re: the psychological toll

Specifically, doctors are not the reason for resource scarcity, yet they're the ones being forced to make scarcity-driven life-or-death decisions, and that has got to take a toll.

Essentially, unless one is a sociopath, there is going to be an emotional effect. Here the ""sociopathy"" of a pure algorithm relieves humans of this terrible burden.

(Might even reduce burnout, and keep more doctors working longer and with more focus.)

",1671,,1671,,4/14/2020 22:06,4/14/2020 22:06,,,,0,,,,CC BY-SA 4.0 20269,2,,20264,4/14/2020 22:18,,1,,"

This is a standard ML problem: changing hyper-parameters changes the performance of the whole model. Ideally, you'd be cross-validating hyper-parameter choices, not merely comparing on a static validation set. That being said, you need to be careful with hyper-parameter optimization because you could overfit these to the peculiarities of your validation set; cross-validation helps to some extent, but what really helps is having a test set that you hardly ever test against, ideally never before you've chosen your HPs with (cross-)validation. Test-set performance will then indicate how much your HP-optimization procedure was biased.

I'm afraid training from scratch is your only solution. This does not, however, mean that you have to train until the end: many hyper-parameter optimization techniques out there will help you stop training early enough so you don't waste computational resources on HPs which are not worth it. A good starting point would be this blog post by Criteo's Aloïs Bissuel: Hyper-parameter optimization algorithms: a short review

",11351,,,,,4/14/2020 22:18,,,,0,,,,CC BY-SA 4.0 20270,2,,20267,4/14/2020 22:45,,1,,"

It seems that decaying the learning rate solved my problem. I changed learning_rate from 0.001 to 0.0001

",35626,,,,,4/14/2020 22:45,,,,2,,,,CC BY-SA 4.0 20272,1,,,4/14/2020 23:46,,2,269,"

After reading some literature on reinforcement learning (RL), it seems that stochastic approximation theory underlies all of it.

There's a lot of substantial and difficult theory in this area requiring measure theory leading to martingales and stochastic approximations.

The standard RL texts at best mention the relevant theorem and then move on.

Is the field of RL really stochastic approximation theory in disguise? Is RL just a less rigorous version of stochastic approximation theory?

",32390,,2444,,4/15/2020 2:52,5/22/2020 16:51,Is RL just a less rigorous version of stochastic approximation theory?,,1,0,,,,CC BY-SA 4.0 20273,2,,20272,4/15/2020 3:34,,2,,"

Is the field of RL really stochastic approximation theory in disguise? Is RL just a less rigorous version of stochastic approximation theory?

No, but reinforcement learning (RL) is based on stochastic approximation theory (SAT), and these two fields overlap.

In RL, you typically assume that the underlying problem can be modeled as a Markov decision process (MDP), and the goal is to find a policy (or value function) that solves this MDP. To find this policy, you can use stochastic approximation algorithms, such as Q-learning, but RL isn't just SAT, where, in general, there isn't necessarily a notion of MDP.

SAT is the study of iterative algorithms to find the extrema of functions by sampling from them and under which conditions these iterative algorithms converge. SAT isn't just applied in RL, but it is applied in many other fields, such as deep learning. The paper Scalable estimation strategies based on stochastic approximations: Classical results and new insights (2015) by P. Toulis et al. provides an overview of SAT and the connections with other fields (including RL).

To conclude, RL is based on SAT, but RL isn't just stochastic approximation algorithms, so they are distinct fields. If you want to study e.g. the convergence properties of certain RL algorithms, you may need to study SAT. In fact, for example, the typical proof of convergence for tabular Q-learning assumes the Robbins–Monro conditions. However, you can do a lot of RL without even knowing that RL is based on SAT. Similarly, you can do a lot of SAT without ever caring about RL.

",2444,,2444,,5/22/2020 16:51,5/22/2020 16:51,,,,7,,,,CC BY-SA 4.0 20275,2,,20264,4/15/2020 5:25,,1,,"

Having a totally separate test set is crucial. Once you start to use the validation set performance as a measure to tune hyperparameters, you are biasing your network to work well on the validation set, so it can no longer be relied on as a true measure of performance. Eventually, if you use your test set too often and then adjust hyperparameters to improve performance on the test set, you wind up in the same boat. I have actually used several test sets to try to avoid this trap.

",33976,,,,,4/15/2020 5:25,,,,0,,,,CC BY-SA 4.0 20276,1,,,4/15/2020 6:53,,2,166,"

I have an environment in which my agent learns with PPO. The environment has a maximum of 80 actions, but not all of them are always allowed. My idea was to mask the invalid actions by setting their probabilities to 0 and renormalizing the remaining actions. However, this would no longer be the predicted policy, and thus the agent wouldn't act on-policy. Is there a better way to mask a PPO agent's actions, or does it simply not constitute a big problem?

",31821,,,,,4/15/2020 6:53,Action masking for on policy algorithm like PPO,,0,0,,,,CC BY-SA 4.0 20277,1,20279,,4/15/2020 9:52,,2,69,"

My thinking is you input a paragraph, or sentence, and the program can boil it down to the primary concept(s).

Example:

Input:

Sure, it would be nice if morality was simply a navigation toward greater states of conscious well-being, and diminishing states of suffering, but aren't there other things to value independent of well-being? Like truth, or beauty?

Output:

Questioning moral philosophy.


Is there any group that's doing this already? If not, why not?

",36067,,2444,,12/21/2021 15:09,12/21/2021 15:10,How would you build an AI to output the primary concept of a paragraph?,,1,1,,,,CC BY-SA 4.0 20278,1,,,4/15/2020 10:45,,1,96,"

Using this code:

import gym
import numpy as np
import time

""""""
SARSA on policy learning python implementation.
This is a python implementation of the SARSA algorithm in the Sutton and Barto's book on
RL. It's called SARSA because - (state, action, reward, state, action). The only difference
between SARSA and Qlearning is that SARSA takes the next action based on the current policy
while qlearning takes the action with maximum utility of next state.
Using the simplest gym environment for brevity: https://gym.openai.com/envs/FrozenLake-v0/
""""""

def init_q(s, a, type=""ones""):
    """"""
    @param s the number of states
    @param a the number of actions
    @param type random, ones or zeros for the initialization
    """"""
    if type == ""ones"":
        return np.ones((s, a))
    elif type == ""random"":
        return np.random.random((s, a))
    elif type == ""zeros"":
        return np.zeros((s, a))


def epsilon_greedy(Q, epsilon, n_actions, s, train=False):
    """"""
    @param Q Q values state x action -> value
    @param epsilon for exploration
    @param s number of states
    @param train if true then no random actions selected
    """"""
    if train or np.random.rand() < epsilon:
        action = np.argmax(Q[s, :])
    else:
        action = np.random.randint(0, n_actions)
    return action

def sarsa(alpha, gamma, epsilon, episodes, max_steps, n_tests, render = True, test=False):
    """"""
    @param alpha learning rate
    @param gamma decay factor
    @param epsilon for exploration
    @param max_steps for max step in each episode
    @param n_tests number of test episodes
    """"""
    env = gym.make('Taxi-v3')
    n_states, n_actions = env.observation_space.n, env.action_space.n
    Q = init_q(n_states, n_actions, type=""ones"")
    print('Q shape:' , Q.shape)

    timestep_reward = []
    for episode in range(episodes):
        print(f""Episode: {episode}"")
        total_reward = 0
        s = env.reset()
        print('s:' , s)
        a = epsilon_greedy(Q, epsilon, n_actions, s)
        t = 0
        done = False
        while t < max_steps:
            if render:
                env.render()
            t += 1
            s_, reward, done, info = env.step(a)
            total_reward += reward
            a_ = epsilon_greedy(Q, epsilon, n_actions, s_)
            if done:
                Q[s, a] += alpha * ( reward  - Q[s, a] )
            else:
                Q[s, a] += alpha * ( reward + (gamma * Q[s_, a_] ) - Q[s, a] )
            s, a = s_, a_
            if done:
                if render:
                    print(f""This episode took {t} timesteps and reward {total_reward}"")
                timestep_reward.append(total_reward)
                break
#             print('Updated Q values:' , Q)
    if render:
        print(f""Here are the Q values:\n{Q}\nTesting now:"")
    if test:
        test_agent(Q, env, n_tests, n_actions)
    return timestep_reward

def test_agent(Q, env, n_tests, n_actions, delay=0.1):
    for test in range(n_tests):
        print(f""Test #{test}"")
        s = env.reset()
        done = False
        epsilon = 0
        total_reward = 0
        while True:
            time.sleep(delay)
            env.render()
            a = epsilon_greedy(Q, epsilon, n_actions, s, train=True)
            print(f""Chose action {a} for state {s}"")
            s, reward, done, info = env.step(a)
            total_reward += reward
            if done:  
                print(f""Episode reward: {total_reward}"")
                time.sleep(1)
                break


if __name__ ==""__main__"":
    alpha = 0.4
    gamma = 0.999
    epsilon = 0.9
    episodes = 200
    max_steps = 20
    n_tests = 20
    timestep_reward = sarsa(alpha, gamma, epsilon, episodes, max_steps, n_tests)
    print(timestep_reward)

from :

https://towardsdatascience.com/reinforcement-learning-temporal-difference-sarsa-q-learning-expected-sarsa-on-python-9fecfda7467e

A sample generated Q table is:

[[ 1.          1.          1.          1.          1.          1.        ]
 [ 0.5996      0.5996      0.5996      0.35936     0.5996      1.        ]
 [ 0.19936016  0.35936     0.10336026  0.35936     0.35936    -5.56063984]
 ...
 [ 0.35936     0.5996      0.35936     0.5996      1.          1.        ]
 [ 1.          0.5996      1.          1.          1.          1.        ]
 [ 0.35936     0.5996      1.          1.          1.          1.        ]]

The columns represent the actions and the rows represent the corresponding states.

Can the state be represented by a vector? The Q table cells are not indexed by vectors of size > 1, so how should such states be represented? For example, if I'm in the state [2], can this be represented as an n-dimensional vector?

Put another way, if Q[1,3] = 4, can the Q state 1 with action 3 be represented as a vector [1, 3, 2, 12, 3]? If so, then is the state_number -> state_attributes mapping stored in a separate lookup table?

",12964,,2444,,4/15/2020 20:47,4/15/2020 20:47,How are n-dimensional vectors state vectors represented in Q-learning?,,0,0,,,,CC BY-SA 4.0 20279,2,,20277,4/15/2020 10:58,,1,,"

Identifying the primary concepts of a paragraph requires an understanding of the meaning of the text. In natural language processing, we are still a long way off from even recognising and representing the meaning of text, let alone summarising the meaning of multiple sentences into a single statement.

Note that this is different from simply summarising a text: this can be done without any understanding based on textual features within the text itself, and ways of doing that have been around for a while. But such approaches will generally remove sentences which seem less relevant to the text, thus shortening it. They will not express the content in different words.

Conceivably people might try this with deep learning, where you train a system with paragraphs and the corresponding concepts, but again such a system would not have any understanding of the meaning, and thus results would be more or less accidental.

",2193,,2444,,12/21/2021 15:10,12/21/2021 15:10,,,,5,,,,CC BY-SA 4.0 20280,1,,,4/15/2020 11:21,,1,303,"

I am trying to figure out how to approach this.

Given training data of images and the pixel coordinates of the centre of an object in that image, would it be possible to predict the pixel coordinates of the object in the same "scene" in a different perspective, but with the object removed?

",36069,,2444,,1/16/2021 20:46,1/16/2021 20:46,"Given the coordinates of an object in an image, is it possible to predict the coordinates of the same object in a different perspective?",,0,1,,,,CC BY-SA 4.0 20282,1,20284,,4/15/2020 12:55,,2,199,"

If the i.i.d. (independent and identically distributed) assumption holds for a training-validation set pair, shouldn't their loss trends be exactly the same, since every batch from the validation set is equivalent to having a batch from the training set instead?

If the assumption were true, wouldn't that make any method that is aware of the fact that there are two separate sets (regularization methods such as early stopping) meaningless?

Do we work with the fact that there is a certain degree of wrongness to the assumption or am I interpreting it wrongly?

P.S - The question stems from an observation made on MNIST (where I suppose the i.i.d assumption holds strongly). The training and validation trends (losses and accuracy both) on MNIST were almost exactly identical for any network (convolutional and feedforward) trained using negative log-likelihood, making regularization meaningless.

",25658,,2444,,9/12/2021 1:43,9/12/2021 1:43,"If the i.i.d. assumption holds, shouldn't the training and validation trends be exactly the same?",,1,4,,,,CC BY-SA 4.0 20283,1,,,4/15/2020 13:10,,2,1686,"

I'm having trouble understanding the 5th step in the flowchart.

For the 5th step, the 'update the Q function by taking the average of returns' is confusing.

From what I understand, the Q function is basically the state-action pair values put in a table (the Q table). To update it means to make adjustments to the state-action pair value of the individual states and their respective actions (e.g state 1 action 1, state 3 action 1, state 3 action 2, so on and so forth).

I'm not sure what 'average of returns' means, though. Is it asking me to take the average of the returns after $x$ episodes? From my understanding, the return is the sum of rewards in a full episode (so, AVG = sum of returns over x episodes / x).

And what do I do with that 'average'?

I'm a little confused when they say 'update the Q function' because the Q function consists of many parameters that must be updated (the individual state-action pair value), and I'm not sure which one they are referring to.

What is the point of calculating the average of returns? Since the state-action pair value for a particular state and particular action will always be the same (e.g if I always take action 3 in state 4, I will always get value=2 forever)

",36072,,2444,,4/15/2020 14:28,5/22/2020 19:12,How does Monte Carlo Exploring Starts work?,,2,2,,,,CC BY-SA 4.0 20284,2,,20282,4/15/2020 14:03,,5,,"

If the i.i.d (independent and identically distributed) assumption holds, shouldn't the training and validation trends be exactly the same?

No, not necessarily. Let me explain why.

If you assume your samples (aka examples, observations, data points, etc.) are i.i.d., this means

  1. that they come from the same distribution, e.g. a Gaussian $\mathcal{N}(0, 1)$ (the identically distributed part), and

  2. that they are independently drawn from it, i.e., intuitively, each sample provides the same kind of information independently of the others

However, even if samples are independently drawn from a certain distribution, they can be different. For example, if you draw a sample $x$ from $\mathcal{N}(0, 1)$, an operation often denoted as $x \sim \mathcal{N}(0, 1)$, $x$ could have the value $0$, $1$, $13$ or $50$ (or any other number), so they could be variable, although your samples will tend to be mainly around $0$, because that's where your Gaussian puts more density (and your standard deviation is just $1$). If your standard deviation was higher, then there would be even more variability in the sampling process.

So, if you assume that your samples are independently drawn from a certain distribution, it doesn't mean that you will always get the same pattern of samples. In other words, you can still have variability in your samples, and this also depends on the distribution you sample from.

To answer your question more directly, there's a chance that your training data and your validation data don't necessarily have the same patterns, even if the independence assumption holds. Therefore, the training and validation trends (and I assume you mean e.g. the performance) are not necessarily the same, but, although this may also depend on the training method, I would say that they shouldn't be very different (if the assumption holds) because, intuitively, each sample should be as informative as any other sample (independence assumption).
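
As a tiny illustration of that variability (two i.i.d. samples drawn from the same Gaussian still have slightly different statistics):

import numpy as np

rng = np.random.RandomState(0)
train = rng.normal(loc=0.0, scale=1.0, size=1000)       # a 'training' sample
validation = rng.normal(loc=0.0, scale=1.0, size=1000)  # a 'validation' sample from the same distribution

print(train.mean(), validation.mean())  # both close to 0, but not identical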

Do we work with the fact that there is a certain degree of wrongness to the assumption or am I interpreting it wrongly?

It is often convenient to make the i.i.d. assumption even, if it doesn't hold, for several reasons:

  1. your training procedure may converge faster (because, intuitively, each sample will be as informative as any other sample)

  2. your models may be simpler (e.g., in naive Bayes, you make the i.i.d. assumption only to simplify the model and, in general, the mathematical formulations)

Sometimes, if it doesn't hold, your training procedure can be highly affected. In those cases, you can find workarounds and try to make it hold. For example, the usage of the experience replay in deep Q-learning is an example of a trick used to overcome the dependence of successive samples, which causes learning to be highly variable. See this question Why exactly do neural networks require i.i.d. data?.

The answers to the question On the importance of the i.i.d. assumption in statistical learning on CrossValidated provide more information and details, so you may want to have a look at it too. Here's another answer, which is related to shuffling and how it can or not make the independence assumption hold, that I highly recommend that you read.

",2444,,2444,,4/16/2020 1:57,4/16/2020 1:57,,,,2,,,,CC BY-SA 4.0 20285,1,,,4/15/2020 14:04,,-1,677,"

Am I right to say that the Q value of a particular state and action is the same as the state-action pair value of that same state and action?

",36072,,2444,,4/15/2020 14:31,4/15/2020 14:33,Is the Q value the same as the state-action pair value?,,2,0,,,,CC BY-SA 4.0 20286,2,,20285,4/15/2020 14:27,,1,,"

I don't understand your question very clearly.

Q-value of a particular state-action pair (s,a) under policy $\pi$ is the total reward you would expect to collect if you start from the state s, take the action a, and follow policy $\pi$ from then on.

In the literature, this is referred to as state-action values.

",36074,,,,,4/15/2020 14:27,,,,3,,,,CC BY-SA 4.0 20287,2,,20285,4/15/2020 14:28,,1,,"

Am I right to say that the Q value of a particular state and action is the same as the state-action pair value of that same state and action?

In general $Q$ is used as the symbol for action value and $Q(s,a)$ is the action value function.

The phrase ""state-action pair value"" is used to mean the same thing in some texts.

So, yes, you are right. At least I cannot think of or find any counter-examples where the two things could be used to refer to different things in the same text.

It is possible to work with different formulae for Q - e.g. finite horizon, discounted, average reward. There are also conceptual differences between a ""true"" action value, what you are currently estimating, and implementation details such as how you implement a Q function in code. In addition, there is also the advantage function usually labelled $A(s,a)$ which might also be considered as a ""state-action pair value"" by some authors.

However, those would not generally be flagged in documents by labelling one a ""Q value"" and another ""state-action pair value"" without at least some other text. So in general you are safe to consider the two terms to mean the same thing.

",1847,,1847,,4/15/2020 14:33,4/15/2020 14:33,,,,3,,,,CC BY-SA 4.0 20288,1,20298,,4/15/2020 15:09,,3,168,"

Following an earlier question, I'm interested in understanding the basics of Conv2d, and especially how the kernel is applied, summed, and then propagated. I understand that:

  1. a kernel has size W x H and that more than one kernel is applied (e.g., S x W x H), where S is the number of kernels.
  2. A stride or step is used when iterating the network input
  3. Padding may or may not be used by being added to the network input.

What I would ideally like to see is either a description or a python sample (pytorch or tensorflow) of how that is done, what the dimensionality of the output is, and any operation I may be missing (some YouTube videos say that the kernel output is summed and then divided to produce one new value representing the feature activation?)

",32528,,2444,,1/1/2022 10:08,1/1/2022 10:08,How is the convolution layer is usually implemented in practice?,,1,4,,,,CC BY-SA 4.0 20289,1,20368,,4/15/2020 16:20,,0,105,"

Even when we get a valuable reward signal after every single action, this immediate reward only approximates the short term goodness of the action.

To consider the long term effect of an action, we can use the return of an episode, the action value function $Q(s,a)$ or the advantage $A(s,a) = Q(s,a) - V(s)$. However, these measures do not rate the action in isolation but take all the following actions until the end of an episode into account.

Are there ways to more precisely approximate how good a single action really is considering its short and long term effects?

",35821,,35821,,4/18/2020 13:26,4/18/2020 22:11,What is the best measurement for how good an action of a reinforcement learning agent really is?,,1,10,,,,CC BY-SA 4.0 20290,2,,18576,4/15/2020 16:35,,4,,"

My 50 cents: NP (complexity) problems are still hard to solve, even with neural nets.

In computational complexity theory, NP (nondeterministic polynomial time) is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is ""yes"", have proofs verifiable in polynomial time by a deterministic Turing machine.

The easiest example to get an idea of what this is about is integer factorization from cryptography, which is the basis of the RSA cryptosystem.

For example, suppose we have two large prime numbers:

  • 12123123123123123123123.....45456
  • 23412421341234124124124.....11112

A neural network would have to tell us both of these numbers exactly, digit by digit, when we show it only the product of the two. This is not like guessing whether a picture shows a school bus: the space of such numbers is far larger than the number of words in all the languages on Earth. Imagine that there were billions of billions of different school buses, billions of billions of different fire hydrants, and billions of such classes; a neural network could not tell you exactly what is in the picture. The chance of guessing correctly is vanishingly small.

",36078,,,,,4/15/2020 16:35,,,,1,,,,CC BY-SA 4.0 20291,1,,,4/15/2020 17:06,,1,36,"

I'm trying to predict some properties of videos with Keras using the following rough architecture:

  1. Feed each frame through the same 2-D convolutional layer.
  2. Take the outputs of this 2-D convolutional layer and feed them through a 3-D convolutional layer.

There are more hidden layers, but these are the main ones that matter and are messing with my dimensionality. The input of Conv2D should be (batch_size, height, width, channels). Each movie has dimensionality (number_of_frames, height, width, channels). I first had the idea to neglect batching of movies entirely, and treat the batch size and the number of frames equivalently. Then, Conv2D would output a 4-D tensor, and I would increase its dimensionality to make the output a 5-D tensor that I could input into Conv3D. To do this, Conv3D could only accept inputs of batch size 1.

I decided against this, because I wanted to batch movies. My current thought is to do this:

conv1 = Conv3D(filters=1, kernel_size =(1,12,12), strides=(1,1,1), data_format='channels_last')
conv2 = Conv3D(filters=1,kernel_size=(10,10,10), strides=(1,1,1), data_format='channels_last')

conv1 would represent the 2-D convolutional layer while conv2 would represent the 3-D convolutional layer. Would this idea work? I figure there is the advantage that I can batch now and when I train the 2-D filter, the same 2-D filter is running over every single movie frame. I'm just worried that the filter in conv1 will fail to go over certain frames, or it will somehow overlap frames when I want the filter to go over every frame individually.
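For reference, here is the kind of quick shape check I have in mind (the input shape and layer arguments below are just made-up examples, assuming tf.keras):

import tensorflow as tf

x = tf.random.normal((2, 10, 64, 64, 3))  # 2 movies, 10 frames, 64x64 RGB

conv1 = tf.keras.layers.Conv3D(filters=1, kernel_size=(1, 12, 12),
                               strides=(1, 1, 1), data_format='channels_last')
print(conv1(x).shape)  # (2, 10, 53, 53, 1): the frame axis is untouched,
                       # so each frame is filtered separately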

",36079,,36079,,4/16/2020 15:04,4/16/2020 15:04,"Is using a filter of size (1, x, y) on a 3D convolutional layer the same as using a filter of size (x,y) on a 2D convolutional layer?",,0,1,,,,CC BY-SA 4.0 20292,1,,,4/15/2020 18:37,,4,509,"

I'm not really sure if this is the sort of question to ask on here, since it is less of a general question about AI and more about the coding of it, however I thought it wouldn't fit on stack overflow.

I have been programming a multilayer perceptron in C++, and it seems to be working with a sigmoid function. However, when I change the activation function to ReLU, it does not converge and stays at an average cost of 1 per training example. This is because all of the network's output neurons output a 0.

With the sigmoid function it converges rather nicely, I did a bit of testing and after about 1000 generations it got to an average cost of 0.1 on the first 1000 items in the MNIST dataset.

I will show the code I changed for the activation functions first, and then I will put in the whole block of code.

Any help would be greatly appreciated!

Sigmoid:

inline float activation(float num)
{
    return 1 / (1 + std::exp(-num));
}

inline float activation_derivative(float num)
{
    return activation(num) * (1 - activation(num));
}

ReLU:

inline float activation(float num)
{
    return std::max(num, 0.0f);
}

inline float activation_derivative(float num)
{
    return num > 0 ? 1.0f : 0.0f;
}

And here's the whole block of code (I collapsed the region of code for benchmarking and the region for creating the dataset):

#include <iostream>
#include <fstream>
#include <vector>
#include <random>
#include <chrono>
#include <cmath>
#include <string>
#include <algorithm>

#pragma region benchmarking
#pragma endregion

class Network
{
public:
    float cost = 0.0f;
    std::vector<std::vector<std::vector<float>>> weights;
    std::vector<std::vector<std::vector<float>>> deriv_weights;
    std::vector<std::vector<float>> biases;
    std::vector<std::vector<float>> deriv_biases;
    std::vector<std::vector<float>> activations;
    std::vector<std::vector<float>> deriv_activations;
    void clear_deriv_activations()
    {
        for (unsigned int i = 0; i < deriv_activations.size(); ++i)
        {
            std::fill(deriv_activations[i].begin(), deriv_activations[i].end(), 0.0f);
        }
    }
    int get_memory_usage()
    {
        int memory = 4;
        memory += get_vector_memory_usage(weights);
        memory += get_vector_memory_usage(deriv_weights);
        memory += get_vector_memory_usage(biases);
        memory += get_vector_memory_usage(deriv_biases);
        memory += get_vector_memory_usage(activations);
        memory += get_vector_memory_usage(deriv_activations);
        return memory;
    }
};

struct DataSet
{
    std::vector<std::vector<float>> training_inputs;
    std::vector<std::vector<float>> training_answers;
    std::vector<std::vector<float>> testing_inputs;
    std::vector<std::vector<float>> testing_answers;
};


Network create_network(std::vector<int> layers)
{
    Network network;
    int layer_count = layers.size() - 1;
    network.weights.reserve(layer_count);
    network.deriv_weights.reserve(layer_count);
    network.biases.reserve(layer_count);
    network.deriv_biases.reserve(layer_count);
    network.activations.reserve(layer_count);
    network.deriv_activations.reserve(layer_count);
    int nodes_in_prev_layer = layers[0];
    for (unsigned int i = 0; i < layers.size() - 1; ++i)
    {
        int nodes_in_layer = layers[i + 1];
        network.weights.emplace_back();
        network.weights[i].reserve(nodes_in_layer);
        network.deriv_weights.emplace_back();
        network.deriv_weights[i].reserve(nodes_in_layer);
        network.biases.emplace_back();
        network.biases[i].reserve(nodes_in_layer);
        network.deriv_biases.emplace_back(nodes_in_layer, 0.0f);
        network.activations.emplace_back(nodes_in_layer, 0.0f);
        network.deriv_activations.emplace_back(nodes_in_layer, 0.0f);
        for (int j = 0; j < nodes_in_layer; ++j)
        {
            network.weights[i].emplace_back();
            network.weights[i][j].reserve(nodes_in_prev_layer);
            network.deriv_weights[i].emplace_back(nodes_in_prev_layer, 0.0f);
            for (int k = 0; k < nodes_in_prev_layer; ++k)
            {
                float input_weight = (2 * (float(std::rand()) / RAND_MAX)) - 1; 
                network.weights[i][j].push_back(input_weight);
            }
            float input_bias = (2 * (float(std::rand()) / RAND_MAX)) - 1;
            network.biases[i].push_back(input_bias);
        }
        nodes_in_prev_layer = nodes_in_layer;
    }
    return network;
}

void judge_network(Network &network, const std::vector<float>& correct_answers)
{
    int final_layer_index = network.activations.size() - 1;
    for (unsigned int i = 0; i < network.activations[final_layer_index].size(); ++i)
    {
        float val_sq = (network.activations[final_layer_index][i] - correct_answers[i]);
        network.cost += val_sq * val_sq;
    }
}

inline float activation(float num)
{
    return std::max(num, 0.0f);
}

void forward_propogate(Network& network, const std::vector<float>& input)
{
    const std::vector<float>* last_layer_activations = &input;
    int last_layer_node_count = input.size();
    for (unsigned int i = 0; i < network.weights.size(); ++i)
    {
        for (unsigned int j = 0; j < network.weights[i].size(); ++j)
        {
            float total = network.biases[i][j];
            for (int k = 0; k < last_layer_node_count; ++k)
            {
                total +=  (*last_layer_activations)[k] * network.weights[i][j][k];
            }
            network.activations[i][j] = activation(total);
        }
        last_layer_activations = &network.activations[i];
        last_layer_node_count = network.weights[i].size();
    }
}

void final_layer_deriv_activations(Network& network, const std::vector<float>& correct_answers)
{
    int final_layer_index = network.activations.size() - 1;
    int final_layer_node_count = network.activations[final_layer_index].size();
    for (int i = 0; i < final_layer_node_count; ++i)
    {
        float deriv = network.activations[final_layer_index][i] - correct_answers[i];
        network.deriv_activations[final_layer_index][i] = deriv * 2;
    }
}

inline float activation_derivative(float num)
{
    return num > 0 ? 1.0f : 0.0f;
}

void back_propogate_layer(Network& network, int layer)
{
    int nodes_in_layer = network.activations[layer].size();
    int nodes_in_prev_layer = network.activations[layer - 1].size();
    for (int i = 0; i < nodes_in_layer; ++i)
    {
        float total = network.biases[layer][i];
        for (int j = 0; j < nodes_in_prev_layer; ++j)
        {
            total += network.weights[layer][i][j] * network.activations[layer - 1][j];
        }
        float dzda = activation_derivative(total);
        float dzdc = dzda * network.deriv_activations[layer][i];
        for (int j = 0; j < nodes_in_prev_layer; ++j)
        {
            network.deriv_weights[layer][i][j] += network.activations[layer - 1][j] * dzdc;
            network.deriv_activations[layer - 1][j] += network.weights[layer][i][j] * dzdc;
        }
        network.deriv_biases[layer][i] += dzdc;
    }
}

void back_propogate_first_layer(Network& network, std::vector<float> inputs)
{
    int nodes_in_layer = network.activations[0].size();
    int input_count = inputs.size();
    for (int i = 0; i < nodes_in_layer; ++i)
    {
        float total = network.biases[0][i];
        for (int j = 0; j < input_count; ++j)
        {
            total += network.weights[0][i][j] * inputs[j];
        }
        float dzda = activation_derivative(total);
        float dzdc = dzda * network.deriv_activations[0][i];
        for (int j = 0; j < input_count; ++j)
        {
            network.deriv_weights[0][i][j] += inputs[j] * dzdc;
        }
        network.deriv_biases[0][i] += dzdc;
    }
}

void back_propogate(Network& network, const std::vector<float>& inputs, const std::vector<float>& correct_answers)
{
    network.clear_deriv_activations();
    final_layer_deriv_activations(network, correct_answers);
    for (int i = network.activations.size() - 1; i > 0; --i)
    {
        back_propogate_layer(network, i);
    }
    back_propogate_first_layer(network, inputs);
}

void apply_derivatives(Network& network, int training_example_count)
{
    for (unsigned int i = 0; i < network.weights.size(); ++i)
    {
        for (unsigned int j = 0; j < network.weights[i].size(); ++j)
        {
            for (unsigned int k = 0; k < network.weights[i][j].size(); ++k)
            {
                network.weights[i][j][k] -= network.deriv_weights[i][j][k] / training_example_count;
                network.deriv_weights[i][j][k] = 0;
            }
            network.biases[i][j] -= network.deriv_biases[i][j] / training_example_count;
            network.deriv_biases[i][j] = 0;
            network.deriv_activations[i][j] = 0;
        }
    }
}

void training_iteration(Network& network, const DataSet& data)
{
    int training_example_count = data.training_inputs.size();
    for (int i = 0; i < training_example_count; ++i)
    {
        forward_propogate(network, data.training_inputs[i]);
        judge_network(network, data.training_answers[i]);
        back_propogate(network, data.training_inputs[i], data.training_answers[i]);
    }
    apply_derivatives(network, training_example_count);
}

void train_network(Network& network, const DataSet& dataset, int training_iterations)
{
    for (int i = 0; i < training_iterations; ++i)
    {
        training_iteration(network, dataset);
        std::cout << ""Generation "" << i << "": "" << network.cost << std::endl;
        network.cost = 0.0f;
    }
}

#pragma region dataset creation

#pragma endregion

int main() 
{
    Timer timer;
    DataSet dataset = create_dataset_from_file(""data.txt"");
    Network network = create_network({784, 128, 10});
    train_network(network, dataset, 1000);
    std::cout << timer.get_duration() << std::endl;
    std::cin.get();
}
",33354,,33354,,4/15/2020 21:18,4/16/2020 10:36,Neural network doesn't seem to converge with ReLU but it does with Sigmoid?,,1,0,,,,CC BY-SA 4.0 20293,2,,18802,4/15/2020 19:33,,1,,"

The paper ""Bias-Variance"" Error Bounds for Temporal Difference Updates (2000) by M. Kearns and S. Singh provides error bounds for temporal-difference algorithms, i.e. TD($k$) and TD($\lambda$) (see theorem 1 and theorem 2, respectively). Note that both TD($k$) and TD($\lambda$) include TD($0$) as a special case.

",2444,,,,,4/15/2020 19:33,,,,2,,,,CC BY-SA 4.0 20294,2,,12020,4/15/2020 20:00,,2,,"

The use of a neural network to push the search algorithm to continually search only along promising paths is the same idea described in the AlphaZero paper. In AlphaZero, the search loop uses the network to encourage continued exploration of high-probability moves, which are then evaluated by the same network via its value head. The use of alpha-beta specifically is not necessary, just a selection rule aptly known as PUCT (Predictor + Upper Confidence bounds applied to Trees).
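If it helps, the selection rule used in the AlphaGo Zero / AlphaZero papers picks, at each tree node, the action maximising $Q(s,a) + U(s,a)$, where the exploration term has the form

$$U(s,a) = c_{\text{puct}} \, P(s,a) \, \frac{\sqrt{\sum_b N(s,b)}}{1 + N(s,a)},$$

with $P(s,a)$ the prior probability from the policy head and $N(s,a)$ the visit count of that action.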

",36086,,,,,4/15/2020 20:00,,,,0,,,,CC BY-SA 4.0 20297,2,,20259,4/15/2020 21:17,,1,,"

DeepMind used MuJoCo (see also the related paper MuJoCo: A physics engine for model-based control) for the simulations, as they stated in section 3.1 of their paper Emergence of Locomotion Behaviours in Rich Environments (2017), which is the paper you should read to know more about their results related to those animations of skeletons that try to walk or jump (but do it weirdly).

",2444,,,,,4/15/2020 21:17,,,,2,,,,CC BY-SA 4.0 20298,2,,20288,4/15/2020 22:06,,3,,"

I don't think that to understand convolution you need to dig into the nested code of huge libraries, since the code quickly becomes really hard to understand and convoluted (ba dum tsss!). Joking apart, in PyTorch Conv2d is a layer that applies another low-level function, conv2d, written in C++.

Luckily enough, the guys from PyTorch wrote the general idea of how convolution is implemented in the documentation:

From this paragraph, we already have some important information, like the input and output dimensions. The number of channels should be easy to understand: if we have an RGB image, for example, there are 3 channels, one for each color, so they are just different matrices representing different features.

The next important element is the reference to the cross correlation, the function applied to our input images through the kernel k. Why cross-correlation? Because it is almost identical to a convolution, as you can see comparing their formulas:

The only difference lies in the way the width indexing is implemented, which causes the operation to start from the bottom right of the input matrix for the convolution and from the top left of the input image for the cross-correlation (the circles in the squares in the previous pic). Since, in most programming languages, matrix indexing starts from the top left, cross-correlation is the most common choice to implement.

But how do these formulas work in practice? Here's another picture taken from Chapter 9 of Deep Learning (Goodfellow, Bengio, Courville), which I strongly suggest you to read.

Basically, from the input matrix, a submatrix is extracted, with the same dimension of the kernel, then the sub-matrix and the kernel are multiplied elementwise and all the resulting product summed together to produce a single output element that will form a 'pixel' of the resulting feature map (the output matrix).
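To make this concrete, here is a minimal, purely illustrative NumPy sketch of that extract-multiply-sum loop (single channel, stride 1, no padding; all values are made up):

import numpy as np

def cross_correlate2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # extract the submatrix under the filter, multiply elementwise, sum
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])
print(cross_correlate2d(image, kernel).shape)  # (3, 3) feature map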

Here's another example with fake numbers that I made. I hope the double notation for filter/kernel doesn't generate confusion; I actually found that it is sometimes used inconsistently (in the chapter I linked they don't even use the term filter at all). In practice, they mean the same thing: I usually call kernel the actual matrix that is multiplied with the input, and with filter I refer to the sliding window on the input image (which, of course, must have the same dimensions as the kernel).

Lastly, when you apply padding, the filter can actually move also outside the 'edges' of the input matrix, in which case all the elements outside are considered to be zero. The computation is exactly the same, but since there are more sliding steps, the output matrix will have a larger dimension.

Please note that with multiple input channels you can perform either 2D or 3D convolution; the difference lies in the filter dimension: in 2D convolution it would be a square, whereas in 3D convolution it would be a cube. This means that, for an RGB image, a 2D convolution would treat each color layer independently, mixing the information from the channels only with further computations like pooling (averaging the resulting feature maps of each color, or selecting the max value among the feature maps for each pixel, etc.), while a 3D convolution would mix the color layers together already during convolution, thanks to the 3D kernel, which sums together elements from different layers.

",34098,,34098,,4/15/2020 22:29,4/15/2020 22:29,,,,0,,,,CC BY-SA 4.0 20300,1,20302,,4/16/2020 0:52,,1,308,"

From my understanding, the policy $\pi$ is basically how the agent acts (i.e. the actions it will take in each state).

However, I am confused about the Q value and how it is ""affected"" by a policy. This answer says

$Q^\pi(s, a)$ is the action-value function. It is the expected return starting from state $s$, following policy $\pi$, taking action $a$. It's focusing on the particular action at the particular state.

From this, I infer that the $Q$ value (the action-value function) will be affected by the policy $\pi$. Why? So, why does the Q value change according to policy $\pi$?

Shouldn't the Q value be constant, because the same action taken in the same state will always give the same yield (and hence remain constantly good/bad)?

All the policy does is find the max Q values and base the agent's behaviour on that information.

",36072,,2444,,4/16/2020 1:30,4/16/2020 1:41,Why does the policy $\pi$ affect the Q value?,,2,0,,,,CC BY-SA 4.0 20301,2,,20300,4/16/2020 1:05,,0,,"

OK, Q is the expected reward associated with being in a given state, taking a certain action and then following the given policy.

You need to take the expectation of the sum of the immediate reward and the value function of the next state, which is defined by the policy.
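In symbols, one common (discounted) way to write this is

$$Q^\pi(s,a) = \mathbb{E}\left[ R_{t+1} + \gamma \, V^\pi(S_{t+1}) \mid S_t = s, A_t = a \right].$$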

",32390,,,,,4/16/2020 1:05,,,,0,,,,CC BY-SA 4.0 20302,2,,20300,4/16/2020 1:41,,2,,"

First of all, $Q_\pi(s, a)$ IS DEFINED AS the value (i.e. the expected return) of taking some action $a$ in some state $s$, AND THEN following some given policy $\pi$ (until e.g. the end of the game or your life). In other words, suppose that you take action $a$ in state $s$, AND THEN use the policy $\pi$ to behave in the world until you die, then $Q_\pi(s, a)$ would represent the value that you would obtain.

So, we are DEFINING $Q_\pi(s, a)$ in a certain way. This is a DEFINITION! It's not an algorithm. In the algorithms (e.g. Q-learning), things will typically change, but that's a different story that you should investigate later.

From this, I infer that the $Q$ value (the action-value function) will be affected by the policy $\pi$.

So, $Q_\pi(s, a)$ will not keep changing. You could say that $Q_\pi(s, a)$ (which is a function) is ""affected by"" $\pi$ ONLY in the sense that it is ""defined in terms of"" $\pi$. To be precise, $Q_\pi(s, a)$ is actually an expectation (which is a mathematical concept similar to an ideal average). If you are not familiar with the concept of expectation, I suggest you get familiar with it first, before studying reinforcement learning.

Shouldn't the Q value be constant, because the same action taken in the same state will always give the same yield (and hence remain constantly good/bad)?

Again, there's the distinction between the algorithm that you actually use to find the function $Q_\pi(s, a)$ and the definition of the same function. In case you are estimating the function with an algorithm, you will not necessarily find ""constant Q values"". It depends on different aspects, which I would like to avoid discussing here, so that this post doesn't become an open discussion (I suggest you first learn about the basic Bellman equations and then you study the algorithms from the book Reinforcement learning: an introduction by Sutton and Barto).

",2444,,,,,4/16/2020 1:41,,,,0,,,,CC BY-SA 4.0 20303,1,20318,,4/16/2020 1:53,,2,638,"

I'm a beginner in the RL field, and I would like to check that my understanding of certain RL concepts.

Value function: How good it is to be in a state S following policy π.

So, the value functions here are 0.3 and 0.9

Q function(also called state-action value, or just action value): How good it is to be in a state S and perform action A while following policy π. It uses reward to measure the state-action value

So, the state-action values here are 0.03,0.02,0.5 and 0.9

Q value: The overall expected rewards after performing action A in state S, and continuing with policy π until the end of the episode. So, essentially I can only calculate the Q value if I know all the state-action values of the actions I will be taking in the single episode. (Because the Q value takes into account the actions after the current action A, till the end of the episode, following policy π)

Reward: The metric used to tell the agent how good/bad its action was. It is a constant value. For example:

 1. Fall in pond --> -1
 2. On stone path --> +1
 3. Reach home--> +10

Return: The sum of rewards in a single episode

Policy π: A set of specific instructions an agent will follow in an episode. For example, the policy will look like:

In state 1, take action 3 ( which takes me to state 2)

In state 2, take action 2 ( which takes me to state 3)

In state 3, take action 1 ( Which takes me to state 4)

In state 4, take action 2 ( Which takes me to terminal state)

1 episode completed

And my policy will keep updating each episode to get the best return

",36072,,2444,,4/16/2020 2:11,9/22/2021 1:22,"Is my understanding of the value function, Q function, policy, reward and return correct?",,2,1,,,,CC BY-SA 4.0 20304,1,,,4/16/2020 2:21,,2,94,"

I have the following problem called ""1-2 steal marbles"".

Initially, there are 6 marbles on the board. One of the players can choose to remove 1 or 2 marbles, leaving 5 or 4. After that, the other player can do the same, again choosing to take 1 or 2 marbles from the board. The process continues until there is only one marble on the board. The player who wins is the one that leaves the last marble on the board. (For example: if there are 3 marbles and it's my turn, then I will choose to remove 2 to leave one on the board and win.)

How can I draw the search tree that represents the application of the alpha-beta pruning to this ""1-2 steal marbles"" with 13 marbles? I would like to see the maximizer and minimizer nodes and the value at the nodes too.

",36094,,2444,,4/18/2020 0:30,4/18/2020 0:30,"How can I apply the alpha-beta pruning algorithm to the ""1-2 steal marbles"" problem?",,0,4,,,,CC BY-SA 4.0 20305,1,,,4/16/2020 2:57,,2,155,"

I have the following scenario. I have a binary classification problem, whose underlying function is a step function. The probability distribution of feature vectors is a uniform over the domain.

Case 1: I have a classifier which fits the training samples perfectly, no matter what the size of the data. The space of functions $H$ has an infinite VC dimension. As the data points going to infinite, the hypothesized function converges pointwise to the underlying step function.

Case 2: Here I have divided the same hypothesis space into a number of hierarchical subspaces $H_1 \subset H_2 \subset H_3 \subset \dots \subset H_n$ ($n$ goes to infinity). The VC dimension of each of the spaces is finite and grows with $n$ to infinity. Now, given any data of $n$ points, I compute the minimum VC dimension required to fit the data exactly, say, $d_n$, and use that space $H_{d_n}$ as the hypothesis space. Do the same as the data size $n$ goes to infinity, at each $n$ using the hypothesis space that has just enough VC dimension to fit the data. In this approach also, as the data size goes to infinity, the hypothesized function converges pointwise to the underlying step function.

Is there a difference between these two approaches to the same problem? Is there any theoretical difference? Is either method better than the other, in some sense?

",36095,,2444,,4/16/2020 3:04,1/2/2023 2:08,An infinite VC dimensional space vs using hierarchical subspaces of finite but growing VC dimensions,,1,3,,,,CC BY-SA 4.0 20307,2,,20303,4/16/2020 4:08,,1,,"

I think most of it is correct.

Q function(also called state-action value, or just action value): How good it is to be in a state S and perform action A while following policy π. It uses reward to measure the state-action value

This is a bit off. Q function basically tells you how good it is to be in state S and perform action A, and follow policy $\pi$ from the next state onwards. The action A that you take can be any action from the action space and need not be according to the policy $\pi$.

Also, I think Q-function and Q-value are mostly used interchangeably to mean the same thing.

",36074,,,,,4/16/2020 4:08,,,,1,,,,CC BY-SA 4.0 20308,1,20315,,4/16/2020 6:37,,5,933,"

Very deep models involve the composition of several functions or layers. The gradient tells how to update each parameter, under the assumption that the other layers do not change. In practice, we update all of the layers simultaneously.

The above is an extract from Ian Goodfellow's Deep Learning - which talks about the need for batch normalization.

Why do we update all the layers simultaneously? Instead, if we update layers one at a time during backpropagation - it will eliminate the need for batch normalization, right?

Reference: A Gentle Introduction to Batch Normalization for Deep Neural Networks

P.S. The attached link says: Because all layers are changed during an update, the update procedure is forever chasing a moving target. Apart from the main question, it would be great if someone could explain why exactly a moving target is being referred to in the above sentence.

",35585,,2444,,4/16/2020 11:48,4/16/2020 12:10,Why do we update all layers simultaneously while training a neural network?,,1,1,,,,CC BY-SA 4.0 20309,1,20327,,4/16/2020 6:44,,8,2254,"

Although I know how the algorithm of iterative policy evaluation using dynamic programming works, I am having a hard time realizing how it actually converges.

Intuitively, with each iteration, we get a better and better approximation of the value function, so we can convince ourselves that it converges. But, stated this simply, the method would seem to be very inefficient, contrary to the reality that it is actually quite efficient.

What is the rigorous mathematical proof of the convergence of the policy evaluation algorithm to the actual answer? How is it that the value function obtained this way is close to the actual values computed by solving the set of bellman equations?

",35926,,2444,,4/16/2020 13:00,1/22/2022 21:45,What is the proof that policy evaluation converges to the optimal solution?,,2,0,,,,CC BY-SA 4.0 20310,1,20311,,4/16/2020 7:08,,3,786,"

I'm a biotech student and I'm currently working on single-particle tracking. For my work, I need to use aspects of deep learning (CNN, RNN and object segmentation) but I'm not familiar with these topics. I have some prior knowledge in python.

So, do I have to learn machine learning first before going into deep learning, or can I skip ML?

What are the pros and cons of studying machine learning before deep learning?

",36099,,2444,,4/16/2020 11:01,1/17/2021 13:59,What are the pros and cons of studying machine learning before deep learning?,,4,1,,12/22/2021 11:51,,CC BY-SA 4.0 20311,2,,20310,4/16/2020 9:28,,7,,"

That question doesn't really make sense: deep learning is a sub-topic of machine learning, so you can't really 'skip' it. It's a bit like ""I want to learn about trigonometry, but do I need to do geometry first?""

Having said that, in order to make sense of deep learning you should really know about the general principles of machine learning, otherwise you won't understand it. Or, more importantly, you won't understand what problems deep learning can be applied to, and what issues are better solved with other methods.

You don't need to go into much detail, but should at least get an overview.

",2193,,,,,4/16/2020 9:28,,,,0,,,,CC BY-SA 4.0 20312,2,,20292,4/16/2020 10:36,,1,,"

It seems like you're suffering from the dying ReLU problem. ReLU outputs zero for any negative input, so the weights and biases your network learned are leading to negative values being passed through the ReLU function - meaning you would get 0. There are a few things you can do. I do not know the exact format of your data, but if it is MNIST it is possible you simply don't have normalized values. You could be learning a large negative bias as a result. Try dividing every pixel intensity in your dataset by the float 255.0 to normalize your values and see if that fixes your problem.

You could also change your activation function to something such as Leaky ReLU which attempts to solve this problem with a small positive gradient for negative values.
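For reference, a common parameterisation of Leaky ReLU (with a small slope $\alpha$, e.g. $\alpha = 0.01$) and its derivative is

$$f(x) = \begin{cases} x & x > 0 \\ \alpha x & x \le 0 \end{cases}, \qquad f'(x) = \begin{cases} 1 & x > 0 \\ \alpha & x \le 0 \end{cases},$$

so the gradient never becomes exactly zero for negative pre-activations.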

",22373,,,,,4/16/2020 10:36,,,,1,,,,CC BY-SA 4.0 20313,2,,7640,4/16/2020 11:38,,0,,"

There are two kinds of problems: stationary and non-stationary.

Stationary problems are those whose reward values are static and do not change over time, while non-stationary problems are those whose reward values change with time.

",36104,,,,,4/16/2020 11:38,,,,0,,,,CC BY-SA 4.0 20314,1,,,4/16/2020 11:55,,1,61,"

I want to solve the two partition problem (https://en.wikipedia.org/wiki/Partition_problem) using an uninformed search algorithm (BFS or uniform cost).

The states can be represented by three sets S1, S2, S, where the set S contains unassigned values and S1 and S2 the values assigned to each partition respectively. At the initial state, S will contain all values and S1 and S2 will be empty. The actions consist in moving a value from S to S1 or S2. The objective is to find a complete assignment (S is empty) where abs(sum(S1)-sum(S2)) is minimum. As you can see, I have all elements but the cost of the actions.

  • How can I assign costs to the actions in order to apply one of those algorithms? (Costs must be positive.)

I know it is not the best way to solve this problem but there must be a way to do it because the problem is formulated this way in the book.

",36105,,1671,,4/16/2020 22:06,4/16/2020 22:06,2 Partition Problem,,0,0,,,,CC BY-SA 4.0 20315,2,,20308,4/16/2020 12:05,,3,,"

Why do we update all layers simultaneously while training a neural network?

We typically train a neural network with gradient descent and back-propagation. Gradient descent is the iterative algorithm used to update the parameters and back-propagation is the algorithm used to compute the gradient of the loss function with respect to each of these parameters.

Let's denote a vector that contains all learnable parameters of a neural network $M$ by $\mathbf{w} = \left[w_1, \dots, w_n \right] \in \mathbb{R}^n$ (so $M$ contains $n$ learnable parameters), the loss function of $M$ by $\mathcal{L}$, the gradient of the loss function with respect to each parameter $w_i$ of $M$ by $ \nabla \mathcal{L} = \left[ \frac{\partial \mathcal{L}}{\partial w_1}, \dots, \frac{\partial \mathcal{L}}{\partial w_n} \right] \in \mathbb{R}^n$, then the gradient descent step to update all parameters is

\begin{align} \mathbf{w} \leftarrow \mathbf{w} - \gamma * \nabla \mathcal{L} \tag{1} \label{1} \end{align}

where $\gamma \in \mathbb{R}$ is the learning rate.

In equation \ref{1}, we are assigning to $\mathbf{w}$ the value $\mathbf{w} - \gamma * \nabla \mathcal{L}$, so we are updating all parameters $\mathbf{w}$ simultaneously, so we are also updating all layers simultaneously.

In principle, you could update each parameter $w_i$ individually. To be more precise, you would have the following update rule

\begin{align} w_i \leftarrow w_i - \gamma * \frac{\partial \mathcal{L}}{\partial w_i}, \; \forall i\tag{2} \label{2} \end{align}

So, you could update first $w_1$, then $w_2$, and so on.

Actually, you don't need to update the parameters sequentially (also because there is no real order of the parameters). You can actually update them in any other way. You can update the parameters in any way because, although the computation of the gradient highly depends on the structure of the neural network (so if you change the structure, the computation of the gradient also changes), once the gradient is computed, you already have all information to update each parameter independently of each other.

You typically update all parameters (or layers) simultaneously because, in practice, you work with vectors and matrices rather than scalars in order to take benefit from efficient matrix multiplication algorithms and hardware (i.e. GPUs).
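As a small illustration of the last point (purely a sketch, with made-up numbers): once the gradient has been computed, updating the whole parameter vector at once (equation \ref{1}) and updating one parameter at a time (equation \ref{2}) give exactly the same result, but the vectorised version maps directly onto efficient matrix operations and hardware.

import numpy as np

w = np.array([0.5, -1.2, 3.0, 0.1, -0.7])       # all learnable parameters in one vector
grad = np.array([0.1, -0.2, 0.05, 0.3, -0.1])   # gradient of the loss w.r.t. each parameter
gamma = 0.01                                    # learning rate

w_simultaneous = w - gamma * grad               # one vectorised assignment

w_sequential = w.copy()
for i in range(len(w_sequential)):              # one parameter at a time
    w_sequential[i] -= gamma * grad[i]

print(np.allclose(w_simultaneous, w_sequential))  # True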

",2444,,2444,,4/16/2020 12:10,4/16/2020 12:10,,,,0,,,,CC BY-SA 4.0 20316,1,,,4/16/2020 12:11,,2,143,"

I've been recently given an assignment based on Reinforcement Learning and I'm supposed to implement the value iteration algorithm in a grid environment.

The assignment:

My doubt is: why do I even need an initial arbitrary policy, as given in the parameters (in the assignment), to implement the value iteration algorithm? And, consequently, shouldn't a change in the values of a and b leave the algorithm unaffected? Am I correct in my thinking about this?

",29899,,29899,,4/16/2020 12:26,4/22/2020 16:23,Why do I need an initial arbitrary policy to implement value iteration algorithm,,1,0,,,,CC BY-SA 4.0 20317,1,,,4/16/2020 12:29,,2,54,"

I would like to design a reward function. I am training two models: the first model classifies a set of texts (paragraphs and keywords), and I also obtain some hidden states from it. The second model is trying to generate keywords for those paragraphs.

I want to use those hidden states from the first model to give rewards for the key phrases that are generated by the second model. I want to know how I can implement this reward function, since I have never used one before.

",36106,,2444,,4/16/2020 14:12,4/16/2020 14:12,How should I design a reward function for a NLP problem where two models interoperate?,,0,1,,,,CC BY-SA 4.0 20318,2,,20303,4/16/2020 12:42,,2,,"

Value function: How good it is to be in a state $s$ following policy $\pi$.

There are different value functions. There's the state value function, often denoted as $v(s)$ (or $V(s)$), so it's a function of only one variable, i.e. $s$ (a state). There's the state-action value function $q(s, a)$ (or $Q(s, a$)). A value function is a function, so it's not a number or a vector, or whatever. It's a function, so it maps inputs to outputs. In the first case, it maps states to real numbers. In the second case, it maps states and actions to real numbers. So, we could denote the state value function as $v : \mathcal{S} \rightarrow \mathbb{R}$ (where $\mathcal{S}$ is the set of states in your environment) and state-action value function as $q : \mathcal{S} \times \mathcal{A}\rightarrow \mathbb{R}$ (where $\mathcal{A}$ is the set of actions and $\times$ means ""combination of"").

So, your definition of a value function is not quite correct. The value function $v(s)$ doesn't represent ""how good it is to be in a state $s$ following a policy $\pi$"", but ""how good it is to be in a state $s$ AND THEN following policy $\pi$"". To emphasize this, you often use the notation $v_{\pi}(s)$ rather than simply $v(s)$.

See What are the value functions used in reinforcement learning? for more details about existing value functions in reinforcement learning. And to see the full definition of the value functions, I suggest you read Sutton and Barto's book.

Q function (also called state-action value, or just action value): How good it is to be in a state $s$ and perform action $a$ while following policy $\pi$. It uses reward to measure the state-action value

As I said above, the $q$ function is a ""value function"" too. It's just a different value function than $v$.

Again, the same thing I said for $v$ also applies here, so ""how good it is to be in a state $s$ and perform action $a$ while following policy $\pi$"" is incorrect for the same reason your definition for $v$ was incorrect. The $q$ function can be defined as ""how good it is to be in a state $s$ and take action $a$, AND, AFTER THAT, follow a given policy $\pi$. Again, to emphasize that $q$ is defined in terms of $\pi$, we often use the notation $q_\pi$.

Reward: The metric used to tell the agent how good/bad it's action was. It is a constant value.

This is roughly correct, but the reward doesn't have to be constant and it depends on your problem. Also, there's also the related notion of ""reward function"", which is the function that assigns rewards to each action. So, when defining your problem as a Markov decision process, you need to define this reward function. Actually, this is probably the most important function in reinforcement learning (because this is the way you teach the agent to behave).

Return: The sum of rewards in a single episode

This is roughly correct. However, note that the sum can also be a ""weighted sum"".
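For example, the commonly used discounted return weights later rewards less and less:

$$G_t = \sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1}, \qquad 0 \leq \gamma \leq 1,$$

which reduces to the plain sum of rewards when $\gamma = 1$ and the episode is finite.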

Policy: A set of specific instructions an agent will follow in an episode.

This is roughly correct, but a policy can also have some randomness in it. For example, if you are in state $s$, your policy could say ""always take action $a_i$"", but another policy could say ""take action $a_i$ with probability $p$ and action $a_j$ with probability $1 - p$"". Also, note that the policy is not restricted to an episode. It's a general function that tells the agent how to behave independently of the episode.

(Sorry, I didn't look at your examples. Maybe I will review this answer later to look at your examples too, but the information in this answer should already tell you if your examples are correct or not).

",2444,,2444,,4/16/2020 13:25,4/16/2020 13:25,,,,0,,,,CC BY-SA 4.0 20319,1,,,4/16/2020 13:16,,3,42,"

In the paper: Reinforcement learning methods for continuous-time Markov decision problems, the authors provide the following update rule for the Q-learning algorithm, when applied to Semi-Markov Decision Processes (SMDPs):

$Q^{(k+1)}(x,a) = Q^{(k)}(x,a) + \alpha_k [ \frac{1-e^{-\beta \tau}}{\beta}r(x,y,a) + e^{-\beta \tau} \max_{a'} Q^{(k)}(y,a') - Q^{(k)}(x,a) ] $

where $\alpha_k$ is the learning rate, $\beta$ is the continuous time discount factor and $\tau$ is the time taken to transition from state $x$ to state $y$.

It is not clear to me what is the relationship between the sampled reward $r(x,y,a)$ and the reward rate $\rho(x,a)$ specified in the objective function $\mathbb{E}[ \int_{0}^{\infty} e^{-\beta t}\rho(x(t),a(t)) dt ]$.

In particular, how do they determine $r(x,y,a)$ in the experiments in Section 6? In this experiment, they consider a routing problem in an M/M/2 queuing system, where the reward rate is: $c_1 n_1(t) + c_2 n_2(t)$. $c_1$ and $c_2$ are scalar cost factors and $n_1(t)$ and $n_2(t)$ are the number of customers in queue 1 and 2, respectively.

",34010,,-1,,6/17/2020 9:57,4/16/2020 13:16,Relationship between the reward rate and the sampled reward in a Semi-Markov Decision Process,,0,0,,,,CC BY-SA 4.0 20320,2,,20309,4/16/2020 14:54,,-1,,"

There exist other RL books which do a better job of talking about this but it's pretty simple at it's core.

The discounting factor puts an upper limit on the difference in reward between a finite number of iterations and an infinite number, each time you add another iteration it decreases by $\gamma$ multiplied into the upper bound of the difference.

$V_\pi = E[\sum_{i=0}^{\infty} \gamma^iR_i]$, $\Delta V_k = E[\sum_{i=k}^{\infty} \gamma^iR_i] = \gamma^k E[\sum_{i=0}^{\infty} \gamma^{i}R_{i+k}] < \gamma^k \frac{1}{1-\gamma}R_{max}$

",32390,,32390,,4/16/2020 17:23,4/16/2020 17:23,,,,3,,,,CC BY-SA 4.0 20321,1,20426,,4/16/2020 16:14,,3,104,"

Assume that I have a Dataframe with the text column. Problem: Classification / Prediction

    sms_text
0   Go until jurong point, crazy.. Available only ...
1   Ok lar... Joking wif u oni...
2   Free entry in 2 a wkly comp to win FA Cup fina...
3   U dun say so early hor... U c already then say...
4   Nah I don't think he goes to usf, he lives aro...

After preprocessing the text

From the above word cloud, we can find the most frequently occurring words, like

Free
Call
Text
Txt

As these are the most frequent words, they add little importance to prediction/classification because they appear so often. (My opinion.)

My question is: will removing the most frequent words improve the model score?

How does this impact model performance?

Is it OK to remove the most frequently occurring words?

",30725,,-1,,6/17/2020 9:57,4/21/2020 9:31,Top Frequent occurrence word effect in Model Efficiency?,,4,0,,,,CC BY-SA 4.0 20322,2,,17100,4/16/2020 16:34,,0,,"

As we have an autoassociative network prototype vectors are both input and output vectors. So , we have that : \begin{equation} T = P \end{equation}

\begin{equation} W = PP^T - QI = TP^T - QI = \sum_{q=1}^Q p_q p_q^T - QI \end{equation}

Applying a prototype vector as input :

\begin{equation} \alpha = W \cdot p_k = \sum_{q=1}^Q p_q p_q^T p_k - QIp_k \end{equation}

Because the prototypes are orthogonal ($p_q^T p_k = 0$ for $q \neq k$), we have: \begin{equation} \alpha = p_k(p_k^T\cdot p_k) - Q \cdot p_k = (p_k^T\cdot p_k - Q ) \cdot p_k = (R - Q) \cdot p_k \end{equation}

where $R = p_k^T p_k$ is the squared length of the prototype vectors.

So, since

\begin{equation} W \cdot p_k= (R - Q) \cdot p_k \end{equation} the prototype vectors continue to be eigenvectors of the new weight matrix, with eigenvalue $R - Q$.

It is often the case that for auto-associative nets, the diagonal weights (those which connect an input component to the corresponding output component) are set to 0. There are papers that say this helps learning. Setting these weights to zero may improve the net's ability to generalize or may increase the biological plausibility of the net. In addition, it is necessary if we use iterations (iterative nets) or the delta rule is used

",32076,,,,,4/16/2020 16:34,,,,0,,,,CC BY-SA 4.0 20323,2,,20321,4/16/2020 16:34,,1,,"

The technical term for these words is ""stop words"". Have a look at Information Retrieval and indexing (eg TF/IDF) to make up your mind whether you want to remove them or not.
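For example, here is a minimal sketch (assuming scikit-learn is available; the example texts are made up) of the two options: dropping English stop words explicitly versus keeping all words but letting TF-IDF down-weight the very frequent ones.

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

texts = ['free entry to win a prize call now', 'ok see you later then']

counts = CountVectorizer(stop_words='english').fit_transform(texts)  # drops stop words
tfidf = TfidfVectorizer().fit_transform(texts)                       # keeps them, but re-weights

print(counts.shape, tfidf.shape)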

",2193,,,,,4/16/2020 16:34,,,,0,,,,CC BY-SA 4.0 20324,2,,17126,4/16/2020 16:41,,0,,"

We have 2 classes , 1 subclass for each class

\begin{equation} W^2=\begin{vmatrix} 1 & 0\\ 0 & 1\\ \end{vmatrix} \end{equation}

$p_1$:

\begin{equation} \alpha^1=compet(n^1)=compet\begin{vmatrix} ||w_1-p_1||\\ ||w_2-p_1||\\ \end{vmatrix} = compet\begin{vmatrix} ||\begin{vmatrix} 0 & 1\\ \end{vmatrix}^T-\begin{vmatrix} 1 & 1\\ \end{vmatrix}^T||\\ ||\begin{vmatrix} 1 & 0\\ \end{vmatrix}^T-\begin{vmatrix} 1 & 1\\ \end{vmatrix}^T||\\ \end{vmatrix}= compet(\begin{vmatrix} 1\\ 1\\ \end{vmatrix}) = \begin{vmatrix} 1\\ 0\\ \end{vmatrix}) \end{equation}

\begin{equation} \alpha^2=W^2\cdot\alpha^1= \begin{vmatrix} 1 & 0\\ 0 & 1\\ \end{vmatrix}\begin{vmatrix} 1\\ 0\\ \end{vmatrix}=\begin{vmatrix} 1\\ 0\\ \end{vmatrix} \end{equation}

\begin{equation} W_1(1) = W_1(0) + \alpha\cdot(p_1-W_1(0))=\begin{vmatrix} 0\\ 1\\ \end{vmatrix}+0.5\cdot(\begin{vmatrix} 1\\ 1\\ \end{vmatrix}-\begin{vmatrix} 0\\ 1\\ \end{vmatrix})=\begin{vmatrix} 0\\ 1\\ \end{vmatrix}+\begin{vmatrix} 0.5\\ 0\\ \end{vmatrix}=\begin{vmatrix} 0.5\\ 1\\ \end{vmatrix} \end{equation}

$p_2$ :

\begin{equation} \alpha^1=compet(n^1)=compet\begin{vmatrix} ||w_1-p_2||\\ ||w_2-p_2||\\ \end{vmatrix} = compet\begin{vmatrix} ||\begin{vmatrix} 0.5 & 1\\ \end{vmatrix}^T-\begin{vmatrix} -1 & 2\\ \end{vmatrix}^T||\\ ||\begin{vmatrix} 1 & 0\\ \end{vmatrix}^T-\begin{vmatrix} -1 & 2\\ \end{vmatrix}^T||\\ \end{vmatrix}= compet(\begin{vmatrix} 1.8027756377\\ 2.8284271247\\ \end{vmatrix}) = \begin{vmatrix} 0\\ 1\\ \end{vmatrix} \end{equation}

\begin{equation} \alpha^2=W^2\cdot\alpha^1= \begin{vmatrix} 1 & 0\\ 0 & 1\\ \end{vmatrix}\begin{vmatrix} 0\\ 1\\ \end{vmatrix}=\begin{vmatrix} 0\\ 1\\ \end{vmatrix} \end{equation}

wrong class

\begin{equation} W_2(1) = W_2(0) - \alpha\cdot(p_2-W_2(0))=\begin{vmatrix} 1\\ 0\\ \end{vmatrix}-0.5\cdot\begin{vmatrix} -2\\ 2\\ \end{vmatrix}=\begin{vmatrix} 2\\ -1\\ \end{vmatrix} \end{equation}

$p_3$ :

\begin{equation} \alpha^1= compet\begin{vmatrix} ||\begin{vmatrix} 0.5 & 1\\ \end{vmatrix}^T-\begin{vmatrix} -2 & 2\\ \end{vmatrix}^T||\\ ||\begin{vmatrix} 2 & -1\\ \end{vmatrix}^T-\begin{vmatrix} -2 & 2\\ \end{vmatrix}^T||\\ \end{vmatrix}= compet(\begin{vmatrix} 2.69\\ 5\\ \end{vmatrix}) = \begin{vmatrix} 1\\ 0\\ \end{vmatrix} \end{equation}

\begin{equation} \alpha^2=W^2\cdot\alpha^1=\begin{vmatrix} 1\\ 0\\ \end{vmatrix} \end{equation}

wrong class

\begin{equation} W_1(2) = W_1(1) - \alpha\cdot(p_3-W_1(1))=\begin{vmatrix} 0.5\\ 1\\ \end{vmatrix}-0.5\cdot\begin{vmatrix} -2.5\\ 1\\ \end{vmatrix}=\begin{vmatrix} 1.75\\ 0.5\\ \end{vmatrix} \end{equation}

$p_2$ :

\begin{equation} \alpha^1=compet(n^1)= compet\begin{vmatrix} ||\begin{vmatrix} 1.75 & 0.5\\ \end{vmatrix}^T-\begin{vmatrix} -1 & 2\\ \end{vmatrix}^T||\\ ||\begin{vmatrix} 2 & -1\ \end{vmatrix}^T-\begin{vmatrix} -1 & 2\\ \end{vmatrix}^T||\\ \end{vmatrix}= compet(\begin{vmatrix} 3.13\\ 4.24\\ \end{vmatrix}) = \begin{vmatrix} 1\\ 0\\ \end{vmatrix}) \end{equation}

\begin{equation} \alpha^2=W^2\cdot\alpha^1=\begin{vmatrix} 1\\ 0\\ \end{vmatrix} \end{equation}

\begin{equation} W_1(3) = W_1(2) + \alpha\cdot(p_2-W_1(2))=\begin{vmatrix} 1.75\\ 0.5\\ \end{vmatrix}+0.5\cdot\begin{vmatrix} -2.75\\ 1.5\\ \end{vmatrix}=\begin{vmatrix} 0.375\\ 1.25\\ \end{vmatrix} \end{equation}

$p_3$ :

\begin{equation} \alpha^1=compet(n^1)= compet\begin{vmatrix} ||\begin{vmatrix} 0.375 & 1.25\\ \end{vmatrix}^T-\begin{vmatrix} -2 & 2\\ \end{vmatrix}^T||\\ ||\begin{vmatrix} 2 & -1\\ \end{vmatrix}^T-\begin{vmatrix} -2 & 2\\ \end{vmatrix}^T||\\ \end{vmatrix}= compet(\begin{vmatrix} 2.49\\ 5\\ \end{vmatrix}) = \begin{vmatrix} 1\\ 0\\ \end{vmatrix} \end{equation}

\begin{equation} \alpha^2=W^2\cdot\alpha^1=\begin{vmatrix} 1\\ 0\\ \end{vmatrix} \end{equation}

wrong class

\begin{equation} W_1(4) = W_1(3) - \alpha\cdot(p_3-W_1(3))=\begin{vmatrix} 0.375\\ 1.25\\ \end{vmatrix}-0.5\cdot\begin{vmatrix} -2.375\\ 0.75\\ \end{vmatrix}=\begin{vmatrix} 1.5625\\ 0.875\\ \end{vmatrix} \end{equation}

$p_1$ :

\begin{equation} \alpha^1=compet(n^1)= compet\begin{vmatrix} ||\begin{vmatrix} 1.5625 & 0.875\\ \end{vmatrix}^T-\begin{vmatrix} 1 & 1\\ \end{vmatrix}^T||\\ ||\begin{vmatrix} 2 & -1\ \end{vmatrix}^T-\begin{vmatrix} 1 & 1\\ \end{vmatrix}^T||\\ \end{vmatrix}= compet(\begin{vmatrix} 0.57\\ 2.23\\ \end{vmatrix}) = \begin{vmatrix} 1\\ 0\\ \end{vmatrix}) \end{equation}

\begin{equation} \alpha^2=W^2\cdot\alpha^1=\begin{vmatrix} 1\\ 0\\ \end{vmatrix} \end{equation}

\begin{equation} W_1(5) = W_1(4) + \alpha\cdot(p_1-W_1(4))=\begin{vmatrix} 1.5625\\ 0.875\\ \end{vmatrix}+\begin{vmatrix} -0.28125\\ 0.0625\\ \end{vmatrix}=\begin{vmatrix} 1.28125\\ 0.9375\\ \end{vmatrix} \end{equation}

",32076,,,,,4/16/2020 16:41,,,,0,,,,CC BY-SA 4.0 20326,2,,20305,4/16/2020 17:14,,0,,"

In the first case, the VC dimension of $H$ being infinite implies that $H$ is not (agnostic) PAC learnable (see p. 48 of Understanding Machine Learning: From Theory to Algorithms). So, in general, your classifier is not guaranteed to succeed.

In the second case, your division of $H$ implies that $H$ is nonuniformly learnable (chapter 7 of the cited book). This implies that you can get a generalization bound by using structural risk minimization.

",36115,,2444,,4/16/2020 17:48,4/16/2020 17:48,,,,0,,,,CC BY-SA 4.0 20327,2,,20309,4/16/2020 17:19,,8,,"

First of all, efficiency and convergence are two different things. There's also the rate of convergence, so an algorithm may converge faster than another, so, in this sense, it may be more efficient. I will focus on the proof that policy evaluation (PE) converges. If you want to know about its efficiency, maybe ask another question, but the proof below also tells you about the rate of convergence of PE.

What is the proof that policy evaluation converges to the optimal solution?

To provide some context, I will briefly describe policy evaluation and what you need to know to understand the proof.

Policy evaluation

Policy evaluation (PE) is an iterative numerical algorithm to find the value function $v^\pi$ for a given (and arbitrary) policy $\pi$. This problem is often called the prediction problem (i.e. you want to predict the rewards you will get if you behave in a certain way).

Two versions: synchronous and asynchronous

There are (at least) two versions of policy evaluation: a synchronous one and an asynchronous one.

In the synchronous version (SPE), you maintain two arrays for the values of the states: one array holds the current values of the states and the other array will contain the next values of the states, so two arrays are used in order to be able to update the value of each state at the same time.

In the asynchronous version (APE), you update the value of each state in place. So, first, you update the value of e.g. $s_1$, then $s_2$, etc., by changing your only array of values (so you do not require a second array).

SPE is similar in style to the numerical method called Jacobi method, which is a general iterative method for finding a solution to a system of linear equations (which is exactly what PE is actually doing, and this is also explained in the cited book by Sutton and Barto). Similarly, APE is similar in style to the Gauss–Seidel method, which is another method to solve a system of linear equations.

Both of these general numerical methods to solve a system of linear equations are studied in detail in Parallel and Distributed Computation Numerical Methods (1989) by Bertsekas and Tsitsiklis, which I haven't read yet, but provides convergence results for these numerical methods.

The book Reinforcement learning: an introduction by Sutton and Barto provides a more detailed description of policy evaluation (PE).

Proof of convergence

I will provide a proof for the SPE based on these slides by Tom Mitchell. Before proceeding, I suggest you read the following question What is the Bellman operator in reinforcement learning? and its answer, and you should also get familiar with vector spaces, norms, fixed points and maybe contraction mappings.

The proof that PE finds a unique fixed-point is based on the contraction mapping theorem and on the concept of $\gamma$-contractions, so let me first recall these definitions.

Definition ($\gamma$-contraction): An operator on a normed vector space $\mathcal{X}$ is a $\gamma$-contraction, for $0 < \gamma < 1$, provided for all $x, y \in \mathcal{X}$

$$\| F(x) - F(y) \| \leq \gamma \| x - y\|$$

Contraction mapping theorem: For a $\gamma$-contraction $F$ in a complete normed vector space $\mathcal{X}$

  • Iterative application of $F$ converges to a unique fixed point in $\mathcal{X}$ independently of the starting point

  • at a linear convergence rate determined by $\gamma$

Now, consider the vector space $\mathcal{V}$ over state-value functions $v$ (i.e. $v \in \mathcal{V})$. So, each point in this space fully specifies a value function $v : \mathcal{S} \rightarrow \mathbb{R}$ (where $\mathcal{S}$ is the state space of the MDP).

Theorem (convergence of PE): The Bellman operator is a $\gamma$-contraction operator, so an iterative application of it converges to a unique fixed-point in $\mathcal{V}$. Given that PE is an iterative application of the Bellman operator (see What is the Bellman operator in reinforcement learning?), PE finds this unique fixed-point solution.

So, we just need to show that the Bellman operator is a $\gamma$-contraction operator in order to show that PE finds this unique fixed-point solution.

Proof

We will measure the distance between state-value functions $u$ and $v$ by the $\infty$-norm, i.e. the largest difference between state values:

$$\|u - v\|_{\infty} = \operatorname{max}_{s \in \mathcal{S}} |u(s) - v(s)|$$

Definition (Bellman operator): We define the Bellman expectation operator as

$$F^\pi(v) = \mathbf{r}^\pi + \gamma \mathbf{T}^\pi v$$

where $v \in \mathcal{V}$, $\mathbf{r}^\pi$ is an $|\mathcal{S}|$-dimensional vector whose $j$th entry gives $\mathbb{E} \left[ r \mid s_j, a=\pi(s_j) \right]$ and $\mathbf{T}^\pi$ is an $|\mathcal{S}| \times |\mathcal{S}|$ matrix whose $(j, k)$ entry gives $\mathbb{P}(s_k \mid s_j, a=\pi(s_j))$.

Now, let's measure the distance (with the $\infty$-norm defined above) between any two value functions $u \in \mathcal{V}$ and $v \in \mathcal{V}$ after the application of the Bellman operator $F^\pi$

\begin{align} \| F^\pi(u) - F^\pi(v) \|_{\infty} &= \| (\mathbf{r}^\pi + \gamma \mathbf{T}^\pi u) - (\mathbf{r}^\pi + \gamma \mathbf{T}^\pi v)\|_{\infty} \\ &= \| \gamma \mathbf{T}^\pi (u - v)\|_{\infty} \\ &\leq \| \gamma \mathbf{T}^\pi ( \mathbb{1} \cdot \| u - v \|_{\infty})\|_{\infty} \\ &\leq \| \gamma (\mathbf{T}^\pi \mathbb{1}) \cdot \| u - v \|_{\infty}\|_{\infty} \\ &\leq \gamma \| u - v \|_{\infty} \end{align}

where $\mathbb{1} = [1, \dots, 1]^T$. Note that $\mathbf{T}^\pi \cdot \mathbb{1} = \mathbb{1}$ because $\mathbf{T}^\pi$ is a stochastic matrix.

By the Bellman expectation equation (see Barto and Sutton's book and What is the Bellman operator in reinforcement learning?), $v^\pi$ is a fixed-point of the Bellman operator $F^\pi$. Given the contraction mapping theorem, the iterative application of $F^\pi$ produces a unique solution, so $v^\pi$ must be this unique solution, i.e. SPE finds $v^\pi$. Here is another version of the proof that the Bellman operator is a contraction.
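As a quick numerical illustration of this fixed-point behaviour (the transition matrix, rewards and discount factor below are made-up numbers), iterating the Bellman operator on a tiny example converges to the exact solution of the corresponding linear system:

import numpy as np

# Made-up 3-state example: row-stochastic transition matrix, reward vector, discount
T = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.0, 1.0]])
r = np.array([1.0, 0.0, 0.0])
gamma = 0.9

v = np.zeros(3)                     # arbitrary starting point
for _ in range(1000):
    v = r + gamma * T @ v           # iterative application of the Bellman operator

# Exact solution of v = r + gamma*T*v, i.e. the unique fixed point
v_exact = np.linalg.solve(np.eye(3) - gamma * T, r)
print(np.max(np.abs(v - v_exact)))  # ~0: the iteration converged to the fixed point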

Notes

I didn't prove the contraction mapping theorem, but you can find more info about the theorem and its proof in the related Wikipedia article.

",2444,,2444,,1/22/2022 21:45,1/22/2022 21:45,,,,0,,,,CC BY-SA 4.0 20329,2,,20316,4/16/2020 17:57,,2,,"

It seems to me that you're thinking about the parameters a and b as being characteristic of the agent that's moving in the environment (therefore determining the final policy), but they are actually a characteristic of the environment.

Think of a frozen lake. You want to cross the lake, but there is a hole five meters in front of you. Let's say you have boots with a rubber sole, so there is no risk of slipping while walking (i.e. all transition probabilities = 1). What is the optimal policy? Simply move forward until the hole, go around it, and then move forward again. But what if you were wearing wooden clogs instead? Now, if you walk forward, there is a risk of not being able to stop when you want to (i.e. when you decide to move forward by one step, you might fail and keep going forward for several steps, with probability a). How many steps would you wait before moving to the left or to the right to avoid the risk of falling into the hole? The optimal policy is obviously different in this second scenario. Think about it; I will not answer in detail about how the parameters affect the final policy because Stack Exchange is not meant for homework.

Also, an arbitrary initial policy is necessary simply because, in value iteration, you first update the values and only when they converge do you infer the optimal policy from them. So you need some arbitrary prior belief about the environment to start updating the values.

Edit

So here's a quick visualisation of the impact of different transition probabilities (the parameters a and b in your assignment) on the optimal policy. I used a different grid to make it easier and more explicit (i.e. you don't even need to calculate the values to see the optimal policy).

As you can see in the picture below, there are some terminal states in the third column and a goal state which gives a huge reward on top of it. From the starting point on the bottom we want to reach the goal state.

We have two cases:

Non stochastic environment

On the left we have the case in which all transition probabilities are 1. In this case the optimal policy is simply to go forward until the second row, where the goal state is located, and then move right or left to reach it. The terminal states are not scary at all: in this case we could consider them as walls that we can't cross, but the optimal policy does not reflect any risk of falling into them. This is precisely because the transition probabilities are all 1.

Stochastic environment

On the right we have a different situation. In this case only 3 actions have a transition probability of 1: going south, east or west. The transition probability for north is 0.2, meaning that 80% of the time we will slip and go east instead (I wrote right in the picture meaning east, my bad). What is the impact of this simple change in the environment?

This time the second column becomes really dangerous, because if we try to go north and fail we will end up in a terminal state. On the other hand, if we choose another action, we will end up in a legitimate state, without any risk, because the other transition probabilities are 1! Therefore the agent learns that on the left side of the grid the best strategy is to go down and circumnavigate the terminal states from the right, because only in the fourth column can we move north without risk: in those cells, even if we try to go north and fail, we would just hit a wall, and we could try going north again the next turn until we eventually succeed.
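
To put rough numbers on this intuition, here is a tiny back-of-the-envelope sketch (all rewards, the discount factor and the detour length are made up): standing next to the hole, going north reaches the goal with probability p but falls into a hole otherwise, while the safe detour reaches the goal a few steps later.

    gamma = 0.9
    goal, hole = 10.0, -10.0

    def q_north(p):
        # risky shortcut: succeed with probability p, fall into the hole otherwise
        return p * goal + (1 - p) * hole

    def q_detour():
        # safe path that reaches the goal 4 steps later (discounted)
        return gamma ** 4 * goal

    for p in [1.0, 0.8, 0.5, 0.2]:
        best = "north" if q_north(p) > q_detour() else "detour"
        print(f"p={p}: Q(north)={q_north(p):.2f}  Q(detour)={q_detour():.2f}  ->  {best}")

As the success probability p drops, the greedy choice flips from the risky shortcut to the safe detour, which is exactly why the parameters of the environment determine the optimal policy.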

Hope this makes sense!

",34098,,-1,,6/17/2020 9:57,4/22/2020 16:23,,,,16,,,,CC BY-SA 4.0 20330,1,,,4/16/2020 18:48,,1,39,"

Assume I am given a binary neural network where the activation values are constrained to be 0 or 1 (by clipping the ReLU function). Additionally, assume the neural network is supposed to work in a noisy environment where some of the activation values may be randomly flipped, i.e. 0 -> 1 or 1 -> 0.

I am trying to train such a neural network in a way that makes it resilient to the noisy environment. I assume training with dropout would make the neural network somewhat resilient to the kind of noise where a one is flipped to a zero (1 -> 0).

What are some ways that allow me to make the neural network resilient to the other kind of noise which flips zeros to ones (0 -> 1)? Is it theoretically valid to introduce a dropout-like algorithm which flips some zeros to ones during training but does not backpropagate the gradients through those flipped nodes?
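
For concreteness, here is a rough numpy sketch of the kind of noise injection I have in mind (the function name, the flip rate and the fake activations are just placeholders):

    import numpy as np

    def flip_noise(activations, p_flip=0.05, rng=np.random.default_rng(0)):
        # randomly flip a fraction p_flip of the binary activations (0->1 and 1->0)
        mask = rng.random(activations.shape) < p_flip
        return np.where(mask, 1 - activations, activations)

    a = (np.random.rand(4, 8) > 0.5).astype(int)   # fake binary activations
    print(flip_noise(a))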

",27048,,27048,,4/17/2020 19:23,4/17/2020 19:23,How to make binary neural networks resilient to flipped activation values?,,0,4,,,,CC BY-SA 4.0 20334,2,,20238,4/16/2020 21:50,,2,,"

MSE just measures the squared difference between predicted and target values. A model with a higher MSE can still classify the examples correctly, just with less confidence - leading to a higher loss (e.g. an output of 0.77 vs 0.98 when the target is 1). In terms of which is better, I wouldn't know without the specifics of your problem. It is possible that the model with the higher loss is more robust, since it is less likely to have overfitted the data, yet achieves the same accuracy.

",22373,,,,,4/16/2020 21:50,,,,0,,,,CC BY-SA 4.0 20335,2,,20238,4/16/2020 22:15,,2,,"

Accuracy by itself isn't a sufficient way to compare two models. For example, you also need to consider the precision and recall statistics (see the confusion matrix) and calculate other metrics, like the F1 score. Measuring accuracy is only the first step; it tells us whether a model is "working" at all. But in order to understand and compare models, you need to know how many impostors were classified as true claimants, and how many true claimants were classified as impostors, relative to the total number of correct classifications. With that information, you then have to define how critical a misclassification would be: e.g. if you need to classify whether a person has a disease or not, a missed positive is critical.
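
As a quick scikit-learn sketch (the labels and predictions below are made up): accuracy alone hides which class the errors come from, so also look at the confusion matrix, precision, recall and F1.

    from sklearn.metrics import (accuracy_score, confusion_matrix,
                                 precision_score, recall_score, f1_score)

    y_true = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]   # 1 = "has the disease"
    y_pred = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]

    print(accuracy_score(y_true, y_pred))      # 0.8 looks fine...
    print(confusion_matrix(y_true, y_pred))    # ...but 2 of the 3 positives were missed
    print(precision_score(y_true, y_pred),
          recall_score(y_true, y_pred),        # recall is only ~0.33
          f1_score(y_true, y_pred))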

",22592,,22592,,4/16/2020 22:34,4/16/2020 22:34,,,,0,,,,CC BY-SA 4.0 20336,1,20341,,4/16/2020 22:33,,5,1977,"

This might be a little broad question, but I have been watching Caltech youtube videos on Machine Learning, and in this video prof. is trying to explain how we should interpret the VC dimension in terms of what it means in layman terms, and why do we need it in practice.

The first part I think I understand, but please correct me if I am wrong. The VC dimension dictates the number of effective parameters (i.e. degrees of freedom) that the model has; in other words, the number of parameters the model needs in order to cover all possible label combinations for the chosen dataset. Now, the second part is not clear to me. The professor is trying to answer the question:

How does knowing the VC dimension of the hypothesis class affect the number of samples we need for training?

Again, I apologize if all of this may be trivial, but I am new to the field and wish to learn as much as I can, so I can implement better and more efficient programs in practice.

",35990,,2444,,12/7/2020 21:01,12/7/2020 22:06,How does size of the dataset depend on VC dimension of the hypothesis class?,,4,0,,,,CC BY-SA 4.0 20337,2,,20336,4/16/2020 22:43,,3,,"

Given a hypothesis set $H$, the set of all possible mappings from $X\to Y$ where $X$ is our input space and $Y$ are our binary mappings: $\{-1,1\}$, the growth function, $\Pi_H(m)$, is defined as the maximum number of dichotomies generated by $H$ on $m$ points. Here a dichotomy is the set of $m$ points in $X$ that represent a hypothesis. A hypothesis is just a way we classify our points. Therefore with two labels we know,

$$\Pi_H(m)\leq 2^m$$

This just counts every possible labeling. The VC dimension is then the largest $m$ for which $\Pi_H(m)=2^m$.

Consider a 2D perceptron, meaning our $X$ is $\mathbb{R}^2$ and our classifying hyperplane is one-dimensional: a line. The VC dimension will be 3. This is because we can shatter (correctly classify) all dichotomies for $m=3$. We can either have all points be the same colour, or one point be a different colour - which is $2^3=8$ dichotomies. You may ask what if the points we are trying to classify are collinear. This does not matter because we are concerned with resolving the dichotomies themselves, not the location of the points. We just need a set of points (wherever they may be located) that exhibits that dichotomy. In other words, we can pick the points such that they maximize the number of dichotomies we can shatter with one classifying hyperplane (a triangle): the VC dimension is a statement of the capacity of our model.

To make this clear, consider $m=4$. We can represent the truth table of the XOR gate as a dichotomy but this is not resolvable by the perceptron, no matter where we choose the location of the points (not linearly separable). Therefore, we can resolve a maximum of 8 dichotomies, so our VC dimension is 3. In general, the VC dimension of perceptrons is $d+1$ where $d$ is the dimension of $X$ and $d-1$ is the dimension of the classifying hyperplane.
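
Here is a rough scikit-learn sketch of this argument (the point coordinates are arbitrary, and single-class labelings are skipped because they are trivially separable): a linear classifier can realise every labeling of 3 non-collinear points, but not the XOR labeling of 4 points.

    import itertools
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def shatters(points):
        # check whether a line (with bias) can realise every labeling of the points
        for labels in itertools.product([-1, 1], repeat=len(points)):
            labels = np.array(labels)
            if len(set(labels)) == 1:
                continue  # a single class is trivially separable
            clf = LogisticRegression(C=1e6).fit(points, labels)
            if (clf.predict(points) != labels).any():
                return False
        return True

    triangle = np.array([[0, 0], [1, 0], [0, 1]])        # 3 non-collinear points
    square = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # contains the XOR labeling

    print(shatters(triangle))  # True  -> the VC dimension is at least 3
    print(shatters(square))    # False -> these 4 points cannot be shattered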

",22373,,22373,,4/17/2020 0:49,4/17/2020 0:49,,,,3,,,,CC BY-SA 4.0 20338,2,,20266,4/16/2020 23:10,,1,,"

I think your approach to tackle this as an imbalanced problem is correct. The easiest thing you could do is to add weights to the samples, during training, so that the model "pays more attention" to the under-represented class.

There are also a couple of other ways to do this: oversampling and undersampling, but initially I'd focus on adding weights, since it's easier to implement.
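
As a minimal sketch of the weighting idea (the label counts below are made up): inverse-frequency class weights can be computed with scikit-learn and then passed, for example, to Keras' fit().

    import numpy as np
    from sklearn.utils.class_weight import compute_class_weight

    # made-up imbalanced labels: 90% class 0, 10% class 1
    y_train = np.array([0] * 900 + [1] * 100)

    weights = compute_class_weight(class_weight="balanced",
                                   classes=np.unique(y_train),
                                   y=y_train)
    class_weight = dict(enumerate(weights))
    print(class_weight)   # {0: ~0.56, 1: 5.0} -> the minority class counts ~9x more

    # these can then be passed to Keras:  model.fit(X, y, class_weight=class_weight)
    # or to many scikit-learn estimators via their class_weight / sample_weight options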

",26652,,,,,4/16/2020 23:10,,,,2,,,,CC BY-SA 4.0 20339,1,,,4/16/2020 23:19,,1,39,"

I am creating a neural network to experiment with, and I was wondering:

  • If I have weights randomly initialized to be either 1 or 0 for each neuron, and then I made it so that the weights cannot be changed, would that ruin the neural network? What would happen?

Note: There is no bias in this network.

",36124,,1671,,4/17/2020 21:21,4/17/2020 21:21,Is having binary randomized unchanging neural network weights a good idea?,,0,3,,,,CC BY-SA 4.0 20340,2,,20336,4/17/2020 0:11,,2,,"

The VC dimension represents the capacity (Vapnik himself, the V in VC, calls it the "capacity") of a model (or, in general, a hypothesis class), so a model with a higher VC dimension has more capacity (i.e. it can represent more functions) than a model with a lower VC dimension.

The VC dimension is typically used to provide theoretical bounds e.g. on the number of samples required for a model to achieve a certain test error with a given uncertainty or, similarly, to understand the quality of your estimation given a certain dataset.

Just to give you an idea of what the bounds look like, have a look at the theorem on page 6 (of the pdf) of the paper An overview of statistical learning theory (1999) by Vapnik.

Have also a look at this answer, where I provide more info about the VC dimension, in particular, in the context of neural networks.

",2444,,2444,,4/17/2020 0:20,4/17/2020 0:20,,,,2,,,,CC BY-SA 4.0 20341,2,,20336,4/17/2020 0:26,,3,,"

From [1] we know that we have the following bound between the test and train error for i.i.d samples:

$$ \mathbb{P}\left(R \leqslant R_{emp} + \sqrt{\frac{d\left(\log{\left(\frac{2m}{d}\right)}+1\right)-\log{\left(\frac{\eta}{4}\right)}}{m}}\right) \geqslant 1-\eta $$

$R$ is the test error, $R_{emp}$ is the training error, $m$ is the size of the training dataset, and $d$ is the hypothesis class's VC dimension. As you can see, the training and test errors are related to the dataset's size ($m$) and to $d$.

Now, in terms of PAC learnability, we want to find a (lower or upper) bound for $m$ such that the absolute difference between $R$ and $R_{emp}$ will be less than a given $\epsilon$ with a probability of at least $1-\eta$. Hence, $m$ can be computed in terms of $\epsilon$, $\eta$, and $d$. For example, it can be proved ([2]) that, to train a binary classifier with an $\epsilon$ difference between test and train error with a probability of at least $1-\eta$, we need $O\left(\frac{d + \log\frac{1}{\eta}}{\epsilon} \right)$ i.i.d. samples, i.e., $m = O\left(\frac{d + \log\frac{1}{\eta}}{\epsilon}\right)$. See more examples and references here.
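
As a small numerical illustration of the first bound (all numbers below are arbitrary), the gap term shrinks as the dataset grows relative to the VC dimension $d$:

    import numpy as np

    def vc_gap(m, d, eta):
        # the square-root term bounding R - R_emp with probability >= 1 - eta
        return np.sqrt((d * (np.log(2 * m / d) + 1) - np.log(eta / 4)) / m)

    for m in [1_000, 10_000, 100_000]:
        print(m, round(vc_gap(m, d=10, eta=0.05), 3))  # the gap shrinks as m grows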

",4446,,4446,,12/7/2020 22:06,12/7/2020 22:06,,,,0,,,,CC BY-SA 4.0 20342,1,,,4/17/2020 0:58,,2,66,"

I am currently attempting to detect a signal from background noise. The signal is pretty well known but the background has a lot of variability. I've since come to know this problem as Open Set Recognition. Another complicating factor is that the signal mixes with the background noise (think equivalent to a transparent piece of glass in-front of scenery for a picture, or picking out the sound of a pin drop in an office space).

When I started this project, it seemed like the current state of the art in this space was generating Spectrograms and feeding them to a CNN and this is the path I've followed. I'm at a place where I think I've overcome most of the initial problems you might encounter but I'm still not getting good enough results for a project solution.

Here's the overall steps I've gone through:

  1. Generate 17000 ground truth "signals" and 17000 backgrounds (negatives or other classes depending on what nn scheme I'm training)

  2. Generate separate test samples (not training samples but external model validation samples: "blind test") where I take the backgrounds and randomly overlay the signal into it at various intensities.

  3. My first attempt was with a pre-built library training solution (ImageAI) with a resnet50 base model. This solution is a multiclass classifier, so I had 400 samples each of the signal + 5 other classes that were the background. It did not work well at classifying the signal. I don't think I ever got this off the ground, for two reasons: a) my spectrogram pictures were not optimised (way too large) and b) I couldn't adjust the image input shape via the library. It mostly just ended up classifying one background class.

  4. I then started building my own neural nets. The first reason was to make sure my spectrogram input shape matched the input shape of the CNN. The second reason was to test various neural net schemes to see what worked best.

  5. The first net I built was a simple feed forward net with a couple of dense layers. This trains to .9998 val_acc. It (like the rest of what I try) produces poor results on my blind tests, in the range of 60% true positive.

    def build(width, height, depth, classes):
         # initialize the model along with the input shape to be
         # "channels last" and the channels dimension itself
         model = Sequential()
         inputShape = (height, width, depth)
         chanDim = -1
    
         # if we are using "channels first", update the input shape
         # and channels dimension
         if K.image_data_format() == "channels_first":
             inputShape = (depth, height, width)
             chanDim = 1
         model.add(Flatten())
         model.add(Dense(512, input_shape=(inputShape),activation="relu"))
         model.add(Dense(128, activation="relu"))
         model.add(Dense(32, activation="relu"))
         # sigmoid classifier
         model.add(Dense(classes))
         model.add(Activation("sigmoid"))
    
         # return the constructed network architecture
         return model  
    
  6. I then try a "VGG Light" model. Again, trains to .9999 but gives me only 62% true positive results on my blind tests

     def build(width, height, depth, classes):
         # initialize the model along with the input shape to be
         # "channels last" and the channels dimension itself
         model = Sequential()
         inputShape = (height, width, depth)
         chanDim = -1
    
         # if we are using "channels first", update the input shape
         # and channels dimension
         if K.image_data_format() == "channels_first":
             inputShape = (depth, height, width)
             chanDim = 1
    
         # CONV => RELU => POOL
         model.add(Conv2D(32, (3, 3), padding="same",            input_shape=inputShape))
         model.add(Activation("relu"))
         model.add(BatchNormalization(axis=chanDim))
         model.add(MaxPooling2D(pool_size=(3, 3)))
         model.add(Dropout(0.25))
    
         # (CONV => RELU) * 2 => POOL
         model.add(Conv2D(64, (3, 3), padding="same"))
         model.add(Activation("relu"))
         model.add(BatchNormalization(axis=chanDim))
         model.add(Conv2D(64, (3, 3), padding="same"))
         model.add(Activation("relu"))
         model.add(BatchNormalization(axis=chanDim))
         model.add(MaxPooling2D(pool_size=(2, 2)))
         model.add(Dropout(0.25))
    
         # (CONV => RELU) * 2 => POOL
         model.add(Conv2D(128, (3, 3), padding="same"))
         model.add(Activation("relu"))
         model.add(BatchNormalization(axis=chanDim))
         model.add(Conv2D(128, (3, 3), padding="same"))
         model.add(Activation("relu"))
         model.add(BatchNormalization(axis=chanDim))
         model.add(MaxPooling2D(pool_size=(2, 2)))
         model.add(Dropout(0.25))
         model.add(GaussianNoise(.05))
    
         # first (and only) set of FC => RELU layers
         model.add(Flatten())
         model.add(Dense(1024))
         model.add(Activation("relu"))
         model.add(BatchNormalization())
         model.add(Dropout(0.5))
         model.add(Dense(512))
         model.add(Activation("relu"))
         model.add(BatchNormalization())
         model.add(Dropout(.5))
         model.add(Dense(128))       
         model.add(Activation("relu"))
         model.add(BatchNormalization())     
         model.add(GaussianDropout(0.5))
    
         # sigmoid classifier
         model.add(Dense(classes))
         model.add(Activation("sigmoid"))
    
         # return the constructed network architecture
         return model
    
  7. I then try a "full VGG" net. This again trains to .9999 but only a blind test true positive result of 63%.

    def build(width, height, depth, classes):
        # initialize the model along with the input shape to be
        # "channels last" and the channels dimension itself
        model = Sequential()
        inputShape = (height, width, depth)
        chanDim = -1
    
        # if we are using "channels first", update the input shape
        # and channels dimension
        if K.image_data_format() == "channels_first":
            inputShape = (depth, height, width)
            chanDim = 1
    
        #CONV => RELU => POOL
        model.add(Conv2D(64, (3, 3), padding="same", input_shape=inputShape))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(MaxPooling2D(pool_size=(3, 3)))
        #model.add(Dropout(0.25))
    
        # (CONV => RELU) * 2 => POOL
        model.add(Conv2D(128, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(Conv2D(128, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        #model.add(Dropout(0.25))
    
        # (CONV => RELU) * 2 => POOL
        model.add(Conv2D(256, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(Conv2D(256, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        #model.add(Dropout(0.25))
    
        # (CONV => RELU) * 2 => POOL
        model.add(Conv2D(512, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(Conv2D(512, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        #model.add(Dropout(0.25))
    
        # (CONV => RELU) * 2 => POOL
        model.add(Conv2D(1024, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(Conv2D(1024, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        #model.add(Dropout(0.25))
        model.add(GaussianNoise(.1))
    
        # first (and only) set of FC => RELU layers
        model.add(Flatten())
        model.add(Dense(8192))
        model.add(Activation("relu"))
        model.add(BatchNormalization())
        model.add(Dropout(0.5))
        model.add(Dense(4096))
        model.add(Activation("relu"))
        model.add(BatchNormalization())
        model.add(Dropout(0.5))
        model.add(Dense(1024))
        model.add(Activation("relu"))
        model.add(BatchNormalization()) 
        model.add(GaussianDropout(0.5))
    
        # sigmoid classifier
        model.add(Dense(classes))
        model.add(Activation("sigmoid"))
    
        # return the constructed network architecture
        return model
    
  8. All of the above are binary_crossentropy trained in keras.

  9. I've tried multi-class with these models as well but when testing them on the blind test they usually pick the background rather than the signal.

  10. I've also messed around with Autoencoders to try and get the encoder to rebuild the signal well and then compare to known results but haven't been successful yet though I'd be willing to give it another try if everyone thought that might produce better results.

  11. In the beginning I ran into unbalanced classification problems (I was a noob), but for all the models shown above the classes have the same number of samples.

I'm at the point where the larger VGG models trained on 34,000 samples are taking days, and I don't see any better results than from a basic feed-forward NN that takes 4 minutes to train.

Does anyone see the path forward here?

",36092,,2444,,5/18/2022 8:57,10/15/2022 10:02,Heavily mixing signal differentiation from Open Set of backgrounds via CNN,,2,4,,,,CC BY-SA 4.0 20343,1,,,4/17/2020 3:25,,1,21,"

Listening to lectures, a convolutional neural network seems to be an improvement over a simple neural network, where, for example, you take every pixel in the image, flatten it to a vector, and feed it to an ANN with a couple of layers. Therefore semantic segmentation should also be possible to perform with a classic ANN.

I don't understand exactly how a CNN can classify each pixel in the image to do semantic segmentation.

The way semantic segmentation is explained in the lectures, the output of the CNN is fed back into it backwards. Is that the same as with a GAN?

If the output of a CNN is a value between 0 and 1 for each class, how exactly can those values be fed back through a CNN backwards to classify each pixel in the image? Through backpropagation?

My understanding is that it should be possible to do the same with the regular ANN described above. Can someone explain why it would or wouldn't be possible, and how semantic segmentation feeds the output back through the network to classify each pixel?

Thanks,

",36127,,,,,4/17/2020 3:25,How would semantic segmentation work with a non convolutional neural network,,0,0,,,,CC BY-SA 4.0 20344,2,,7640,4/17/2020 6:14,,3,,"

A stationary policy is one that does not depend on time, meaning that the agent will take the same decision whenever certain conditions are met. This stationary policy may be probabilistic, which implies that the probabilities of choosing the actions remain the same: the agent may take different decisions across visits, but the probabilities themselves do not change.

A stationary environment refers to a static model of the system. The model comprises a reward function and transition probabilities. So, in a stationary environment, the reward function and transition probabilities remain constant, or the changes are slow enough that the agent has enough training time to learn the changes made to the environment.

",36133,,,,,4/17/2020 6:14,,,,0,,,,CC BY-SA 4.0 20347,1,,,4/17/2020 9:22,,1,44,"

Is the paper ""Reducing the Dimensionality of Data with Neural Networks"" by G. Hinton and R. Salakhutdinov relevant?

It seems that the deep learning textbook by Goodfellow, Bengio & Courville (2016) doesn't cite that paper.

Does that indicate that paper is not as important as others to Deep learning? If yes, I would skip this one to accelerate my process of learning.

",35896,,2444,,4/17/2020 14:14,4/17/2020 14:14,"Is the paper ""Reducing the Dimensionality of Data with Neural Networks"" by Hinton relevant?",,1,0,,,,CC BY-SA 4.0 20348,2,,20347,4/17/2020 9:28,,1,,"

No; there are too many publications around for anybody to keep track of everything, so unless it is a seminal paper, you cannot draw any conclusions from this. They could simply have missed it.

Especially if it is a textbook for beginners, more advanced papers are often not mentioned, as they might be too complex to understand.

So you have to decide for yourself if that paper is relevant to you. To me it sounds like a specific application of neural networks; if the dimensionality of the input data is an issue for you, it might be, otherwise probably not.

",2193,,,,,4/17/2020 9:28,,,,0,,,,CC BY-SA 4.0 20349,2,,20336,4/17/2020 10:39,,0,,"

Since the mathematical details have already been covered by other answers, I will try to provide an intuitive explanation. I will answer this assuming the question meant $model$ and not $learning$ $algorithm$.

One way to think of the $\mathcal V \mathcal C$ dimension is that it is an indicator of the number of functions (i.e. the size of the set of functions) you can choose from to approximate your classification task over a domain. So a model (here assume neural nets, linear separators, circles, etc., whose parameters can be varied) having a $\mathcal V \mathcal C$ dimension of $m$ shatters all subsets of the set(s) of $m$ points it shatters.

For a learning algorithm to select, from the aforementioned set of functions (the ones shattered by your model, which means the model can represent them with $0$ error), a function whose accuracy on a classification task is close to the best possible accuracy, it needs a certain sample size $m$. For the sake of argument, let's say your set of functions (i.e. the functions the model shatters) contains all possible mappings from $\mathcal X \rightarrow \mathcal Y$ (assume $\mathcal X$ contains $n$ points, i.e. it is finite, so the number of possible functions is $2^n$). One of the functions it shatters is the function which performs the classification, and thus you are interested in finding it.

Any learning algorithm which sees $m$ samples can easily pick out the set of functions which agree on these points. The number of functions agreeing on these $m$ sampled points (but possibly disagreeing on the remaining $n-m$ points) is $2^{(n-m)}$. The algorithm has no way of selecting, from these shortlisted functions (the ones agreeing on the $m$ points), the one function which is the actual classifier, hence it can only guess. Now increase the sample size: the number of disagreeing functions keeps falling and the algorithm's probability of success keeps getting better and better, until you have seen all $n$ points, at which point your algorithm can identify the classifier's mapping exactly.

The $\mathcal V \mathcal C$ dimension case is very similar to the above argument, except that the model doesn't shatter the entire domain $\mathcal X$, only a part of it. This limits the model's capability to approximate a classification function exactly. So your learning algorithm tries to pick, from all the functions your model shatters, a function which is very close to the best possible classification function: there will exist a best possible (not exact) function in your set of functions, the optimal one, which is closest to the classification function, and your learning algorithm tries to pick a function close to this optimal one. Thus, again, as per our previous argument, it will need to keep increasing the sample size to get as close as possible to the optimal function. The exact mathematical bounds can be found in books, but the proofs are quite daunting.

",,user9947,,,,4/17/2020 10:39,,,,3,,,,CC BY-SA 4.0 20351,1,,,4/17/2020 14:50,,1,290,"

Here is my code

Recently, I solved the game of Atari Breakout using a classic DQN model. The mean reward slowly converged over three days. I was interested in learning a method which may help me improve the convergence speed. I found the following article: https://arxiv.org/pdf/1706.10295v3.pdf. It says I can use Independent Gaussian Noise to outperform a standard DQN.

Here is my Noisy DQN model :

import math
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np


class NoisyLinear(nn.Linear): #Independent Gaussian Noise used with NoisyDQN model
    def __init__(self, in_features, out_features, sigma_init=0.017, bias=True):
        super(NoisyLinear, self).__init__(in_features, out_features, bias=bias)

        self.sigma_weight = nn.Parameter(torch.full((out_features, in_features), sigma_init))
        self.register_buffer(""epsilon_weight"", torch.zeros(out_features, in_features))

        if bias: 
            self.sigma_bias = nn.Parameter(torch.full((out_features,), sigma_init))
            self.register_buffer(""epsilon_bias"", torch.zeros(out_features))

        self.reset_parameters()

    def reset_parameters(self):
        std = math.sqrt(3/self.in_features)
        self.weight.data.uniform_(-std, std)
        self.bias.data.uniform_(-std, std)

    def forward(self, input):
        self.epsilon_weight.normal_()
        bias = self.bias
        if bias is not None:
            self.epsilon_bias.normal_()
            bias = bias + self.sigma_bias * self.epsilon_bias.data
        return F.linear(input, self.weight + self.sigma_weight * self.epsilon_weight.data, bias)


class NoisyDQN(nn.Module):
    """"""
    Look at https://arxiv.org/pdf/1706.10295v3.pdf
    """"""

    def __init__(self, input_shape, num_actions):
        super(NoisyDQN, self).__init__()

        self.conv = nn.Sequential(
            nn.Conv2d(in_channels=input_shape[0], out_channels=32, kernel_size=8, stride=4),
            nn.ReLU(),
            nn.Conv2d(in_channels=32, out_channels=64, kernel_size=4, stride=2),
            nn.ReLU(),
            nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1),
            nn.ReLU(),
        )

        self.conv_output_size = self.get_output_size(input_shape)

        self.linear = nn.Sequential(
           NoisyLinear(in_features=self.conv_output_size, out_features=512),
           nn.ReLU(),
           NoisyLinear(in_features=512, out_features=num_actions)
        )

    def get_output_size(self, input_shape):
        output = self.conv(torch.zeros((1, *input_shape)))
        return int(np.prod(output.shape))

    def forward(self, input):
        self.layer1 = self.conv(input)
        self.layer1 = self.layer1.reshape(-1, self.conv_output_size)

        return self.linear(self.layer1)

The idea is to replace the epsilon-greedy action selection and my standard DQN model with the noisy network you can see just above.

The code runs successfully, but it doesn't improve even a bit. How can I fix that?

UPDATE

After nearly 200k episodes, I am still at a mean reward between 1.5 and 2. The maximum reward I can get in the Atari Breakout game is about 500. With a standard DQN, after 100k episodes, I am near a mean reward of 11.

In the picture above, the X-axis is the number of episodes and the Y-axis is the mean reward over the last 100 rewards. The Y-axis is computed as mean_reward = np.mean(self.total_rewards[-100:])

UPDATE

After about 8 hours of training, I got this

As you can see, it is not working as well as in the paper. I experimented a lot with the hyperparameters, but nothing changed.

",35626,,35626,,4/23/2020 15:55,4/23/2020 15:55,Replace epsilon greedy action selection and the standard DQN by an Independent Gaussian Noise Network Model,,0,14,,,,CC BY-SA 4.0 20352,1,,,4/17/2020 15:01,,2,91,"

I'm a bit new to AI and I'd like to use some kind of clustering algorithm to solve a problem:

I'm trying to parse pdf documents to get headings and titles. I can parse pdf to html and I'm then able to get some information on the lines of the document. I've identified some properties that can be useful for identifying the headings.

  • font-size (int): of course it's quite usual that heading's font-size is bigger than normal text
  • font-family (string): it's possible for headings to be bold so font-family may differ
  • left property (int): it's also possible that headings are aligned a bit to the right, there's an indentation that's not always there on normal paragraphs
  • bonus boolean: I have identified some properties that I can combine to get a boolean value. When the boolean is set to true it can increase the chances of the paragraph being a heading.

Of course, these are not rules that apply to all headings. Some headings may follow some of these but not all of them. It could also be possible that some 'normal' paragraphs follow all these points, but what I've seen is that, in general, those rules were what made headings different from paragraphs.

With this information, is there a way of doing what I'm looking for? As I said, I'm new to AI even though I have a background in CS and mathematics. I thought clustering could be interesting since I'm trying to create 2 clusters: headings and normal paragraphs.

What algorithm do you think might work for this use case? Should I look outside clustering?

",36143,,2444,,4/18/2020 0:44,4/21/2020 13:27,Could clustering be used to parse pdf documents to get headings and titles?,,2,2,,,,CC BY-SA 4.0 20353,1,20356,,4/17/2020 15:33,,1,69,"

I guess the model shown in this image (img_1)

is the same as the one in this image (img_2)

I was trying to build a neural net like that.

This Keras code is meant to do the job.

model = Sequential()
model.add(Dense(3, input_dim=3, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)

However, print(model.summary()) outputs

Model: ""sequential_17""
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_31 (Dense)             (None, 3)                 12        
_________________________________________________________________
dense_32 (Dense)             (None, 1)                 4         
=================================================================

There are 3 weights ($w$) and 1 bias ($b$) in the hidden layer. Why does this model have 12 parameters?

",35896,,2444,,4/18/2020 1:01,4/18/2020 1:01,Why does this model have 12 parameters?,,1,0,,,,CC BY-SA 4.0 20355,1,20358,,4/17/2020 16:04,,6,1662,"

Pretty soon I will be finishing up Understanding Machine Learning: From Theory to Algorithms by Shai Ben-David and Shai Shalev-Shwartz. I absolutely love the subject and want to learn more; the only issue is I'm having trouble finding a book that could come after this. Ultimately, my goal is to read papers in JMLR's COLT.

  1. Is there a book similar to "Understanding Machine Learning: From Theory to Algorithms" that would progress my knowledge further and would go well after reading UML?

  2. Is there any other materials (not a book) that could allow me to learn more or prepare me for reading a journal like the one mentioned above?

(Also, taking courses in this is not really an option, so this will be for self-study).

(Note that I have also asked this question here on TCS SE, but it was recommended I also ask here.)

",36131,,2444,,1/16/2021 19:50,1/16/2022 23:13,What are some resources on computational learning theory?,,1,0,,,,CC BY-SA 4.0 20356,2,,20353,4/17/2020 16:32,,2,,"

You have 3 inputs going to the 3 nodes of the first Dense layer. Each connection has a weight, so you have 3 x 3 = 9 weights. In addition, each node has a bias weight, which adds 3 more weights, for a total of 12. Your output layer has 3 inputs and is a single node, so you have 3 weights for the inputs to that node plus a bias weight, for a total of 4. So the total number of weights in your network is 16.
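
In other words (a quick sketch of the arithmetic; the helper function is just for illustration):

    def dense_params(n_inputs, n_units):
        # a fully-connected layer has one weight per input per unit, plus one bias per unit
        return (n_inputs + 1) * n_units

    print(dense_params(3, 3))  # first Dense layer:  (3 + 1) * 3 = 12
    print(dense_params(3, 1))  # output Dense layer: (3 + 1) * 1 = 4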

",33976,,,,,4/17/2020 16:32,,,,0,,,,CC BY-SA 4.0 20357,2,,20352,4/17/2020 16:59,,2,,"

Yes, you could use clustering: Encode your features as a feature vector and feed it into a clustering algorithm (see Finding Groups in Data for a comprehensive description of these). You could use agglomerative clustering, which would give you groups of similar items; perhaps different level headings will be clustered together.
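
For example, a minimal scikit-learn sketch (the feature values are invented, and font-family is encoded here as a simple bold/not-bold flag) could look like this:

    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import AgglomerativeClustering

    # columns: font_size, is_bold, left_indent, bonus_flag
    lines = np.array([
        [18, 1, 40, 1],   # probably a heading
        [11, 0, 20, 0],   # body text
        [11, 0, 20, 0],
        [16, 1, 40, 1],
        [11, 0, 20, 0],
    ])

    X = StandardScaler().fit_transform(lines)          # scale so no feature dominates
    labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
    print(labels)                                      # e.g. [1 0 0 1 0]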

Alternatively you could try a decision tree, something like ID3, which would also be suitable; for this you'd need some annotated training data, though. But with a small amount of data you might solve it, if your items are clearly separated.

",2193,,,,,4/17/2020 16:59,,,,1,,,,CC BY-SA 4.0 20358,2,,20355,4/17/2020 17:07,,8,,"

Although I have only partially read or not read at all some of the following resources and some of these resources may not cover more advanced topics than the ones presented in the book you are reading, I think they can still be useful for your purposes, so I will share them with you.

I would also like to note that if you understand the contents of the book you are currently reading, you are probably already prepared for reading some (if not most of) the research papers you wish to read. Initially, you may find them a little bit too succinct and sometimes unclear or complex, but you need to get used to this format, so there's nothing stopping you from trying to read them and learn even more by doing this exercise.

Books

Papers

Courses (videos)

Lecture notes

Other

See also this list of resources https://kiranvodrahalli.github.io/links/#resources-notes-textbooks-monographs-classes-etc compiled by Kiran Vodrahalli.

",2444,,2444,,1/16/2022 23:13,1/16/2022 23:13,,,,1,,,,CC BY-SA 4.0 20362,2,,16599,4/17/2020 23:47,,5,,"

There's a list of ongoing and past RL competitions here. The ongoing competitions according to that list are

",2444,,,,,4/17/2020 23:47,,,,0,,,,CC BY-SA 4.0 20364,1,,,4/18/2020 7:27,,1,28,"

Is it useful to use a Siamese network structure for GANs, e.g. sharing the latent space between generators in a cGAN, or also between discriminators?

I am thinking about it as giving the generator tips about the knowledge base of the discriminator, to target the problem of discriminator forgetting and increase the chance of convergence. Because then the discriminator's prediction confidence is dependent on the generator's construction (which is anyway the case, but now at a system level).

What do you think? I didn't see it that often in recent papers, just in this one, but that is more like a pix2pix transformation, and it works so well mostly because they are using the segmentation masks of A to get good segmentation results on B' (A transformed to B). I didn't find any approaches to something like leaky discriminators.

",35557,,35557,,4/19/2020 11:25,4/19/2020 11:25,Leaky Discriminators and Siamese GANs,,0,0,,,,CC BY-SA 4.0 20365,1,,,4/18/2020 11:22,,2,333,"

I'm using Monte Carlo Tree Search with UCT selection to try and build an AI player for a complex multiplayer board game. My regular UCT MCTS seems to be working fine, winning with random and basic greedy players or low-depth 'paranoid' alpha-beta variant player, but I've been looking for some methods to improve it and I found RAVE.

""In RAVE, for a given game tree node N, its child nodes Ci store not only the statistics of wins in playouts started in node N but also the statistics of wins in all playouts started in node N and below it, if they contain move i (also when the move was played in the tree, between node N and a playout). This way the contents of tree nodes are influenced not only by moves played immediately in a given position but also by the same moves played later."".

I've found a lot of literature about it and it was supposed to give good results - a 70%-80% win rate against basic UCT on a game of TicTacToe3D. I implemented it as a sort of benchmark, a 4x4x4 version, before trying it on my target game. But, however I tried tuning the parameters, I've been getting worse results: the win rate is at best around 46%.

I've been calculating the node values like this:

visits[i] is a number of visits for child i of parent p that selection is performed on, wins[i] is a number of wins according to UCT, AMAFvisits and AMAFwins are assigned based on the node's source action -> updated after a finished simulation if a sourceAction (the action that changed the game state into this state) was played in the simulation by the player of the MCTS tree root node.

for (int i = 0; i < nChildren; i++) {
    if (visits[i] < 1) {
        value = Double.MAX_VALUE - rnd.nextDouble();
    }
    else if (m[i] < 1) {
        double vUCT = wins[i]/visits[i] + C*Math.sqrt(Math.log(sumVisits)/(visits[i]));
        value = vUCT;
    }
    else {
        double beta = Math.sqrt(k/(3*visits[i] + k));
        double vRAVE = (AMAFscores[i])/(m[i]) + C*Math.sqrt(Math.log(mChildren)/(m[i]));
        double vUCT = (wins[i])/(visits[i])+ C*Math.sqrt(Math.log(sumVisits)/(visits[i]));
        value = beta * vRAVE + (1 - beta) * vUCT;
        value += rnd.nextDouble() * eps;
        /*double beta = Math.sqrt(k/(3*visits[i] + k));
        double vRAVE = (AMAFscores[i])/(m[i]);
        double vUCT = (wins[i])/(visits[i]);
        value = beta * vRAVE + (1 - beta) * vUCT;
        value += C*Math.sqrt(Math.log(sumVisits)/(visits[i]));
        value += rnd.nextDouble() * eps;*/
    }
    if (maxValue <= value) {
        maxValue = value;
        index = i;
    }
}
chosen = tree.getTreeNode(children.get(index));

Here's a paint rendition of my understanding of how RAVE should work -> https://imgur.com/a/MM4K1HE. Am I missing something? Is my implementation wrong? Here's the rest of the code responsible for traversing the tree in a 'rave way': https://www.paste.org/104476. The expand function on tree expands the tree for all actions, and returns a random one which then gets visited, the others are to be visited in other iterations.

I first tested the code on k = 250 like the authors of the benchmark paper https://dke.maastrichtuniversity.nl/m.winands/documents/CIG2016_RAVE.pdf suggested and on 100, 1000 and 10000 iterations, with tree depth 20 or 50. I also experimented with other k values and other params.

",36162,,,,,1/17/2023 4:03,MCTS RAVE performing badly in Board Game AI,,1,0,,,,CC BY-SA 4.0 20366,1,20367,,4/18/2020 12:42,,5,358,"

I have just started to study reinforcement learning and, as far as I understand, existing algorithms search for the optimal solution/policy, but do not allow the possibility for the programmer to suggest a way to find the solution (to guide their learning process). This would be beneficial for finding the optimal solution faster.

Is it possible to guide the learning process in (deep) reinforcement learning?

",32237,,2444,,4/18/2020 13:57,4/21/2020 12:54,Is it possible to guide a reinforcement learning algorithm?,,2,0,,,,CC BY-SA 4.0 20367,2,,20366,4/18/2020 13:10,,3,,"

The programmer already guides the RL algorithm (or agent) by specifying the reward function. However, the reward function alone may not be sufficient to learn efficiently and fast, as you correctly noticed.

To attempt to solve this inefficiency problem, one solution is to combine reinforcement learning with supervised learning. For example, the paper Deep Q-learning from Demonstrations (2017) by Todd Hester et al. describes an approach to achieve this.

The paper Active Reinforcement Learning (2008) by Arkady Epshteyn et al. also tries to solve this problem but by incorporating approximations (given by domain experts) of the MDP.

There are probably many other possible solutions. In fact, all model-based RL algorithms could probably fall into this category of algorithms that estimate or incorporate the dynamics of the environment to find a policy more efficiently.

",2444,,2444,,4/18/2020 13:15,4/18/2020 13:15,,,,0,,,,CC BY-SA 4.0 20368,2,,20289,4/18/2020 13:34,,1,,"

Are there ways to more precisely approximate how good a single action really is considering its short and long term effects?

To understand the short-term effects of an action, just take each of the available actions from the current state and observe the reward for each of them. The action that gives you the highest immediate reward is the best action. However, note that the reward function may change or could be stochastic. In those cases, you may need to estimate the best action e.g. by executing it multiple times.

If you want to know the action that gives you the highest amount of reward in the long run (i.e. that gives you the highest return), then you can use one of the available RL algorithms, which were invented exactly to solve this problem. Basically, you're asking us what is the best RL algorithm. It depends on the problem, as usual.

If you want to know the effects of an action in e.g. $n$ steps ahead, then you can probably formulate this problem as a truncated version of the typical reinforcement learning problem. In practice, you probably can achieve this by changing the discount factor so that the next $n$ rewards are more valuable (or are the only ones considered) than the rewards after $n$ steps. If you aren't familiar with discount factors, I encourage you to have a look at this concept from a reference book.
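
For instance, here is a tiny sketch of how the discount factor re-weights short-term vs. long-term rewards (the reward sequence is made up):

    def discounted_return(rewards, gamma):
        return sum(gamma ** t * r for t, r in enumerate(rewards))

    rewards = [1, 1, 1, 10]                      # a big reward arrives only at step 3
    print(discounted_return(rewards, 0.99))      # ~12.67: the late reward dominates
    print(discounted_return(rewards, 0.1))       # ~1.12: only the short term matters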

Note that, in this answer, I am just trying to give you the idea and intuition behind a possible answer to your question (also because your question isn't really suited to provide more detailed or rigorous answers).

",2444,,2444,,4/18/2020 22:11,4/18/2020 22:11,,,,1,,,,CC BY-SA 4.0 20369,1,,,4/18/2020 14:11,,3,478,"

Google provides a lot of pretrained tensorflow models, but I cannot find a license.

I am interested in the tfjs-models. The code is licensed Apache-2.0, but the models are downloaded by the code, so the license of the repository probably does not apply to the models and I am not able to find anywhere a note about the license of the pretrained models.

How should I handle this, especially when I may want to distribute models derived from the pretrained Google models?

",25798,,,,,4/21/2020 10:41,How does a software license apply to pretrained models?,,1,0,,,,CC BY-SA 4.0 20370,2,,20366,4/18/2020 14:34,,2,,"

Here are two very related interesting papers:

  1. Learning from Human Preferences
  2. Improving Reinforcement Learning with Human Input
",35821,,,,,4/18/2020 14:34,,,,0,,,,CC BY-SA 4.0 20371,1,20376,,4/18/2020 15:04,,2,66,"

Neural networks with feedback (Hopfield, Hamming, etc.) differ from ordinary neural networks (multilayer perceptrons, etc.) in that the feedback turns them into a dynamic element with its own internal dynamics (if we consider them as a separate dynamic link). The following question naturally arises: is it possible to represent them in state-space form?

The nuance is that the feedback is created by introducing a delay element, which means the neural network can only be written in discrete form. Is a continuous formulation possible? What acts as the matrices A, B, C, D? How does the presence of nonlinear activation functions affect this? The only more or less useful information that I managed to find is in this article:

On neural networks in identification and control of dynamic systems. 3.2 Paragraph. Page 8

But my assumptions are only confirmed there, which does not clarify the situation.

In general, if someone has come across this and can assist in studying the issue, please share links, possibly examples, etc.

",32829,,,,,4/18/2020 20:05,Neural networks with internal dynamics in the state-space form,,1,0,,,,CC BY-SA 4.0 20372,1,,,4/18/2020 15:33,,5,139,"

I'm reading the paper Pixel Recurrent Neural Network. I have a question about the Row LSTM. Why can the Row LSTM capture triangular contexts?

In this paper,

the kernel of the one-dimensional convolution has size $k \times 1$ where $k \geq 3$; the larger value of $k$ the broader the context that is captured.

The one-dimensional kernel can capture only the left context. (Is this correct?)

The $n \times n$ kernel such as

$$ \begin{bmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} $$

can capture triangular contexts.

Is this correct?

",32303,,2444,,4/19/2020 15:45,9/14/2020 9:17,Why Pixel RNN (Row LSTM) can capture triangular contexts?,,0,0,,,,CC BY-SA 4.0 20373,2,,20342,4/18/2020 16:59,,0,,"

Thanks for the answers. If you are processing an audio signal, I think applying a low-pass filter (LPF) would help to enhance the signal-to-noise ratio, especially if the noise component occupies a large part of the spectrum. If the audio is human speech, the majority of the energy is within the 300 Hz to 3 kHz region, so a low-pass filter with a cutoff frequency of 3 kHz would eliminate noise in the higher part of the spectrum. You could implement the LPF as a pre-processing function. I am not knowledgeable on the implementation, but a search should get you the info you need; I did find an article here. If I recall correctly, the process is to convert the time-domain signal to the frequency domain using an FFT, zero out everything above a cutoff point, and convert back to the time domain. I also know there are ways to implement that directly in the time domain. See the sketch below.

I am also surprised that you achieve a high validation accuracy while your test set accuracy is so low. Your validation data should be data the network has not seen before, just like your test data. The only thing I can think of is that the test data has a very different probability distribution than the training and validation data. How were the various data sets (train, test, validation) selected? The best choice is to select these randomly, using something like sklearn's train_test_split or Keras' ImageDataGenerator flow_from_directory. Hope this helps.
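
Here is a rough numpy sketch of that FFT-based approach (the sample rate, cutoff and test signal are made up; a proper filter design, e.g. with scipy.signal, would give smoother results):

    import numpy as np

    def lowpass_fft(signal, fs, cutoff_hz):
        spectrum = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        spectrum[freqs > cutoff_hz] = 0              # zero everything above the cutoff
        return np.fft.irfft(spectrum, n=len(signal))

    fs = 16000                                       # 16 kHz sample rate
    t = np.arange(0, 1.0, 1.0 / fs)
    noisy = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 6000 * t)
    clean = lowpass_fft(noisy, fs, cutoff_hz=3000)   # keeps the 440 Hz tone, drops the 6 kHz noise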

",33976,,33976,,4/18/2020 17:07,4/18/2020 17:07,,,,1,,,,CC BY-SA 4.0 20374,1,20375,,4/18/2020 17:28,,0,62,"

I am reading through the NEAT paper. In parameter settings, page 15, there is:

In each generation, 25% of offspring resulted from mutation without crossover.

What does it mean?

",36170,,2444,,4/18/2020 22:22,4/18/2020 22:22,"What does ""In each generation, 25% of offspring resulted from mutation without crossover"" mean in the context of NEAT?",,1,0,,,,CC BY-SA 4.0 20375,2,,20374,4/18/2020 18:00,,2,,"

In genetic algorithms, mutation without crossover simply means that an offspring is produced by randomly changing a single parent, without combining it with another one. In this case, this applies to 25% of the offspring.

The remaining 75% either remain unchanged (generally the best-performing specimens) or are combined with other specimens (using crossover). It's a bit more complex here, as the genome encodes the connection weights of networks, and NEAT uses 'species', where certain individuals are treated as separate groups from the rest of the population.
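
As a toy sketch of this offspring split (a generic genetic algorithm on lists of numbers, not NEAT's genome of connection genes; all functions and rates are made up for illustration):

    import random

    def mutate(genome, rate=0.1):
        return [g + random.gauss(0, 1) if random.random() < rate else g for g in genome]

    def crossover(a, b):
        return [ga if random.random() < 0.5 else gb for ga, gb in zip(a, b)]

    def make_offspring(population, n_children, mutate_only_fraction=0.25):
        children = []
        for _ in range(n_children):
            if random.random() < mutate_only_fraction:
                children.append(mutate(random.choice(population)))   # mutation, no crossover
            else:
                p1, p2 = random.sample(population, 2)
                children.append(crossover(p1, p2))                   # crossover of two parents
        return children

    population = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(10)]
    print(len(make_offspring(population, n_children=20)))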

",2193,,,,,4/18/2020 18:00,,,,0,,,,CC BY-SA 4.0 20376,2,,20371,4/18/2020 20:05,,1,,"

I think that the book "Neural Networks and Learning Machines" by Haykin can help you. In his book, chapter 13 is about neural dynamics, and there are some examples of how to analyse the dynamics of a network.

",36175,,,,,4/18/2020 20:05,,,,1,,,,CC BY-SA 4.0 20377,1,,,4/18/2020 21:21,,9,2694,"

I have a question about how the averaging works when doing mini-batch gradient descent.

I think I now understand the general gradient descent algorithm, but only for online learning. When doing mini-batch gradient descent, do I have to:

  • forward propagate

  • calculate error

  • calculate all gradients

...repeatedly over all samples in the batch, and then average all gradients and apply the weight change?

I thought it would work that way, but recently I have read somewhere that you basically only average the error of each example in the batch, and then calculate the gradients at the end of each batch. That left me wondering, though: the activations of which sample in the mini-batch am I supposed to use to calculate the gradients at the end of every batch?

It would be nice if somebody could explain what exactly happens during mini-batch gradient descent, and what actually gets calculated and averaged.

",17769,,2444,,11/29/2020 23:59,11/29/2020 23:59,What exactly is averaged when doing batch gradient descent?,,2,0,,,,CC BY-SA 4.0 20378,1,20397,,4/18/2020 21:43,,1,78,"

(link to paper in arxiv)

In section 2.1 the authors define $\gamma$ as the maximum possible value of the derivative of the activation function (e.g. 1 for tanh.) Then they have this to say:

We first prove that it is sufficient for $\lambda_1 < \frac{1}{\gamma}$, where $\lambda_1$ is the absolute value of the largest eigenvalue of the recurrent weight matrix $W_{rec}$, for the vanishing gradient problem to occur.

Then they use the submultiplicity ($\|AB\| \le \|A\|\|B\|$) of the 2-norm of the Jacobians to obtain the following inequality:

$$ \forall x, \| \frac{\partial x_{k+1}}{\partial x_k} \| \le \| W_{rec}^\top \| \| diag(\sigma'(x_k))\| < \frac{1}{\gamma} \gamma < 1 $$

Here

  • $x_k$ is the pre-activated state vector of the RNN
  • $W_{rec}$ is the weight matrix between states (i.e. $x_k = W_{rec} \times \sigma(x_{k-1}) + b$ )
  • $\sigma()$ is the activation function for the state vector
  • $diag(v)$ is the diagonal matrix version of a vector $v$

They appear to be either substituting the norm of the weight matrix $\|W_{rec}^\top\|$ for its largest eigenvalue $|\lambda_1|$ (eigenvalues are the same for transposes) or just assuming that this norm is less than or equal to the eigenvalue. This bothers me because the norm of a matrix is bounded below, not above, by this eigenvalue/spectral radius (see lemma 10 here and this math SE question)

They seem to assume that

$$\| W_{rec}^\top \| \le \lambda_1 < \frac{1}{\gamma} $$

But really

$$ \| W_{rec}^\top \| \ge \lambda_1 $$

",34395,,34395,,4/18/2020 23:59,4/19/2020 12:56,"Does the paper ""On the difficulty of training Recurrent Neural Networks"" (2013) assume, falsely, that spectral radii are $\ge$ square matrix norms?",,1,0,,,,CC BY-SA 4.0 20379,1,,,4/18/2020 22:12,,1,167,"

I am currently starting a research project in which I am trying to convert text of one form into another, i.e. if I were to write a seed sentence of the form "Scientists have finally achieved the ability to induce dreams of electric sheep in the minds of anaesthetized robots", I would like GPT-2 to convert this into "Robots have finally had dreams of electric sheep whilst being anaesthetized by scientists.", or some coherent permutation of the underlying structure whereby the main logic of the text is conveyed, albeit roughly.

The current open source implementation of GPT-2 seeks to predict the next word, i.e. the seed text is given ""Scientist have finally"" and the generated text would be "" started being paid enough!""

My first presumption was to use some form of GAN; however, it quickly became evident that:

Recent work has shown that when both quality and diversity is considered, GAN-generated text is substantially worse than language model generations (Caccia et al., 2018; Tevet et al., 2018; Semeniuta et al., 2018).

How could I most effectively achieve this? Thanks.

",36177,,,,,4/18/2020 22:12,How can I use GPT-2 to modify seed text of one form into a different form (LENGTH INVARIANT) whilst retaining meaning?,,0,0,,,,CC BY-SA 4.0 20380,2,,20377,4/18/2020 23:10,,11,,"

Introduction

First of all, it's completely normal that you are confused because nobody really explains this well and accurately enough. Here's my partial attempt to do that. So, this answer doesn't completely answer the original question. In fact, I leave some unanswered questions at the end (that I will eventually answer).

The gradient is a linear operator

The gradient operator $\nabla$ is a linear operator, because, for some $f : \mathbb{R} \rightarrow \mathbb{R} $ and $g: \mathbb{R} \rightarrow \mathbb{R}$, the following two conditions hold.

  • $\nabla(f + g)(x) = (\nabla f)(x) + (\nabla g)(x),\; \forall x \in \mathbb{R}$
  • $\nabla(kf)(x) = k(\nabla f)(x),\; \forall k, x \in \mathbb{R}$

In other words, the restriction, in this case, is that the functions are evaluated at the same point $x$ in the domain. This is a very important restriction to understand the answer to your question below!

The linearity of the gradient directly follows from the linearity of the derivative. See a simple proof here.

Example

For example, let $f(x) = x^2$, $g(x) = x^3$ and $h(x) = f(x) + g(x) = x^2 + x^3$, then $\frac{dh}{dx} = \frac{d (x^2 + x^3)}{d x} = \frac{d x^2}{d x} + \frac{d x^3}{d x} = \frac{d f}{d x} + \frac{d g}{d x} = 2x + 3x^2$.

Note that both $f$ and $g$ are not linear functions (i.e. straight-lines), so the linearity of the gradients is not just applicable in the case of straight-lines.

Straight-lines are not necessarily linear maps

Before proceeding, I want to note that there are at least two notions of linearity.

  1. There's the notion of a linear map (or linear operator), i.e. which is the definition above (i.e. the gradient operator is a linear operator because it satisfies the two conditions, i.e. it preserves addition and scalar multiplication).

  2. There's the notion of a straight-line function: $f(x) = c*x + k$. A function can be a straight-line and not be a linear map. For example, $f(x) = x+1$ is a straight-line but it doesn't satisfy the conditions above. More precisely, in general, $f(x+y) \neq f(x) + f(y)$, and you can easily verify that this is the case if $x = 2$ and $y=3$ (i.e. $f(2+3) = 6$, $f(2) = 3$, $f(3) = 4$, but $f(2) + f(3) = 7 \neq f(2+3)$).

Neural networks

A neural network is a composition of (typically) non-linear functions (let's ignore the case of linear functions), which can thus be represented as $$y'_{\theta}= f^{L}_{\theta_L} \circ f^{L-1}_{\theta_{L-1}} \circ \dots \circ f_{\theta_1},$$ where

  • $f^{l}_{\theta_l}$ is the $l$th layer of your neural network and it computes a non-linear function
  • ${\theta_l}$ is a vector of parameters associated with the $l$th layer
  • $L$ is the number of layers,
  • $y'_{\theta}$ is your neural network,
  • $\theta$ is a vector containing all parameters of the neural network
  • $y'_{\theta}(x)$ is the output of your neural network
  • $\circ $ means the composition of functions

Given that $f^l_{\theta}$ are non-linear, $y'_{\theta}$ is also a non-linear function of the input $x$. This notion of linearity is the second one above (i.e. $y'_{\theta}$ is not a straight-line). In fact, neural networks are typically composed of sigmoids, ReLUs, and hyperbolic tangents, which are not straight-lines.

Sum of squared errors

Now, for simplicity, let's consider the sum of squared error (SSE) as the loss function of your neural network, which is defined as

$$ \mathcal{L}_{\theta}(\mathbf{x}, \mathbf{y}) = \sum_{i=1}^N \mathcal{S}_{\theta}(\mathbf{x}_i, \mathbf{y}_i) = \sum_{i=1}^N (\mathbf{y}_i - y'_{\theta}(\mathbf{x}_i))^2 $$ where

  • $\mathbf{x} \in \mathbb{R}^N$ and $\mathbf{y} \in \mathbb{R}^N$ are vectors of inputs and labels, respectively
  • $\mathbf{y}_i$ is the label for the $i$th input $\mathbf{x}_i$
  • $\mathcal{S}_{\theta}(\mathbf{x}_i, \mathbf{y}_i) = (\mathbf{y}_i - y'_{\theta}(\mathbf{x}_i))^2$

Sum of gradients vs gradient of a sum

Given the gradient is a linear operator, one could think that computing the sum of the gradients is equal to the gradient of the sums.

However, in our case, we are summing $\mathcal{S}_{\theta}(\mathbf{x}_i, \mathbf{y}_i)$ and, in general, $\mathbf{x}_i \neq \mathbf{x}_j$, for $i \neq j$. So, essentially, the SSE is the sum of the same function, i.e. $S_{\theta}$, evaluated at different points of the domain. However, the definition of a linear map applies when the functions are evaluated at the same point in the domain, as I said above.

So, in general, in the case of neural networks with SSE, the gradient of the sum may not be equal to the sum of gradients, i.e. the definition of the linear operator for the gradient doesn't apply here because we are evaluating every squared error at different points of their domains.

Stochastic gradient descent

The idea of stochastic gradient descent is to approximate the true gradient (i.e. the gradient that would be computed with all training examples) with a noisy gradient (which is an approximation of the true gradient).

How does the noisy gradient approximate the true gradient?

In the case of mini-batch ($M \leq N$, where $M$ is the size of the mini-batch and $N$ is the total number of training examples), this is actually a sum of the gradients, one for each example in the mini-batch.

The papers Bayesian Learning via Stochastic Gradient Langevin Dynamics (equation 1) or Auto-Encoding Variational Bayes (in section 2.2) use this type of approximation. See also these slides.

Why?

To give you some intuition of why we sum the gradients of the error of each input point $\mathbf{x}_i$, let's consider the case $M=1$, which is often referred to as the (actual) stochastic gradient descent algorithm.

Let's assume we uniformly sample an arbitrary tuple $(\mathbf{x}_j, \mathbf{y}_j)$ from the dataset $\mathcal{D} = \{ (\mathbf{x}_i, \mathbf{y}_i) \}_{i=1}^N$.

Formally, we want to show that

\begin{align} \nabla_{\theta} \mathcal{L}_{\theta}(\mathbf{x}, \mathbf{y}) &= \mathbb{E}_{(\mathbf{x}_j, \mathbf{y}_j) \sim \mathbb{U}}\left[ \nabla_{\theta} \mathcal{S}_{\theta} \right] \label{1} \tag{1} \end{align}

where

  • $\nabla_{\theta} \mathcal{S}_{\theta}$ is the gradient of $\mathcal{S}_{\theta}$ with respect to the parameters $\theta$

  • $\mathbb{E}_{(\mathbf{x}_j, \mathbf{y}_j) \sim \mathbb{U}}$ is the expectation with respect to the random variable associated with a sample $(\mathbf{x}_j, \mathbf{y}_j)$ from the uniform distribution $\mathbb{U}$

Under some conditions (see this), we can exchange the expectation and gradient operators, so \ref{1} becomes \begin{align} \nabla_{\theta} \mathcal{L}_{\theta}(\mathbf{x}, \mathbf{y}) &= \nabla_{\theta} \mathbb{E}_{(\mathbf{x}_j, \mathbf{y}_j) \sim \mathbb{U}}\left[ \mathcal{S}_{\theta} \right] \label{2} \tag{2} \end{align} Given that we uniformly sample, the probability of sampling an arbitrary $(\mathbf{x}_j, \mathbf{y}_j)$ is $\frac{1}{N}$. So, equation \ref{2} becomes \begin{align} \nabla_{\theta} \mathcal{L}_{\theta} (\mathbf{x}, \mathbf{y}) &= \nabla_{\theta} \sum_{i=1}^N \frac{1}{N} \mathcal{S}_{\theta}(\mathbf{x}_i, \mathbf{y}_i) \\ &= \nabla_{\theta} \frac{1}{N} \sum_{i=1}^N \mathcal{S}_{\theta}(\mathbf{x}_i, \mathbf{y}_i) \end{align}

Note that $\frac{1}{N}$ is a constant with respect to the summation variable $i$ and so it can be taken out of the summation.

This shows that the gradient with respect to $\theta$ of the loss function $\mathcal{L}_{\theta}$ that includes all training examples is equivalent, in expectation, to the gradient of $\mathcal{S}_{\theta}$ (the loss function of one training example).
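
Here is a minimal numerical sketch of this equivalence (assuming NumPy; the linear model and data below are arbitrary placeholders): for a simple linear model with the $\frac{1}{N}$-scaled SSE, the average of the per-example gradients coincides with the gradient of the full loss.

import numpy as np

rng = np.random.default_rng(0)
N = 100
x = rng.normal(size=N)
y = 3.0 * x + rng.normal(scale=0.1, size=N)
theta = 0.5                                        # a scalar linear model y' = theta * x

# gradient of the (1/N)-scaled SSE over the whole dataset
full_grad = (2.0 / N) * np.sum((theta * x - y) * x)

# per-example gradients of S_theta(x_i, y_i) = (y_i - theta * x_i)^2
per_example = 2.0 * (theta * x - y) * x

print(np.isclose(full_grad, per_example.mean()))   # True: the expectation of the single-sample
                                                   # gradient equals the full gradient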

Questions

  1. How can we extend the previous proof to the case $1 < M \leq N$?

  2. Which conditions need exactly to be satisfied so that we can exchange the gradient and the expectation operators? And are they satisfied in the case of typical loss functions, or sometimes they aren't (but in which cases)?

  3. What is the relationship between the proof above and the linearity of the gradient?

    • In the proof above, we are dealing with expectations and probabilities!
  4. What would the gradient of a sum of errors represent? Can we still use it in place of the sum of gradients?

",2444,,2444,,5/1/2020 23:12,5/1/2020 23:12,,,,0,,,,CC BY-SA 4.0 20381,1,20451,,4/19/2020 1:34,,1,95,"

I am interested in learning about policy gradient algorithms and REINFORCE. Can you suggest a good and easy paper that I can use to code them from scratch?

",35626,,35626,,4/19/2020 12:05,4/21/2020 9:51,Is there a good and easy paper to code policy gradient algorithms (REINFORCE) from scratch?,,1,0,,,,CC BY-SA 4.0 20382,1,20402,,4/19/2020 2:28,,3,163,"

I've been learning a little bit about generalization theory, and in particular, the PAC (and PAC-Bayes) approach to thinking about this problem.

So, I started to wonder if there is an analogous version of ""generalization"" in Unsupervised Learning? I.e., is there a general framework that encapsulates how ""good"" an unsupervised learning method is? There's reconstruction error for learning lower dimensional representations, but what about unsupervised clustering?

Any ideas?

",36181,,,,,4/19/2020 23:31,Is there a notion of generalization in unsupervised learning?,,1,1,,,,CC BY-SA 4.0 20383,2,,20208,4/19/2020 2:29,,0,,"

VC Dimension of Neural Networks establishes VC bounds depending on the number of weights, whereas the UAT refers to a class of neural networks in which the number of weights a particular network can have is not bounded, although it needs to be finite.

I think that we can show, from theorem 2 and the observations below theorem 3 in Approximation by Superpositions of a Sigmoidal Function, that the VC dimension of

$$S=\left\{\sum_{i=1}^N \alpha_i\sigma(y_i^T x + \theta_i) : N\in\mathbb N, \alpha_i, \theta_i \in\mathbb R, y_i\in\mathbb{R}^n \right\}$$

is infinite.

Let $\{(x_i, y_i)\}_{i=1}^k$ be a sample of arbitrary size $k\in\mathbb N$, and let us see that there is a function in $S$ which can correctly classify it, i.e., $S$ shatters $\{x_i\}_{i=1}^k$.

We note $B(x, \varepsilon) := \{ y\in\mathbb{R}^n : d(x,y) < \varepsilon \}$ (this is just standard notation to denote a ball).

First, let $\varepsilon > 0$ be such that $B(x_i, \varepsilon)\cap B(x_j, \varepsilon) = \emptyset$ every time that $i \ne j$.

Now define $D = \cup_{y_i=1} B(x_i, \varepsilon)$. Define $f_{\varepsilon}(x)$ as in the observations below theorem 3 of Cybenko's paper, and use theorem 2 to find a function $G(x)$ in $S$ that classifies correctly all points at least $\varepsilon$ away from the boundary of $D$, i.e., all points in the sample.

",36115,,2444,,6/21/2021 22:17,6/21/2021 22:17,,,,11,,,,CC BY-SA 4.0 20384,1,,,4/19/2020 3:25,,6,3822,"

I understand that in DQNs, the loss is measured by taking the MSE of outputted Q-values and target Q-values.

What does the target Q-values represent? And how is it obtained/calculated by the DQN?

",36072,,2444,,1/18/2021 1:05,1/18/2021 1:05,What is the target Q-value in DQNs?,,3,1,,,,CC BY-SA 4.0 20386,2,,20310,4/19/2020 6:31,,1,,"

Like Oliver Mason mentioned, deep learning is just a sub-field of machine learning. In order to learn deep learning effectively, you need certain prerequisites, such as the basic principles of machine learning and the basics of simple artificial neural networks, along with some programming knowledge (Python is the go-to language). That being said, you don't need to know every single machine learning algorithm and its practices.

Now, if deep learning happens to be just a tool that you need for this particular project and you have no time to learn about it in depth, then I would recommend you take a look at Python libraries like TensorFlow, PyTorch, scikit-learn, SciPy, OpenCV, etc. You can get started and use DL/ML models with these and many other libraries without knowing their under-the-hood algorithms and implementations.

One of the best courses to get started with deep learning with very little ML knowledge is Andrew Ng's deeplearning.ai course on Coursera (you can audit the course and get all the course materials for free).

Here's the link to the course : Deep learning.ai

",35902,,2193,,4/21/2020 10:22,4/21/2020 10:22,,,,0,,,,CC BY-SA 4.0 20388,1,20398,,4/19/2020 8:53,,1,277,"

From what I understand VC dimension is what establishes the feasibility of learning for infinite hypothesis sets, the only kind we would use in practice.

But, the literature (i.e. Learning from Data) states that VC gives a loose bound, and that in real applications, learning models with lower VC dimension tend to generalize better than those with higher VC dimension. So, a good rule of thumb would be to require at least 10xVC dimension examples in order to get decent generalization.

I am having trouble interpreting what loose bound means. Is the VC generalization bound loose due to its universality? Meaning, its results apply to all hypothesis sets, learning algorithms, input spaces, probability distributions, and binary target functions.

",35990,,2444,,4/19/2020 12:02,4/19/2020 13:15,"What do we mean by saying ""VC dimension gives a LOOSE, not TIGHT bound""?",,1,0,,,,CC BY-SA 4.0 20389,2,,20384,4/19/2020 9:24,,4,,"

When training a deep Q-network with experience replay, you accumulate what are known as training experiences $e_t = (s_t, a_t, r_t, s_{t+1})$. You then sample a batch of such experiences and, for each sample, you do the following.

  1. Feed $s_t$ into the network to get $Q(s_t, a; \theta)$ for every action $a$.
  2. Feed $s_{t+1}$ into the network to get $Q(s_{t+1}, a'; \theta)$ for every action $a'$.
  3. Compute $\max_{a'} Q(s_{t+1}, a'; \theta)$ and set $r_t + \gamma \max_{a'} Q(s_{t+1}, a'; \theta)$ as the target of the network.
  4. Train the network with $s_t$ as input to update $\theta$. The output for the input $s_t$ is $Q(s_t, a_t; \theta)$, and the gradient descent step minimises the squared distance between $Q(s_t, a_t; \theta)$ and the target $r_t + \gamma \max_{a'} Q(s_{t+1}, a'; \theta)$ (a minimal sketch of this computation is shown below).
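
A minimal sketch of these four steps, assuming PyTorch and a network net that maps a batch of states to one row of Q-values per state (the function name, argument shapes and hyperparameters are illustrative, not from the original):

import torch
import torch.nn.functional as F

def dqn_loss(net, states, actions, rewards, next_states, gamma=0.99):
    # states, next_states: float tensors; actions: int64 tensor of the actions taken
    q_sa = net(states).gather(1, actions.unsqueeze(1)).squeeze(1)   # step 1: Q(s_t, a_t; theta)
    with torch.no_grad():
        q_next = net(next_states)                                   # step 2: Q(s_{t+1}, a'; theta)
        target = rewards + gamma * q_next.max(dim=1).values         # step 3: TD target
    return F.mse_loss(q_sa, target)                                 # step 4: squared distance to minimise
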
",32780,,,,,4/19/2020 9:24,,,,4,,,,CC BY-SA 4.0 20391,2,,20384,4/19/2020 9:29,,3,,"

What does the target Q-values represent?

In a DQN, which uses off-policy learning, they represent a refined estimate for the expected future reward from taking an action $a$ in state $s$, and from that point on following a target policy. The target policy in Q learning is based on always taking the maximising action in each state, according to current estimates of value.

The estimate is refined in that it is based on at least a little bit of data from experience - the immediate reward, and what transition happened next - but generally it is not going to be perfect.

And how is it obtained/calculated by the DQN?

There are lots of ways to do this. The simplest in DQN is to process a single step lookahead based on the experience replay table.

If your table contains the tuple [state, action, immediate reward, next state, done?] as $[s, a, r, s', d]$ then the formula for TD target, $g_{t:t+1}$ is

$$r + \gamma \text{max}_{a'}[Q_{target}(s',a')], \qquad \text{when}\space d \space \text{is false}$$

$$r, \qquad \text{when}\space d \space \text{is true}$$

Typically $Q_{target}$ is calculated using the "target network" which is a copy of the learning network for Q that is updated every N steps. This delayed update of the target predictions is done for numerical stability in DQN - conceptually it is an estimate for the same action values that you are learning.

This target value can change every time you use any specific memory from experience replay. So you have to perform the same calculations on each minibatch, you cannot store the target values.
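
A minimal sketch of this per-minibatch target computation (assuming NumPy arrays; q_target_fn is a hypothetical stand-in for the periodically-copied target network):

import numpy as np

def td_targets(rewards, next_states, dones, q_target_fn, gamma=0.99):
    # q_target_fn is assumed to return an array of shape (batch, n_actions)
    # containing Q_target(s', a') for every action a'
    q_next = q_target_fn(next_states)
    max_q_next = q_next.max(axis=1)
    return rewards + gamma * max_q_next * (1.0 - dones)   # when done, the target collapses to r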

",1847,,2444,,1/18/2021 1:05,1/18/2021 1:05,,,,0,,,,CC BY-SA 4.0 20392,2,,12266,4/19/2020 10:48,,10,,"

Both semi-supervised and self-supervised methods are similar in the sense that the goal is to learn with fewer labels per class. The way both formulate this is quite different:

  1. Self-Supervised Learning:

This line of work aims to learn image representations without requiring human-annotated labels and then use those learned representations on some downstream tasks. For example, you could take millions of unlabeled images, randomly rotate them by either 0, 90, 180 or 270 degrees and then train a model to predict the rotation angle (a minimal sketch of this rotation pretext setup is shown after this list). Once the model is trained, you can use transfer learning to fine-tune this model on a downstream task like cat/dog classification, just as you fine-tune ImageNet pretrained models. You can view an overview of the methods and also look at contrastive learning methods that are currently giving state-of-the-art results, such as SimCLR and PIRL.

  2. Semi-supervised Learning

Different from self-supervised learning, semi-supervised learning aims to use both labeled and unlabeled data at the same time to improve the performance of a supervised model. An example of this is FixMatch paper where you train your model on labeled images. Then, for your unlabeled images, you apply augmentations to create two images for each unlabeled image. Now, we want to ensure that the model predicts the same label for both the augmentations of the unlabeled images. This can be incorporated into the loss as a cross-entropy loss.
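
To illustrate the rotation pretext task from point 1 above, here is a minimal sketch (assuming NumPy and square images; rotation_pretext_batch is a hypothetical helper name):

import numpy as np

def rotation_pretext_batch(images):
    # images: array of shape (batch, height, width, channels), with height == width
    ks = np.random.randint(0, 4, size=len(images))                        # rotation class per image
    rotated = np.stack([np.rot90(img, k, axes=(0, 1)) for img, k in zip(images, ks)])
    return rotated, ks                                                     # ks are the 4-class labels

images = np.random.rand(8, 32, 32, 3)            # placeholder batch of unlabeled images
x_pretext, y_pretext = rotation_pretext_batch(images)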

",33910,,,,,4/19/2020 10:48,,,,0,,,,CC BY-SA 4.0 20393,2,,17231,4/19/2020 11:00,,2,,"

Andrew Zisserman, who is a pioneer in the field of self-supervised learning, described self-supervised learning in a talk at ICML as:

Self-supervised Learning is a form of unsupervised learning where the data provides the supervision. In general, we withhold some part of the data and task the network with predicting it. The network is forced to learn what we really care about e.g. a semantic representation, in order to solve it.

Thus, self-supervised is a subset of unsupervised learning, where you generate the labels from the given data itself. There are a few patterns of research being done for self-supervised learning:

1. Reconstruction:
In this, researchers have set up pretext tasks such as predicting the color image from a gray-scale image (Image Colorization), predicting the high-resolution image from the low-resolution version (Image Super-resolution), and removing some part of the image and trying to predict it (Image Inpainting).

2. Common Sense Reasoning:
You could split an image into a 3x3 grid of patches, shuffle the patches and ask the network to predict the correct order (Jigsaw puzzle).

Similarly, you could take the center patch and some random patch and train a model to predict where the random patch is located in relation to the center patch (context prediction).

There is another approach where you randomly rotate the image by {0, 90, 180, 270} degrees and ask the model to predict the rotation angle applied (Geometric Transformation Recognition).

3. Clustering:

You could cluster the images into K categories and treat those clusters as labels. Then, a model can be trained on those clusters and you get representations. You can repeat the clustering and model training for a few epochs (a minimal sketch of this pseudo-labelling loop is shown below). Papers for these include DeepCluster and Self-Labelling.
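
A minimal sketch of this clustering-based pseudo-labelling step (assuming scikit-learn; the feature array below is a placeholder for real embeddings produced by the current model):

import numpy as np
from sklearn.cluster import KMeans

features = np.random.rand(1000, 128)      # placeholder: embeddings of the unlabeled images

kmeans = KMeans(n_clusters=10, n_init=10).fit(features)
pseudo_labels = kmeans.labels_            # treat cluster ids as classification targets
# ...train a classifier on (images, pseudo_labels), recompute features, and repeat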

4. Contrastive Learning:

In this paradigm, augmentations of an image are taken, and the task is to bring two augmentations of the same image close together while pushing this image far away from some other random image. Papers for this include SimCLR and PIRL.

",33910,,,,,4/19/2020 11:00,,,,1,,,,CC BY-SA 4.0 20394,1,,,4/19/2020 11:33,,2,94,"

I have data that is collected from several different instruments simultaneously that is generally analyzed on a location-by-location basis. A skilled interpreter can identify ""markers"" in the data that represent a certain change in conditions with depth - each marker only occurs once in each series of data. However, it is possible that a marker is absent, either due to missing data or that physical condition not existing.

Often, there are dozens of these markers per location and thousands, if not tens of thousands, of measurements that need to be interpreted. The task is not that difficult and there are many strong priors that can be used to guide interpretation. E.g., if marker A is in location #1, and location #2 is very close to location #1, it is likely that marker A will be present in a very similar relative position. Also, if you have markers A, B, and C, they will always be in that order, although it could be that you have A/B/C or A/C or B/C, etc.

I am including a hand-sketched example below with 4 example locations and one data stream (I normally have 4-5 data streams per location).

I am looking for guidance on the type of algorithm to apply to this problem. I have explored Dynamic Time warping, but the issue is that with 10-20k data samples per location, and thousands of locations, the problem becomes computationally challenging.

Also, in general you may have 10000 locations, with maybe 100 that have been hand interpreted by an expert.

",36189,,36189,,4/21/2020 12:34,1/12/2023 23:03,What method to identify markers in data series via machine learning,,1,1,,,,CC BY-SA 4.0 20395,2,,20384,4/19/2020 12:31,,3,,"

The deep Q-learning (DQL) algorithm is really similar to the tabular Q-learning algorithm. I think that both algorithms are actually quite simple, at least, if you look at their pseudocode, which isn't longer than 10-20 lines.

Here's a screenshot of the pseudocode of DQL (from the original paper) that highlights the Q target.

Here's the screenshot of Q-learning (from Barto and Sutton's book) that highlights the Q target.

In both cases, the $\color{red}{\text{target}}$ is a reward plus a discounted maximum future Q value (apart from the exception of final states, in the case of DQL, where the target is just the reward).

There are at least 3 differences between these two algorithms.

  • DQL uses gradient descent because the $Q$ function is represented by a neural network rather than a table, like in Q-learning, and so you have an explicit loss function (e.g. the MSE).

  • DQL typically uses experience replay (but, in principle, you could also do this in Q-learning)

  • DQL encodes the states (i.e. $\phi$ encodes the states).

Apart from that, the logic of both algorithms is more or less the same, so, if you know Q-learning (and you should know it before diving into DQL), then it shouldn't be a problem to learn DQL (if you also have a decent knowledge of deep learning).

",2444,,2444,,4/19/2020 23:59,4/19/2020 23:59,,,,6,,,,CC BY-SA 4.0 20396,2,,20377,4/19/2020 12:44,,4,,"

do I have to:

  • forward propagate

  • calculate error

  • calculate all gradients

  • ...repeatedly over all samples in the batch, and then average all gradients and apply the weight change?

Yes, that is correct. You can save a bit of memory by summing gradients as you go. Once you have calculated the gradients for one example for the weights of one layer, then you do not re-use the individual gradients again, so you can just keep a sum. Alternatively for speed, you can calculate a minibatch of gradients in parallel, as each example is independent - which is a major part of why GPU acceleration is so effective in neural network training.

It is critical to getting correct results that you calculate the gradient of the loss function with respect to each example input/output pair separately. Once you have done that, you can average the gradients across a batch or mini-batch to estimate a true gradient for the dataset which can be used to take a gradient descent step.

recently I have read somewhere that you basically only average the error of each example in the batch, and then calculate the gradients at the end of each batch.

Without a reference it is hard to tell whether this is an error in the "somewhere", or you have misunderstood, or there is a specific context.

If by "error" you mean the literal difference $\hat{y}_i - y_i$, where $\hat{y}_i$ is your estimate for data input $i$ and $y_i$ is the ground-truth training value, then that is the gradient for many loss functions and activation function pairs. For instance, it is the error gradient for mean square error and linear output. Some texts loosely refer to this as the "error", and talk about backpropagating "the error", but actually it is a gradient.

In addition, if the article was referring to linear regression, logistic regression or softmax regression, everything else is linear - in those specific models then you can just "average the error" and use that as the gradient.

In general, however, the statement is incorrect because a neural network with one or more hidden layers has many non-linearities that will give different results when calculating average first then backpropagating vs taking backpropagating first the averaging - that is $f'(\mu(Y))$ vs $\mu(f'(Y))$ where $f'$ is the derivative of the transfer function and $\mu$ is the mean for the batch (i.e. $\mu(Y) = \frac{1}{N}\sum_{i=1}^{N} y_i$ and $Y$ represents all the $y_i$ in a given batch of size $N$)

When $y_i = f(x_i) = ax_i +b$ i.e. the transfer function is linear, then $f'(\mu(Y)) = \mu(f'(Y)) = \frac{a}{N}\sum_{i=1}^N x_i$, but almost all useful loss functions and all transfer functions except some output layers in neural networks are non-linear. For those, $f'(\mu(Y)) \neq \mu(f'(Y))$.

A simple example would show this, if we start a small minibatch back propagation with the loss function (as opposed to its gradient).

Say you had the following data for regression:

  x    y

  1    2
  1    4

You want a model that can regress to least mean squared error $y$ when given an input $x = 1$. The best model should predict $3$ in that case.

If your model has converged, the average MSE of the dataset is $1$. Using that would make your model move away from convergence and it will perform worse.

If you first take the gradients, then average those, you will calculate $0$. A simple gradient update step using that value will make no change, leaving the model in the optimal position.
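
A minimal numerical sketch of this example (assuming NumPy):

import numpy as np

x = np.array([1.0, 1.0])
y = np.array([2.0, 4.0])
w = 3.0                                      # model y_hat = w * x, already at the least-squares optimum

y_hat = w * x
per_example_grads = 2 * (y_hat - y) * x      # gradient of (y_hat_i - y_i)^2 w.r.t. w, per example
print(per_example_grads.mean())              # 0.0 -> averaging the gradients correctly leaves w unchanged
print(np.mean((y_hat - y) ** 2))             # 1.0 -> the average loss is not zero, and is not a gradient

Averaging the per-example gradients gives $0$, while the average loss is $1$; using the latter as if it were a gradient would move the model away from the optimum.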

This issue occurs on every hidden layer in a neural network, so in general you cannot simply resolve the loss function gradient and start with the average error gradient at the output. You would still hit the inequality $f'(\mu(Y)) \neq \mu(f'(Y))$ on each nonlinearity.

",1847,,-1,,6/17/2020 9:57,4/19/2020 17:48,,,,1,,,,CC BY-SA 4.0 20397,2,,20378,4/19/2020 12:50,,1,,"

It is an error, but it is also not in the final version of the paper (the arXiv version). The final version of the paper can be found here, where they replace ""absolute value of the largest eigenvalue"" with ""largest singular value"".

We first prove that it is sufficient for $\lambda_1 < \frac{1}{\gamma}$, where $\lambda_1$ is the largest singular value of $W_{rec}$, for the vanishing gradient problem to occur.

",34395,,34395,,4/19/2020 12:56,4/19/2020 12:56,,,,1,,,,CC BY-SA 4.0 20398,2,,20388,4/19/2020 13:03,,1,,"

But, the literature (i.e. Learning from Data) states that VC gives a loose bound and that in real applications, learning models with lower VC dimension tend to generalize better than those with higher VC dimension.

It's true that people often use techniques such as regularisation to avoid over-parametrized models. However, I think it's dangerous to say that, in real applications, those models are really generalizing better, given that you're typically assessing their generalization ability by using a finite validation dataset (possibly selected in a biased way). Moreover, note that the validation dataset is often ignored in certain bounds and the only thing that is taken into account is the expected risk and the empirical risk (on the training data).

In any case, the $\mathcal{VC}$ bounds may be ""loose"" because

  • the $\mathcal{VC}$ dimension is often expressed with the big-$\mathcal{O}$ notation (e.g. see this answer)
  • the bounds often involve probabilities and uncertainties (e.g. see this other answer)
  • there aren't stricter bounds (i.e. no one found better bounds yet)

Is the VC generalization bound loose due to its universality? Meaning, its results apply to all hypothesis sets, learning algorithms, input spaces, probability distributions, and binary target functions.

I think the answer is yes. In fact, often, the specific tasks or learning algorithms are ignored in the analyses, but, in general, you may exploit your knowledge of the problem to improve e.g. the number of useful examples that the learning algorithm could use.

",2444,,2444,,4/19/2020 13:15,4/19/2020 13:15,,,,1,,,,CC BY-SA 4.0 20399,2,,20054,4/19/2020 17:59,,1,,"

I can see several challenges, and the list below is not exhaustive:

i. The main problem is how to model the task of translating natural-language text into a formal language. It will probably be something like automatic translators, but with some guarantees that the proof semantics will be preserved. If you are more interested in this path, I recommend researching what PAC learning, information theory, computational proof theory and complexity theory can contribute to this modeling.

ii. Another problem is how to get reliable data. You commented that, as people used it, they would generate this data. But the problem is not just collecting the data: how much will you trust the data, and how will you measure the model's performance in translation?

iii. Another problem is more human: how do you get mathematicians to use such a system? And how do you make the model self-explainable?

I believe that this is one of the most difficult problems in machine learning. I once saw this video a while ago, and I don't know if it helps. I also recommend the Theoretical Computer Science Stack Exchange, where you will probably get a more complete answer.

",36175,,,,,4/19/2020 17:59,,,,0,,,,CC BY-SA 4.0 20400,2,,7416,4/19/2020 18:10,,7,,"

Not in such a straightforward way as described, but neural networks have been successfully applied to guide the search for proofs. There are automated theorem provers. What they do looks roughly like this:

  1. Get the mathematical statement

  2. Apply one of the known mathematical equivalence transformations (theorems, axioms, etc)

  3. Check if the resulting statement is trivially true. If so, our sequence of transformations is the proof (since they were all equivalence transformations). Otherwise, go to step 2.

The tricky part here is to choose which transformation to apply at step 2. A neural network can be trained to predict a function like

Statement, Transformation --> usefulness of that transformation to that statement

Then, during the search, we can apply the transformation that the neural network considers the most useful (a minimal sketch of such a guided search is shown below). Also, proving a theorem can be considered a game, where the axioms are the rules and you win when you've reached the proof. In this form, reinforcement learning agents can be applied to prove theorems (this has also been done successfully).
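
A minimal, hypothetical sketch of such a guided search (the score, is_trivially_true and transformation callables are placeholders for learned and symbolic components, not an existing library API):

import heapq
import itertools

def guided_proof_search(statement, transformations, score, is_trivially_true, max_steps=10000):
    # score(statement, t) is a learned model estimating how useful transformation t is for statement;
    # t(statement) applies the transformation and returns the new statement
    counter = itertools.count()
    frontier = [(0.0, next(counter), statement, [])]
    for _ in range(max_steps):
        if not frontier:
            return None
        _, _, current, proof = heapq.heappop(frontier)
        if is_trivially_true(current):
            return proof                      # the sequence of transformations is the proof
        for t in transformations:
            heapq.heappush(frontier, (-score(current, t), next(counter), t(current), proof + [t]))
    return None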

Here are papers that do similar things:

",36199,,36199,,4/20/2020 20:17,4/20/2020 20:17,,,,0,,,,CC BY-SA 4.0 20401,1,,,4/19/2020 21:07,,3,589,"

I am trying to understand the PointNet network for dealing with point clouds and struggling with understanding the difference between FC and MLP:

""FC is fully connected layer operating on each point. MLP is multi-layer perceptron on each point.""

I understand how fully connected layers are used to classify, and I previously thought that an MLP was the same thing, but it seems academic papers define these terms differently from each other and from general online courses. In PointNet, how is the shared MLP different from a standard feedforward fully connected network?

",36197,,36197,,4/20/2020 12:52,10/11/2022 6:06,What is the difference between FC and MLP in as used in PointNet?,,1,0,,,,CC BY-SA 4.0 20402,2,,20382,4/19/2020 23:14,,1,,"

In the paper Generalization in Unsupervised Learning (2015), Abou-Moustafa and Schuurmans develop an approach to assess the generalization of an unsupervised learning algorithm $A$ on a given dataset $S$ and how to compare the generalization ability of two unsupervised learning algorithms $A_1$ and $A_2$, for the same learning task.

They first provide a more abstract and general definition of an unsupervised learning algorithm and loss function. Then they define the expected risk, empirical risk and generalization gap in a similar way to the case of supervised learning. Finally, they derive an upper bound on $A$'s expected loss.

Of course, you should read the paper for more details. Specifically, section 2 (page 3) describes their setting in detail.

",2444,,2444,,4/19/2020 23:31,4/19/2020 23:31,,,,0,,,,CC BY-SA 4.0 20403,1,,,4/19/2020 23:28,,1,117,"

I am new to RL and wish to build an RL controller for an industrial process. The goal is to control the temperature and humidity in a vegetal food production chamber.

States: External temperature and humidity, internal temperature and humidity, percentage of the proportional valves controlling heater, cooler and steam for humidity. The goal is to keep temperature and humidity in the chamber (measures) as close as possible to the desired values (the setpoints).

Agent actions: Increase/decrease the percentage of the proportional valves controlling the actuators.

Rewards: Based on the deviation between measurements and setpoints (small deviation => high reward, large deviation => low reward).

I have data available: the history of states and actions from a real system. The actions are made by several PID controllers (some of them in cascade). So far, I have about 3 months of data, sampled every minute (with some stops, for example when a chamber is cleaned). The data is continuously logged, and every month I get more data. The data includes bad/unwanted states.

For training the RL agent, I am planning to simulate the environment using a supervised learning model (with its predict function), probably XGBoost. Is this feasible, and are there pitfalls to avoid in this case?

",36210,,1847,,4/20/2020 17:06,1/13/2023 2:01,Reinforcement learning with industrial continuous process,,1,13,,,,CC BY-SA 4.0 20404,1,20511,,4/20/2020 1:11,,4,3040,"

I'm seeking guidance here. Can I use a multilayer perceptron (MLP), i.e. a regular fully-connected neural network, for image classification?

Will it perform better than Fisherfaces?

Is it difficult to do image classification with an MLP network?

It's at a basic level, like classifying objects, not detailed structures and patterns.

Importantly, the MLP needs to be trained with pictures that can have background noise and different lighting and shadows.

",36211,,9543,,4/21/2020 12:07,4/22/2020 14:54,Can I do image classification with Multi Layers Perceptron (MLP)?,,3,4,,,,CC BY-SA 4.0 20405,1,,,4/20/2020 5:08,,2,1136,"

"If you can't tell, does it matter?" was one of the first lines of dialogue of the Westworld television series, presented as a throwaway in the first episode of the first season, in response to the question "Are you real?"

In the sixth episode of the third season, the line becomes a central realization of one of the main characters, The Man in Black.

This is, in fact, a central premise of the show—what is reality, what is identity, what does it mean to be alive?—and has a basis in the philosophy of Philip K. Dick.

  • What are the implications of this statement in relation to AI? In relation to experience? In relation to the self?
",1671,,2444,,1/17/2021 12:49,5/10/2021 0:52,"What are the implications of the statement ""If you can't tell, does it matter?"" in relation to AI?",,3,1,,,,CC BY-SA 4.0 20407,2,,6274,4/20/2020 5:49,,1,,"

Try resizing the image to the input dimensions of your neural network architecture (keeping it fixed to something like 128x128 in a standard 2D U-Net architecture) using the nearest-neighbour interpolation technique. This is because, if you resize your image using any other interpolation, it may result in tampering with the ground-truth labels. This is particularly a problem in segmentation; you won't face such a problem when it comes to classification.

Try the following:

import cv2 
resized_image = cv2.resize(original_image, (new_width, new_height), 
                           interpolation=cv2.INTER_NEAREST)
",36213,,26652,,4/21/2020 18:32,4/21/2020 18:32,,,,0,,,,CC BY-SA 4.0 20408,1,,,4/20/2020 5:53,,0,77,"

I am trying to use a neural network to predict the next state given the current state and action pair. Both inputs and outputs are continuous variables. Due to the high dimensionality of each input (~50-dimensional input) and the 48-dimensional output, I am not able to achieve a satisfactory accuracy.

I am thinking of using an autoencoder to learn a latent representation of the state. Would a latent representation from an autoencoder help to improve the prediction accuracy? And can the latent representation have a higher-dimensional space compared to the original state?

",32780,,,,,4/21/2020 12:05,Can I use an autoencoder with high latent representational space?,,1,0,,,,CC BY-SA 4.0 20409,1,,,4/20/2020 7:00,,0,201,"

I've been messing around with an open-set binary classifier and am having trouble with it. I'm sure there are a lot of reasons for that trouble.

One thing I am struggling with is, what does the model predict if it has never seen the image before?

An example would be if I'm trying to detect sheep across all background scenes. If I train a binary classification set with one class having lots of sheep in it and the other class having lots of various backgrounds, what would the model predict if it came across a background it had never seen before with no sheep in it? [mine is telling me "sheep" and I don't know why]

",36092,,2444,,12/21/2021 15:02,12/21/2021 15:02,What does the model predict if it has never seen the image before?,,1,0,,,,CC BY-SA 4.0 20410,1,,,4/20/2020 9:30,,0,227,"

I am currently working on a problem for which the topographic data comes in very different resolutions. Let's say I have a 20x20 grid of 1 km2 tiles and also high-resolution data of 50 m2 tiles. I would like to combine both as input to a CNN. To make things more spicy, I don't care about the 50 m2 data when it is far away from the center; that is why I would like to use a multi-resolution 'image', i.e. low resolution at the edges but higher in the center. That would be like human vision, only highly detailed in the center. Then I would combine that multi-resolution image with my 1 km2 data.

Do you know of any research done on such a CNN?

I have only found this one for now: Multi-Resolution Feature Fusion for Image Classification of Building Damages with Convolutional Neural Networks

Thank you for your help.

",36217,,,,,4/21/2020 10:09,Do Multi-resolution CNN exist?,,0,3,,,,CC BY-SA 4.0 20411,1,,,4/20/2020 13:04,,3,188,"

Apart from Journal of Artificial General Intelligence (a peer-reviewed open-access academic journal, owned by the Artificial General Intelligence Society (AGIS)), are there any other journals (or proceedings) completely (or partially) dedicated to artificial general intelligence?

If you want to share a journal that is only partially dedicated to the topic, please, provide details about the relevant subcategory or examples of papers on AGI that were published in such a journal. So, a paper that talks about e.g. an RL technique (that only claims that the idea could be useful for AGI) is not really what I am looking for. I am looking for journals where people publish papers, reviews or surveys that develop or present theories and implementations of AGI systems. It's possible that these journals are more associated with the cognitive science or neuroscience communities and fields.

",2444,,2444,,4/20/2020 13:44,7/12/2020 14:44,What are the scientific journals dedicated to artificial general intelligence?,,2,0,,,,CC BY-SA 4.0 20412,1,,,4/20/2020 13:53,,2,150,"

I wanted to ask you about the newest achievements in time series analysis (mostly prediction). What state-of-the-art solutions (as in frameworks, papers, related projects) do you know that can be used for analysing and predicting time series?

I am interested in something possibly better than just RNN, LSTM and GRU :)

",22659,,22659,,5/5/2020 19:48,5/19/2020 12:51,What are modern state-of-the-art solutions in prediction of time-series?,,1,0,,,,CC BY-SA 4.0 20414,2,,2996,4/20/2020 14:42,,0,,"

A key question that remains in the theory of deep learning is why such huge models (with many more parameters than data points) don't overfit on the datasets we use.

Classical theory based on complexity measures does not explain the behaviour of practical neural networks. For instance estimates of VC dimension give vacuous generalisation bounds. As far as I know, the tightest (upper and lower) bounds on the VC dimension are given in [1] and are on the order of the number of weights in the network. Clearly this worst case complexity cannot explain how e.g. a big resnet generalises on CIFAR or MNIST.

Recently there have been other attempts at ensuring generalisation for neural networks, for instance by relation to the neural tangent kernel or by various norm measures on the weights. Respectively, these have been found to not apply to practically sized networks and to have other unsatisfactory properties [2].

There is some work in the PAC Bayes framework for non-vacuous bounds, e.g. [3]. These setups, however, require some knowledge of the trained network and so are different in flavour from the classical PAC analysis.

Some other aspects:

  • optimisation: how come we get 'good' solutions from gradient descent on such a non-convex problem? (There are some answers to this in recent literature)

  • interpretability: Can we explain on an intuitive level what the network is 'thinking'? (Not my area)

(incomplete) references:

",22613,,,,,4/20/2020 14:42,,,,0,,,,CC BY-SA 4.0 20415,2,,2996,4/20/2020 15:52,,0,,"

I'd like to point out that there isn't a good theory on why machine learning works in general. VC bounds still assume a model, but reality doesn't fit any of these mathematical ideals. Ultimately, when it comes to applications, everything comes down to empirical results. Even quantifying the similarity between images with an algorithm that is consistent with humans' intuitive understanding is really hard.

Anyway, NNs don't work well in their fully connected form. All successful networks have some kind of regularization built into the network architecture (CNN, LSTM, etc.).

",32390,,,,,4/20/2020 15:52,,,,0,,,,CC BY-SA 4.0 20417,1,,,4/20/2020 16:29,,2,461,"

I am a newbie in deep learning and wanted to know if the problem I have at hand is a suitable fit for deep learning algorithms. I have thousands of fragments, each about 1000 bytes in size (i.e. numbers in the range 0 to 255). There are two classes of fragments:

  1. Some fragments have a high frequency of two particular byte values appearing next to one another: ""0 and 100"". This kind of pattern roughly appears once every 100 to 200 bytes.
  2. In the other class, the byte values are more randomly distributed.

We have the ability to produce as many numbers of instances of each class as needed for training purposes. However, I would like to differentiate with a machine learning algorithm without explicitly identifying the ""0 and 100"" pattern in the 1st class myself. Can deep learning help us solve this? If so, what kind of layers might be useful?

As a preliminary experiment, we tried to train a deep learning network made up of 2 hidden layers of TensorFlow's ""Dense"" layers (of size 512 and 256 nodes in each of the hidden layers). However, unfortunately, our accuracy was indicative of simply a random guess (i.e. 50% accuracy). We were wondering why the results were so bad. Do you think a Convolutional Neural Network will better solve this problem?

",36091,,36091,,4/20/2020 20:11,10/8/2022 14:02,Finding patterns in binary files using deep learning,,2,0,,,,CC BY-SA 4.0 20418,1,20421,,4/20/2020 17:18,,3,294,"

The state value function $V(s)$ is defined as the expected return starting in state $s$ and acting according to the current policy $\pi(a|s)$ till the end of the episode. The state-action values $Q(s,a)$ are similarly dependent on the current policy.

Is it also possible to get a policy independent value of a state or an action? Can the immediate reward $r(s,a,s')$ be considered a noisy estimate of the action value?

",35821,,,,,4/21/2020 8:03,Do policy independent state and action values exist in reinforcement learning?,,1,0,0,,,CC BY-SA 4.0 20419,1,20422,,4/20/2020 18:37,,6,2169,"

I understand that SARSA is an On-policy algorithm, and Q-learning an off-policy one. Sutton and Barto's textbook describes Expected Sarsa thusly:

In these cliff walking results Expected Sarsa was used on-policy, but in general it might use a policy different from the target policy to generate behavior, in which case it becomes an off-policy algorithm.

I am fundamentally confused by this - specifically, how do we define when Expected SARSA adopts or disregards policy. The Coursera Course states that it is On-Policy, further confusing me.

My confusion crystallized when tackling the Udacity course, specifically a section visualizing Expected SARSA for a simple gridworld (see sections 1.11 and 1.12 in the link below). Note that the course defines Expected SARSA as on-policy. https://www.zhenhantom.com/2019/10/27/Deep-Reinforcement-Learning-Part-1/

You'll notice the calculation for the new action value Q(s0, a0) is given as

Q(s0, a0) <— 6 + 0.1( -1 + [0.1 x 8] + [0.1 x 7] + [0.7 x 9] + [0.1 x 8] - 6) = 6.16.

This is also the official answer. But this would mean that it is running off policy, given that it is stated that the action taken at S1 corresponds to a shift right, and hence expected SARSA (On policy) should yield you.

Q(s0, a0) <— 6 + 0.1( -1 + [0.1 x 8] + [0.1 x 7] + [0.1 x 9] + [0.7 x 8] - 6) = 6.1

The question does state

(Suppose that when selecting the actions for the first two timesteps in the 100th episode, the agent was following the epsilon-greedy policy with respect to the Q-table, with epsilon = 0.4.)

But as this same statement existed for the regular SARSA example (which also yields 6.1 as A1 is shift right, as before), I disregarded it.

Any advice is welcome.

",36228,,2444,,4/20/2020 19:35,4/20/2020 19:35,Is Expected SARSA an off-policy or on-policy algorithm?,,1,0,,,,CC BY-SA 4.0 20421,2,,20418,4/20/2020 18:42,,3,,"

Do policy independent state and action values exist in reinforcement learning?

No. They do not exist, because in order to progress in any MDP and receive any reward - i.e. to get any measure of value - you must take an action. Any consistent means of selecting actions is a policy, and the nature of that policy impacts which transitions and rewards you expect to observe, which in turn affects the expected value. Inconsistent means of selecting actions would have no meaning with respect to ""expected future return"", they would just be measurements that you made in the past.

The closest you can get to a no-policy definition would be values with respect to ""special"" policies that apply in general to nearly all MDPs:

  • Value functions for the uniformly distributed random policy.

  • Value functions for any optimal policy (if there is more than one optimal policy, then all the value functions for them will be equal).

  • Value functions for any ""inverse optimal"" policy - i.e. the policy that has lowest possible return. This one is not so useful, although it exists theoretically.

The first two can be useful measurements of an MDP. Although the uniform random policy might not be the best, it encapsulates the situation where the agent has absolutely no knowledge of the MDP, and can be a baseline for comparison. The optimal value functions are often a target for learning algorithms, and sometimes you can calculate bounds or even exact targets for these independently of the learning process, to measure how well an algorithm performs on some test MDP.

Can the immediate reward $r(s,a,s')$ be considered a noisy estimate of the action value?

No. Using that notation of the function, it is typically already the expected immediate reward. It is entirely independent of reward seen in any other transitions or time steps, so is systematically incorrect as an estimate for future return in many cases - the only exception being if you know all future rewards will be precisely $0$. So it is an unbiased estimate of an action value if $s'$ is a terminal state.

Immediate reward is also a good estimate of an action value if the discount factor $\gamma = 0$. However, that requires you to define the problem as solving for immediate rewards only, which is not usually a free choice when trying to optimise the behaviour of the agent.

",1847,,1847,,4/21/2020 8:03,4/21/2020 8:03,,,,4,,,,CC BY-SA 4.0 20422,2,,20419,4/20/2020 19:03,,3,,"

Expected SARSA can be used either on-policy or off-policy.

The policy that you use in the update step determines which it is. If the update step uses a different weighting for action choices than the policy that actually took the action, then you are using Expected SARSA in an off-policy way.

Q-learning is a special case of Expected SARSA, where the target policy is greedy with respect to the action values, so there is only ever one $r_{t+1} + \gamma \text{max}_{a'} Q(s_{t+1}, a')$ term to add with a probability $1$.

You can also use Expected SARSA, similarly to SARSA, where the behaviour policy and target policy are identical. It is not identical to SARSA though, because it calculates the TD Target over all possible actions $r_{t+1} + \gamma \sum_{a'} \pi(a'|s_{t+1}) Q(s_{t+1}, a')$

You can construct Expected SARSA updates where $\pi(a|s)$ is different when selecting which action to explore in the environment (behaviour) and when updating the Q values (target). For instance, you can decide to explore using $\epsilon$-greedy with $\epsilon=0.1$ and update the value function with $\epsilon=0.01$.
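
A minimal sketch of the Expected SARSA target for an $\epsilon$-greedy target policy (assuming NumPy; with $\gamma = 1$ and $\epsilon = 0.4$ this reproduces the expected value used in the question's gridworld example):

import numpy as np

def expected_sarsa_target(r, q_next, epsilon, gamma=1.0):
    # q_next: array of Q(s_{t+1}, a') for every action a'
    n = len(q_next)
    probs = np.full(n, epsilon / n)                  # epsilon-greedy target policy probabilities
    probs[np.argmax(q_next)] += 1.0 - epsilon
    return r + gamma * np.dot(probs, q_next)

print(expected_sarsa_target(r=-1, q_next=np.array([8.0, 7.0, 9.0, 8.0]), epsilon=0.4))  # 7.6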

",1847,,,,,4/20/2020 19:03,,,,3,,,,CC BY-SA 4.0 20423,2,,20417,4/20/2020 20:18,,0,,"

Your network is essentially memorizing the data but not extracting features. You need to apply a CNN.

That said, the CNN architecture will need to be somewhat unusual: each byte will need to be turned into a representation that preserves positional information.
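
For instance, a minimal hypothetical sketch with Keras (layer sizes and hyperparameters are illustrative only, not a recommendation from the original answer):

import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1000,)),                        # one fragment of 1000 byte values (0-255)
    layers.Embedding(input_dim=256, output_dim=8),        # map each byte value to a small vector
    layers.Conv1D(32, kernel_size=5, activation='relu'),  # learn local byte patterns such as adjacent pairs
    layers.GlobalMaxPooling1D(),                          # detect the pattern regardless of position
    layers.Dense(1, activation='sigmoid'),                # binary class output
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
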

",32390,,,,,4/20/2020 20:18,,,,2,,,,CC BY-SA 4.0 20424,2,,20081,4/21/2020 7:43,,0,,"

I actually pondered this question a few months ago, so I understand your point of view!

You are correct in assuming that, once you have already built your tree or calculated your probabilities, the model is fixed whether you use test data or not. Well, the purpose of test data is not only to test your model against unseen data and get some evaluation score, but also to test whether your model is the right fit for your problem.

One of the main reasons why we build ML/AI models in the first place is to extract insights that can be used to solve problems, make decisions, etc. If you don't test your naive Bayes or decision tree model with test data, you won't know if the information given to you by those models means anything. It may not even help you solve problems or give you relevant information. Yes, they may spit out big numbers and classify things, but are those results what you're looking for? Are the results relevant to your problem? Can the results be used to solve what you are trying to do?

Using test data gives you the opportunity to see if your model gives you the best insights and the best solution to your problems. So here are the takeaways from my answer:

  • Test data can be used to test your model against unseen data
  • You get a score (evaluation) for your model when you test it with test data, which in turn can be used to fine-tune your model
  • You can see if the answers the model gives you are relevant to your initial problem. If they're not, then it may be best to use some other algorithm.
",36257,,,,,4/21/2020 7:43,,,,0,,,,CC BY-SA 4.0 20425,2,,7202,4/21/2020 8:01,,1,,"

I think you should use a linear kernel, because training an SVM with a linear kernel is faster than with other kernels, especially for text classification. Good luck!

https://www.svm-tutorial.com/2014/10/svm-linear-kernel-good-text-classification/

",36253,,,,,4/21/2020 8:01,,,,0,,,,CC BY-SA 4.0 20426,2,,20321,4/21/2020 8:03,,4,,"

As far as I know, there are a few aspects that would probably improve the model score:

  1. Normalization
  2. Lemmatization
  3. Stopwords removal (as you asked here)

Based on your question, ""will removing the most frequent words (stopwords) improve the model score?"", the answer is: it depends on what kind of stopwords you are removing. The problem here is that, if you do not remove stopwords, the noise in the dataset will increase because of words like I, my, me, etc. Here is the comparison of those three aspects using an SVM classifier.

You may see that, without stopword removal, the train set accuracy decreased to 94.81% and the test set accuracy decreased to 88.02%. But you should be careful about what kind of stopwords you are removing.

If you are working with basic NLP techniques like BOW, count vectorizer or TF-IDF (term frequency and inverse document frequency), then removing stopwords is a good idea because stopwords act like noise for these methods. If you are working with LSTMs or other models that capture semantic meaning, where the meaning of a word depends on the context of the preceding text, then it becomes important not to remove stopwords.
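
As a minimal illustration with scikit-learn (the two toy documents are placeholders), the built-in English stopword list shrinks the feature space for a TF-IDF representation:

from sklearn.feature_extraction.text import TfidfVectorizer

docs = ['I really loved this movie', 'this movie was not good at all']

with_stop = TfidfVectorizer().fit(docs)
without_stop = TfidfVectorizer(stop_words='english').fit(docs)

print(len(with_stop.vocabulary_), len(without_stop.vocabulary_))   # fewer, less noisy features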

So, what's the solution?

You may want to use the Python package nlppreprocess, which removes stopwords that are not necessary. It also has some additional functionalities that can make the cleaning of text fast. For example:

from nlppreprocess import NLP
import pandas as pd

nlp = NLP()
df = pd.read_csv('some_file.csv')
df['text'] = df['text'].apply(nlp.process)

Source:

  1. https://github.com/miguelfzafra/Latest-News-Classifier

  2. https://towardsdatascience.com/why-you-should-avoid-removing-stopwords-aa7a353d2a52

",36240,,,,,4/21/2020 8:03,,,,4,,,,CC BY-SA 4.0 20427,2,,19985,4/21/2020 8:15,,1,,"

Artificial intelligence can predict such a thing because, before releasing the game, the developers train the bot (or AI) to play the game millions of times, so they end up with a model that can predict every next move or combo, or predict all moves that can finish the game. Take the snake game as an example: to predict moves, the model (or bot) is trained by playing the game; when the snake performs some action, it gets a reward, which can be positive or negative. The goal of the snake is to learn which action maximizes the reward, given every possible state. States are the observations that the agent receives from the environment at each iteration.

This is a link that gives you the details: https://towardsdatascience.com/how-to-teach-an-ai-to-play-games-deep-reinforcement-learning-28f9b920440a

",36239,,36239,,4/21/2020 8:32,4/21/2020 8:32,,,,0,,,,CC BY-SA 4.0 20429,2,,19985,4/21/2020 8:24,,1,,"

The AI is trained to predict such things because that is its purpose. It is given almost all the possible moves it can make from the current state of the game and chooses the one with the best possible outcome. But not only that: the AI also predicts what happens after that, and the outcome of that prediction, just like a chess AI that can predict how to checkmate a player from a single move made by the player. So it doesn't just predict what move to make now, but also what move to make after that move has been made.

This can be done with deep learning, as you can read here: https://towardsdatascience.com/predicting-professional-players-chess-moves-with-deep-learning-9de6e305109e and https://electronics.howstuffworks.com/chess1.htm

",36259,,,,,4/21/2020 8:24,,,,0,,,,CC BY-SA 4.0 20431,2,,19985,4/21/2020 8:29,,1,,"

In video games, the developers usually spend dedicated time training their AI by feeding it learning data, provided either by the developers themselves or by feedback from the open/closed beta testers who participated. From that data, the developers can model the learning pattern for the algorithm and proceed to train it with some set of goals.

",36271,,,,,4/21/2020 8:29,,,,0,,,,CC BY-SA 4.0 20433,2,,20409,4/21/2020 8:32,,1,,"

I am assuming the images you gave the model all contain sheep. This is what I understand from your question.

Any model that you build will be based on the data that you give it (training data) and your code. In your case, if you only give it images that contain sheep, and then you test it with an image that has no sheep and a background the model hasn't seen, it will search through all of its learned features, derived from your training data, to see which class is closest to the image you gave. Based on the information given, the only 'route' your CNN model can take is the one with sheep in it, because you only gave it images of sheep to learn from.

Here are a few suggestions that I can give you:

  • Give your model images with no sheep and different backgrounds so it can handle cases where there are no sheep
  • Or you can add a piece of code to your model that tells it to default to some value if certain conditions aren't met (if the model doesn't 'see' enough sheep-like evidence, it defaults to 'unknown image' or whatever you choose)

You are on the right path, though! Since you are playing around with a binary classifier, you should definitely feed it images with and without sheep so that it can identify the two cases. Remember, a binary classifier works best when you give it two things to look out for, such as images with and without sheep.

Here are some reading materials that you can brush up on to get a better basic understanding of how CNNs work; personally, I found the video on the second link helpful:

Also, I find Google Images helps me a lot in terms of visualizing the binary classifier as a concept.

",36257,,36257,,4/21/2020 11:20,4/21/2020 11:20,,,,4,,,,CC BY-SA 4.0 20434,2,,20321,4/21/2020 8:46,,1,,"

Based on my project, here is how I clean and prepare the data.

  1. Delete specific characters ('\r', '\n', '""')
  2. Convert the text to lowercase
  3. Delete some symbols
  4. Lemmatization (reduce words to their base form with WordNet)
  5. Delete stopwords.

With these steps, I got some improvement in my model's accuracy score (a minimal sketch of this pipeline is shown below).
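
A minimal sketch of these five steps (assuming NLTK with the stopwords and wordnet corpora downloaded; the regular expressions are illustrative choices, not the exact ones used in the project):

import re
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
stop_words = set(stopwords.words('english'))

def clean(text):
    text = re.sub(r'[\r\n]+', ' ', text)                       # step 1: drop line-break characters
    text = text.lower()                                        # step 2: lowercase
    text = re.sub(r'[^a-z\s]', ' ', text)                      # step 3: drop symbols and digits
    tokens = [lemmatizer.lemmatize(t) for t in text.split()]   # step 4: lemmatize with WordNet
    tokens = [t for t in tokens if t not in stop_words]        # step 5: remove stopwords
    return ' '.join(tokens)

print(clean('The cats were running\r\nfaster than 2 dogs!'))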

My project: https://github.com/khaifagifari/NLP-Course-TelU

",36273,,,,,4/21/2020 8:46,,,,0,,,,CC BY-SA 4.0 20435,2,,20193,4/21/2020 8:54,,1,,"

Classification can be performed on structured or unstructured data. Classification is a technique where we categorize data into a given number of classes.

Based on my project on price classification, when I compared 5 models, I got a higher score with a random forest classifier than with a decision tree, SVM, naive Bayes and logistic regression.

My project: https://github.com/khaifagifari/Classification-and-Clustering-on-Used-Cars-Dataset

Sources: https://github.com/f2005636/Classification and https://www.kaggle.com/vbmokin/used-cars-price-prediction-by-15-models

",36273,,,,,4/21/2020 8:54,,,,0,,,,CC BY-SA 4.0 20440,2,,20405,4/21/2020 9:12,,0,,"

I haven't watched Westworld from the beginning to the end, but I've read the synopsis of it.

The implications of the statement above might be related to the experiences of what the androids (or cyborgs) might think of themselves. Are their identity and experiences that they had gone through "real" or not? It seems that, in the series, this question is actually central, and the series seems to be about an identity crisis, the search for a true identity.

In my view, without really any fail-safe programming, once the androids understand that all of their experiences are false (or unreal) and that everything that they do is monitored and tested by some individuals for science purposes, the AI would go rogue. A conscious being with no real background experiences being put into an environment that 'should' have been their 'life' all along, but, after realizing the real truth, everything does matter, and the conscious being would go rogue to find their true self in order to recover their origin and their true identity.

",36278,,2444,,1/17/2021 12:58,1/17/2021 12:58,,,,1,,,,CC BY-SA 4.0 20442,2,,20185,4/21/2020 9:22,,0,,"

You can use k-NN for this, but you must first convert your dataset to numeric values; you can also remove the unrelated features from your dataset.

",36256,,22659,,4/21/2020 15:26,4/21/2020 15:26,,,,0,,,,CC BY-SA 4.0 20443,2,,20404,4/21/2020 9:23,,1,,"

Let me try to answer your question. Yes, you can use a multilayer perceptron for image classification. The multilayer perceptron (MLP) is the most common ANN topology, where perceptrons are connected to form layers. An MLP has an input layer, at least one hidden layer, and an output layer. The MLP is a widely used method. For example, in research on classifying human skin based on its color, Khan (Khan, Hanbury, Stöttinger, & Bais, 2012) compared nine classification methods, including BayesNet, J48, Multilayer Perceptron (MLP), Naive Bayes, Random Forest, and SVM. The results show that the MLP produced the highest performance after Random Forest and J48.

",36275,,,,,4/21/2020 9:23,,,,0,,,,CC BY-SA 4.0 20444,2,,20081,4/21/2020 9:24,,0,,"

When we train a model on the training data, sometimes the resulting score is very high, which makes us believe that our model is very good. But when predicting on actual data, the resulting score is very low. Why? This means that the model has overfit the training data and fails to predict anything useful on yet-unseen data.

That's why we have to check our model on test data (predict on the test data) and compare the accuracy between the training and test sets. If the two accuracies are not too far apart, then our model does not overfit.

Later, we can improve our evaluation with cross-validation (Reference), which splits the training data into n folds and holds out one fold at a time for validation. Then we take the average of the cross-validation scores.
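A minimal sketch of both checks with scikit-learn (the built-in dataset below is just a placeholder) is:

# Compare train vs. test accuracy, then average a 5-fold cross-validation score (sketch).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)
print('train accuracy:', model.score(X_train, y_train))  # often close to 1.0 (overfit)
print('test accuracy :', model.score(X_test, y_test))    # usually lower

print('cv accuracy   :', cross_val_score(DecisionTreeClassifier(), X, y, cv=5).mean())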

",36265,,,,,4/21/2020 9:24,,,,0,,,,CC BY-SA 4.0 20445,2,,20185,4/21/2020 9:26,,0,,"

You can cluster a data frame like this, although note that KNN itself is a supervised classifier; for clustering you would normally use something like k-means. There are a number of steps you must take: 1. Select the features you want to cluster on; for example, you can cluster on dob and age. 2. If there is data of type string, you have to convert it to integers. For easier clustering, you can use the scikit-learn library; you can access it at the following link: https://scikit-learn.org/stable/modules/clustering.html
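A minimal sketch of those two steps (the dataframe and the column names 'gender' and 'age' are hypothetical) could be:

# Sketch: encode a string column to integers, then cluster two features with k-means.
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.cluster import KMeans

df = pd.DataFrame({'gender': ['m', 'f', 'f', 'm', 'f'],
                   'age':    [23,  35,  31,  54,  29]})

df['gender'] = LabelEncoder().fit_transform(df['gender'])  # strings -> integers
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
df['cluster'] = kmeans.fit_predict(df[['gender', 'age']])
print(df)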

",36242,,,,,4/21/2020 9:26,,,,0,,,,CC BY-SA 4.0 20446,2,,20321,4/21/2020 9:31,,4,,"

Based on my experience, I did 2 tasks that proved to improve the accuracy/score of my model.

  1. Normalization
    • removing characters and symbols in a text
    • lowercase folding
  2. Stopwords removal (as what you asked)

These processes helped me improve my model, since stopwords added noise, as I am using word frequency counts to represent text.

So, based on what you asked: does stopword removal improve the score? It depends on your model. If you are using word counts to represent text, you may apply stopword removal to reduce noise when doing text classification.

",36260,,,,,4/21/2020 9:31,,,,0,,,,CC BY-SA 4.0 20450,2,,10,4/21/2020 9:35,,1,,"

Fuzzy Logic is a way of dealing with uncertainties, which is something that computers don't do naturally but humans do very well. The way we instantly think of dealing with things, and the way that computers tend to deal with them, is 'True' or 'False', '1' or '0'. For example, you might classify someone as alive ('True') or passed away ('False'). We only have two options; there is no in-between. With fuzzy logic, instead of going with 'True' or 'False', in between we have what's called a degree of truth. So, for example, when we look out the window, we might say ""It's a bit cloudy today, maybe it's a 0.5 'nice day' or a 0.7 'nice day'"". So, essentially, with fuzzy logic we always have grey areas, which vary from person to person.
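As a tiny sketch of that idea, here is a made-up 'nice day' membership function based on cloud cover (the breakpoints 0.2 and 0.9 are arbitrary choices for illustration):

# Toy fuzzy membership function: degree to which a day is 'nice', given cloud cover in [0, 1].
def nice_day_degree(cloud_cover):
    if cloud_cover <= 0.2:
        return 1.0                                # clearly a nice day
    if cloud_cover >= 0.9:
        return 0.0                                # clearly not a nice day
    return 1.0 - (cloud_cover - 0.2) / 0.7        # linearly decreasing degree of truth in between

print(nice_day_degree(0.55))  # ~0.5 'nice day'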

Source: https://youtu.be/r804UF8Ia4c

",36283,,36283,,4/21/2020 11:13,4/21/2020 11:13,,,,0,,,,CC BY-SA 4.0 20451,2,,20381,4/21/2020 9:39,,1,,"

These are maybe not papers, but the articles below are great.

I've put 2 links below to help you understand policy gradient algorithms, especially REINFORCE. Both articles have a good explanation of the algorithms and, beyond the explanation, also contain good examples. The authors really put effort into them, so I'm sure that if you've read them and really understood them, you can code it from scratch. Hope this helps you.

If you think this answer helps you, I would be grateful if you mark it as accepted. Just kidding, have a nice day....

(no, I'm not kidding)

https://towardsdatascience.com/policy-gradients-in-a-nutshell-8b72f9743c5d by Sanyam Kapoor on Towardsdatascience.com

https://medium.com/@jonathan_hui/rl-policy-gradients-explained-9b13b688b146 by Jonathan Hui on Medium.com

",36289,,36289,,4/21/2020 9:51,4/21/2020 9:51,,,,1,,,,CC BY-SA 4.0 20452,2,,20193,4/21/2020 9:40,,1,,"

If your data is labeled, but you only have a limited amount, you should use a classifier with high bias (for example, Naive Bayes). I'm guessing this is because a higher-bias classifier will have lower variance, which is good given the small amount of data.

Source : https://stackoverflow.com/questions/2595176/which-machine-learning-classifier-to-choose-in-general/15881662

",36262,,36262,,4/21/2020 11:29,4/21/2020 11:29,,,,0,,,,CC BY-SA 4.0 20457,2,,12558,4/21/2020 9:48,,3,,"

Here's a definition by Tom Mitchel (1997):

A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.

So, the programmer gives some instructions/rules to the computer, so that it can learn how to solve the problem from the given examples by themselves.

In some tasks, the computer can perform better than humans. For example, the Dota 2 bot (made by OpenAI) defeated the world champion.

With machine learning, many tasks can be automated. These systems also have the ability to improve their solutions by learning from the given data over time, and they can process and analyze large amounts of data well.

Machine learning is already applied in many fields, such as machine translation in Google Translate and face recognition, which is widely used today for security.

",36292,,2444,,4/21/2020 12:44,4/21/2020 12:44,,,,0,,,,CC BY-SA 4.0 20458,2,,7202,4/21/2020 9:49,,0,,"

Because you use various combinations of features whose dimensionality is between 1 and 14, a linear SVM (linear kernel) might be a good fit for your problem. You could try the LIBLINEAR library, but the data should be (roughly) linearly separable, otherwise the test accuracy will be very low.

",36295,,,,,4/21/2020 9:49,,,,0,,,,CC BY-SA 4.0 20460,2,,88,4/21/2020 9:52,,0,,"

I think that training deep neural networks can be difficult because of local optima in the objective function and because complex models are prone to overfitting. Unsupervised pre-training initializes a discriminative neural net from one that was trained using an unsupervised criterion, such as a deep belief network or a deep autoencoder. This method can sometimes help with both the optimization and the overfitting issues. As for why this actually works in deep learning: because there is no external teacher in unsupervised learning, it is really crucial for the learning to rely on the redundancies in the data.

source: https://metacademy.org/graphs/concepts/unsupervised_pre_training

",36282,,36282,,4/21/2020 11:14,4/21/2020 11:14,,,,0,,,,CC BY-SA 4.0 20464,2,,20081,4/21/2020 9:59,,0,,"

One way to test a trained model is by evaluating it on test data. By testing, we are able to check the accuracy. Whether your model is a good model or not depends heavily on this accuracy; if your accuracy is too low, or suspiciously high (e.g. up to 99%~100%), there could be some problem with your model.

For further information and a worked example on test data, you can access https://jakevdp.github.io/PythonDataScienceHandbook/05.05-naive-bayes.html

Hope this helps

",36254,,,,,4/21/2020 9:59,,,,0,,,,CC BY-SA 4.0 20465,2,,14178,4/21/2020 10:00,,0,,"

The purpose of clustering is to look for shared characteristics in the data and group the points into clusters. The number of clusters we take is based on how our clustering algorithm is evaluated.

For example, we can use the Elbow Method for evaluation: we take as the optimal number of clusters the point where the distortion starts decreasing in a roughly linear fashion. In the example I plotted, the optimal number of clusters for the data was 3.
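A minimal sketch of the computation behind such an elbow plot (using random blob data as a placeholder) is:

# Elbow method sketch: print the distortion (inertia, i.e. SSE) for several values of k.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)  # toy data with 3 true clusters

for k in range(1, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, km.inertia_)  # inertia drops sharply until k=3, then flattens (the 'elbow')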

",36265,,,,,4/21/2020 10:00,,,,0,,,,CC BY-SA 4.0 20471,2,,20310,4/21/2020 10:07,,0,,"

Machine learning uses algorithms to digest data sets, draw conclusions based on the analyzed data, and use these conclusions to complete the task in the most effective way. This ability is a fundamental difference between machine learning and a machine that has been programmed from the beginning with a fixed sequence of commands. Machine learning has the capability to accomplish tasks dynamically.

Deep learning, on the other hand, is one method of implementing machine learning that aims to mimic the workings of the human brain using artificial neural networks (ANNs). Deep learning uses a number of algorithmic 'neurons' that work together to determine and digest certain characteristics in a data set.

In contrast to general machine learning programs that are designed to accomplish certain tasks, deep learning programs are usually programmed with more complex capabilities to study, digest, and classify data.

A machine learning model requires data to learn and obtain parameter estimates, so the more data that can be used, the smarter the machine learning program will be. In addition, operating machine learning models, especially the deep networks used in deep learning, requires high computational power. This is because the deep learning model must operate many processes simultaneously, especially in the training phase, where the model must process very large amounts of data that serve as its reference.

",36264,,,,,4/21/2020 10:07,,,,0,,,,CC BY-SA 4.0 20472,2,,19881,4/21/2020 10:08,,0,,"

The training error (on any error metrics, not only for RMSE) will usually be less than the test error because the same data used to fit the model is employed to assess its training error. In other words, a fitted model usually adapts to the training data and hence its training error will be overly optimistic (too small). In fact, it is often the case that the training error steadily decreases as the size of the model increases.

",36280,,,,,4/21/2020 10:08,,,,0,,,,CC BY-SA 4.0 20475,2,,20185,4/21/2020 10:12,,0,,"

There are several algorithms for clustering, such as K-means, mean shift, hierarchical clustering, etc. Based on my experience, I would go with K-means (KNN is for classification). It is suitable for clustering your dataset, and there are several steps to follow:

  1. Determine which features you want to cluster
  2. Change your categorical data to numerical values
  3. (Optional) Drop columns that are not related to the features you have chosen
  4. Code your clustering yourself (e.g. determine the centroids from your dataset, calculate the Euclidean distance to each centroid, etc.) or, if you want to use a library, scikit-learn is the right place.

And to determine the quality of your clustering, you can measure the SSE (sum of squared errors of the items in each cluster), the inter-cluster distance, the intra-cluster distance for each cluster, the maximum radius, and the average radius.

",36285,,36285,,4/21/2020 10:26,4/21/2020 10:26,,,,0,,,,CC BY-SA 4.0 20476,2,,19881,4/21/2020 10:16,,1,,"

It is common for the root mean squared error (RMSE) to be greater on the test dataset than on the training dataset (this is equivalent to the accuracy/score being higher on the training dataset than on the test dataset). This normally happens because the training error is assessed on the same data the model has already learnt from, while the test dataset may contain unknown / uncommon data that produces more errors or misclassifications during prediction.

But if the RMSE on your test dataset is much higher than the RMSE on your training dataset, it may indicate that overfitting is happening.

There are a lot of reasons overfitting can happen. As referenced from https://elitedatascience.com/overfitting-in-machine-learning, some factors that cause overfitting are:

  • Complexity of the data (e.g. there are irrelevant input features). This can be solved by removing irrelevant input features.
  • Not enough training data. This can be solved by training with more data (even though this may not always succeed; sometimes it may add noise to the data), etc.
",36260,,36260,,4/21/2020 11:04,4/21/2020 11:04,,,,1,,,,CC BY-SA 4.0 20478,2,,20185,4/21/2020 10:21,,1,,"

Yes, you can use the KNN algorithm on the data (well, actually it is classification, not clustering, if you use KNN). But first you need to set one feature as the label, because KNN is a supervised learning method: it needs labeled data to train on. For example, you can use Gender as the label to classify the data. To determine the quality of the classification result, you can simply use accuracy.

If you don't want to use a label, you can use an unsupervised learning method like K-Means to do the clustering. Because it is unsupervised, it doesn't need a label, so you can use all of the features for the clustering task. For the k-means algorithm you can use the scikit-learn library or create it from scratch. To evaluate the results you can use the silhouette score or the elbow method (to find the optimal number of clusters).
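For example, a short scikit-learn sketch of k-means with the silhouette score (on placeholder blob data rather than your dataframe) is:

# K-means plus silhouette score (closer to 1 is better); blob data is a placeholder.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=500, centers=4, random_state=42)
labels = KMeans(n_clusters=4, n_init=10, random_state=42).fit_predict(X)
print('silhouette:', silhouette_score(X, labels))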

And don't forget to do data exploration, because it may increase the quality of the clustering results.

You can learn more about the differentiation between K-Means and KNN in the link below: https://pythonprogramminglanguage.com/how-is-the-k-nearest-neighbor-algorithm-different-from-k-means-clustering/

I hope this helps :)

",36276,,36276,,4/21/2020 11:09,4/21/2020 11:09,,,,0,,,,CC BY-SA 4.0 20479,2,,12322,4/21/2020 10:21,,0,,"

Sentiment Analysis -- the most common text classification task: it analyses an incoming message and tells whether the underlying sentiment is positive, negative, or neutral.

Emotion Recognition -- the task of identifying which specific emotion (such as joy, anger, fear, or sadness) is expressed in a message, going beyond simple positive/negative polarity.

Note: This explanation is based on the paper that I have read.

",36270,,22659,,4/22/2020 13:17,4/22/2020 13:17,,,,0,,,,CC BY-SA 4.0 20480,2,,19985,4/21/2020 10:22,,0,,"

AI can predict such things by using data that it previously stored by playing the game many times. Using that data, the AI can learn which action is best to take. For example, an AI can find the best path to evade all incoming bullets while shooting down all the enemies in a bullet-hell game.

",36277,,,,,4/21/2020 10:22,,,,0,,,,CC BY-SA 4.0 20481,2,,20310,4/21/2020 10:22,,0,,"

So, do I have to learn machine learning first before going into deep learning, or can I skip ML?

Quoting from Wikipedia, ""Deep learning (also known as deep structured learning or differential programming) is part of a broader family of machine learning methods based on artificial neural networks with representation learning.""

That being said, it is better to understand the fundamentals of machine learning first, so you will be able to fully understand how deep learning works and how to apply it effectively and efficiently. However, you can always skip straight to deep learning without major issues, as there are already a lot of libraries supporting deep learning in Python, such as TensorFlow.

What are the pros and cons of studying machine learning before deep learning?

pros:

Since deep learning is a subset of machine learning, having fundamental knowledge about machine learning and the other machine learning algorithms will be beneficial.

cons:

You might (or might not) waste your time and energy learning something you won't end up using.

",36245,,36245,,4/21/2020 10:31,4/21/2020 10:31,,,,0,,,,CC BY-SA 4.0 20482,2,,7202,4/21/2020 10:23,,1,,"

To train the SVM more quickly, you can try to use a linear SVM or use scaled data.

sources: https://www.researchgate.net/publication/2926909_A_Practical_Guide_to_Support_Vector_Classification_Chih-Wei_Hsu_Chih-Chung_Chang_and_Chih-Jen_Lin

",36241,,,,,4/21/2020 10:23,,,,1,,,,CC BY-SA 4.0 20483,2,,20231,4/21/2020 10:24,,1,,"

According to this Wikipedia article

The term "artificial general intelligence" was used as early as 1997 by Mark Gubrud, in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by Shane Legg and Ben Goertzel around 2002.

The research objective is much older, for example Doug Lenat's Cyc project (that began in 1984), and Allen Newell's Soar project are regarded as within the scope of AGI.

",36285,,-1,,6/17/2020 9:57,4/21/2020 12:04,,,,0,,,,CC BY-SA 4.0 20485,2,,7202,4/21/2020 10:27,,0,,"

You can speed up the training time by doing several steps:

  1. scale the values of your features
  2. use only a limited number of features, because this affects the training time; i.e. when you use 14 features, your model has 14 dimensions, which makes the computation more complex and time-consuming
  3. choose a proper kernel; a linear SVM kernel usually gives the fastest result (a short sketch combining these steps is shown below)
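For example (using a built-in scikit-learn dataset as a placeholder for your own features):

# Sketch: scale the features and use a linear SVM (LinearSVC uses the fast liblinear solver).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=10000))
clf.fit(X_train, y_train)
print('test accuracy:', clf.score(X_test, y_test))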
",36247,,2444,,4/21/2020 12:49,4/21/2020 12:49,,,,0,,,,CC BY-SA 4.0 20486,2,,18137,4/21/2020 10:28,,2,,"

There has been a lot of research on dynamic difficulty adjustment (DDA). I find this one quite illuminating: AI for Dynamic Difficulty Adjustment in Games. However, there are many factors to consider when trying to do dynamic difficulty adjustment. As explained in the paper above, one major problem is that it is sometimes hard to make sure the resulting model will still deliver the intended experience.

You can also read this paper about game design and DDA from ACE '05: The Case for Dynamic Difficulty Adjustment in Games.

So, in short, you can use a neural network -- or other learning methods -- to do dynamic difficulty adjustment. But it's more about the game design and the experience impact of your DDA.

",36238,,36238,,4/21/2020 10:34,4/21/2020 10:34,,,,0,,,,CC BY-SA 4.0 20487,2,,20369,4/21/2020 10:41,,1,,"

I am not sure if a pretrained machine learning model is actually protected by copyrights or not. Copyright protection exists to protect the creators of creative works from having their work ""stolen"", and I am not sure if training a ML model is an act of creativity.

That said, assuming that a pretrained ML model is actually protected by copyrights, then it is more likely that the model is a derived work of the data set used for training than that it is a derived work of the software that uses the model.

The software reads the model in as data, assuming that the software can be used with many differently trained models. In that case, the software and the model are considered completely independent works in the same way that MS Word and the documents you write with it are independent works for copyright.

Thus, if you want to publish the trained model with a license, I would recommend to use the BSD license that was also used for the training set.

",36305,,,,,4/21/2020 10:41,,,,2,,,,CC BY-SA 4.0 20489,2,,3847,4/21/2020 10:42,,1,,"

It's instructive to know the definition of the transistor.

Transistors are electronic components that are used as amplifiers, circuit breakers (switches), connectors, voltage controllers, for signal modulation, and for other functions.

(An analogy is that the transistor functions as an electric ""faucet"" that regulates input and output voltage.)

Andreas Kaplan and Michael Haenlein define artificial intelligence as:

""The ability of the system to interpret external data correctly, to learn from that data, and to use that learning to achieve certain goals and tasks through flexible adaptation.""

[Citation Needed]

Under this definition, transistors are not part of AI, because transistors do not learn and cannot adapt. Transistors are just electronic devices for regulating current.

",36301,,1671,,4/23/2020 23:00,4/23/2020 23:00,,,,0,,,,CC BY-SA 4.0 20491,2,,19844,4/21/2020 10:45,,0,,"

It depends on the dataset we have and the algorithm we use; usually, text preprocessing can help your model perform better. But some preprocessing methods can have no significant impact on accuracy. We need to choose the preprocessing methods that help us build a better-quality dataset to give to the model.

",36292,,,,,4/21/2020 10:45,,,,0,,,,CC BY-SA 4.0 20494,2,,12023,4/21/2020 10:51,,0,,"

I've read in some blogs that, for image caption generation, you can use the concepts of a CNN and an LSTM model and build a working image caption generator by implementing a CNN together with an LSTM. Before you build something based on this project, I think you need good knowledge of deep learning, Python, working in Jupyter notebooks, the Keras library, NumPy, and natural language processing, and make sure that you have installed all the necessary libraries: pip install tensorflow keras pillow numpy tqdm jupyterlab

source : https://data-flair.training/blogs/python-based-project-image-caption-generator-cnn/

",36306,,,,,4/21/2020 10:51,,,,0,,,,CC BY-SA 4.0 20497,2,,20185,4/21/2020 10:55,,0,,"

You can cluster the data frame with an unsupervised algorithm; for example, you can use the K-Means method. There are some options to eliminate features from your data frame, like del dataFrame['Column Name']. In unsupervised learning, the algorithm does not calculate the quality of the clusters for you, but you can set up a metric yourself to calculate the quality of each cluster, for example based on the number of points in each cluster. Actually, you can use the KNN algorithm with your data frame, but you would need to add a label, because KNN is a supervised learning method and its function is classification, not clustering. Hope this is useful.

",36294,,,,,4/21/2020 10:55,,,,0,,,,CC BY-SA 4.0 20501,2,,20193,4/21/2020 11:01,,0,,"

If you use just one feature in your dataset, I recommend the Naive Bayes classifier, because Naive Bayes is a method based on probability and statistics. We can also measure its accuracy on the training data.

",36314,,36314,,4/21/2020 11:07,4/21/2020 11:07,,,,0,,,,CC BY-SA 4.0 20502,2,,16812,4/21/2020 11:03,,1,,"

Of course, you can, and yes, this has been researched and done before. By using supervised learning, you give the machine some data, and it will try to figure out the best way to analyze and predict the next movement in the case of an inverted pendulum problem.

I found a complete paper for this problem: Neural network control of an inverted pendulum on a cart by Valeri Mladenov et al.

",36240,,2444,,1/16/2021 20:11,1/16/2021 20:11,,,,0,,,,CC BY-SA 4.0 20504,2,,18715,4/21/2020 11:05,,0,,"

Other tasks that RNNs are effective at solving are time-series predictions or other sequence predictions that aren’t image or tabular based.

",36279,,2444,,4/21/2020 15:27,4/21/2020 15:27,,,,0,,,,CC BY-SA 4.0 20506,2,,20254,4/21/2020 11:09,,1,,"

Yes. There's a library called OpenCV that can be used to measure distance between objects.

To find the distance between two objects, we must know the dimensions of a reference object. There are two important properties for the reference object:

  1. We know the dimensions of the object in certain units (inches, millimeters, etc.)
  2. The reference object needs to be easily found in the photo.

The dimensions of the reference object are used to measure the distance between other objects. To do that, we first need to compute the "pixels-per-metric" ratio, which determines how many pixels "fit" into a given unit of measurement.
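The arithmetic behind that ratio is simple; here is a tiny sketch with made-up pixel coordinates (in a real script these values would come from OpenCV contour/bounding-box measurements of the photo):

# Pixels-per-metric sketch with made-up numbers.
import math

ref_width_pixels = 150.0   # measured width of the reference object in the image
ref_width_inches = 3.0     # its known real-world width
pixels_per_metric = ref_width_pixels / ref_width_inches   # 50 pixels per inch

obj_a = (420.0, 310.0)     # centroids of the two objects, in pixel coordinates
obj_b = (720.0, 710.0)

pixel_distance = math.hypot(obj_b[0] - obj_a[0], obj_b[1] - obj_a[1])
print('distance in inches:', pixel_distance / pixels_per_metric)   # 500 px / 50 = 10 inches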

Go to this post for detailed explanation: https://www.pyimagesearch.com/2016/03/28/measuring-size-of-objects-in-an-image-with-opencv/

",36255,,22659,,4/22/2020 14:54,4/22/2020 14:54,,,,0,,,,CC BY-SA 4.0 20508,2,,20081,4/21/2020 11:15,,1,,"

In machine learning, we could use the entire dataset as training data for a model. But if we have a lot of data and we do not split it up, our model may not produce acceptable results.

Why?

Because if the model only ever studies the training data, it may become overfitted to it.

(Just like when you cram for a test, and get overloaded with too much information!)

What I mean is, your model is only familiar with the data you provide, not with new data.

So we need to use test data to evaluate our algorithm. Naive Bayes and Decision Tree classifiers are no exception, because they can produce a model overfitted to the training data.

So we test it on the test data to know how well the method works on the problem.

Most data scientists divide their data (with answers, that is historical data) into three portions: training data, cross-validation data and testing data. The training data is used to make sure the machine recognizes patterns in the data, the cross-validation data is used to ensure better accuracy and efficiency of the algorithm used to train the machine, and the test data is used to see how well the machine can predict new answers based on its training.


SOURCE: https://www.researchgate.net/post/What_is_training_and_testing_data_in_machine_learning

",36269,,1671,,4/23/2020 21:56,4/23/2020 21:56,,,,0,,,,CC BY-SA 4.0 20511,2,,20404,4/21/2020 11:19,,2,,"

A multilayer perceptron (MLP) can be used for image classification, but it has a lot of deficiencies compared to a convolutional neural network (CNN). However, if you compare an MLP and Fisher Faces, the better one is the MLP, because Fisher Faces becomes increasingly difficult as you add more individuals or classes. You can make a simple MLP model, because it has just 3 layers: an input layer, a hidden layer, and an output layer.
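Here is a minimal sketch of such a model that you could try (assuming, for illustration, 48x48 grayscale face images and 10 classes):

# Minimal 3-layer MLP sketch in Keras: input (flattened image), one hidden layer, output layer.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Flatten(input_shape=(48, 48)),        # input layer: flatten the image to a vector
    layers.Dense(128, activation='relu'),        # hidden layer
    layers.Dense(10, activation='softmax'),      # output layer: one probability per class
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
# model.fit(x_train, y_train, epochs=20, validation_split=0.1)  # x_train: array of shape (N, 48, 48)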

Any model you make will be based on the training data. I think that if you compose the training data with noise in the background and different light and shadows in your images, it will have better performance. But remember, if you are using an MLP for image classification, it can only predict an image in one spot; for example, if you train a model with the object in the middle of the image, your model cannot predict it when the object is moved to a different spot.

Here is a PDF showing the Fisher Faces performance:

",36298,,31870,,4/22/2020 14:54,4/22/2020 14:54,,,,4,,,,CC BY-SA 4.0 20513,2,,20193,4/21/2020 11:21,,0,,"

I think it doesn't matter whether you use one or more features in the dataset. You can compare the classification algorithms with respect to the accuracy each provides, e.g. compare Naive Bayes with an SVM; it depends on your problem.

",36299,,,,,4/21/2020 11:21,,,,0,,,,CC BY-SA 4.0 20514,2,,18634,4/21/2020 11:22,,2,,"

Word embeddings are the result of learning with deep learning algorithms, which can learn characteristics of the data through feature extraction. One implementation of word embeddings is word2vec.

Word2vec has two models, namely

  • Continuous Bag of Word (CBOW) and
  • Skip Gram Model.

Both of these methods use the concept of a neural network that maps words to target variables, which are also words. In these techniques, "weights" are used as word vector representations. CBOW tries to predict a word on the basis of its neighbors, while Skip Gram tries to predict the neighbors of a word.

In simpler words, CBOW tends to find the probability of a word occurring in a context, so it generalizes over all the different contexts in which a word can be used. Skip Gram, on the other hand, tends to study different contexts separately; it needs more data to be trained, but it captures more knowledge about the context.
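In recent versions of gensim, for instance, the two variants are selected with the sg flag; a minimal sketch on a toy corpus could be:

# Word2vec sketch with gensim (4.x parameter names); the corpus is a toy example.
from gensim.models import Word2Vec

sentences = [['the', 'king', 'rules', 'the', 'kingdom'],
             ['the', 'queen', 'rules', 'the', 'kingdom'],
             ['the', 'dog', 'chases', 'the', 'cat']]

cbow      = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0)  # CBOW
skip_gram = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)  # Skip-gram

print(skip_gram.wv['king'].shape)          # 50-dimensional word vector (the learned 'weights')
print(skip_gram.wv.most_similar('king'))   # nearest words in the (tiny) embedding space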

For further explanation, you can read the paper from Mikolov about word embedding and word2vec.

",36270,,2444,,1/18/2021 11:27,1/18/2021 11:27,,,,0,,,,CC BY-SA 4.0 20519,2,,18702,4/21/2020 11:32,,1,,"

One way AI can transform the customer experience is by providing personalized content. For example, when you see a video recommendation on YouTube, you know that it comes from AI technology. I recommend reading this article to learn how such systems work: A Sentiment-Enhanced Hybrid Recommender System for Movie Recommendation: A Big Data Analytics Framework (abstract, article).

",36319,,4709,,5/14/2020 21:57,5/14/2020 21:57,,,,1,,,,CC BY-SA 4.0 20520,2,,20408,4/21/2020 12:05,,1,,"

I feel I can answer this question, based on a web source that I found and read. You can use an autoencoder with a high-dimensional latent representation space; for example, an LSTM autoencoder, which is used for sequence or time-series data. In the source, they used a CNN autoencoder to denoise some synthetic noisy data they had generated, and then asked what the meaning of this latent representation space was for what they had done. The answer they got was: ""your input data is noisy sinewave data. you are not supposed to use a convolutional autoencoder for sequence data"".

https://stackoverflow.com/questions/59438488/cnn-autoencoder-latent-space-representation-meaning

Additional sources for answering this question: https://towardsdatascience.com/understanding-latent-space-in-machine-learning-de5a7c687d8d https://towardsdatascience.com/deep-inside-autoencoders-7e41f319999f

Hopefully this helps answer it :)

",36302,,,,,4/21/2020 12:05,,,,1,,,,CC BY-SA 4.0 20521,1,,,4/21/2020 12:54,,1,30,"

I'm a newbie in machine learning and I am interested in neural networks.

Are there any good research papers on image identification with limited data?

",36321,,2444,,4/21/2020 14:36,4/21/2020 14:36,Are there any good research papers on image identification with limited data?,,0,0,,,,CC BY-SA 4.0 20522,2,,20352,4/21/2020 13:27,,0,,"

Here I'm trying to answer: yes, you could use clustering to parse a PDF document. It is similar to how text mining works (you can read about it here).

As for the clustering method, you could probably use K-NN, K-Means, agglomerative hierarchical clustering, or other methods, based on your preference. Alternatively, you could use Naive Bayes.

",36309,,,,,4/21/2020 13:27,,,,1,,,,CC BY-SA 4.0 20523,1,,,4/21/2020 13:40,,1,305,"

I was exploring image/video compression using Machine Learning. In there I discovered that autoencoders are used very frequently for this sort of thing. So I wanted to enquire:-

  1. How fast are autoencoders? I need something that can compress an image in milliseconds.
  2. How many resources do they take? I am not talking about the training part but rather the deployment part. Could it work fast enough to compress a video on a Mi phone (like a Note 8, maybe)?

Do you know of any particularly new and interesting research in AI that has enabled a technique to do this quickly and efficiently?

",36322,,,,,9/22/2020 6:30,How fast are autoencoders?,,2,0,,,,CC BY-SA 4.0 20524,2,,20523,4/21/2020 14:12,,1,,"

Actually, it depends on the size of your AE: if you use a small AE with just 500,000 to 1M weights, inference can be stunningly fast. But even large networks can run very fast; using TensorFlow Lite, for example, models are compressed and optimized to run faster on edge devices (phones, for example, and other end-user devices). You can find a lot of videos on YouTube where people test inference with large networks like ResNet-50 or ResNet-101 on a Raspberry Pi or other SoC chips. Phones are comparable to that, but maybe not as optimized.

For example, I have a Jetson Nano (an Nvidia SoC that costs around 100 euros) and I tried to run inference with a large ResNet with around 30 million parameters on my full-HD webcam stream. Stable 30 FPS, so speaking in milliseconds it's around 33 ms per image.

To answer your question: yes, autoencoders can be fast, even very fast in combination with an optimized model and hardware. Autoencoder structures are quite simple; check out this Medium / Keras example.
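For a rough feeling of the numbers, here is a small sketch that builds a tiny dense autoencoder and times a single forward pass (the layer sizes are arbitrary, and the measured time will of course depend on your hardware):

# Tiny dense autoencoder plus a crude latency measurement (sketch).
import time
import numpy as np
from tensorflow.keras import layers, models

autoencoder = models.Sequential([
    layers.Dense(256, activation='relu', input_shape=(64 * 64,)),
    layers.Dense(32, activation='relu'),          # bottleneck: the 'compressed' representation
    layers.Dense(256, activation='relu'),
    layers.Dense(64 * 64, activation='sigmoid'),
])
autoencoder.compile(optimizer='adam', loss='mse')

x = np.random.rand(1, 64 * 64).astype('float32')
autoencoder.predict(x)                            # warm-up call

start = time.perf_counter()
autoencoder.predict(x)
print('one forward pass took %.1f ms' % ((time.perf_counter() - start) * 1000))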

",35557,,,,,4/21/2020 14:12,,,,1,,,,CC BY-SA 4.0 20525,2,,20075,4/21/2020 14:25,,1,,"

First: the RNN is the part of the neural network family for processing sequential data. The way in which an RNN is able to store information from the past is the loop in its architecture, which automatically keeps past information stored. Second: LSTM / GRU units add components that regulate the flow of information, referred to as gates, and the GRU has 2 gates, namely the reset gate and the update gate. If we want to make a decision about eating, as in the analogy above, the reset gate of the GRU determines how to combine the new input with past information, and the update gate determines how much past information should be kept. Source: https://link.springer.com/article/10.1007/s00500-019-04281-z

",36320,,,,,4/21/2020 14:25,,,,1,,,,CC BY-SA 4.0 20526,2,,20404,4/21/2020 14:40,,1,,"

It depends: if the faces are centered and have the same background, yes. You also need a lot of data.

If they are daily life images, then no. You will have very bad generalization.

",32390,,,,,4/21/2020 14:40,,,,0,,,,CC BY-SA 4.0 20527,1,20541,,4/21/2020 14:41,,1,256,"

The goal of a reinforcement learning agent is to maximize the expected return which is often a discounted sum of future rewards. The return indeed is a very noisy random variable as future rewards depend on the state-transition-probabilities and the often stochastic policy. Lots of trajectories have to be sampled to approximate its expected value.

The immediate reward indeed does not have these dependencies. Therefore the questions:

If we train a policy to maximize the immediate reward, will it also perform well in the long term? What properties would the reward function need to fulfill?

",35821,,,,,4/22/2020 9:32,Can optimizing for immediate reward result in a policy maximizing the return?,,1,0,,,,CC BY-SA 4.0 20528,1,,,4/21/2020 15:02,,1,33,"

What is the state of the art with respect to recognizing connotations in natural languages?

For instance:

  • Trump is a better president than Obama. [Praising]
  • Trump is the worst president since Obama. [Insulting]

or:

  • The rock star did not infect over 100 groupies. [Defending against rumor]
  • The rock star infected no more than 100 groupies. [Attacking (0 is no more than 100)]

In each example, both statements logically mean exactly the same thing, but any human hearing them would interpret them as having quite opposite meanings.

How well can current natural language processors recognize the difference between logically equivalent statements?

",36323,,2444,,6/22/2020 12:40,6/22/2020 12:40,How well can NLP techniques recognize connotations in natural languages?,,0,0,,,,CC BY-SA 4.0 20529,1,,,4/21/2020 16:11,,1,72,"

I have noticed that almost all tutorials take the number of neurons as a power of 2. Is there any proper mathematical and well-proven reason for that?

If you change it to some other odd value, you sometimes get a very long and weird error mentioning about a dozen things, with a traceback a page long. Is there any reason for that?

I tried it on a text-predicting RNN with some GRU and LSTM layers mixed (bidirectional). I changed the number of neuron units and it also resulted in an error. So, any ideas/theories?

",36322,,2444,,4/21/2020 20:53,4/21/2020 20:53,Why is the number of neurons used in various neural networks power of 2?,,0,2,,,,CC BY-SA 4.0 20530,2,,13986,4/21/2020 17:10,,0,,"

About Symmetry

What Pearl means by not being symmetric is this: $A=B$ and $B=A$ are exactly identical and lead to the same result in a non-causal scientific framework. For example, consider a very simple set of equations:

$$ \begin{align} Z = & \epsilon_z \\ X = & Z+\epsilon_X \\ Y = & 2X+Z + \epsilon_y \end{align} $$

From the algebra point of view, this is a set of 3 equations and 3 unknowns (consider the error terms to be known). You can shuffle the equations, swap the LHS and RHS of the equations, add or subtract them. In fact, these actions are exactly what is appreciated when solving a linear equation system, right?! So the system above is identical to this:

$$ \begin{align} \epsilon_z = & Z \\ X = & Y - 2Z - \epsilon_Y - \epsilon_X \\ Y = & 2X+Z + \epsilon_y \\ \end{align} $$

But I brought you this example because the first one is the set of equations for a very fundamental causal structure called confounding, or the common-cause structure, where $X$ is the exposure (or treatment), $Y$ is the outcome, and $Z$ is the parent of both $X$ and $Y$. The order of the equations and the RHS/LHS variables in these structural equation models actually means something: first you calculate $Z$, then $X$, then $Y$. In this light, the second system points to a completely different causal structure (or, to be honest, it does not even look like a legitimate structural equation model).

About sufficiency of Probability Theory for Causal Inference

First, I would like to say this would be a very ironic question to ask Pearl, as he also mentioned in one of his interviews, because he has made a significant contribution to the realm of probability theory with the Bayesian network and Bayesian inference framework! And now it is like he is arguing against himself. But he is, for a good reason.

Why probability is not enough can be, and has been, answered with formal proofs, equations and explanations. But there are also a lot of examples that will draw your attention to this truth. I recommend you read about Simpson's Paradox; it is a great example of how probability theory alone is incapable of causal inference.

The fact that probability is not enough mirrors the idea that correlation is not necessarily causation, and that is true. Again, read about spurious correlations and you will get it. Just think of this funny example: over the year, the crime rate and the amount of ice cream sales are highly correlated, THUS we must ban ice cream sales to control the crime rate. The problem with this dumb inference is that we have not taken into account the common cause, heat (or summer), which accounts for the perceived correlation.

",6258,,,,,4/21/2020 17:10,,,,0,,,,CC BY-SA 4.0 20531,1,,,4/21/2020 19:17,,5,189,"

I'm trying to train a neural net to choose a subset from some list of objects. The input is a list of objects $(a,b,c,d,e,f)$ and for each list of objects the label is a list composed of 0/1 - 1 for every object that is in the subset, for example $(1,1,0,1,0,1)$ represents choosing $a,b,d,f$. I thought about using MSE loss to train the net but that seemed like a naive approach, is there some better loss function to use in this case?

",36083,,36083,,4/22/2020 19:25,4/1/2021 21:17,Loss function for choosing a subset of objects,,1,0,,,,CC BY-SA 4.0 20532,1,20533,,4/21/2020 20:03,,3,1536,"

I am trying to implement value and policy iteration algorithms. My value function from policy iteration looks vastly different from the values from value iteration, but the policy obtained from both is very similar. How is this possible? And what could be the possible reasons for this?

",30910,,2444,,4/21/2020 21:28,10/23/2021 6:43,Why do value iteration and policy iteration obtain similar policies even though they have different value functions?,,2,0,,,,CC BY-SA 4.0 20533,2,,20532,4/21/2020 21:27,,2,,"

Both value iteration (VI) and policy iteration (PI) algorithms are guaranteed to converge to the optimal policy, so it is expected that you get similar policies from both algorithms (if they have converged).

However, they do this differently. VI can be seen as truncated version of PI.

Let me first illustrate the pseudocode of both algorithms (taken from Barto and Sutton's book), which I suggest you get familiar with (but you are probably already familiar with them if you implemented both algorithms).

As you can see, policy iteration updates the policy multiple times, because it alternates a step of policy evaluation and a step of policy improvement, where a better policy is derived from the current best estimate of the value function.

On the other hand, value iteration updates the policy only once (at the end).

In both cases, the policies are derived from the value functions in the same way. So, if you obtain similar policies, you may think that they are necessarily derived from similar final value functions. However, in general, this may not the case, and this is actually the motivation for the existence of value iteration, i.e. you may derive an optimal policy from an non-optimal value function.

Barto and Sutton's book provide an example. See figure 4.1 on page 77 (p. 99 of the pdf). For completeness, here's a screenshot of the figure.

",2444,,2444,,4/21/2020 21:35,4/21/2020 21:35,,,,4,,,,CC BY-SA 4.0 20534,2,,1,4/22/2020 0:49,,1,,"

It's a fancy name for the multivariable chain rule.

",32390,,,,,4/22/2020 0:49,,,,2,,,,CC BY-SA 4.0 20535,2,,20531,4/22/2020 0:54,,4,,"

The choice of the loss function depends primarily on the type of task you're tackling: classification or regression. Your problem is clearly a classification one since you have classes to which a given input can either belong or not. More specifically, what you're trying to do is multi-label classification, which is different from multi-class classification. The difference is important to stress out and it consists in the format of the target labels.

# Multi-class --> one-hot encoded labels, only 1 label is correct   
  
[1,0,0], [0,1,0], [0,0,1]

# Multi-label --> multiple labels can be correct
 
[1,0,0], [1,1,0], [1,1,1], [0,1,0], [0,1,1], [0,0,1]

MSE is used when continuous values are predicted for some given inputs, therefore it belongs to the loss functions suitable for regression and it should not be used for your problem.

Two loss functions that you could apply are Categorical Cross Entropy or Binary Cross Entropy. Despite being both based on cross-entropy, there is an important difference between them, consisting of the activation function they require.

Binary Cross Entropy

$$L(y, \hat{y})=-\frac{1}{N} \sum_{i=1}^{N}\left(y_{i} * \log \left(\hat{y}_{i}\right)+(1-y_{i}) * \log \left(1-\hat{y}_{i}\right)\right)$$

Despite the name that suggests this loss should be used only for binary classification, this is not strictly true and actually, this is the loss function that conceptually is best suited for multi-label tasks.

Let's start with the binary classification case. We have a model that returns a single output score, to which the sigmoid function is applied in order to constrain the value between 0 and 1. Since we have a single score, the resulting value can be interpreted as a probability of belonging to one of the two classes, and the probability of being to the other class can be computed as 1 - value.

What if we have multiple output scores, for example, 3 nodes for 3 classes? In this case, we still could apply the sigmoid function, ending up with three scores between 0 and 1.

The important aspect to capture is that since the sigmoid treats each output node independently, the 3 scores would not sum up to 1, so they will represent 3 different probability distributions rather than a unique one. This means that each score after the sigmoid represents a distinct probability of belonging to that specific class. In the example above, the prediction would be true for the two labels with a score higher than 0.5 and false for the remaining label. This also means that 3 different losses have to be computed, one for each possible output. In practice, what you'll be doing is to solve n binary classification problems, where n is the number of possible labels.
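In Keras, for example, a minimal multi-label setup along these lines (the number of features and labels, and the random data, are just assumptions for illustration) would be:

# Multi-label sketch: one sigmoid output per label + binary cross-entropy.
import numpy as np
from tensorflow.keras import layers, models

n_features, n_labels = 20, 6   # e.g. 6 objects that can each be in or out of the subset

model = models.Sequential([
    layers.Dense(64, activation='relu', input_shape=(n_features,)),
    layers.Dense(n_labels, activation='sigmoid'),   # independent probability per label
])
model.compile(optimizer='adam', loss='binary_crossentropy')

# Toy data: each target row is a multi-hot vector such as [1, 1, 0, 1, 0, 1].
X = np.random.rand(256, n_features)
Y = np.random.randint(0, 2, size=(256, n_labels))
model.fit(X, Y, epochs=3, verbose=0)

preds = (model.predict(X[:3]) > 0.5).astype(int)    # threshold each label at 0.5
print(preds)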

Categorical Cross Entropy

$$L(y, \hat{y})=-\sum_{j=0}^{M} \sum_{i=0}^{N}\left(y_{i j} * \log \left(\hat{y}_{i j}\right)\right)$$

In categorical cross-entropy we apply the softmax function to the output scores of our model, to constrain them between 0 and 1 and to turn them into a probability distribution (they all sum to 1). The important thing to notice is that in this case, we end up with a unique distribution because, unlike the sigmoid function, the softmax consider all output scores together (they are summed in the denominator).

This implies that categorical cross-entropy is best suited for multi-class tasks, in which we end up with a single true prediction for each input instance. Nevertheless, this loss can also be applied also for multi-label tasks, as done in this paper. To do so, the authors turned each target vector into a uniform probability distribution, which means that the values of the true labels are not 1 but 1/k, with k being the total number of true labels.

# Example of target vector tuned into uniform probability distribution
[0, 1, 1] --> [0, .5, .5]  
[1, 1, 1] --> [.33, .33, .33]

Note also that in the above-mentioned paper the authors found that categorical cross-entropy outperformed binary cross-entropy, even this is not a result that holds in general.

Lastly, there are other loss functions that you could try, which are not based on cross-entropy, for example:

Hamming-Loss

It computes the fraction of wrong predicted labels

$$\frac{1}{|N| \cdot|L|} \sum_{i=1}^{|N|} \sum_{j=1}^{|L|} \operatorname{xor}\left(y_{i, j}, z_{i, j}\right)$$

Exact Match Ratio

Only predictions for which all target labels were correctly classified are considered correct.

$$ExactMatchRatio,\space M R=\frac{1}{n} \sum_{i=1}^{n} I\left(Y_{i}=Z_{i}\right)$$

",34098,,36737,,4/1/2021 21:17,4/1/2021 21:17,,,,14,,,,CC BY-SA 4.0 20540,1,20548,,4/22/2020 6:59,,4,180,"

What are examples of machine learning techniques (i.e. models, algorithms, etc.) inspired (to different extents) by neuroscience?

Particularly, I'm interested in recent developments, say less than 10 years old, that have their basis in neuroscience to some degree.

",32621,,2444,,4/22/2020 13:25,4/22/2020 16:35,What are examples of machine learning techniques inspired by neuroscience?,,1,1,,,,CC BY-SA 4.0 20541,2,,20527,4/22/2020 7:59,,1,,"

If we train a policy to maximize the immediate reward, will it also perform well in the long term?

In general, no. The delay of long term reward in real world problems, and often a lack of easy-to-compute heuristics, is a key motivation for developing reinforcement learning in the first place.

It is easy to construct a counter-example to demonstrate this. Any state where the transitions into it are high and positive, but the transitions out of it are higher and negative would ""trap"" an agent that only considered immediate reward. More complex traps include high immediate gains but ending an episode vs lower gains that continue for longer.

Many real-world environments have sparse rewards where it is not possible to tell the difference between two action choices by immediate reward, but the consequences of being in one part of the state space rather than another early in a trajectory are critical. Consider any two-player strategy board game for instance, where the only goal is to win at the end. Only the last move in such a game is associated with an immediate reward, but there are often important differences between early moves.

What properties would the reward function need to fulfill?

In all states, the expected immediate reward for taking the correct long term action would need to be higher than the expected immediate reward for any other action choice.

Solving a problem framed in this way could be done with discount factor $\gamma=0$. If the action choices were always the same and valid in each state, then the problem could also be simplified to a contextual bandit, where the fact that the choices exist within a larger trajectory is not relevant.

In practice you can construct environments like this. Simple ones are possible to do manually. Doing that is similiar to adding a heuristic function for search, but with different restrictions. For many search algorithms, admissible heuristic functions are allowed to over-estimate future gains (or under-estimate costs), because a planning/search algorithm will resolve longer-term differences. In your case, you can maybe consider stochastic reward functions, but the expected reward for the correct action must always be highest.

Needing to know the correct optimal action in the first place is clearly a circular problem - if you knew it already you would have no need to perform reinforcement learning to discover the optimal policy. An exception might be if you constructed an easy environment in order to test an algorithm, and prove that it could find the optimal policy. Although even then usually you are interested in the algorithm solving a harder variant of your problem than one you have deliberately constructed to be easy.

In brief, there is no way to create a shortcut here and avoid the need to solve a harder RL problem.

",1847,,1847,,4/22/2020 9:32,4/22/2020 9:32,,,,4,,,,CC BY-SA 4.0 20543,1,,,4/22/2020 11:50,,1,30,"

Recently, I read many papers on variance and bias, but I am still confused by the two notions. What do the variance and bias belong to: the policy or the value? If the variance or bias is large or small, what results will we get?

",8415,,2444,,4/22/2020 11:59,4/22/2020 11:59,Do the variance and bias belong to the policy or value functions?,,0,0,,,,CC BY-SA 4.0 20544,1,,,4/22/2020 12:05,,1,62,"

I am developing a DCGAN using the this tutorial in PyCharm. As my usage of this tutorial suggests, I am quite new to DCGANs as I've previously only had a few experiences with machine learning algorithms on classifying problems. My goal is to feed my DCGAN a dataset of paintings of a specific painter, and get 'new' paintings in return. Needless to say, a painter does not paint thousands of paintings in his life, leaving me with a dataset of around 60 paintings. One of the smallest datasets I have ever worked with. I have two, related, questions:

  1. Is it realistic to properly train a DCGAN on this type of dataset? If not, would there be any alternative you would suggest?
  2. What would be a good set of parameters to start of from to properly train this DCGAN?

Thanks in advance!

",36354,,,,,4/22/2020 12:05,Using DCGAN on a (very small) dataset of art,,0,0,,,,CC BY-SA 4.0 20545,2,,6196,4/22/2020 12:05,,13,,"

This Tutorial by OpenAI offers a great comparison of different RL methods.
I'll try to summarize the differences between Q-Learning and Policy Gradient methods:

  1. Objective Function

    1. In Q-Learning we learn a Q-function that satisfies the Bellman (Optimality) Equation. This is most often achieved by minimizing the Mean Squared Bellman Error (MSBE) as the loss function. The Q-function is then used to obtain a policy (e.g. by greedily selecting the action with maximum value).
    2. Policy Gradient methods directly try to maximize the expected return by taking small steps in the direction of the policy gradient. The policy gradient is the derivative of the expected return w.r.t. the policy parameters.
  2. On- vs. Off-Policy

    1. The Policy Gradient is derived as an expectation over trajectories ($s_1,a_1,r_1,s_2,a_2,...,r_n$), which is estimated by a sample mean. To get an unbiased estimate of the gradient, the trajectories have to be sampled from the current policy. Thus, policy gradient methods are on-policy methods.
    2. Q-Learning only makes sure to satisfy the Bellman-Equation. This equation has to hold true for all transitions. Therefore, Q-learning can also use experiences collected from previous policies and is off-policy.
  3. Stability and Sample Efficiency

    1. Directly optimizing the return and thus the actual performance on a given task, Policy Gradient methods tend to more stably converge to a good behavior. Indeed being on-policy, makes them very sample inefficient. Q-learning find a function that is guaranteed to satisfy the Bellman-Equation, but this does not guarantee to result in near-optimal behavior. Several tricks are used to improve convergence and in this case, Q-learning is more sample efficient.
",35821,,,,,4/22/2020 12:05,,,,4,,,,CC BY-SA 4.0 20546,1,,,4/22/2020 12:14,,3,154,"

I understand that with a fully observable environment (chess / go etc) you can run an MCTS with an optimal policy network for future planning purposes. This will allow you to pick actions for gameplay, which will result in max expected return from that state.

However, in a partially observable environment, do we still need to run MCTS during gameplay? Why can't we just pick the max action from the trained optimal policy given the current state? What utility does MCTS serve here?

I am new to reinforcement learning and am trying to understand the purpose of MCTS / planning in partially observable environments.

",36355,,2444,,4/22/2020 13:41,4/22/2020 13:41,Is Monte Carlo tree search needed in partially observable environments during gameplay?,,0,0,,,,CC BY-SA 4.0 20547,1,,,4/22/2020 12:35,,1,29,"

This article states that:

One of the algorithms that photonics is very good at implementing is matrix multiplication

But how are parameters stored and updated(in backpropagation)?

One more serious problem is that there are nonlinear operations in neural networks, then how does the photonic neural network deal with activation functions?

",5351,,,,,4/22/2020 12:35,How does optical computing work and deal with nonlinearity?,,0,0,,,,CC BY-SA 4.0 20548,2,,20540,4/22/2020 13:06,,1,,"

There is a category of neural networks that more closely attempt to mimic biological neural networks by incorporating also time (i.e. not all neurons fire at the same time). They are called spiking neural networks (SNNs) and their name comes from the fact that they use spiking neurons (i.e. neurons that fire discrete signals and affect other neurons at possibly different times).

SNNs are mainly used in neuroscience, and aren't commonly used in machine learning because they currently have some apparent limitations (e.g. non-differentiability, so gradient descent and back-propagation can't be applied, but GD and BP aren't really biologically realistic anyway, although some people already tried to apply GD to SNNs) and their performance isn't still as good as the performance of traditional deep learning models, which make them not so appealing to the deep learning community (which is currently mainly driven by performance and utility). Nevertheless, the performance gap between traditional neural networks and spiking neural networks is decreasing. See Deep Learning in Spiking Neural Networks (2019) by Amirhossein Tavanaei et al. for more details.

There are already commercial implementations of a hardware-accelerated SNNs (e.g. BrainChip provides this service). These hardware-accelerators are often called neuromorphic chips (or processors) and all computing based on SNNs or processors that attempt to implement biological neural networks is known as neuromorphic computing.

There's also the related area called reservoir computing, which studies neural networks (such as liquid-state machines or echo state machines) that make use of reservoirs (which are fixed during learning) to attempt e.g. to improve training efficiency. See An overview of reservoir computing: Theory, applications and implementations (2007) by Benjamin Schrauwen et al. for an overview.

Numenta (and, in particular, Jeff Hawkins, the founder of Numenta and author of an interesting book called On Intelligence) has also been studying neuroscience for a long time in order to develop models and theories of human intelligence. They call their new theory The Thousand Brains Theory of Intelligence, which is inspired by biological grid cells. This is also related and similar to capsule networks (often associated with Hinton).

",2444,,2444,,4/22/2020 16:35,4/22/2020 16:35,,,,0,,,,CC BY-SA 4.0 20551,2,,20283,4/22/2020 13:29,,1,,"

Each episode you will calculate the return, and you will then update the action value $Q(s,a)$ as the running average over episodes. Using the blackjack example from OpenAI Gym and a discount factor of 1, you get the following

episode 1 [{'state': (22, 10, False), 'reward': -1, 'action': 1}, {'state': (17, 10, False), 'reward': 0, 'action': 1}, {'state': (12, 10, False), 'reward': 0.0, 'action': 1}]

$Q((22, 10, False),1)=-1$

$Q((17, 10, False),1)=-1$

$Q((12, 10, False),1)=-1$

episode 2 [{'state': (21, 10, False), 'reward': 1, 'action': 0}, {'state': (17, 10, False), 'reward': 0, 'action': 1}, {'state': (12, 10, False), 'reward': 0.0, 'action': 1}]

$Q((21, 10, False),0)=1$

$Q((17, 10, False),1)=0$

$Q((12, 10, False),1)=0$

For $Q((17, 10, False),1)$ and $Q((12, 10, False),1)$, the value is the average return over the episodes, i.e. the average of $-1$ from the first episode and $1$ from the second, which gives $0$.
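
Here is a minimal sketch of this averaging (every-visit Monte Carlo with $\gamma = 1$; the function name and the episode format, a list of dictionaries listed from the last step back to the first as above, are just assumptions for illustration):

from collections import defaultdict

def mc_prediction(episodes, gamma=1.0):
    '''Estimate Q(s, a) as the average return observed for each (state, action) pair.'''
    returns = defaultdict(list)
    for episode in episodes:
        g = 0.0
        # steps are stored from the final step back to the first, so the
        # discounted return can be accumulated in a single forward pass
        for step in episode:
            g = gamma * g + step['reward']
            returns[(step['state'], step['action'])].append(g)
    # Q(s, a) is the average of all returns recorded for (s, a)
    return {sa: sum(rs) / len(rs) for sa, rs in returns.items()}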

",21565,,21565,,4/22/2020 15:23,4/22/2020 15:23,,,,1,,,,CC BY-SA 4.0 20553,1,20554,,4/22/2020 15:17,,2,130,"

Is the philosophy between Bellman equations and minimax the same?

Both the algorithms look at the full horizon and take into account potential gains (Bellman) and potential losses (minimax).

However, do the two differ beyond the obvious fact that Bellman equations use discounted potential rewards, while minimax deals with potential losses without discounting? Is this enough to say they are similar in philosophy, or are they dissimilar? If so, in what sense?

",36047,,2444,,4/22/2020 15:42,4/22/2020 15:42,How are the Bellman optimality equations and minimax related?,,1,0,,,,CC BY-SA 4.0 20554,2,,20553,4/22/2020 15:31,,0,,"

They have similar philosophies, in the sense that minimax and algorithms based on the Bellman optimality equations are used to solve optimization problems, but they are also different because they solve different problems.

Minimax (at least, the minimax version that I am aware of) is typically used to solve two-player games (e.g. chess, tic-tac-toe, etc.), while Bellman optimality equations (I assume you are referring to the Bellman equations that algorithms such as policy iteration are based on) do not assume the existence of two players (unless you consider the environment a player).
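
Written side by side (in a simplified form), the structural similarity and the difference become more visible: both are recursive optimality conditions that look one step ahead, but the Bellman optimality equation optimizes an expectation over a (possibly stochastic) environment with discounting, while minimax optimizes against an adversary that picks the worst case for you:

$$v_*(s) = \max_a \sum_{s', r} p(s', r \mid s, a)\left[r + \gamma \, v_*(s')\right]$$

$$\text{minimax}(s) = \begin{cases} \max_a \text{minimax}(\text{succ}(s, a)) & \text{if it is the maximizing player's turn} \\ \min_a \text{minimax}(\text{succ}(s, a)) & \text{if it is the minimizing player's turn} \end{cases}$$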

",2444,,2444,,4/22/2020 15:37,4/22/2020 15:37,,,,2,,,,CC BY-SA 4.0 20555,1,,,4/22/2020 17:04,,0,181,"

Say the game is tic tac toe. I found two possible output layers:

  1. Vector of length 9: each float of the vector represents 1 action (one of the 9 boxes in Tic Tac Toe). The agent will play the corresponding action with the highest value. The agent learns the rules through trial and error. When the agent tries to make an illegal move (i.e. placing a piece on a box where there is already one), the reward will be harshly negative (-1000 or so).
  2. A single float: the float represents who is winning (positive = ""the agent is winning"", negative = ""the other player is winning""). The agent does not know the rules of the game. Each turn the agent is presented with all the possible next states (resulting from playing each legal action) and it chooses the state with the highest output value.

What other options are there?

I like the first option because it's cleaner, but it's not feasible for games that have thousands or millions of actions. Also, I am worried that the agent might not really learn the rules. E.g. say that in state S the action A is illegal. Say that state R is extremely similar to state S, but action A is legal in state R (and maybe in state R action A is actually the best move!). Isn't there the risk that, by learning not to play action A in state S, it will also learn not to play action A in state R? Probably not an issue in Tic Tac Toe, but likely one in any game with more complex rules. What are the disadvantages of option 2?

Does the choice depend on the game? What's your rule of thumb when choosing the output layer?

",10813,,10813,,4/23/2020 12:30,5/30/2020 19:02,RL: What should be the output of the NN for an agent trying to learn how to play a game?,,2,0,,,,CC BY-SA 4.0 20556,1,,,4/22/2020 17:06,,1,69,"

I've just learned about Dueling Network Architectures to estimate $Q$-values and am wondering why this architecture is not used more often in deep RL algorithms? DDPG and TD3 estimate the $Q$-function using Double Q Learning instead of the empirically better Dueling Approach.

",35821,,,,,4/22/2020 17:06,Why are Dueling Q Networks not used more often to approximate Q-values in reinforcement learning algorithms?,,0,0,,,,CC BY-SA 4.0 20559,1,20560,,4/22/2020 18:18,,0,2061,"

On this page, it is said:

In Single Perceptron / Multi-layer Perceptron(MLP), we only have linear separability because they are composed of input and output layers(some hidden layers in MLP)

What does it mean? I thought the MLP was a non-linear classifier. Could you explain it to me?

",36363,,2444,,4/22/2020 18:59,4/22/2020 19:01,Why can't MLPs perform non-linear regression and classification?,,1,0,,,,CC BY-SA 4.0 20560,2,,20559,4/22/2020 18:55,,1,,"

In Single Perceptron / Multi-layer Perceptron(MLP), we only have linear separability because they are composed of input and output layers(some hidden layers in MLP)

This is wrong.

A multi-layer perceptron (i.e. a feed-forward neural network) with non-linear activation functions can perform non-linear classification and regression. In fact, an MLP with one hidden layer containing a sufficiently large number of hidden nodes, each of them with a sigmoid activation (which is a non-linear function), can approximate any continuous function (up to a given approximation error).

On the other hand, perceptrons can't do that. They perform only linear classification/regression.
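
As a concrete illustration (a minimal sketch assuming scikit-learn is available; the exact scores can vary with the random seed), a perceptron cannot fit the XOR problem, which is not linearly separable, whereas a small MLP with a non-linear activation usually fits it perfectly:

from sklearn.linear_model import Perceptron
from sklearn.neural_network import MLPClassifier

# XOR: the classic non-linearly-separable problem
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

perceptron = Perceptron().fit(X, y)
mlp = MLPClassifier(hidden_layer_sizes=(8,), activation='tanh',
                    solver='lbfgs', max_iter=1000, random_state=0).fit(X, y)

print('perceptron accuracy:', perceptron.score(X, y))  # no linear boundary can exceed 0.75
print('mlp accuracy:', mlp.score(X, y))                # usually 1.0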

I thought the MLP was a non-linear classifier.

You're right, unless the MLP only uses linear activation functions. In that case, it won't be able to perform non-linear classification/regression.

(P.S.: I suggest you always question the truth and correctness of what you read on the web, especially, on sites like Medium, as you actually did!)

",2444,,2444,,4/22/2020 19:01,4/22/2020 19:01,,,,4,,,,CC BY-SA 4.0 20563,1,,,4/22/2020 22:15,,1,23,"

For example, an RL algorithm that gains points when a rat presses a lever and loses points when it dispenses a pellet, water, treat, and/or sugar water. After a few days of controlling the rewards given to a rat, all rewards are stopped, and the longer/more times the rat presses the lever before giving up, the higher the score.

This would be a situation in which both the inputs and the outputs are discrete, with very low data density over time and with outputs having very long-term effects on the environment.

What kind of RL architecture would be appropriate here?

",36368,,1671,,4/23/2020 22:40,4/23/2020 22:40,Has there been any work done on AI-driven operant conditioning?,,0,0,,,,CC BY-SA 4.0 20564,1,,,4/22/2020 22:51,,1,168,"

I am currently using TensorFlow and have simply been trying to train a neural network directly against a large continuous data set, e.g. $y = [0.014, 1.545, 10.232, 0.948, ...]$ corresponding to different points in time. The loss function in the fully connected neural network (input layer: 3 nodes, 8 inner layers: 20 nodes each, output layer: 1 node) is just the squared error between my prediction and the actual continuous data. It appears the neural network is able to learn the high-magnitude data points relatively well (e.g. Figure 1 at time = 0.4422), but the smaller-magnitude data points (e.g. Figure 2 at time = 1.1256) are quite poorly learned, without any sharpness, and I want to improve this.

I've tried experimenting with different optimizers (e.g. mini-batch with Adam, full batch with L-BFGS), compared reduce_mean and reduce_sum, normalized the data in different ways (e.g. median, subtract the sample mean and divide by the standard deviation, divide the squared loss term by the actual data), and attempted to simply make the neural network deeper and train for a very long period of time (e.g. 7+ days). But after approximately 24 hours of training and the aforementioned tricks, I am not seeing any significant improvements in the predicted outputs, especially for the small-magnitude data points.


Figure 1


Figure 2


Therefore, do you have any recommendations on how to improve training particularly when there are different data points of varying magnitude I am trying to learn? I believe this is a related question, but any explicit examples of implementations or techniques to handle varying orders of magnitude within a single large data set would be greatly appreciated.

",21895,,,,,1/13/2023 6:06,How to improve neural network training against a large data set of points with varying magnitude,,3,0,,,,CC BY-SA 4.0 20567,1,20574,,4/23/2020 2:24,,1,218,"

I once heard that the problem of approximating an unknown function can be modeled as a communication problem. How is this possible?

",36175,,2444,,11/22/2020 21:50,11/23/2020 11:05,How can a machine learning problem be reduced as a communication problem?,,1,0,,,,CC BY-SA 4.0 20570,1,,,4/23/2020 9:39,,3,584,"

I'm working on a RL problem with the following properties:

  1. The rewards are extremely sparse i.e. all rewards are 0 except the terminal non-zero reward. Ideally I would not use any reward engineering as that would lead to a different optimization problem.
  2. Actions are continuous. Discretization should not be used.
  3. The amount of stochasticity in the environment is very high i.e. for a fixed deterministic policy the variance of returns is very high.

More specifically, the RL agent represents the investor, the terminal reward represents the utility of the terminal wealth (hence the sparsity), actions represent portfolio positions (hence the continuity) and the environment represents the financial market (hence the high stochasticity).

I've been trying to use DDPG with a set of ""commonly used"" hyperparameters (as I have no idea how to tune them besides experimentation, which takes too long), but so far (after 10000 episodes) it seems that nothing is happening.

My questions are the following:

  1. Given the nature of the problem I'm trying to solve (sparse rewards, continuous actions, stochasticity) is there a particular (D)RL algorithm that would lend itself well to it?
  2. How likely is it that DDPG simply won't converge to a reasonable solution (due to the peculiarities of the problem itself) no matter what set of hyperparameters I choose?
",26195,,,,,1/18/2023 17:52,"Appropriate algorithm for RL problem with sparse rewards, continuous actions and significant stochasticity",,1,3,0,,,CC BY-SA 4.0 20571,2,,4766,4/23/2020 9:50,,2,,"

Geoffrey Hinton has started working on Thought Vectors at Google: https://en.wikipedia.org/wiki/Thought_vector

The basic idea is similar to his original idea with Capsule Networks, where activation happens through vectors instead of scalars, which allows the network to capture transformations: for example, while a traditional CNN needs to see an object from all perspectives in three-dimensional space, capsule networks are much better able to extrapolate transformations such as stretching.

Thought Vectors guide NLP similarly; one could say that there are two grammars, the linguistic grammar and the narrative grammar, the latter being more universal (Vladimir Propp, Joseph Campbell, John Vervake). While dependency grammars do a great job at capturing linguistic grammar, we lack tools for meaning extraction, which is narrative-bound. Thus Thought Vectors could, at least in theory, give us a framework for matching the meaning of a word within a context, rather than just lexically and grammatically trying to approximate the meaning through average co-occurrences.

Neural networks with Thought Vectors would be highly complex and beyond our computational resources today (Hinton predicts in one paper that we would get there around 2035). However, one could already conduct empirical research by giving Thought Vectors a heuristic structure, utilizing narrative systems that are easier to compute. One could, for example, have text segments annotated with writing theories or other such devices that approximate Thought Vectors conceptually, such as annotating the text with state transformations of a conflict-driven partially ordered causal link planner (cPOCL, Gervas et al.), or using a writing-theory framework such as Dramatica to annotate known movie scripts (http://dramatica.com/theory http://dramatica.com/analysis).

Hinton himself is currently active in NLP research: https://research.google/people/GeoffreyHinton/

Here is a nice explanation of Thought Vectors: https://pathmind.com/wiki/thought-vectors

",11626,,11626,,6/15/2020 14:33,6/15/2020 14:33,,,,0,,,,CC BY-SA 4.0 20574,2,,20567,4/23/2020 13:26,,2,,"

Information-theoretic view of Bayesian learning

I once heard that the problem of approximating an unknown function can be modeled as a communication problem. How is this possible?

Yes, this is indeed possible. More precisely, there is an information-theoretic view of Bayesian learning in neural networks, which can also be thought of as a communication problem, which explains both maximum a posteriori estimation (MAPE) and full Bayesian learning [1], i.e. finding the posteriors over the weights of the neural network: the neural networks that maintain a probability distribution over the weights are now known as Bayesian neural networks (and, in terms of theory, they are strongly related/similar to the famous variational auto-encoders).

The oldest relevant paper (I am aware of) that interprets Bayesian learning in neural networks as a communication problem is the 1993 paper by Hinton and Van Camp entitled Keeping the neural networks simple by minimizing the description length of the weights (COLT), which is the paper that introduces variational Bayesian neural networks (sometimes called ensemble learning in some papers from the 1990s), i.e. variational inference (VI) applied to neural networks (yes, the same VI used in VAEs). Hinton (yes, the famous Hinton that won the Turing award) and Van Camp (who is this? probably a Dutch guy from the name!) write in this paper

We can think in terms of a sender who can see both the input vector and the correct output and a receiver who can only see the input vector. The sender first fits a neural network, of pre-arranged architecture, to the complete set of training cases, then sends the weights to the receiver. For each training case, the sender also sends the discrepancy between the net's output and the correct output. By adding this discrepancy to the output of the net, the receiver can generate exactly the correct output.

You should read this seminal paper if you want to understand all the details.

Another relevant paper is Practical Variational Inference for Neural Networks (2013, NeurIPS) by Graves, who cites the 1993 paper immediately at the beginning of the paper. Essentially, as the title of the paper suggests, Graves tries to make VI in neural networks practical.

There are other relevant papers that still attempt to provide this information-theoretic view of Bayesian learning, such as Variational learning and bits-back coding: An information-theoretic view to bayesian learning (2004, IEEE Transactions on Neural networks), but most current papers on Bayesian neural networks, such as Weight Uncertainty in Neural Networks (2015, PMLR) don't do it (at most they may mention that this interpretation exists, but they don't go into the details).

Minimum description length

To give you a few more details, the information-theoretic view of Bayesian learning in these papers is that of the minimum description length (MDL), i.e. Bayesian learning (i.e. the application of Bayes rule to find the posteriors over the parameters of the model) is equivalent to finding a model that gives the "shortest description of the data" (hence the name MDL), where a description is some code/encoding of the data: in the case of the NNs, this encoding is contained in their weights.

Given that you want to find the simplest code, then this is a direct application of Occam's razor: if you have multiple hypotheses/functions that describe your data (or are consistent with your observations), then choose the simplest one. Occam's razor underlies many other mathematical/ML theories and frameworks, for example, AIXI, a framework for artificial general intelligence developed by Marcus Hutter. Jürgen Schmidhuber is also a good fan of Occam's razor and compression as a means to act intelligently (see e.g. the speed prior). If you are familiar with deep learning, a light bulb should turn on in your brain now. Yes, regularization techniques to avoid over-fitting and improve generalization can also be viewed as an application of Occam's razor principle.

Bits-back coding

How do we find the simplest weights? The bits-back coding, used by the 1993 paper and described in the 2004 and 2013 papers, essentially states that you can find the simplest encoding (i.e. posterior over the weights) by minimizing the Kullback-Leibler divergence (aka relative entropy: say what?!) between the posterior (which is unknown: so how can we compute the KL divergence?) and some prior (coding distribution), which is zero when the prior is equal to the posterior (but we don't know the posterior) [1]. Given that we don't know the posterior, we need to use a proxy objective function that doesn't involve the posterior, such as the Evidence Lower BOund (ELBO), also known as variational free-energy, which leads to a non-optimal coding (i.e. possibly, you will find some posteriors that are not optimal given the data).
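
To make that last step a bit more explicit (using standard variational-inference notation, where $q_\phi(w)$ is the approximate posterior over the weights, $p(w)$ the prior and $D$ the data), the identity behind the ELBO is

$$\log p(D) = \underbrace{\mathbb{E}_{q_\phi(w)}\left[\log p(D \mid w)\right] - \text{KL}\left(q_\phi(w) \,\|\, p(w)\right)}_{\text{ELBO (negative variational free energy)}} + \text{KL}\left(q_\phi(w) \,\|\, p(w \mid D)\right),$$

so maximizing the ELBO is equivalent to minimizing the KL divergence between the approximate posterior and the true (unknown) posterior, which is exactly the quantity that the bits-back argument relates to the length of the code.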

Conclusions

Using MAPE or performing (approximate) Bayesian learning in a neural network (which finds one function or a probability distribution over functions, respectively) can be interpreted as finding the MDL, i.e. an optimal or near-optimal encoding of the data that needs to be communicated from a sender to a receiver.

Side notes

Information theory was pioneered by Claude Shannon in his 1948 seminal paper A Mathematical Theory of Communication.

Claude Shannon was also one of the participants at the Dartmouth workshop, which officially started the field of artificial intelligence, so he is one of the fathers of the AI field, and his impact on the field is definitely huge (although most people are not aware of it, but, hopefully, this answer will change that).

Further reading

Apart from the papers that I cited above, you may also be interested in Information Theory and its Relation to Machine Learning (2015) by Hu.

",2444,,2444,,11/23/2020 11:05,11/23/2020 11:05,,,,0,,,,CC BY-SA 4.0 20576,1,,,4/23/2020 17:43,,1,37,"

I am following the tutorial Train a Deep Q Network with TF-Agents. It uses the hello world environment of reinforcement learning: cart pole.

At the end, the agent is trained with experience on the training environment (train_env). When performing an action in the environment, a time_step is returned containing the observation, the reward, and whether the environment signals the end, which basically says 'game over'. This can be due to the pole reaching an angle which is too high, or 200 time steps having been reached (which is the max score, at least in the tutorial).

When the compute_avg_return method evaluates an agent's performance on an environment, the environment is checked for being game over or not using time_step.is_last.

Why is time_step.is_last not considered when training the agent at the end of the tutorial? Nor do I see that the environment is reset during training. Or at least, I do not see it in the code presented. Is it checked internally? Looking at the graph, it never goes over an average return (the score) of 200 time steps, so it does seem to check for time_step.is_last. Am I overlooking something, or how does this work?

See the code block below. I would expect the check for time_step.is_last after collect_step(train_env, agent.collect_policy, replay_buffer), which would be followed by resetting the environment if it was true.

# Reset the train step
agent.train_step_counter.assign(0)

# Evaluate the agent's policy once before training.
avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
returns = [avg_return]

for _ in range(num_iterations):

  # Collect a few steps using collect_policy and save to the replay buffer.
  for _ in range(collect_steps_per_iteration):
    collect_step(train_env, agent.collect_policy, replay_buffer)

  # Sample a batch of data from the buffer and update the agent's network.
  experience, unused_info = next(iterator)
  train_loss = agent.train(experience).loss

  step = agent.train_step_counter.numpy()

  if step % log_interval == 0:
    print('step = {0}: loss = {1}'.format(step, train_loss))

  if step % eval_interval == 0:
    avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
    print('step = {0}: Average Return = {1}'.format(step, avg_return))
    returns.append(avg_return)
",32968,,32968,,4/23/2020 17:53,4/23/2020 17:53,Why does this tutorial on reinforced learning not check whether the environment is 'game over' during training?,,0,1,,,,CC BY-SA 4.0 20583,2,,8509,4/23/2020 23:01,,1,,"

Is it the case that exactly one of the numbers is filled in? If so, a CNN with 10 outputs should work well: just choose the output that has the highest probability. If your data allows no number to be filled in, then have 11 outputs, where the eleventh output indicates none is filled in. I would recommend transfer learning using the MobileNet model. Documentation is here. Here is the code to adapt MobileNet to your problem:

import tensorflow as tf
from tensorflow.keras import regularizers
from tensorflow.keras.layers import Dense, Dropout, BatchNormalization
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

image_size = 128
no_of_classes = 10  # set to 11 if in some cases no numbers are filled in
lr_rate = .001
dropout = .4

# MobileNet backbone pre-trained on ImageNet, without the classification head
mobile = tf.keras.applications.mobilenet.MobileNet(include_top=False,
                                                   input_shape=(image_size, image_size, 3),
                                                   pooling='avg', weights='imagenet',
                                                   alpha=1, depth_multiplier=1)
x = mobile.layers[-1].output
x = BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001)(x)
x = Dense(128, kernel_regularizer=regularizers.l2(0.016),
          activity_regularizer=regularizers.l1(0.006),
          bias_regularizer=regularizers.l1(0.006), activation='relu')(x)
x = Dropout(rate=dropout, seed=128)(x)
x = BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001)(x)
predictions = Dense(no_of_classes, activation='softmax')(x)
model = Model(inputs=mobile.input, outputs=predictions)
for layer in model.layers:
    layer.trainable = True
model.compile(Adam(learning_rate=lr_rate), loss='categorical_crossentropy',
              metrics=['accuracy'])

I would also create training, test, and validation sets, with about 150 images in the test set and 150 images in the validation set, leaving 1700 images for training. I also recommend you use two useful callbacks. Documentation is here. Use the ReduceLROnPlateau callback to monitor the validation loss and adjust the learning rate downward by a factor. Use ModelCheckpoint to monitor the validation loss and save the model with the lowest validation loss, which you then use to make predictions on the test set.
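
A minimal sketch of those two callbacks (the file path, factor, and patience values below are just placeholders to adapt to your setup):

from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint

callbacks = [
    # lower the learning rate by a factor when the validation loss stops improving
    ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=2, verbose=1),
    # keep the weights of the model with the lowest validation loss seen so far
    ModelCheckpoint('best_model.h5', monitor='val_loss', save_best_only=True, verbose=1)
]

# model.fit(train_data, validation_data=val_data, epochs=30, callbacks=callbacks)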

",33976,,,,,4/23/2020 23:01,,,,0,,,,CC BY-SA 4.0 20584,2,,13390,4/23/2020 23:09,,0,,"

There's something called Elastic Weight Consolidation to prevent neural networks from forgetting previous tasks as they train on new tasks. It might be helpful for your case too.

The main idea is to quantify the importance of the parameters for task $t$ and penalize the model in proportion when it changes those parameters as it trains to learn task $t+1$. As you can see, this incentivizes the model to change the parameters that are less important for task $t$, which prevents the model from forgetting it.
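
Concretely, in the original EWC paper (Kirkpatrick et al., 2017) the penalty is a quadratic term weighted by the diagonal Fisher information $F_i$, which estimates how important parameter $\theta_i$ was for the previous task $A$:

$$\mathcal{L}(\theta) = \mathcal{L}_B(\theta) + \sum_i \frac{\lambda}{2} F_i \left(\theta_i - \theta_{A,i}^*\right)^2,$$

where $\mathcal{L}_B$ is the loss on the new task $B$, $\theta_{A}^*$ are the parameters learned for task $A$, and $\lambda$ controls how strongly the old task is protected.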

",32621,,,,,4/23/2020 23:09,,,,0,,,,CC BY-SA 4.0 20585,1,22384,,4/24/2020 1:56,,5,819,"

In previous research, in 2015, Deep Q-Learning showed great performance on single-player Atari games. But why did AlphaGo's researchers use CNN + MCTS instead of Deep Q-Learning? Is that because Deep Q-Learning is somehow not suitable for Go?

",16565,,16565,,4/29/2020 17:35,7/7/2020 20:03,Why AlphaGo didn't use Deep Q-Learning?,,2,0,,,,CC BY-SA 4.0 20586,1,20590,,4/24/2020 3:04,,1,86,"

In Sutton and Barto's RL textbook, they included the following pseudocode for off-policy Monte Carlo learning. I am a little confused, however, because to me it looks like the W term will become infinitely large after a couple of thousand iterations (and this is exactly what happens when I implement the algorithm).

For example, say that the actions taken by the behavioral policy always match the target policy in each episode (ignoring epsilon soft/greedy for example's sake). If the probability of the action specified by the behavioral policy is 0.9, then after 10,000 iterations W would have a value of $1.11^{10,000}$. I understand that the ratio of W to C(a,s) is what matters; however, this ratio cannot be computed once W becomes infinite. Clearly I am misunderstanding something.

",36404,,,,,4/24/2020 7:33,Understanding the W term in off policy monte carlo learning,,1,0,,,,CC BY-SA 4.0 20588,1,21072,,4/24/2020 4:24,,3,1030,"

From Calculate Levenshtein distance between two strings in Python, it is possible to calculate the distance and similarity between two given strings (sentences).

And Levenshtein Distance and Text Similarity in Python shows how to return the character-level matrix and the distance for two strings.

Are there any ways to calculate the distance and similarity at the word level, i.e. between each word of the two strings, and to print the corresponding word-level matrix for the two strings (sentences)?

a = ""This is a dog.""
b = ""This is a cat.""

import numpy as np

def levenshtein(seq1, seq2):
    size_x = len(seq1) + 1
    size_y = len(seq2) + 1
    matrix = np.zeros ((size_x, size_y))
    for x in range(size_x):
        matrix [x, 0] = x
    for y in range(size_y):
        matrix [0, y] = y

    for x in range(1, size_x):
        for y in range(1, size_y):
            if seq1[x-1] == seq2[y-1]:
                matrix [x,y] = min(
                    matrix[x-1, y] + 1,
                    matrix[x-1, y-1],
                    matrix[x, y-1] + 1
                )
            else:
                matrix [x,y] = min(
                    matrix[x-1,y] + 1,
                    matrix[x-1,y-1] + 1,
                    matrix[x,y-1] + 1
                )
    print (matrix)
    return (matrix[size_x - 1, size_y - 1])

levenshtein(a, b)

Outputs

>> 3

Matrix

[[ 0.  1.  2.  3.  4.  5.  6.  7.  8.  9. 10. 11. 12. 13. 14.]
 [ 1.  0.  1.  2.  3.  4.  5.  6.  7.  8.  9. 10. 11. 12. 13.]
 [ 2.  1.  0.  1.  2.  3.  4.  5.  6.  7.  8.  9. 10. 11. 12.]
 [ 3.  2.  1.  0.  1.  2.  3.  4.  5.  6.  7.  8.  9. 10. 11.]
 [ 4.  3.  2.  1.  0.  1.  2.  3.  4.  5.  6.  7.  8.  9. 10.]
 [ 5.  4.  3.  2.  1.  0.  1.  2.  3.  4.  5.  6.  7.  8.  9.]
 [ 6.  5.  4.  3.  2.  1.  0.  1.  2.  3.  4.  5.  6.  7.  8.]
 [ 7.  6.  5.  4.  3.  2.  1.  0.  1.  2.  3.  4.  5.  6.  7.]
 [ 8.  7.  6.  5.  4.  3.  2.  1.  0.  1.  2.  3.  4.  5.  6.]
 [ 9.  8.  7.  6.  5.  4.  3.  2.  1.  0.  1.  2.  3.  4.  5.]
 [10.  9.  8.  7.  6.  5.  4.  3.  2.  1.  0.  1.  2.  3.  4.]
 [11. 10.  9.  8.  7.  6.  5.  4.  3.  2.  1.  1.  2.  3.  4.]
 [12. 11. 10.  9.  8.  7.  6.  5.  4.  3.  2.  2.  2.  3.  4.]
 [13. 12. 11. 10.  9.  8.  7.  6.  5.  4.  3.  3.  3.  3.  4.]
 [14. 13. 12. 11. 10.  9.  8.  7.  6.  5.  4.  4.  4.  4.  3.]]

The general character-level Levenshtein distance is shown in the figure below.

Is it possible to calculate the Levenshtein distance at the word level?

Required Matrix

          This is a cat

This
is
a
dog
",30725,,30725,,4/27/2020 10:22,5/11/2020 15:37,Levenshtein Distance between each word in a given string,,2,2,,,,CC BY-SA 4.0 20590,2,,20586,4/24/2020 7:33,,2,,"

The pseudocode you have copied looks incorrect to me, and I think it is from the first edition.

The main issue is at the end of the loop. Where the book has

$\qquad W \leftarrow W \frac{1}{\mu(A_t|S_t)}$

$\qquad \text{If } W = 0 \text{ then ExitForLoop}$

It should have either

$\qquad W \leftarrow W \frac{1}{\mu(A_t|S_t)}$

$\qquad \text{If } \pi(S_t) \neq A_t \text{ then ExitForLoop}$

or

$\qquad W \leftarrow W \frac{\pi(A_t|S_t)}{\mu(A_t|S_t)}$

$\qquad \text{If } W = 0 \text{ then ExitForLoop}$

This latter one is more general - it covers situations where the target policy can be stochastic - but doesn't fit with the notation used elsewhere for a deterministic policy. For some reason, the first edition of the book had a mistake: it used a hybrid algorithm that was adjusted to a deterministic target policy everywhere except the exit-loop statement. This is fixed in the second edition (page 111).

after 10,000 iterations

Are your episodes really 10,000 time steps long? If so, the chances of off-policy MC control learning anything for early time steps seem remote unless $\epsilon$ is really low (in which case $W$ will not get too high). If not, have you missed that $W \leftarrow 1$ occurs at the start of each episode?

",1847,,-1,,6/17/2020 9:57,4/24/2020 7:33,,,,0,,,,CC BY-SA 4.0 20591,1,,,4/24/2020 9:15,,0,2310,"

I wonder if there's anyone who has actually succeeded in fine-tuning GPT-2's 774M model without using cloud TPU's. My GeForce RTX 2070 SUPER couldn't handle it in previous attempts.

I'm running TensorFlow 1.14.0 with CUDA V 9.1 on Ubuntu 18.04. For fine-tuning I'm using gpt-2-simple.

When fine-tuning using the 774M model, I keep running into OOM errors, such as: W tensorflow/core/common_runtime/bfc_allocator.cc:314] Allocator (GPU_0_bfc) ran out of memory trying to allocate 6.25MiB (rounded to 6553600). Current allocation summary follows.

So far I've tried:

  • Using a different optimizer (RMSPropOptimizer instead of AdamOptimizer)
  • Setting batch-size to 1
  • use_memory_saving_gradients
  • only_train_transformer_layers

Fine-tuning works smoothly on the 355M model.

So what I'm really asking is:

  • is it possible to fine-tune GPT-2's 774M model without industrial-sized hardware?
  • if so, please tell me about your successful attempts
  • apart from hardware recommendations, how could fine-tuning be optimized to make the 774M model fit in memory?
",33476,,2444,,4/24/2021 12:39,4/24/2021 12:39,GPT-2: (Hardware) requirements for fine-tuning the 774M model,,1,0,,12/10/2021 21:33,,CC BY-SA 4.0 20592,1,20603,,4/24/2020 10:31,,1,211,"

According to this thread, some hyperparameters are independent of each other, while some are directly related.

One of the answers gives an example where two hyperparameters affect each other.

For example, if you're using stochastic gradient descent (that is, you train your model one example at a time), you probably do not want to update the parameters of your model too fast (that is, you probably do not want a high learning rate), given that a single training example is unlikely to be able to give the error signal that is able to update the parameters in the appropriate direction (that is, the global or even local optimum of the loss function).

How would someone creating a neural network know how the hyperparameters affect each other?

In other words, what are the heuristics for hyperparameter selection when trying to build a robust model?

",32265,,,,,4/25/2020 3:50,How to know if the hyperparameters of a neural network relate to each other?,,1,0,,,,CC BY-SA 4.0 20594,1,,,4/24/2020 12:23,,1,51,"

I am working to use DQN and Policy Gradient reinforcement learning models to solve classic maze escaping problems.

So far, I have been able to train a model, which, after around 100 episodes, quickly explored ONE optimal solution to escape mazes.

However, it is easy to see that for many maze designs there could be multiple optimal solutions, and I would like to go one step further and collect all optimal, distinguishable solutions.

However, I tried some searches online and, so far, the only material I can find is Learning Diverse Skills, but this seems like an obstacle to me. I somewhat believe this is a classic (?) and easier problem that should be addressed in textbooks.

Could someone shed light on this matter?

",25973,,2444,,4/25/2020 3:56,4/25/2020 3:56,How can I design a DQN or policy gradient model to explore and collect all optimal solutions?,,0,0,,,,CC BY-SA 4.0 20596,2,,12508,4/24/2020 13:05,,1,,"

What you are doing when calculating $d'(x,y)$:

  1. $d(x,y)$: calculating the original edge distance from $x$ to $y$
  2. $h(y)$: plus the heuristic from $y$ to the goal
  3. $h(x)$: minus the heuristic from $x$ to the goal

So, using this recalculation of the original edge values ($1.$) in Dijkstra's algorithm, you are inherently accounting for the heuristic component of A* by incorporating it ($2.$) into the cost of the edge traversed, and discarding ($3.$) the heuristic value accumulated at the previous node of the path.

The additional condition $h(x) \leq d(x, y) + h(y)$ (i.e. the heuristic is consistent) ensures the new edge values are non-negative.
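
One extra step that may help to see why this preserves shortest paths: summing the modified costs along any path from the start $s$ to the goal $t$ telescopes,

$$\sum_{(x,y) \,\in\, \text{path}} d'(x,y) = \sum_{(x,y) \,\in\, \text{path}} d(x,y) + h(t) - h(s),$$

so every path from $s$ to $t$ has its cost shifted by the same constant $h(t) - h(s)$, the ranking of paths is unchanged, and Dijkstra's priority on the modified costs differs from A*'s $f(n) = g(n) + h(n)$ only by the constant $h(s)$, so the nodes are expanded in the same order.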

",23503,,23503,,4/24/2020 14:06,4/24/2020 14:06,,,,0,,,,CC BY-SA 4.0 20597,1,20672,,4/24/2020 14:31,,0,87,"

I'm a computer engineering student and I'm about to work on my master thesis. My professor gave me a small dataset with brain Computed Axial Tomography records. I would like to use deep learning to help doctors diagnose a certain disease (obviously, I've also got the data for doing supervised learning).

Since the dataset is small, is a radial basis function network a good solution? What do you think?

By the way, if you have any tips on using RBF networks for this kind of project, I would be really grateful.

",36363,,2444,,4/27/2020 23:14,4/27/2020 23:14,Is radial basis function network appropriate for small datasets?,,1,0,,,,CC BY-SA 4.0 20598,1,20602,,4/24/2020 15:20,,1,271,"

One of the steps in the actor-critic algorithm is $$\partial \theta_{\pi} \gets \partial \theta_{\pi} + \nabla_{\theta}\log\pi_{\theta} (a_i | s_i) (R - V_{\theta}(s_i))$$

For me, $\theta$ are just the weights. Can you explain to me what $\partial \theta_{\pi}$ means?

The whole algorithm comes from Maxim Lapan's book Deep Reinforcement Learning Hands-on, page 269.

Here is a picture of the algorithm :

",35626,,2444,,4/24/2020 20:05,4/24/2020 20:11,What does the notation $\partial \theta_{\pi}$ mean in this actor-critic update rule?,,1,0,,,,CC BY-SA 4.0 20599,1,20606,,4/24/2020 15:51,,2,4324,"

While watching MIT's lectures about search, 4. Search: Depth-First, Hill Climbing, Beam, the professor explains the hill-climbing search in a way that is similar to the best-first search. At around the 35-minute mark, the professor enqueues the paths in a way similar to greedy best-first search, in which they are sorted and the closer nodes are expanded first.

However, I have read elsewhere that hill climbing is different from the best first search. What's the difference between the two then?

",32780,,2444,,4/24/2020 19:32,4/25/2020 0:07,What is the difference between hill-climbing and greedy best-first search algorithms?,,1,0,,,,CC BY-SA 4.0 20601,2,,11405,4/24/2020 16:11,,1,,"
  1. The decoder half is necessary in order to compute the loss function for training the network. Similar to how the 'adversary' is still necessary in a GAN even if you are only interested in the generative component.
  2. Autoencoders can learn non-linear embeddings of the data, and hence are more powerful than vanilla PCA.
  3. Autoencoders have applications beyond dimensionality reduction:
    • Generating new data points, or perform interpolation (see VAE's)
    • Create denoising filters (e.g. in image processing)
    • Compress/decompress data
    • Link prediction (e.g. in drug discovery)
",23503,,23503,,4/25/2020 19:04,4/25/2020 19:04,,,,0,,,,CC BY-SA 4.0 20602,2,,20598,4/24/2020 19:58,,1,,"

In reinforcement learning, you can distinguish algorithms based on the functions they use to ultimately find the policy (which is the goal in RL anyway!).

  • algorithms that attempt to find an optimal value function (an example is Q-learning, which attempts to find a state-action value function), then derive the policy from the value function
  • algorithms that directly attempt to find a policy (e.g. REINFORCE and other so-called ""policy gradients"" algorithms)
  • algorithms that use a value function to guide the search for an optimal policy (i.e. actor-critic methods)

More specifically, in actor-critic methods, you have a policy $\pi$ (known as the ""actor"") and a value function $v$ (known as the ""critic""). Hence the name ""actor-critic"". The idea is that you will use this critic $v$ to ""criticize"" the policy (or actor) $\pi$, i.e. to guide the search for a good policy. This article 6.6 Actor-Critic Methods (from Sutton and Barto's book) explains the concept quite well.

In your specific example, the policy and value function are assumed to be differentiable (otherwise, you wouldn't be able to compute the derivatives anyway!). They are typically neural networks. $\theta_\pi$ are the parameters of the neural network that represents the policy (i.e. a neural network that receives as input a state and produces as output an action or a probability distribution over actions). Similarly, $\theta_v$ are the parameters of the neural network that represents the critic. Then $\partial \theta_\pi$ and $\partial \theta_v$ will represent an accumulation of the gradients with respect to the parameters of the actor and critic, respectively. (The accumulation is probably over the steps from $i=t-1$ to $i = t_{\text{start}}$, but I can't say more because I never implemented actor-critic methods). This should give you an idea, though!

",2444,,2444,,4/24/2020 20:11,4/24/2020 20:11,,,,0,,,,CC BY-SA 4.0 20603,2,,20592,4/24/2020 20:27,,1,,"

This is one of the most difficult and unsolved problems in machine learning and deep learning!

There are many different ways to estimate the most appropriate hyper-parameters, such as grid search, random search, Bayesian optimization, meta-learning, reinforcement learning, and evolutionary algorithms (e.g. NEAT).
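
For example, a plain random search over a small, hand-picked search space is often a reasonable baseline before trying the more sophisticated methods. Here is a toy sketch (train_and_evaluate is just a placeholder standing in for whatever training and validation routine you actually use):

import random

def train_and_evaluate(learning_rate, batch_size, dropout):
    # Placeholder: replace the body with code that trains your model with
    # these hyper-parameters and returns its validation score.
    return random.random()

search_space = {
    'learning_rate': [1e-4, 3e-4, 1e-3, 3e-3, 1e-2],
    'batch_size': [16, 32, 64, 128],
    'dropout': [0.0, 0.2, 0.5],
}

best_score, best_config = float('-inf'), None
for _ in range(20):  # number of random trials
    config = {name: random.choice(values) for name, values in search_space.items()}
    score = train_and_evaluate(**config)
    if score > best_score:
        best_score, best_config = score, config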

However, the problem is that most, if not all, of these approaches are typically not very computationally efficient unless your model is very small. The number of possible configurations of the hyper-parameters is very large. If you don't have good computational resources (e.g. GPUs and powerful servers), you are probably out of luck or will require some days to get some insight.

In certain cases, it is obvious that certain hyper-parameters are dependent on each other, e.g. in the case of the batch size and the learning rate, because we have some decent understanding of gradient descent, but, in other cases, the situation isn't so nice.

As far as I know, there isn't a very good general rule of thumb or method to solve this issue (i.e. find the dependence of hyper-parameters on each other). Maybe, as our knowledge of our models (especially, neural networks) increases, we'll get some more insights and we'll develop more efficient approaches to understand the dependence of the hyper-parameters.

Nowadays, there's automated machine learning (AutoML), which is a fancy name to denote services that provide hyper-parameter optimization (plus some other stuff).

",2444,,2444,,4/25/2020 3:50,4/25/2020 3:50,,,,0,,,,CC BY-SA 4.0 20604,1,,,4/24/2020 22:15,,0,74,"

Why are all weights of a neural net updated and not only the weights of the first hidden layer?

The influence of each weight of a neural net on the prediction error is calculated using the chain rule. However, the chain rule tells us how the first variable influences the second variable, and so on. Following that logic, we should only need to update the weights of the first hidden layer. My thought is that, if we backtrack the influence of the first weights but also change the values of the subsequent weights (of the subsequent hidden layers), there is no need to calculate the influence of the first weights in the first place. Where am I wrong?

",27777,,2444,,4/26/2020 16:35,4/26/2020 16:35,Why are all weights of a neural net updated and not just the weights of the first layer,,1,0,,,,CC BY-SA 4.0 20605,1,,,4/24/2020 23:09,,0,264,"

I have a fully connected neural network with the following number of neurons in each layer [4, 20, 20, 20, ..., 1]. I am using TensorFlow and the 4 real-valued inputs correspond to a particular point in space and time, i.e. (x, y, z, t), and the 1 real-valued output corresponds to the temperature at that point. The loss function is just the mean square error between my predicted temperature and the actual temperature at that point in (x, y, z, t). I have a set of training data points with the following structure for their inputs:


(x, y, z, t):

(0.11, 0.12, 1.00, 0.41)
(0.34, 0.43, 1.00, 0.92)
(0.01, 0.25, 1.00, 0.65)
...
(0.71, 0.32, 1.00, 0.49)
(0.31, 0.22, 1.00, 0.01)
(0.21, 0.13, 1.00, 0.71)


Namely, what you will notice is that the training data all have the same redundant value in z, but x, y, and t are generally not redundant. Yet what I find is my neural network cannot train on this data due to the redundancy. In particular, every time I start training the neural network, it appears to fail and the loss function becomes nan. But, if I change the structure of the neural network such that the number of neurons in each layer is [3, 20, 20, 20, ..., 1], i.e. now data points only correspond to an input of (x, y, t), everything works perfectly and training is all right. But is there any way to overcome this problem? (Note: it occurs whether any of the variables are identical, e.g. either x, y, or t could be redundant and cause this error.)

My question: is there any way to still train the neural network while keeping the redundant z as an input? It just so happens the particular training data set I am considering at the moment has all z redundant, but in general, I will have data coming from different z in the future. Therefore, a way to ensure the neural network can robustly handle inputs at the present moment is sought.

",21895,,,,,4/25/2020 1:58,Can neural networks handle redundant inputs?,,0,5,,,,CC BY-SA 4.0 20606,2,,20599,4/24/2020 23:11,,3,,"

Let's see their definition first:

  1. Best First Search (BFS): ‌

    Best-first search is a search algorithm that explores a graph by expanding the most promising node chosen according to a specified rule.

    estimating the promise of node n by a ""heuristic evaluation function $f(n)$ which, in general, may depend on the description of n, the description of the goal, the information gathered by the search up to that point, and most importantly, on any extra knowledge about the problem domain.""

  2. Hill Climbing (HC):

    In numerical analysis, hill climbing is a mathematical optimization technique that belongs to the family of local search. It is an iterative algorithm that starts with an arbitrary solution to a problem, then attempts to find a better solution by making an incremental change to the solution. If the change produces a better solution, another incremental change is made to the new solution, and so on until no further improvements can be found.

Base on the definition, we can find the following differences:

  • The aim of BFS is to reach a specified goal by using a heuristic function (which might be greedy), whereas HC is a local search algorithm.
  • BFS is mostly used in graph search (over a wide state space) to find a path, whereas HC is used for optimization tasks.
",4446,,4446,,4/25/2020 0:07,4/25/2020 0:07,,,,4,,,,CC BY-SA 4.0 20609,1,,,4/25/2020 5:40,,1,31,"

I have questions regarding on how to implement PBT as described in Algorithm 1 (on page 5) in the paper, Population Based Training of Neural Networks to train agents in a MARL (multi-agent reinforcement learning) environment.

In a single agent RL environment, the environments can be distributed & agents trained in parallel & there will be a need to maintain some sort of centralized population of weights & hyperparameters. (Please correct me if I'm wrong.)

In a MARL context, do I need to also maintain a centralized population for all agents in all environments or do I need to maintain a separate population for agents in each distributed environment? Which is a correct or more effective approach?

Any pointers would be appreciated. Thank you.

",25299,,,,,4/25/2020 5:40,Do I need to maintain a separate population in each distributed environment when implementing PBT in a MARL context?,,0,0,0,,,CC BY-SA 4.0 20613,1,,,4/25/2020 7:22,,1,18,"

Suppose we have two graphs A and B, disconnected from each other (let's say 2 hops each), within a larger graph. If the convolutional representation of graph A is known, is it possible to estimate the definitive convolutional representation of graph B based on its similarity to graph A?

If yes, what do you think is the (arithmetically) simplest way to do this, and which algorithm can help me do it? You can assume that the precision requirements are not strict.

",36440,,2444,,4/26/2020 0:51,4/26/2020 0:51,How to estimate the convolutional representation of a graph from its similarity to other graph convolutional representation?,,0,0,,,,CC BY-SA 4.0 20616,1,,,4/25/2020 13:15,,1,36,"

This is a bot-making problem from here. I detail the problem below.

The picture above shows the initial configuration of the game. P1 represents player 1 and P2 represents player 2. A scotch bottle is kept (initially) at position #5 on the number line. Both players start with 100 dollars in hand. Note that the players don't move; only the bottle moves.

Rules of the game:

  1. The first player makes a secret bid, followed by a secret bid by the second player.
  2. The bottle moves one position closer to the winning bidder.
  3. In the case of a drawn bid, the winner is the player who has the draw advantage.
  4. The draw advantage alternates between the two players, that is, the first draw is won by the first player, the second draw, if it occurs, is won by the second player, and so on.
  5. The winning bid is deducted from the winner's money in hand; the loser keeps his bid.
  6. Each bid must be greater than 0 dollars. In the case when there's no money left, the player has no choice but to bid 0 dollars. Only integral bids are allowed.

The player who gets the bottle wins. If no one gets it, the game ends in a draw.

Both players, thus, have complete knowledge of the history of each other's bids and of the location of the bottle at the current time.

So far, I know this is an instance of poorman bidding games. I have used some strategies, such as intentionally losing some bids and letting the opponent spend his money, in the hope that the difference in money increases to the point of allowing a winning strategy to emerge. Also, I pull the bottle more strongly as it goes further away. This isn't performing well against other bots.

What should be the strategy of a bot playing this game?

",22063,,22063,,4/25/2020 13:21,4/25/2020 13:21,What should be a good playing strategy in this 2-player simultaneous game?,,0,3,,,,CC BY-SA 4.0 20617,1,,,4/25/2020 13:18,,1,74,"

I have a gaussian distributed time series ($X_t$) with some parameters in my experiment. Suppose I want to know the mean $\mu$. If I define another time series $Y_t$ such that $Y_t=X_t-a$ for all $t$. Now say I vary this parameter $a$ and generate altogether different time series for each $a$, say $Y_t(a)$. I look at the mean of $Y_t$ for each $a$. The value of a, where I get the mean of $Y_t$ closest to $0$, will be my estimate of $\mu$. Say I will eventually use this learnt value of $\mu$ to generate $Y_t$ as my final goal. Can this be called ML? I am using some training data of $X_t$ to learn about its parameter and then using test data of $X_t$ to generate $Y_t$.

Now why am I working so hard on this simple problem? Well, actually I am not. I am doing something else, which will have lots of parameters in the time series and will be used to generate other time series after similar parameter extraction. That will be too complicated to discuss here. I just wanted to clear my basics using an over-simplified example.

",36369,,11539,,8/22/2022 18:24,11/23/2022 3:04,Does it classify as Machine Learning?,,1,2,,,,CC BY-SA 4.0 20618,1,,,4/25/2020 13:47,,1,138,"

In the information theory, the entropy is a measure of uncertainty in some system. Being applied to agent policy, entropy shows how much the agent is uncertain about which action to make. In math notation, entropy of the policy is defined as : $$H(\pi) = -\sum \pi(a|s) \log \pi(a|s)$$ The value of entropy is always greater than zero and has a single maximum when the policy is uniform. In other words, all actions have the same probability. Entropy becomes minimal when our policy has 1 for some action and 0 for all others, which means that the agent is absolutely sure what to do. To prevent our agent from being stuck in the local minimum, we are subtracting the entropy from the loss function, punishing the agent for being too certain about the action to take.

The above excerpt is from Maxim Lapan in the book Deep Reinforcement Learning Hands-on page 254.

In code, it might look like :

 optimizer.zero_grad()
 logits = PG_network(batch_states_ts)            # raw action scores from the policy network
 log_prob = F.log_softmax(logits, dim=1)         # log-probabilities of all actions
 # log-probability of the action actually taken, scaled by the corresponding return (scale)
 log_prob_actions = batch_scales_ts * log_prob[range(params[""batch_size""]), batch_actions_ts]
 loss_policy = -log_prob_actions.mean()          # policy-gradient loss

 prob = F.softmax(logits, dim=1)
 entropy = -(prob * log_prob).sum(dim=1).mean()  # entropy H(pi) of the current policy
 entropy_loss = params[""entropy_beta""] * entropy
 loss = loss_policy - entropy_loss               # subtract the entropy bonus from the loss

I know that a disadvantage of using policy gradients is that our agent can get stuck in a local minimum. Can you explain mathematically why subtracting the entropy from our loss will prevent our agent from being stuck in a local minimum?

",35626,,,,,4/25/2020 13:47,Subtracting the entropy from our policy gradient will prevent our agent from being stuck in the local minimum?,,0,3,,,,CC BY-SA 4.0 20619,1,20620,,4/25/2020 14:33,,2,546,"

I was reading about the temporal difference (TD) learning and I read that:

TD handles continuing, non-episodic domains

Assuming that continuing means non-terminating, what does non-episodic or episodic domain mean?

",36447,,2444,,4/26/2020 0:50,4/26/2020 0:50,What are episodic and non-episodic domains in reinforcement learning?,,1,1,,,,CC BY-SA 4.0 20620,2,,20619,4/25/2020 14:51,,2,,"

Assuming that continuing means non terminating, what does non-episodic or episodic domain mean ?

Non-episodic means the same as continuing. The quote you found is not listing two separate domains, the word ""continuing"" is slightly redundant. I expect the author put it in there to emphasise the meaning, or to cover two common ways of describing such environments.

Episodic domain problems are ones that terminate, or otherwise naturally split into groups of time steps that can be considered separately.

",1847,,,,,4/25/2020 14:51,,,,0,,,,CC BY-SA 4.0 20621,2,,18188,4/25/2020 14:58,,0,,"

I have written a tutorial on using OpenAI Spinning Up in an image-based PyBullet + Gym environment here

In order to be able to use spinup for an image-based environment, I had to fork it here and add a CNN to PPO's core.py

",36448,,,,,4/25/2020 14:58,,,,0,,,,CC BY-SA 4.0 20622,1,,,4/25/2020 16:17,,3,1346,"

I am reading Sutton and Barto's material now. I know value iteration, which is an iterative algorithm that backs up the maximum value over successor states, and policy iteration. But what is generalized policy iteration?

",36107,,36107,,4/25/2020 17:01,4/25/2020 17:10,What is generalized policy iteration?,,1,0,,,,CC BY-SA 4.0 20624,2,,20622,4/25/2020 17:05,,3,,"

In the standard policy iteration algorithm presented in Sutton and Barto's book, you alternate between a policy evaluation (PE) step and a policy improvement (PI) step (i.e. PE, PI, PE, PI, PE, PI, PE, ...). However, in general, you don't have to follow this alternation strictly in order to converge (in the limit) to the optimal policy. For example, value iteration (VI) is an example of a truncated policy iteration that still converges to the optimal policy.

The term generalized policy iteration (GPI) refers to all algorithms based on policy iteration, such as value iteration, that alternate PE and PI in some order and that are guaranteed to converge to the optimal policy, provided PE and PI are executed enough times.
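
Sutton and Barto depict this interplay as an alternating sequence of (possibly partial or truncated) evaluation and improvement steps that converge to the optimal value function and policy:

$$\pi_0 \xrightarrow{\;E\;} v_{\pi_0} \xrightarrow{\;I\;} \pi_1 \xrightarrow{\;E\;} v_{\pi_1} \xrightarrow{\;I\;} \pi_2 \xrightarrow{\;E\;} \cdots \xrightarrow{\;I\;} \pi_* \xrightarrow{\;E\;} v_*$$

where $E$ denotes policy evaluation and $I$ denotes policy improvement.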

",2444,,2444,,4/25/2020 17:10,4/25/2020 17:10,,,,0,,,,CC BY-SA 4.0 20625,2,,7413,4/25/2020 17:14,,2,,"

It's a continuing task in that, after failure, the agent always gets a reward of $0$ at each time-step ad infinitum.

From the book:

we could treat pole-balancing as a continuing task, using discounting. In this case the reward would be -1 on each failure and zero at all other times. The return at each time would then be related to $-\gamma^K$, where $K$ is the number of time steps before failure.

(Here I have used $\gamma$ as the discount factor).

Said another way, assuming the agent fails at the $(K + 1)$-th step, the reward is $0$ until that step, $-1$ for it, and then $0$ for eternity.

So the return: $$G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + ... + \gamma^K R_{t+K+1} + ... = -\gamma^K$$
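
For instance, with $\gamma = 0.9$, a failure $K = 10$ steps ahead gives $G_t = -0.9^{10} \approx -0.35$, while a failure $100$ steps ahead gives $G_t = -0.9^{100} \approx -0.000027$, so the later the failure, the smaller the penalty.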

",36449,,2444,,4/25/2020 18:47,4/25/2020 18:47,,,,1,,,,CC BY-SA 4.0 20626,2,,11169,4/25/2020 18:11,,2,,"

Graph Neural Networks

The term Graph Neural Network, in its broadest sense, refers to any Neural Network designed to take graph structured data as its input:

To cover a broader range of methods, this survey considers GNNs as all deep learning approaches for graph data.

However the original paper to propose the term specifically referred to recursive neural networks adapted to take graph-structured data as their input:

This paper presents a new neural model, called graph neural network (GNN), capable of directly processing graphs. GNNs extends recursive neural networks and can be applied on most of the practically useful kinds of graphs, including directed, undirected, labelled and cyclic graphs.

Subtypes

Note, Wu et al propose a taxonomy dividing GNN's into four subgroups:

  • Recurrent graph neural networks (RecGNN)
  • Convolutional graph neural networks (ConvGNN)
  • Graph autoencoders (GAE)
  • Spatial-temporal graph neural networks (STGNN)

ConvGNN's can themselves be classified by whether they use Spectral methods or Spatial methods, and GAE's by whether they are designed for Network embedding or Graph generation.

",23503,,23503,,12/2/2021 11:14,12/2/2021 11:14,,,,1,,,,CC BY-SA 4.0 20627,2,,12712,4/25/2020 18:19,,2,,"

Yes, there are numerous, coming under the umbrella term Graph Neural Networks (GNN).

The most common input structures accepted by these techniques are the adjacency matrix of the graph (optionally accompanied by its node feature matrix and/or edge feature matrix, if the graph has such information).

A Comprehensive Survey on Graph Neural Networks, Wu et al (2019) divides GNN's into four subgroups:

  • Recurrent graph neural networks (RecGNN)
  • Convolutional graph neural networks (ConvGNN)
  • Graph autoencoders (GAE)
  • Spatial-temporal graph neural networks (STGNN)

ConvGNN's can themselves be classified by whether they use Spectral methods or Spatial methods, and GAE's by whether they are designed for Network embedding or Graph generation.

",23503,,23503,,5/26/2020 19:44,5/26/2020 19:44,,,,0,,,,CC BY-SA 4.0 20628,2,,11226,4/25/2020 18:43,,13,,"

I presume this question was prompted by the paper Geometric deep learning: going beyond Euclidean data (2017). If we look at its abstract:

Many scientific fields study data with an underlying structure that is a non-Euclidean space. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions), and are natural targets for machine learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems from computer vision, natural language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure, and in cases where the invariances of these structures are built into networks used to model them.

We see that the authors use the term ""non-Euclidean data"" to refer to data whose underlying structure is non-Euclidean.

Since Euclidean spaces are prototypically defined by $\mathbb{R}^n$ (for some dimension $n$), 'Euclidean data' is data which is sensibly modelled as being plotted in $n$-dimensional linear space, for example image files (where the $x$ and $y$ coordinates refer to the location of each pixel, and the $z$ coordinate refers to its colour/intensity).

However some data does not map neatly into $\mathbb{R}^n$, for example, a social network modelled by a graph. You can of course embed the physical shape of a graph in 3-d space, but you will lose information such as the quality of edges, or the values associated with nodes, or the directionality of edges, and there isn't an obvious sensible way of mapping these attributes to higher dimensional Euclidean space. And depending on the specific embedding, you may introduce spurious correlations (e.g. two unconnected nodes appearing closer to each other in the embedding than to nodes they are connected to).

Methods such as Graph Neural Networks seek to adapt existing Machine Learning technologies to directly process non-Euclidean structured data as input, so that this (possibly useful) information is not lost in transforming the data into a Euclidean input as required by existing techniques.

",23503,,23503,,4/25/2020 18:59,4/25/2020 18:59,,,,2,,,,CC BY-SA 4.0 20629,2,,15688,4/25/2020 19:20,,1,,"

Low order/low level information refers to the most granular level of information. This is the most informative in terms of volume of information, but it can often be difficult to conceptualise for humans.

High order/high level information refers to abstractions of the low level information into concepts that are more intuitive, but harder to describe technically.

An example would be images of faces. The low level information might be the raw $x, y, z$ values of the pixels: their position and colour value. Some high level information might be the direction the face in the image is pointing, the direction the lighting in the image is coming from, etc.

In the cited paper, the low level information is the node and edge values, while the high level information is the motifs.

",23503,,,,,4/25/2020 19:20,,,,0,,,,CC BY-SA 4.0 20630,2,,16805,4/25/2020 20:00,,2,,"

A Comprehensive Survey on Graph Neural Networks (2019) presents a list of ConvGNNs. All of the listed models accept weighted graphs, and three of them accept edge weights as well.

The survey also lists a series of open source implementations of many of the above.

",23503,,23503,,4/26/2020 10:42,4/26/2020 10:42,,,,0,,,,CC BY-SA 4.0 20631,1,20653,,4/25/2020 20:06,,1,280,"

I was going through my university slides, and this particular slide tries to prove that, in a Monte Carlo policy iteration algorithm using an epsilon-greedy policy, the state values (V-values) improve monotonically.

My question is about the first line of computation.

Isn't this actually the formula for the expected value of Q? It is calculating the probability of occurrence under the policy times the actual Q-values, then summing.

If that is the case, could you help me understand the relationship between the expected value of Q and the expected value of V?

Also, if the above is true, in a real-world scenario, depending on how many episodes we sample and on stochasticity, does it mean that the V-values of the new policy could be worse than the V-values of the old policy?

",36447,,2444,,4/25/2020 23:16,4/26/2020 15:36,Monte Carlo epsilon-greedy Policy Iteration: monotonic improvement for all cases or for the expected value?,,1,0,,,,CC BY-SA 4.0 20632,1,20643,,4/25/2020 21:10,,0,236,"

I have been desperately trying to understand something for a couple of weeks. All these questions are actually one big question, so please help me. The time-codes and screenshots in my question refer to this great (IMHO) 3d explanation:

https://www.youtube.com/watch?v=UojVVG4PAG0&list=PLVZqlMpoM6kaJX_2lLKjEhWI0NlqHfqzp&index=2

Here is the case: say I have 2 inputs (let's call them X1 and X2) into my ANN. Say X1 = person's age and X2 = years of education.

1) First question: do I plug those numbers in as-is, or normalize them to 0-1 as a ""preprocessing"" step?

2) As I have 2 weights and 1 bias, I am actually going to plug my inputs into the formula X1*W1 + X2*W2 + bias = output. This is a 2d plane in a 3d space, if I am not mistaken (time-code 5:31):

Thus, when I plug in my variables, like in regression, I will get my output on the Z axis. So the second question is: am I right up to here?

----------------- From here come a couple of really important questions.

3) My output (before I plug it into the activation function) is just a simple number, IT IS NOT A PLANE and NOT A SURFACE, but a simple scalar, without any sign on it that it came from a 2d surface in a 3d space (though it does come from there). Thus, when I plug this number (which was the Z value in the previous step) into the activation function (say sigmoid), my number enters on the X axis, and we get some Y value as output. As I understand it, this was a totally 2d operation: it was a 2d sigmoid and not some kind of 3d sigmoidal surface.

So here is the question: if I am right, why do we see such an explanation in this video (and in a couple of other places)? (time-code 12:55):

4) Now let's say that I was right in the previous step, and as an output from the activation function I do get a simple number, not a 2d surface and not a 3d one. I just have some number, like I had at the very beginning of the ANN as an input (age, education, etc.). If I want to add another layer of neurons, this very number enters it as-is, not telling anyone the ""secret"" that it was created by some kind of sigmoid. In this next layer, this number undergoes transformations similar to what happened to age and education in the previous layer; it is going to be Xn in just the same scenario, sigmoid(Xn*Wn + Xm*Wm) = output, and in the end we will get, once again, just a number. If I am right, why do they say in the video (time-code 14:50) that when we add together two activation functions we get something non-linear? They show the result of such ""addition"" first as 2d (time-codes 14:50 and 14:58). So, here comes my question: how come they ""add"" two activation functions, if only a simple number reaches the second activation function, which, as said above, does not tell anyone the ""secret"" that it was created by some kind of sigmoid?

5) And then again, they show this addition of 3d surfaces (time-code 19:39). How is that possible? I mean, again, there should not be any addition of surfaces, because no surface passes to the next step, only a number. What am I missing?

",36453,,31416,,4/26/2020 10:15,4/26/2020 18:44,How are non-linear surfaces formed in the training of a neural network?,,2,0,,12/21/2021 15:08,,CC BY-SA 4.0 20633,2,,5546,4/25/2020 22:13,,2,,"

The everyday definition of convolution comes from the Latin convolutus meaning 'to roll together'. Hence the meaning twisted or complicated.

The mathematical definition comes from the same root, with the interpretation of taking a ""rolling average"".

Hence, in Machine Learning, a convolution is a sliding window across an input, creating one averaged output for each stride the window takes. That is, the values covered by the window are convoluted to create one convoluted output. This is best demonstrated with a diagram:

The convolution can be any function of the input, but some common ones are the max value, or the mean value.
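
As a toy illustration of this sliding-window idea (my own sketch, not taken from any particular library), here is a small numpy example that slides a 2x2 window over a 4x4 input and applies a mean or max to each patch:

import numpy as np

def sliding_window(x, size=2, stride=2, func=np.mean):
    # Slide a size x size window over the 2D input x and apply func to each patch.
    out_h = (x.shape[0] - size) // stride + 1
    out_w = (x.shape[1] - size) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[i * stride:i * stride + size, j * stride:j * stride + size]
            out[i, j] = func(patch)
    return out

x = np.arange(16).reshape(4, 4).astype(float)
print(sliding_window(x, func=np.mean))  # 2x2 output of window means
print(sliding_window(x, func=np.max))   # 2x2 output of window maxima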

A convolutional neural network (CNN) is a neural network where one or more of the layers employs a convolution as the function applied to the output of the previous layer.

If the window is greater than size 1x1, the output will necessarily be smaller than the input (unless the input is artificially 'padded' with zeros), and hence CNNs often have a distinctive 'funnel' shape:

",23503,,,,,4/25/2020 22:13,,,,1,,,,CC BY-SA 4.0 20637,2,,20632,4/26/2020 1:02,,1,,"

The output of any node is simply a scalar number: for a given input you get a specific scalar output. What is being shown are the surfaces that get generated as you VARY x1 and x2 over their input ranges. To answer your first question: it is always best to scale your inputs.

",33976,,,,,4/26/2020 1:02,,,,1,,,,CC BY-SA 4.0 20638,1,,,4/26/2020 1:16,,4,90,"

I was reading the paper Learning to Prune Filters in Convolutional Neural Networks, which is about pruning the CNN filters using reinforcement learning (policy gradient). The paper says that the input to the pruning agent (the agent is a convolutional neural network) is a 2D array of shape (N_l, M_l), where N_l is the number of filters and M_l = m x h x w (m, h and w are the filter dimensions), and that the output is an array of actions (each element is 0 (unnecessary filter) or 1 (necessary)). It also says that, in order to approximate the gradients, we have to sample the output M times (using the REINFORCE algorithm).

Since I have one input, how can I sample the output distribution multiple times (without updating the CNN parameters)?

If I'm missing something, please, tell me where I'm wrong

",36461,,2444,,4/26/2020 21:10,1/12/2023 17:05,How can I sample the output distribution multiple times when pruning the filters with reinforcement learning?,,1,0,,,,CC BY-SA 4.0 20639,1,,,4/26/2020 2:31,,2,101,"

What are the differences and similarities between PAC learning and classic parameter estimation theorems (e.g. consistency results when estimating parameters, e.g. with MLE)?

",32390,,2444,,4/26/2020 18:55,4/26/2020 18:55,What is the relationship between PAC learning and classic parameter estimation theorems?,,0,3,,,,CC BY-SA 4.0 20640,2,,20570,4/26/2020 3:42,,3,,"

(1) You might want to look into RND (Random Network Distillation), which provides a curiosity-based exploration bonus for the agent as an intrinsic reward. You can use the intrinsic reward to complement the sparse extrinsic reward returned by the environment.

The general idea is to have a randomly initialized, fixed target network which encodes the next state, and a predictor network which is trained to predict the output of the target network. The prediction error is used to ""quantify the novelty of new experience"". Strong novelty is a good indication for the agent that it may be worthwhile to explore more.

The authors of this (A) paper were able to achieve SOTA performance in Montezuma's Revenge, which is notorious for its sparse reward.

In appendix A.1, it is mentioned that ""An exploration bonus can be used with any RL algorithm by modifying the rewards used to train the model (i.e., $r_t = i_t + e_t$)."" It is also mentioned that the authors combined this exploration bonus with PPO (which also works in continuous action spaces). In A.2, pseudocode is provided.
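
To make the mechanism concrete, here is a heavily simplified numpy sketch of the RND idea: a fixed random target network, a trainable predictor, and an intrinsic reward equal to the prediction error, combined with the extrinsic reward as $r_t = i_t + e_t$. All sizes and names below are illustrative, not the paper's implementation:

import numpy as np

rng = np.random.default_rng(0)
obs_dim, feat_dim, lr = 8, 16, 1e-2

# Fixed, randomly initialized target network (never trained).
W_target = rng.normal(size=(obs_dim, feat_dim))
# Trainable predictor network (a single linear map, for simplicity).
W_pred = rng.normal(size=(obs_dim, feat_dim)) * 0.1

for step in range(100):
    next_obs = rng.normal(size=obs_dim)   # stand-in for the next state
    e_t = 0.0                             # stand-in for the (sparse) extrinsic reward
    error = next_obs @ W_pred - next_obs @ W_target
    i_t = float(np.mean(error ** 2))      # intrinsic reward = prediction error
    r_t = i_t + e_t                       # combined reward fed to the RL algorithm
    # One gradient step on the predictor (MSE), so frequently seen states
    # become less 'novel' over time.
    W_pred -= lr * 2.0 * np.outer(next_obs, error) / feat_dim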

I would also recommend this (B) paper (see section 3) if you're interested in exploring the available bonus-based exploration methods, which may help in tackling hard-exploration games with sparse rewards.

With regards to high stochasticity & variance, I found an interesting remark (on page 3, under Figure 2) in this (C) paper:

""our investigation of DDPG on different network configurations shows that for the Hopper environment, DDPG is quite unstable no matter the network architecture. This can be attributed partially to the high variance of DDPG itself, but also to the increased stochasticity of the Hopper task.""

The remark was made in the context where the authors were trying to ""tune DDPG to reproduce results from other works even when using their reported hyper-parameter settings"".

Have a look here for a different benchmark on how DDPG fares against other algorithms.

(2) From the information provided, I can't conclusively give you a quantitative assessment of DDPG's performance on your specific problem. However, I would recommend the following:

(a) I would encourage you to try different RL algorithms when faced with a difficult problem, so that you can benchmark them and find out which is more suitable. Also, in (A), the authors mentioned, ""PPO is a policy gradient method that we have found to require little tuning for good performance.""

(b) Try different sets of hyperparameters. There are many ways to tune them systematically but discussion about this will be out of scope for this question.

",25299,,25299,,4/26/2020 6:42,4/26/2020 6:42,,,,0,,,,CC BY-SA 4.0 20642,2,,16631,4/26/2020 7:12,,2,,"

[Answering my own question after 5 months of studying VAE models]

The point of the MMD-VAE, or InfoVAE, is not exactly to emphasise the visual quality of generated samples. It is to preserve a greater amount of information through the encoding process. The MMD formulation stems from introducing a scaled mutual information term into the Evidence Lower Bound (ELBO) loss of VAEs. Refer to the paper's appendices for the full derivation. This formulation improves the information content of the latent space and provides a more accurate approximation of the true posterior; these results have also been demonstrated empirically in the paper.
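
As a rough illustration (my own sketch, not the authors' implementation), the MMD term compares latent codes from the encoder with samples from the prior using a kernel; a minimal numpy version with a Gaussian kernel could look like this:

import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel between the rows of a and b.
    sq_dists = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd(z_posterior, z_prior, sigma=1.0):
    # Biased estimate of the squared Maximum Mean Discrepancy between two samples.
    k_pp = gaussian_kernel(z_posterior, z_posterior, sigma).mean()
    k_qq = gaussian_kernel(z_prior, z_prior, sigma).mean()
    k_pq = gaussian_kernel(z_posterior, z_prior, sigma).mean()
    return k_pp + k_qq - 2.0 * k_pq

# Latent codes produced by the encoder vs. samples from the N(0, I) prior.
z_post = np.random.randn(128, 2) * 0.5 + 1.0
z_pri = np.random.randn(128, 2)
print(mmd(z_post, z_pri))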

However, the InfoVAE uses a pixel-wise or element-wise reconstruction loss. An element-wise reconstruction loss is likely to lead to some degree of blurriness, irrespective of the prior loss term. On GitHub, several developers have implemented the InfoVAE model and shown their results. Here is a link to one such implementation whose results I could personally verify.

From my own experiments, I can say that, even though InfoVAE may give better reconstructions for some data, there is still considerable blurriness.

Perceptual similarity metrics may be learned or computed as a static function of the input image. With a learned perceptual loss, VAEs can produce much sharper images. PixelVAE and VAEGAN are well-known models with such implementations. For a static function of the image itself, reconstruction quality will depend on the nature of that function and such a model may not be very useful for all kinds of datasets. Using measures like SSIM, FSIM, we may still end up getting blurred images.

",31416,,31416,,4/26/2020 11:17,4/26/2020 11:17,,,,0,,,,CC BY-SA 4.0 20643,2,,20632,4/26/2020 8:15,,3,,"

Hi and welcome to the community. It's important to understand these basic concepts very clearly.

You have to first understand the basic unit of a neural network, a single node/neuron/perceptron. Let us forget all about Neural Networks for a bit, and talk about something far simpler.

Linear Regression

In the above figure, we clearly have one independent variable on the x-axis, and one dependent variable on the y-axis. The red line has an intercept of zero, and let's say a slope of 0.5. Therefore, $$ y = 0.5x + 0 $$

This, right here, is a single perceptron. You take a value of x, let's say 8, pass it through the node, and get a value as output, 4. Simple! But what is the model in this case? Is it the output? No. It's the set [0.5, 0] that represents the red line above. The outputs are simply points on that line.

A neural network model is always a set of values - a matrix or a tensor, if you will.

The plots in your question do not represent outputs. They represent the models. But now that you've possibly understood what a linear model with one independent variable looks like, I hope you can appreciate that having 2 independent variables will give us a plane in 3-D space. This is called multiple regression.

This forms the first layer of a neural network with linear activation functions. Assuming $ x_{i} $ and $ x_{j} $ as the two independent variables, the first layer computes $$ y_{1} = w_{1}x_{i} + w_{2}x_{j} + b_{1} $$

Note that while $ y_{1} $ is the output of the first layer, the set $ [w_{1}, w_{2}, b_{1}] $ is the model of the first layer and can be plotted as a plane in 3D space. The second layer, again a linear layer, computes $$ y_{2} = w_{3}y_{1} + b_{2} $$

Substitute $ y_{1} $ in above and what do you get? Another linear model!

$$ y_{2} = w_{3}(w_{1}x_{i} + w_{2}x_{j} + b_{1}) + b_{2} $$

Adding layers to a neural network is only compounding of functions.

Compounding linear functions on linear functions results in linear functions.

Well, then, what was the point of adding a layer? Seems useless, right?

Yes, adding linear layers to a neural network is absolutely useless. But what happens if the activation functions of each perceptron, each layer was not linear? For example the sigmoid or the most widely used today, ReLU.

Compounding non-linear functions on non-linear functions can increase non-linearity.

The ReLU looks like this $$ y = max(0, x) $$

This is definitely non-linear, but not as non-linear as, let's say, the sine wave. But can we approximate the sine wave by somehow ""compounding"" multiple, say $N$, ReLUs?

$$ \sin(x) \approx a + \sum_{i=1}^{N}b_{i}\max(0, c_{i} + d_{i}x)$$

And here the variables $ a, b_{i}, c_{i}, d_{i} $ are the trainable ""weights"" in neural network terminology.
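
To see this concretely, here is a small numpy sketch (my own illustration, not part of the video) that fixes random $c_{i}, d_{i}$ and fits $a$ and $b_{i}$ by least squares, showing that a sum of ReLUs can approximate $\sin(x)$ fairly well:

import numpy as np

rng = np.random.default_rng(0)
N = 50                                   # number of ReLU units
x = np.linspace(-np.pi, np.pi, 200)
y = np.sin(x)

# Randomly fixed 'inner' parameters c_i, d_i; we fit a and b_i by least squares.
c = rng.uniform(-np.pi, np.pi, size=N)
d = rng.uniform(-2.0, 2.0, size=N)

# Design matrix: a constant column plus one ReLU feature max(0, c_i + d_i * x) per unit.
features = np.maximum(0.0, c[None, :] + d[None, :] * x[:, None])
design = np.hstack([np.ones((x.size, 1)), features])

coef, *_ = np.linalg.lstsq(design, y, rcond=None)
approx = design @ coef
print('max abs error:', np.max(np.abs(approx - y)))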

If you remember the structure of the perceptron, the first operation is often denoted as a summation over all the inputs. This is how non-linearity is approximated in Neural Networks. Now one may ask: so, summing over non-linear functions can approximate any function, right? Then a single hidden layer between the input layer and the output layer (one that sums over all the outputs of the hidden layer units) should be enough? Why do we often see neural network architectures with so many hidden layers? This is one of the most important yet often overlooked aspects of neural networks and deep learning.

To quote, Dr. Ian J Goodfellow, one of the brightest minds in AI,

A feedforward network with a single (hidden) layer is sufficient to represent any function, but the layer may be infeasibly large and may fail to learn and generalize correctly.

So, what is the ideal number of hidden layers? There's no magic number! ;-)

For more mathematical rigor on how neural networks approximate non-linear functions, one should learn about the Universal Approximation Theorem. Beginners should check this out.

But why should we care for increased non-linearity? For that I'd direct you to this.

Note that all of the above discussion is with respect to regression. For classification, the learned non-linear surface is regarded as a decision boundary, and points above and below the surface are classified into different classes. However, an alternative, and arguably better, way to look at this is that, given a dataset that is not linearly separable, a neural network first transforms the input dataset into a linearly separable form and then uses a linear decision boundary on it. For more on this, definitely check out Christopher Olah's amazing blog.

Finally, yes all independent variables must be normalized before training a neural network. This is to equalize the scale of different variables. More info here.

",31416,,31416,,4/26/2020 18:44,4/26/2020 18:44,,,,1,,,,CC BY-SA 4.0 20644,2,,17791,4/26/2020 8:58,,1,,"

The authors of your cited paper use the term graph-based semi-supervised learning (G-SSL) to refer to semi-supervised learning techniques which take graph structured data as their input.

Given that their main example, the MNIST dataset, is not graph structured, they detail a method for converting the raw Euclidean data $X$ into said form (represented by its adjacency matrix $S$), and then compute the Laplacian $L$ of this graph:

We consider the graph-based semi-supervised learning (G-SSL) problem. The input include labeled data $X_{l} ∈ \mathbb{R}^{n_{l}×d}$ and unlabeled data $X_{u} \in \mathbb{R}^{n_{u}×d}$, we define the whole features $X = [X_{l}; X_{u}]$. Denoting the labels of $X_{l}$ as $y_{l}$, our goal is to predict the labels of test data $y_{u}$. The learner applies algorithm $A$ to predict $y_{u}$ from available data $\{X_{l}, y_{l}, X_{u}\}$. Here we restrict $A$ to label propagation method, where we first generate a graph with adjacency matrix $S$ from Gaussian kernel: $S_{ij} = \exp(−γ\lVert x_i − x_j\rVert ^{2})$, where the subscripts $x_{i(j)}$ represents the $i(j)$-th row of $X$. Then the graph Laplacian is calculated by $L = D − S$, where $D = \text{diag}\{\sum_{k=1}^{n} S_{ik}\}$ is the degree matrix.
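
Following the formulas in the quoted passage, a minimal numpy sketch of this graph construction (on made-up data) could look like this:

import numpy as np

gamma = 0.5
X = np.random.randn(6, 4)                      # 6 data points with 4 features each

# Adjacency from a Gaussian kernel: S_ij = exp(-gamma * ||x_i - x_j||^2)
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
S = np.exp(-gamma * sq_dists)

# Degree matrix and (unnormalized) graph Laplacian: L = D - S
D = np.diag(S.sum(axis=1))
L = D - S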

This is consistent with the terminology as used in other literature:

Semi-supervised learning for node-level classification. Given a single network with partial nodes being labeled and others remaining unlabeled, ConvGNNs can learn a robust model that effectively identifies the class labels for the unlabeled nodes [22]. To this end, an end-to-end framework can be built by stacking a couple of graph convolutional layers followed by a softmax layer for multi-class classification.

",23503,,2444,,4/28/2020 1:23,4/28/2020 1:23,,,,0,,,,CC BY-SA 4.0 20645,1,,,4/26/2020 9:37,,4,92,"

I was reading about gradient temporal difference learning version 2 (GTD2) in Rich Sutton's book, page 246. At some point, he expresses the whole expectation using a single sample from the environment. But how can a single sample represent the whole expectation?

I marked this point in this image.

",28048,,28048,,4/27/2020 9:42,6/7/2020 23:07,How can a single sample represent the expectation in gradient temporal difference learning?,,1,1,,,,CC BY-SA 4.0 20646,2,,11285,4/26/2020 9:42,,27,,"

Embedding vs Latent Space

Due to Machine Learning's recent and rapid renaissance, and the fact that it draws from many distinct areas of mathematics, statistics, and computer science, it often has a number of different terms for the same or similar concepts.

"Latent space" and "embedding" both refer to an (often lower-dimensional) representation of high-dimensional data:

  • Latent space refers specifically to the space from which the low-dimensional representation is drawn.
  • Embedding refers to the way the low-dimensional data is mapped to ("embedded in") the original higher dimensional space.

For example, in this "Swiss roll" data, the 3d data on the left is sensibly modelled as a 2d manifold 'embedded' in 3d space. The function mapping the 'latent' 2d data to its 3d representation is the embedding, and the underlying 2d space itself is the latent space (or embedded space):
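
For a hands-on version of this example, scikit-learn can generate the Swiss roll and recover a 2d latent representation with a manifold learning method; Isomap is used below simply as one possible choice:

from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# 3d points lying (approximately) on a 2d manifold embedded in 3d space.
X, t = make_swiss_roll(n_samples=1000, noise=0.05, random_state=0)

# Recover a 2d 'latent' representation of the same data.
Z = Isomap(n_components=2, n_neighbors=10).fit_transform(X)
print(X.shape, Z.shape)   # (1000, 3) (1000, 2)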

Synonyms

Depending on the specific impression you wish to give, "embedding" often goes by different terms:

  • dimensionality reduction: combating the "curse of dimensionality"
  • feature extraction, feature projection, feature embedding, feature learning, representation learning: extracting 'meaningful' features from raw data
  • embedding, manifold learning, latent feature representation: understanding the underlying topology of the data

However this is not a hard-and-fast rule, and they are often completely interchangeable.

",23503,,23503,,12/31/2020 17:43,12/31/2020 17:43,,,,0,,,,CC BY-SA 4.0 20649,1,,,4/26/2020 11:03,,1,121,"

My goal is to write a program that automatically selects a routing out of multiple proposed options.

The data consists of the multiple proposed options, each with the attributes time, cost and whether there is a transhipment, and also which of the options was selected.

Example of data:

My idea at the moment is that I have to apply some type of inference to learn which attribute (time, cost, transhipment) has the highest impact on how to choose the best option. But I don't know exactly where to start with this.

Is there a ""best"" ML algorithm for this? Or how should I approach this?

The dataset currently consists of 1000 samples, in case this is important.

Thanks in advance for your responses.

",36473,,,,,4/26/2020 11:03,Algorithm which learns to select from proposed options,,0,3,,,,CC BY-SA 4.0 20651,1,,,4/26/2020 11:25,,1,137,"

Why is td_loss calculated from td_targets against q_values?

I am lost because:

  1. q_values is just the probability of action. It does not have a reward and discount.
  2. td_targets does have rewards + discounts * next_q_values. Furthermore, next_q_values is for the next state.

How can taking the difference between td_targets and q_values (or applying a Huber or MSE loss to them) give a loss that works?

td_error = valid_mask * (td_targets - q_values)
td_loss = valid_mask * td_errors_loss_fn(td_targets, q_values)


",36475,,2444,,5/10/2020 13:58,9/27/2022 19:04,How does the DQN loss from td_targets against q_values make sense?,,1,0,,,,CC BY-SA 4.0 20652,2,,20651,4/26/2020 12:15,,-1,,"

First of all, DQN is off-policy learning. That means you are following the behavior policy (an epsilon-greedy policy) but still learning about the optimal, or target, policy (the greedy policy). The td_target in DQN is the estimate of the current state's optimal action-value function, independent of the policy we are following (since we are picking the next state's action-value from the target policy), and the q_values (as you refer to them) are what you get following the behavior policy. While using this kind of update, you are improving both the behavior policy and the target policy.

",28048,,28048,,4/26/2020 12:27,4/26/2020 12:27,,,,5,,,,CC BY-SA 4.0 20653,2,,20631,4/26/2020 15:00,,1,,"

I think this equation answer your question: $$ q_{\pi^{i}}(s,\pi^{i+1}(s)) = \mathbf{E}[q_{\pi^{i}}(s,\pi^{i+1}(s))] = \sum_{a \in A}\pi^{i+1}(a|s)q_{\pi^{i}}(s,a)$$

The value of Q while taking the action from policy $\pi^{i+1}$ and thereafter following the policy $\pi^{i}$ is equal to the expected Q-value while taking the action from policy $\pi^{i+1}$ and thereafter following the policy $\pi^{i}$. And for the second part of your question, the answer is:

$$ V_{\pi^{i}}(s) = q_{\pi^{i}}(s,\pi^{i}(s))$$

The state value function following the policy $\pi^{i}$ is the same as the action-value function while taking the action from policy $\pi^{i}$ and thereafter following the policy $\pi^{i}$.
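
As a small numeric sanity check of the first identity (with made-up numbers; the Q-values and policy below are illustrative only):

import numpy as np

q = np.array([1.0, 2.0, 0.5])        # q_{pi^i}(s, a) for three actions
pi_next = np.array([0.1, 0.8, 0.1])  # pi^{i+1}(a|s), e.g. an epsilon-greedy policy on q

# Expected value of q under the new policy = sum_a pi^{i+1}(a|s) * q_{pi^i}(s, a)
expected_q = np.dot(pi_next, q)
print(expected_q)   # 1.75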

",28048,,28048,,4/26/2020 15:36,4/26/2020 15:36,,,,0,,,,CC BY-SA 4.0 20654,1,,,4/26/2020 15:26,,1,90,"

Many neural net architectures for computer vision tasks use several convolutional layers and then several fully-connected (or dense) layers. While the reasons for using convolutional layers are clear to me, I don't understand why the dense layers are needed. Can't high accuracy be achieved with only convolutional layers?

",36083,,2444,,4/26/2020 21:02,4/26/2020 21:02,Why are denser layers needed in computer vision neural nets?,,1,0,,,,CC BY-SA 4.0 20656,1,,,4/26/2020 16:22,,3,76,"

Generally, if one googles ""quantum machine learning"" or anything similar, the general gist of the results is that quantum computing will greatly speed up the learning process of our ""classical"" machine learning algorithms. However, ""speed up"" itself does not seem very appealing to me, as the current leaps made in AI/ML are generally due to novel architectures or methods, not faster training.

Are there any quantum machine learning methods in development that are fundamentally different from ""classical"" methods? By this I mean that these methods are (almost*) impossible to perform on ""classical"" computers.

*except for simulation of the quantum computer of course

",5344,,,,,1/1/2021 5:57,"Are there any novel quantum machine learning algorithms that are fundamentally different from ""classical"" ones?",,1,2,,,,CC BY-SA 4.0 20657,2,,20604,4/26/2020 16:35,,1,,"

However, the chain rule tells us how the first variable influences the second variable, and so on. Following that logic, we should only update the weights of the first hidden layer.

I don't see how the second statement follows from the first.

Each weight $w_i$ (not just the ones in the first layer) affects the loss $\mathcal{L}$ according to the partial derivative of $\mathcal{L}$ with respect to $w_i$, i.e. $\frac{\partial \mathcal{L}}{\partial w_i}$. Intuitively, the partial derivative with respect to a parameter tells you how the function is changing with respect to that parameter.

My thought is, that if we backtrack the influence of the first variable but also change the values of the subsequent weights (of the subsequent hidden layer), there is no need to calculate the influence of the first weights in the first place.

I am not sure I understand your reasoning, but, typically, you update the parameters only after having computed all the partial derivatives. In other words, first, you compute all partial derivatives, i.e. the gradient with back-propagation (a fancy name to denote the application of the chain rule), then you update the parameters.

Why do you do this? In this case, the loss function is a multi-variable function, so it depends on multiple variables. The gradient $\nabla \mathcal{L} = \left[ \frac{\partial \mathcal{L}}{\partial w_1}, \dots, \frac{\partial \mathcal{L}}{\partial w_N} \right]$ represents the direction (note that the gradient is a vector and vectors have a direction) towards which your function is increasing or decreasing (depending on the sign of the gradient).

",2444,,,,,4/26/2020 16:35,,,,0,,,,CC BY-SA 4.0 20658,1,23239,,4/26/2020 17:43,,1,309,"

So, I have created a Snake game using Pygame and Python. Then I wanted to create an AI with a genetic algorithm and a simple NN to play it. Seems pretty fun, but things aren't working out.

This is my genetic algorithm:

def calculate_fitness(population):
    """"""Calculate the fitness value for the entire population of the generation.""""""
    # First we create all_fit, an empty array, at the start. Then we proceed to start the chromosome x and we will
    # calculate his fit_value. Then we will insert, inside the all_fit array, all the fit_values for each chromosome
    # of the population and return the array
    all_fit = []
    for i in range(len(population)):
        fit_value = Fitness().fitness(population[i])
        all_fit.append(fit_value)
    return all_fit


def select_best_individuals(population, fitness):
    """"""Select X number of best parents based on their fitness score.""""""
    # Create an empty array of the size of number_parents_crossover and the shape of the weights
    # after that we need to create an array with x number of the best parents, where x is NUMBER_PARENTS_CROSSOVER
    # inside config file. Then we search for the fittest parents inside the fitness array created by the
    # calculate_fitness function. Numpy.where return (array([], dtype=int64),) that satisfy the query, so we
    # take only the first element of the array and then it's value (the index inside fitness array). After we have
    # the index of the element we just need to take all the weights of that chromosome and insert them as a new
    # parent. Finally we change the fitness value of the fitness value of that chromosome inside the fitness
    # array in order to have all different parents and not only the fittest
    parents = numpy.empty((config.NUMBER_PARENTS_CROSSOVER, population.shape[1]))
    for parent_num in range(config.NUMBER_PARENTS_CROSSOVER):
        index_fittest = numpy.where(fitness == numpy.max(fitness))
        index_fittest = index_fittest[0][0]
        parents[parent_num, :] = population[index_fittest, :]
        fitness[index_fittest] = -99999
    return parents


def crossover(parents, offspring_size):
    """"""Create a crossover of the best parents.""""""
    # First we start by creating and empty array with the size equal to offspring_size we want. The type of the
    # array is [ [Index, Weights[]] ]. If the parents size is only 1 than we can't make crossover and we return
    # the parent itself, otherwise we select 2 random parents and then mix their weights based on a probability
    offspring = numpy.empty(offspring_size)
    if parents.shape[0] == 1:
        offspring = parents
    else:
        for offspring_index in range(offspring_size[0]):
            while True:
                index_parent_1 = random.randint(0, parents.shape[0] - 1)
                index_parent_2 = random.randint(0, parents.shape[0] - 1)
                if index_parent_1 != index_parent_2:
                    for weight_index in range(offspring_size[1]):
                        if random.uniform(0, 1) < 0.5:
                            offspring[offspring_index, weight_index] = parents[index_parent_1, weight_index]
                        else:
                            offspring[offspring_index, weight_index] = parents[index_parent_2, weight_index]
                    break
    return offspring


def mutation(offspring_crossover):
    """"""Mutating the offsprings generated from crossover to maintain variation in the population.""""""
    # We cycle though the offspring_crossover population and we change x random weights, where x is a parameter
    # inside the config file. We select a random index, generate a random value between -1 and 1 and then
    # we sum the original weight with the random_value, so that we have a variation inside the population
    for offspring_index in range(offspring_crossover.shape[0]):
        for _ in range(offspring_crossover.shape[1]):
            if random.uniform(0, 1) == config.MUTATION_PERCENTAGE:
                index = random.randint(0, offspring_crossover.shape[1] - 1)
                random_value = numpy.random.choice(numpy.arange(-1, 1, step=0.001), size=1, replace=False)
                offspring_crossover[offspring_index, index] = offspring_crossover[offspring_index, index] + random_value
    return offspring_crossover

My neural network is formed using 7 inputs:

is_left_blocked, is_front_blocked, is_right_blocked, apple_direction_vector_normalized_x,
snake_direction_vector_normalized_x, apple_direction_vector_normalized_y,snake_direction_vector_normalized_y

Basically, whether you can go left, forward or right, plus the direction to the apple and the snake's direction. Then I have a hidden layer with 8 neurons and finally 3 outputs that indicate left, keep going, or right.

The neural network's forward() is calculated like this:

self.get_weights_from_encoded()
Z1 = numpy.matmul(self.__W1, self.__input_values.T)
A1 = numpy.tanh(Z1)
Z2 = numpy.matmul(self.__W2, A1)
A2 = self.sigmoid(Z2)
A2 = self.softmax(A2)
return A2

where self.__W1 and self.__W2 are the weights from the input to the hidden layer and from the hidden layer to the output, respectively. softmax(A2) returns the index of the [1,3] matrix where the value is the biggest, and then I use that index to indicate the direction that my neural network chooses.

This is the config file that contains the parameters:

# GENETIC ALGORITHM
NUMBER_OF_POPULATION = 500
NUMBER_OF_GENERATION = 200
NUMBER_PARENTS_CROSSOVER = 50
MUTATION_PERCENTAGE = 0.2

# NEURAL NETWORK
INPUT = 7
NEURONS_HIDDEN_1 = 8
OUTPUT = 3
NUMBER_WEIGHTS = INPUT * NEURONS_HIDDEN_1 + NEURONS_HIDDEN_1 * OUTPUT

And this is the main:

for generation in range(config.NUMBER_OF_GENERATION):

    snakes_fitness = genetic_algorithm.calculate_fitness(population)

    # Selecting the best parents in the population.
    parents = genetic_algorithm.select_best_individuals(population, snakes_fitness)

    # Generating next generation using crossover.
    offspring_crossover = genetic_algorithm.crossover(parents,
                                                      offspring_size=(pop_size[0] - parents.shape[0], config.NUMBER_WEIGHTS))

    # Adding some variations to the offspring using mutation.
    offspring_mutation = genetic_algorithm.mutation(offspring_crossover)

    # Creating the new population based on the parents and offspring.
    population[0:parents.shape[0], :] = parents
    population[parents.shape[0]:, :] = offspring_mutation

I have 2 problems:

1) I don't see an improvement over the new generations

2) I'm actually running the game inside the for loop, but waiting for all the snakes of a generation to die before repeating with the new one is really time consuming. Isn't there a way to launch all, or at least more than 1, instance of the game and keep filling the array with the results?

This is Fitness().fitness(population[i])

def fitness(self, weights):
    game_manager = GameManager(weights)
    self.__score = game_manager.play_game()
    return self.__score

This is where it's called inside the for loop

def calculate_fitness(population):
    """"""Calculate the fitness value for the entire population of the generation.""""""
    # First we create all_fit, an empty array, at the start. Then we proceed to start the chromosome x and we will
    # calculate his fit_value. Then we will insert, inside the all_fit array, all the fit_values for each chromosome
    # of the population and return the array
    all_fit = []
    for i in range(len(population)):
        fit_value = Fitness().fitness(population[i])
        all_fit.append(fit_value)
    return all_fit

This is the function that launches the game (GameManager(weights)) and returns the score of the snake.

This is my first time doing AI, so this code could be a mess. Don't worry about pointing out what I did wrong, just please don't say ""It's all wrong"", because I won't be able to learn otherwise.

",36482,,,,,8/25/2020 4:23,Genetic Algorithm Python Snake not improving,,1,0,,,,CC BY-SA 4.0 20660,2,,20654,4/26/2020 18:31,,1,,"

Convolutional layers are added in order to extract features from the image (like edges, corners, textures). After extracting those features, you feed them to a fully connected neural network to get the prediction.

Let's take an example: suppose you want to classify a cat image, but you decide to do this by only using convolutional layers. So, you feed the image to a convolutional layer. After passing through some layers, it has extracted the key features from the cat image, and at the end of the last layer you add a single neuron (since we have a single class to predict). Now it's ready to classify it as a cat. But, unfortunately, it misclassifies the image. Why? Now let us answer that question.

Since convolutional layers are not fully connected layers, the neuron added at the last layer is only connected to a handful of neurons from the previous layer. So it misses key features (encapsulated by other neurons it is not connected to) that are needed to detect this image as a cat. In order to get the prediction right, you have to add a dense layer (a fully connected layer) at the end of the convolutional layers so that it can see all of the extracted features.
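
As an illustration of this typical structure, here is a minimal Keras sketch (arbitrary layer sizes, not tuned for any particular task) with a convolutional part followed by a fully connected part:

from tensorflow.keras import layers, models

model = models.Sequential([
    # Convolutional part: extracts local features (edges, corners, textures).
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    # Fully connected part: combines all extracted features for the prediction.
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(1, activation='sigmoid'),   # single output, e.g. cat vs. not-cat
])
model.summary()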

If this does not satisfy you, please ask the question more specifically and also consider editing it, because a 'linear layer' is not necessarily the same as a 'fully connected layer'.

",28048,,28048,,4/26/2020 19:25,4/26/2020 19:25,,,,0,,,,CC BY-SA 4.0 20661,1,,,4/26/2020 18:41,,2,106,"

I have been trying to figure out what the Fourier transformed image represents. I am aware of the Fourier transform in general, but I can't explain to myself the image it forms after the transformation.

In the given image, what do the outlined white lines mean?

",36484,,2444,,12/17/2021 14:39,12/17/2021 14:39,What does the Fourier transformed image mean?,,1,0,,,,CC BY-SA 4.0 20662,2,,20661,4/26/2020 19:02,,1,,"

Have a look at this explanation of the two dimensional Fourier Transform applied to an image.

The way I read it is that it somewhat mirrors the diagonal structure of the 'X', as that is where most of the 'image energy' is, so those are the lines you highlighted. Because the image is fairly uniform in colour, most of the components are low-frequency (i.e. close to the centre of the 2DFT image).
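
If you want to reproduce this kind of picture yourself, numpy's 2D FFT is enough; the sketch below uses a random array as a stand-in for a grayscale image:

import numpy as np

img = np.random.rand(256, 256)          # stand-in for a 2D grayscale image array

F = np.fft.fft2(img)                    # 2D Fourier transform
F_shifted = np.fft.fftshift(F)          # move the zero-frequency component to the centre
magnitude = np.log1p(np.abs(F_shifted)) # log scale makes the structure visible
print(magnitude.shape)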

Disclaimer: I'm not really that familiar with a 2D version either, as I mainly have experience with using FFTs in speech processing.

",2193,,,,,4/26/2020 19:02,,,,0,,,,CC BY-SA 4.0 20663,1,20679,,4/26/2020 20:08,,1,194,"

From the perspective of the type of AI Agents, I would like to discuss Prim's Minimum Spanning Tree algorithm and Dijkstra's Algorithm.

Both are model-based agents and both are "greedy algorithms".

Both have their own memory to store the history of vertices and their path distances. Prim's is more greedy than Dijkstra's algorithm, whereas Dijkstra's algorithm is more efficient than Prim's.

Can we say that Dijkstra's algorithm is a utility-based agent, whereas Prim's is a goal-based agent, with the justification that Prim's is more goal-oriented as compared to finding the optimum (shortest) path?

",34306,,2444,,11/19/2020 12:51,11/19/2020 12:51,What types of AI agents are Djikstra's algorithm and Prim's Minimum Spanning Tree algorithm?,,1,1,,,,CC BY-SA 4.0 20664,1,20665,,4/26/2020 21:42,,2,204,"

I am looking at this formula, which breaks down the gradient of $P(\tau \mid \theta)$. The first part is clear, as is the derivative of $\log(x)$, but I do not see how the first formula is rearranged into the second.

",36486,,2444,,4/26/2020 23:45,4/26/2020 23:55,How is the log-derivative trick of a trajectory derived?,,1,0,,,,CC BY-SA 4.0 20665,2,,20664,4/26/2020 23:18,,2,,"

The identity $$\nabla_{\theta} P(\tau \mid \theta) = P(\tau \mid \theta) \nabla_{\theta} \log P(\tau \mid \theta)\tag{1}\label{1},$$

which can also be written as

\begin{align} \nabla_{\theta} \log P(\tau \mid \theta) &= \frac{\nabla_{\theta} P(\tau \mid \theta)}{P(\tau \mid \theta)}\\ &=\frac{1}{P(\tau \mid \theta)} \nabla_{\theta} P(\tau \mid \theta) \end{align}

directly comes from the general rule to derive the logarithm of a function and the chain rule \begin{align} \frac{d \log f(x)}{d x} &= \frac{1}{f(x)} \frac{d f}{dx}. \end{align} Note that $\log f(x)$ is a composite function and that's why we apply the chain rule and that the derivative of $\log x = \frac{1}{x}$, as your text says.

People shouldn't call this a trick. There's no trick here. It's just basic calculus.

Why do you need identity \ref{1}? Because that identity tells you that the derivative of the probability of the trajectory given the parameter $\theta$, with respect to $\theta$, is $P(\tau \mid \theta)$ times the gradient of the logarithm of that same probability. How is this useful? Because the logarithm will turn your product into a sum (and the derivative of a sum is the sum of the derivatives of its elements). Essentially, the identity \ref{1} will help you to compute the gradient in an easier way (at least, conceptually).
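
To make this concrete, assuming the usual factorization of the trajectory probability into the initial state distribution, the policy and the environment dynamics,

$$P(\tau \mid \theta) = \rho_0(s_0) \prod_{t=0}^{T-1} \pi_{\theta}(a_t \mid s_t) \, p(s_{t+1} \mid s_t, a_t),$$

taking the logarithm gives

$$\log P(\tau \mid \theta) = \log \rho_0(s_0) + \sum_{t=0}^{T-1} \left[ \log \pi_{\theta}(a_t \mid s_t) + \log p(s_{t+1} \mid s_t, a_t) \right],$$

and, since $\rho_0$ and $p$ do not depend on $\theta$, we get $\nabla_{\theta} \log P(\tau \mid \theta) = \sum_{t=0}^{T-1} \nabla_{\theta} \log \pi_{\theta}(a_t \mid s_t)$.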

",2444,,2444,,4/26/2020 23:55,4/26/2020 23:55,,,,7,,,,CC BY-SA 4.0 20667,1,20671,,4/27/2020 2:23,,1,643,"

SARSA is on-policy, while n-step SARSA is off-policy. But when n = 1, is it like an off-policy version of SARSA? Are there any similarities and differences between 1-step SARSA and SARSA?

",23707,,2444,,4/27/2020 2:40,4/27/2020 12:47,What are the differences between 1-step SARSA and SARSA?,,1,0,,,,CC BY-SA 4.0 20668,2,,18701,4/27/2020 4:50,,1,,"

Not really something that slows it down, but currently Mauricio Santillana at Harvard is working on modeling the pandemic and has shared some of his approaches. He explained that they have used Google search trends to try to predict the number of actual cases (there is a delay between people being sick and getting tested). Looking for search terms like ""how to use an inhaler?"" can reveal areas affected by the outbreak and is useful for modeling.

Paper can be found here

Towards Data Science has several articles listing potential AI applications for helping in the fight against COVID

Including:

  1. Identify who is most at risk
  2. Diagnose patients
  3. Develop drugs faster
  4. Predict the spread of the disease
  5. Understand viruses better
  6. Map where viruses come from
  7. Predict the next pandemic

How to fight COVID-19 with machine learning

Though there are probably many creative applications that AI can help with, I think most of them are modeling-related at the moment.

",36131,,,,,4/27/2020 4:50,,,,1,,,,CC BY-SA 4.0 20669,2,,20564,4/27/2020 6:22,,0,,"

I think there is no special method for training a neural network on large datasets, but I can add some suggestions for you:

1) Use a convolutional neural network for this dataset.

2) You can use the Huber loss instead of the squared loss and see what happens.

3) Check whether you have enough small-magnitude training data.

Also, please define your problem in more detail (such as what those images represent, what you want to predict, what the features are, etc.).

",28048,,,,,4/27/2020 6:22,,,,0,,,,CC BY-SA 4.0 20670,1,,,4/27/2020 6:44,,1,31,"

The goal of our project is to identify the quality of a grain and we have two values, A and B.

A can take the values low (L), medium (M) and high (H), and B can also take the values low, medium and high (L, M, H).
Specifically, the ranges for A are: 5-10 (low), 10-15 (medium), 15-20 (high),
and the ranges for B are: 50-75 (low), 75-90 (medium), 90-110 (high).

The outputs for these are bad (B), average (A) and good (G).

How do we determine the membership functions and values for this?

We want to write Python code for the fuzzy system, but we are beginners and have no idea how to go about this. Any help would be appreciated.

",36495,,36495,,4/27/2020 13:23,4/27/2020 13:23,How do we determine the membership functions and values for this problem?,,0,0,,,,CC BY-SA 4.0 20671,2,,20667,4/27/2020 7:14,,0,,"

N-step SARSA can be both off-policy and on-policy. I think you already know n-step on-policy SARSA, so I will just tell you how n-step SARSA can be off-policy.

Off-policy n-step SARSA: now you have two policies: one is the target policy, $\pi$ (let's say it is a greedy policy), and the other is the behavior policy, $b$ (the one you are actually following). Since this is off-policy, you use importance sampling. So the update rule is like this:

$$Q_{t+n}(S_{t},A_{t}) = Q_{t+n-1}(S_{t},A_{t}) + \alpha \rho_{t+1:t+n-1}[G_{t:t+n} - Q_{t+n-1}(S_{t},A_{t})],$$

where

$$\rho_{t:h} = \prod_{k=t}^{h} \frac{\pi(A_{k}|S_{k})}{b(A_{k}|S_{k})}$$

You are following the behavior policy $b$, but shifting the Q values towards the target policy, $\pi$.

Off-policy one-step SARSA: you can think of Q-learning as one-step off-policy SARSA.

",28048,,28048,,4/27/2020 12:47,4/27/2020 12:47,,,,4,,,,CC BY-SA 4.0 20672,2,,20597,4/27/2020 7:44,,1,,"

Not familiar with RBF but when you have a small data set image augmentation can help. You can do this easily with the Keras ImageDataGenerator, documentation is here. Alternatively you can create image augmentation yourself using image processing models like PIL or CV2.

",33976,,,,,4/27/2020 7:44,,,,0,,,,CC BY-SA 4.0 20673,1,,,4/27/2020 8:02,,1,43,"

I am facing the following supervised learning problem:

An object is fully characterized by its position in $R^n$. There are $m$ objects. They are fully observable (i.e. their positions are always known).

At each time step $t$, exactly one of these objects is activated. Activation is fully observable, i.e. the index $a_t$ ($a_t \in [1,m]$) of the object activated at time $t$ is known.

We know that, under the hood, activation works this way: there is a priority function $f$ ($f: R^n \to R$ ), which computes, for each time step, the priority score of each object. The object for which the priority score was the highest is activated.

The goal is to find (the approximation of) one of the possible priority functions that would match a given data-set. A data-set is of size $(m*n+1)*t$ ($m$ positions of dimension $n$, plus the index of the activated object, over $t$ time steps).

As an example, if it turns out there is a hidden fixed beacon, and at each time step $t$ the object the closest to the beacon is activated, then a possible function would be $f(o_{it})=1/d_{it}$, where $d_{it}$ is the distance between the beacon and the object $o_i$ at time $t$.

(If several objects have the same highest priority score, then only one of them is activated, selected randomly).

The function found by the algorithm may be parametric and encoded by a neural network, if this is applicable.

Is there a method for finding one such function ?

",36496,,36496,,4/27/2020 20:32,4/27/2020 20:32,How can I approximate a function that determines the priority of objects?,,0,0,,,,CC BY-SA 4.0 20674,1,,,4/27/2020 9:58,,1,28,"

I use an off-the-shelf convolutional neural network, where, at the end of the convolutional part, the depth of the last convolutional layer is expanded and then its 2D average is computed (such that for a tensor of, say, 8x8x512, you get its 2D average, which is of size 1x512). It is a commonly used operation in deep networks, called Global Average Pooling 2D.

The only tensor that is input to the fully-connected part is that 2D averaged 1x512 tensor, i.e., a tensor that should not preserve the 2D information. Yet, my fully-connected last layer neurons, which have been trained to predict the 2D location of objects, work very well.

I have thought about it for a long time and couldn't find any convincing explanation of how the network preserves the 2D information in the averaged tensor.

Any idea?

",36499,,,,,4/27/2020 9:58,How come a detection works after global average pooling 2D?,,0,2,,,,CC BY-SA 4.0 20675,1,,,4/27/2020 11:02,,1,230,"

I am using the following architecture:

3*(fully connected -> batch normalization -> relu -> dropout) -> fully connected

Should I add the batch normalization -> relu -> dropout part after the last fully connected layer as well (the output is positive anyway, so the ReLU wouldn't hurt, I suppose)?

",36083,,36083,,4/27/2020 11:31,10/17/2022 13:03,Should batch-normalization/dropout/activation-function layers be used after the last fully connected layer?,,4,0,,,,CC BY-SA 4.0 20676,1,20737,,4/27/2020 11:03,,0,919,"

I wonder what happens to the 'channels' dimension (usually 3 for RGB images) after the first convolution layer in CNNs?

In books and other sources, it is always said that the depth of the output from convolutional layers is the number of kernels (filters) in that layer.

But, if the input image has 3 channels and we convolve each of them with $K$ kernels, shouldn't the depth of the output be $K * 3$? Are they somehow 'averaged' or in some other way combined with each other?

",22659,,2444,,4/27/2020 12:19,5/28/2020 17:01,What happens to the channels after the convolution layer?,,1,3,,,,CC BY-SA 4.0 20677,2,,20638,4/27/2020 11:09,,0,,"

I'm not sure what you mean by one input. The input to the pruning agent is always the same: it's the convolutional layer $W$ of dimension $m \times h \times w$. The layer is taken from a baseline model that is pretrained. The input doesn't change, it's always the same. The output of the pruning agent is an array of probabilities of pruning each specific filter. For example, if you have $3$ filters in a layer, the output of the pruning agent will be an array of $3$ elements. Let's say it is \begin{equation} y = [0.1, 0.6, 0.7] \end{equation} Each of these elements represents the probability of pruning filter $i$ in layer $W$. So $0.1$ would be the probability to prune filter $1$, $0.6$ to prune filter $2$ and $0.7$ to prune filter $3$.

Let's say you sample this distribution $2$ times and you get $[0, 1, 1]$ and $[0, 0, 1]$. That means you would make 2 different models from the original baseline model. The first model would have the 2nd and 3rd filters pruned in layer $W$, and the second model would have the 3rd filter pruned. Then you run those 2 new models on your train and validation sets and calculate the objective function $R$. Then you update the parameters $\theta$ of your pruning agent based on $R$. The original weights of layer $W$ stay untouched.

Then you do another inference of the pruning model $\pi$ with the updated parameters $\theta$ (the input is still the original $W$). You will get another array of probabilities, and you keep repeating the previous steps that I described until the parameters $\theta$ converge. When they converge, you make the final pruning.
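
As a minimal sketch of the sampling step described above (using numpy; this is just an illustration, not the paper's code):

import numpy as np

y = np.array([0.1, 0.6, 0.7])   # pruning probabilities output by the agent
M = 2                           # number of samples to draw for the same input

# Each row is one sampled action vector: 1 = prune the filter, 0 = keep it.
samples = (np.random.rand(M, y.size) < y).astype(int)
print(samples)                  # e.g. [[0 1 1], [0 0 1]]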

",20339,,20339,,4/27/2020 11:18,4/27/2020 11:18,,,,6,,,,CC BY-SA 4.0 20679,2,,20663,4/27/2020 12:45,,1,,"

Why do you want to think of these algorithms as agents?

An agent is an abstract and higher-level concept than the concept of an algorithm, which is just a set of instructions.

You could have two agents, one that is supposed to find the minimum spanning tree and another that is supposed to find the shortest path between a source and goal nodes. In both cases, these two agents have a goal, so they are both goal-oriented agents, but the algorithms they use to reach the goal are irrelevant, as long as they have one or more goals. If you have an agent that performs optimally with respect to some metric, you may want to call that agent an optimal agent.

A similar logic applies to utility-based agents.

So, in short, algorithms are used by agents to act, but the algorithms themselves are not the agents (unless you want to have a philosophical debate or use different definitions of agents). See this answer for more details about the definition of agents.

",2444,,,,,4/27/2020 12:45,,,,0,,,,CC BY-SA 4.0 20680,1,,,4/27/2020 13:21,,9,2690,"

I know it's not an exact science. But would you say that generally for more complicated tasks, deeper nets are required?

",36083,,2444,,4/27/2020 19:04,5/12/2020 12:49,Should neural nets be deeper the more complex the learning problem is?,,4,1,,,,CC BY-SA 4.0 20681,1,21079,,4/27/2020 13:55,,1,5013,"

I'm trying to extract vectors from sentences. I have spent so much time searching for pre-trained BERT models, but found nothing.

Is it possible to get the vectors from the data using a pre-trained BERT model?

",30725,,30725,,7/6/2020 10:29,7/6/2020 10:29,How to use pre-trained BERT to extract the vectors from sentences?,,1,0,,,,CC BY-SA 4.0 20682,1,,,4/27/2020 14:03,,3,70,"

Let's say one wants to use a neural net to learn some function $g(x)$. Let's say that we know that $g$ is a combination of two functions (or two sub-problems), $g(x)=f_2(f_1(x))$, and that we have two datasets

  1. composed of $x$ samples and their corresponding $g(x)$ labels, and
  2. composed of $x$ samples and their corresponding $f_1(x)$ labels.

Should we use two nets, one to learn the mapping from $x$ samples to $f_1(x)$ using dataset 2 and another net to learn the mapping from $f_1(x)$ to $g(x)$ (note that we can build a dataset composed of $f_1(x)$ samples and $g(x)$ labels with the trained net), or just one net to learn mappings from $x$ to $g(x)$ using dataset 1?

Intuitively, the first option seems to be better since we take advantage of our knowledge that $f_1$ is a ""sub-problem"" of $g$.

",36083,,2444,,4/27/2020 14:17,5/18/2022 13:04,A model for each sub-problem vs one model for the whole problem,,1,0,,,,CC BY-SA 4.0 20683,1,,,4/27/2020 14:06,,1,34,"

I am working on the MNIST data on my own. The idea is to use different values for the number of hidden layers, the number of nodes in a given layer, etc. How do you organize these things while you are working on creating a model for a problem? Do you do everything in one code file, or do you use different code files for choosing the best one?

",36510,,,,,4/27/2020 16:23,How to work on different models for a given problem?,,1,1,,,,CC BY-SA 4.0 20684,1,20687,,4/27/2020 14:44,,2,130,"

I have a dataset which includes states, actions, and rewards. The dataset includes information on the transitions, i.e., $p(r,s' \mid s,a)$.

Is there a way to estimate a behavior policy from this dataset so that it can be used in an off-policy learning algorithm?

",23707,,2444,,4/27/2020 16:01,4/28/2020 3:19,How to estimate a behavior policy for off-policy learning based on data?,,3,0,,,,CC BY-SA 4.0 20686,1,,,4/27/2020 15:17,,2,71,"

The derivative of ReLU is 0 if its output is lower than 0 - $d ReLU(x)/dReLU$ is $0$ if $x < 0$. Let's denote some net's output by $Out$, so if this net's last layer is ReLU then we get that $dOut/dReLU$ is $0$ if $Out < 0$. Subsequently, for every parameter $p$ in the net we would get that $dOut/dp$ is $0$. Does that mean that for every sample $x$ such that $Out(x) < 0$ the net doesn't learn at all from that sample since the derivative for each parameter is $0$?

",36083,,,,,4/28/2020 4:07,Does net with ReLU not learn when output < 0?,,1,1,,,,CC BY-SA 4.0 20687,2,,20684,4/27/2020 15:24,,0,,"

Is there a way to estimate a behavior policy from this dataset so that it can be used in an off-policy learning algorithm?

If you have enough examples of $(s,a)$ pairs for each instance of $s$ then you can simply estimate

$$b(a|s) = \frac{N(a,s)}{N(s)}$$

where $N$ counts the number of instances in your dataset. This might be enough to use off-policy methods with importance sampling.
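
A minimal sketch of this count-based estimate in Python; the transition format below is an assumption, not taken from your dataset:

from collections import Counter

# Assumed format: a list of (state, action, reward, next_state) transitions.
dataset = [('s1', 'a1', 1.0, 's2'), ('s1', 'a2', 0.0, 's3'), ('s1', 'a1', 1.0, 's2')]

sa_counts = Counter((s, a) for s, a, _, _ in dataset)
s_counts = Counter(s for s, _, _, _ in dataset)

def b(a, s):
    # Empirical behaviour policy estimate b(a|s) = N(s, a) / N(s).
    return sa_counts[(s, a)] / s_counts[s]

print(b('a1', 's1'))   # 2/3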

Alternatively, you can use an off-policy approach that doesn't need importance sampling. The most straightforward one here would be single-step Q learning. The update step for 1-step Q-learning does not depend on behaviour policy, because:

  • The action value being updated $Q(s,a)$ already assumes $a$ is being taken, so you don't need any conditional probability there.

  • The TD target $r + \gamma \text{max}_{a'}[Q(s',a')]$ does not need to be adjusted for behaviour policy, it works with the target policy directly (implied as $\pi(s) = \text{argmax}_{a}[Q(s,a)]$)

A 2-step Q learning algorithm would need to adjust for the likelihood $b(a'|s')$ in the TD target $\frac{\pi(a'|s')}{b(a'|s')}(r + \gamma r' + \gamma^2\text{max}_{a''}[Q(s'',a'')])$; typically $\pi(a'|s')$ is either 0 or 1, thus making $b(a'|s')$ irrelevant some of the time. But you would still prefer to know it for performing updates if you can.

If you are making updates offline and off-policy, then single-step Q learning is probably the simplest approach. It will require more update steps overall to reach convergence, but each one will be simpler.

",1847,,1847,,4/27/2020 15:56,4/27/2020 15:56,,,,0,,,,CC BY-SA 4.0 20688,2,,20684,4/27/2020 15:27,,0,,"

If your data look like this: $(s_{1},a_{1},r_{1},s_{2}),(s_{2},a_{2},r_{2},s_{3}),\dots$, then this sample was drawn from a particular behavior policy. So, you do not need to find the behavior policy; just use Q-learning to find the optimal policy while following the behavior policy.

If the MDP is too big, then consider applying deep Q-learning. In both cases, the given transition probabilities are not needed. But if you use on-policy learning and you know the dynamics of the system (i.e. the transition probabilities), I would recommend you use dynamic programming (if the state space is not too large). But for your above problem setting, you cannot use dynamic programming; your only choice is to use off-policy learning.

",28048,,28048,,4/27/2020 15:37,4/27/2020 15:37,,,,6,,,,CC BY-SA 4.0 20689,2,,20680,4/27/2020 15:33,,14,,"

Deeper models can have advantages (in certain cases)

Most people will answer ""yes"" to your question, see e.g. Why are neural networks becoming deeper, but not wider? and Why do deep neural networks work well?.

In fact, there are cases where deep neural networks have certain advantages compared to shallow ones. For example, see the following papers

What about the width?

The following papers may be relevant

Bigger models have bigger capacity but also have disadvantages

Vladimir Vapnik (co-inventor of VC theory and SVMs, and one of the most influential contributors to learning theory), who is not a fan of neural networks, will probably tell you that you should look for the smallest model (set of functions) that is consistent with your data (i.e. an admissible set of functions).

For example, watch this podcast Vladimir Vapnik: Statistical Learning | Artificial Intelligence (AI) Podcast (2018), where he says this. His new learning theory framework based on statistical invariants and predicates can be found in the paper Rethinking statistical learning theory: learning using statistical invariants (2019). You should also read ""Learning Has Just Started"" – an interview with Prof. Vladimir Vapnik (2014).

Bigger models have a bigger capacity (i.e. a bigger VC dimension), which means that you will more likely overfit the training data, i.e., the model may not really be able to generalize to unseen data. So, in order not to overfit, models with more parameters (and thus capacity) will also require more data. You should also ask yourself why people use regularisation techniques.

In practice, models that achieve state-of-the-art performance can be very deep, but they are also computationally inefficient to train and they require huge amounts of training data (either manually labeled or automatically generated).

Moreover, there are many other technical complications with deeper neural networks, for example, problems such as the vanishing (and exploding) gradient problem.

Complex tasks may not require bigger models

Some people will tell you that you require deep models because, empirically, some deep models have achieved state-of-the-art results, but that's probably because we haven't found cleverer and more efficient ways of solving these problems.

Therefore, I would not say that ""complex tasks"" (whatever the definition is) necessarily require deeper or, in general, bigger models. While designing our models, it may be a good idea to always keep in mind principles like Occam's razor!

A side note

As a side note, I think that more people should focus more on the mathematical aspects of machine learning, i.e. computational and statistical learning theory. There are too many practitioners, who don't really understand the underlying learning theory, and too few theorists, and the progress could soon stagnate because of a lack of understanding of the underlying mathematical concepts.

To give you a more concrete idea of the current mentality of the deep learning community, in this lesson, a person like Ilya Sutskever, who is considered an ""important and leading"" researcher in deep learning, talks about NP-complete problems as if he doesn't really know what he's talking about. NP-complete problems aren't just ""hard problems"". NP-completeness has a very specific definition in computational complexity theory!

",2444,,2444,,5/12/2020 12:49,5/12/2020 12:49,,,,5,,,,CC BY-SA 4.0 20690,2,,20564,4/27/2020 15:41,,0,,"

One option is normalizing your data. In particular, min-max feature scaling to bring all values into the range [0,1] is particularly useful with gradient descent.
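
For example, a minimal min-max scaling sketch with NumPy (applied column-wise; in practice you would fit the min and max on the training set only and reuse them for new data):

```python
import numpy as np

def min_max_scale(x, eps=1e-8):
    """Scale each column of x into [0, 1] using the column-wise min and max."""
    x_min = x.min(axis=0)
    x_max = x.max(axis=0)
    return (x - x_min) / (x_max - x_min + eps)

x = np.array([[1.0, 200.0], [2.0, 400.0], [3.0, 600.0]])
print(min_max_scale(x))
```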

",15403,,,,,4/27/2020 15:41,,,,0,,,,CC BY-SA 4.0 20691,2,,20394,4/27/2020 15:44,,0,,"

Thresholding based on variance is one method for anomaly detection in time-series data.
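
As a minimal sketch of this idea (the window size, threshold factor and synthetic signal below are arbitrary assumptions):

```python
import numpy as np

def rolling_variance(series, window=20):
    """Variance over a sliding window ending at each step (zeros for the first steps)."""
    out = np.zeros(len(series))
    for t in range(window, len(series) + 1):
        out[t - 1] = np.var(series[t - window:t])
    return out

def variance_anomalies(series, window=20, factor=5.0):
    """Flag steps whose rolling variance exceeds `factor` times the median rolling variance."""
    var = rolling_variance(series, window)
    threshold = factor * np.median(var[window - 1:])
    return var > threshold

# Synthetic signal: a smooth sine wave with a noisy burst injected in the middle.
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 20, 500)) + 0.05 * rng.standard_normal(500)
signal[250:270] += rng.standard_normal(20)      # injected anomaly
print(np.where(variance_anomalies(signal))[0])  # flagged indices cluster around the burst
```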

",15403,,,,,4/27/2020 15:44,,,,0,,,,CC BY-SA 4.0 20692,2,,20680,4/27/2020 15:49,,2,,"

Deeper networks have more learning capacity in the sense that they can fit more complex data. But, at the same time, they are also more prone to overfitting the training data and therefore fail to generalize to the test set.

Apart from overfitting, exploding/vanishing gradients are another problem that hampers convergence. This can be addressed by normalizing the initialization and normalizing the intermediate layers. Then you can do backpropagation with stochastic gradient descent (SGD).

When deeper networks are able to converge, another problem of 'degradation' has been detected: the accuracy saturates and then starts to degrade. This is not caused by overfitting. In fact, adding more layers here leads to higher training error. A possible fix is to use ResNets (residual networks), which have been shown to reduce 'degradation'.

",36512,,,,,4/27/2020 15:49,,,,0,,,,CC BY-SA 4.0 20693,2,,20683,4/27/2020 16:23,,1,,"

It seems to me that you are talking about hyperparameter tuning and the effect of hyperparameters on the network in general. If you are working with TensorFlow, I recommend that you look into TensorBoard.

Hands-on TensorBoard can be a good starting point.

",33835,,,,,4/27/2020 16:23,,,,3,,,,CC BY-SA 4.0 20694,1,,,4/27/2020 17:26,,3,194,"

Suppose we have an object detector that is trained to detect $20$ products. If two objects are too close to each other, in general, would an object detector do a poor job of correctly classifying them? If they were far apart in the scene, would the object detector do a better job of correctly classifying them?

",36517,,2444,,8/2/2020 12:59,8/2/2020 12:59,"If two objects are too close to each other, would an object detector do a poor job of correctly classifying them?",,1,0,,,,CC BY-SA 4.0 20695,2,,20680,4/27/2020 18:28,,2,,"

My experience, from a tactical standpoint, is to start out with a small, simple model first. Train the model and observe the training accuracy, validation loss and validation accuracy. My observation is that, to be a good model, your training accuracy should achieve a value of at least 95%. If it does not, then try to optimize some of the hyper-parameters. If the training accuracy does not improve, then you may try to incrementally add more complexity to the model. As you add more complexity, the risk of overfitting and of vanishing or exploding gradients becomes higher.

You can detect overfitting by monitoring the validation loss. If, as the model accuracy goes up, the validation loss in later epochs starts to go up, you are overfitting. At that point, you will have to take remedial action in your model, like adding dropout layers and using regularizers. The Keras documentation is here.

As pointed out in the answer by nbro, the theory addressing this issue is complex. I highly recommend the excellent tutorial on this subject which can be found on YouTube here.

",33976,,2444,,4/27/2020 20:32,4/27/2020 20:32,,,,0,,,,CC BY-SA 4.0 20696,2,,20555,4/27/2020 18:43,,0,,"

In reinforcement learning, neural networks are used to estimate the value function (board state worth), not to choose the action directly. In most games, the actions available are state-dependent anyway, so you cannot easily formulate them as ANN outputs.

So the idea is that, at each state, you consider the alternative actions, and the one that leads to the most valuable state is the action of choice (without using lookahead). Your ANN will thus be approximating the board state values.

Strictly speaking, for tic-tac-toe you don't need a neural network; the tabular Q-learning method would suffice. Have you read the Sutton and Barto book on RL?
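
To make the idea concrete, here is a minimal sketch in Python; legal_actions and next_state are hypothetical placeholders for your game logic, and the TD(0) update is just one simple way to learn the state values:

```python
def greedy_action(state, legal_actions, next_state, value):
    """Pick the action whose resulting board state has the highest estimated value."""
    return max(legal_actions(state), key=lambda a: value.get(next_state(state, a), 0.0))

def td_update(values, s, s_next, reward, alpha=0.1, gamma=1.0):
    """Tabular TD(0) update of state values; `values` is a dict mapping state -> estimate."""
    v_s = values.get(s, 0.0)
    v_next = values.get(s_next, 0.0)
    values[s] = v_s + alpha * (reward + gamma * v_next - v_s)
```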

",36518,,,,,4/27/2020 18:43,,,,0,,,,CC BY-SA 4.0 20697,2,,6556,4/27/2020 18:58,,5,,"

The term feature embedding appears to be a synonym for feature extraction, feature learning etc. I.e. a form of embedding/dimension reduction (with the caveat the goal may not be a lower dimensional representation but one of equal dimensionality, but more meaningfully expressed):

Feature embedding is an emerging research area which intends to transform features from the original space into a new space to support effective learning.

Feature embedding aims to learn a low-dimensional vector representation for each instance to preserve the information in its features.

",23503,,23503,,4/27/2020 19:12,4/27/2020 19:12,,,,0,,,,CC BY-SA 4.0 20698,2,,20403,4/27/2020 19:03,,0,,"

RL is a generic technique that can be applied to any MDP system. From the looks of it, you have data to produce a state-space model of your system (system identification, excluding your existing control loops), and then you can use that to drive a simulated exploration of your process and discover a control policy. As this is a continuous process, some discretization will be required.

So yes, your quest is feasible, but not trivial!

I experimented some time ago with the cartpole balancing system, which is a toy, but I found that a good old PID was much better than AI :)

",36518,,,,,4/27/2020 19:03,,,,0,,,,CC BY-SA 4.0 20699,1,20700,,4/27/2020 19:44,,1,246,"

A neural network without a hidden layer is the same as just linear regression.

If I then use squared hinge loss and incorporate the L2 regularisation term, is it fair to then call this network the same as a linear SVM?

Going by this assumption, if I need to implement a multiclass SVM, I can just have $n$ output nodes (where $n$ is the number of classes). Would this then be equivalent to having $n$ SVMs, similar to a one-vs-rest method?

If I then wanted to incorporate a kernel into my SVM, could I then use an activation function or layer prior to the final output nodes (where I compute the loss and add regularisation), which would then transform this data into another feature space, the same as an SVM kernel does?

This is my current hunch, but I would like some confirmation or correction where my understanding is incorrect.

",29877,,,,,3/10/2021 15:56,Is an SVM the same as a neural network without a hidden layer?,,1,1,,,,CC BY-SA 4.0 20700,2,,20699,4/27/2020 21:02,,2,,"

First, what makes the neural network different from linear regression is the non-linearity (activation function), not the number of layers. So, a neural network with $n$ layers and no non-linearities is still the same as linear regression. Second, an SVM finds the hyperplane of maximum margin. You are not guaranteed to find the hyperplane that has maximum margin with a neural network (using non-linearities).

",36341,,2444,,3/10/2021 15:56,3/10/2021 15:56,,,,3,,,,CC BY-SA 4.0 20702,2,,17634,4/27/2020 21:43,,2,,"

Some examples of dimensionality reduction techniques:

Linear methods: PCA, CCA, ICA, SVD, LDA, NMF

Non-linear methods: Kernel PCA, GDA, Autoencoders, t-SNE, UMAP, MVU, Diffusion maps

Graph-based methods (""Network embedding""): Graph Autoencoders, Graph-based kernel PCA (Isomap, LLE, Hessian LLE, Laplacian Eigenmaps)

Though there are many more.

",23503,,23503,,2/16/2021 13:54,2/16/2021 13:54,,,,0,,,,CC BY-SA 4.0 20703,1,20704,,4/27/2020 22:21,,6,1968,"

People sometimes use 1st layer, 2nd layer to refer to a specific layer in a neural net. Is the layer that immediately follows the input layer called the 1st layer?

How about the lowest layer and highest layer?

",35896,,2444,,4/27/2020 23:29,4/28/2020 3:48,"Does the ""lowest layer"" refer to the first or last layer of the neural network?",,2,1,,,,CC BY-SA 4.0 20704,2,,20703,4/27/2020 23:25,,3,,"

People sometimes use 1st layer, 2nd layer to refer to a specific layer in a neural net. Is the layer immediately follows the input layer called 1st layer?

The 1st layer should typically refer to the layer that comes after the input layer. Similarly, the 2nd layer should refer to the layer that comes after the 1st layer, and so on.

However, note that this convention and terminology may not be applicable in all cases. You should always take into account your context!

How about lowest layer and highest layer?

To be honest, I also dislike this ambiguous terminology. From my experience, I don't think there's an agreement on the actual meaning of ""lowest"" or ""highest"". It depends on how you depict the neural network, but it's possible that ""lowest"" refers to the layers closer to the inputs, because, if you think of a neural network as a hierarchy that starts from the inputs and builds more complex representations of it, the ""lowest"" may refer to the ""lowest in the hierarchy"" (but who knows!).

",2444,,2444,,4/27/2020 23:32,4/27/2020 23:32,,,,0,,,,CC BY-SA 4.0 20706,1,,,4/28/2020 1:04,,8,444,"

I understand the gist of what convolutional neural networks do and what they are used for, but I still wrestle a bit with how they function on a conceptual level. For example, I get that filters with a kernel size greater than 1 are used as feature detectors, that the number of filters is equal to the number of output channels for a convolutional layer, and that the number of features being detected scales with the number of filters/channels.

However, recently, I've been encountering an increasing number of models that employ 1- or 2D convolutions with kernel sizes of 1 or 1x1, and I can't quite grasp why. It feels to me like they defeat the purpose of performing a convolution in the first place.

What is the advantage of using such layers? Are they not just equivalent to multiplying each channel by a trainable, scalar value?

",36525,,2444,,1/23/2021 18:00,2/9/2022 11:19,What is the point of using 1D and 2D convolutions with a kernel size of 1 and 1x1 respectively?,,2,0,,,,CC BY-SA 4.0 20708,2,,20684,4/28/2020 3:19,,1,,"

You can simply train a policy from the inputs to predict the actions in your dataset. You can use the cross-entropy loss for this, i.e. maximize the log probability that the policy assigns to the actions in the data set when given the corresponding inputs. This is called behavioral cloning.

The result is an approximation of the behavioral policy that lets you compute probability densities of actions. It is an approximation because the dataset is finite, and even more so when you restrict the learned policy to a class of distributions, e.g. Gaussians.
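
A minimal behavioral-cloning sketch in PyTorch for a discrete action space could look as follows (the state dimension, number of actions and random data are illustrative assumptions):

```python
import torch
import torch.nn as nn

state_dim, n_actions = 8, 4  # assumed dimensions for illustration

policy = nn.Sequential(
    nn.Linear(state_dim, 64), nn.ReLU(),
    nn.Linear(64, n_actions),  # logits over the discrete actions
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()  # maximizes the log-probability of the logged actions

# Hypothetical logged data: states and the actions the behavior policy took.
states = torch.randn(256, state_dim)
actions = torch.randint(0, n_actions, (256,))

for epoch in range(100):
    loss = loss_fn(policy(states), actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# b(a|s) is then approximated by softmax(policy(s)).
```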

",30627,,,,,4/28/2020 3:19,,,,0,,,,CC BY-SA 4.0 20709,2,,20564,4/28/2020 3:35,,0,,"

It might be that the large labels dominate the loss value, so the model pays more attention to them. You could use an L1 loss rather than an L2 loss, such that the predictions get pulled towards each label equally rather than being pulled more strongly when the label is further away from the prediction.

There may also be a data imbalance, i.e. you train on more large labels than small labels. This would also cause the model to pay more attention to becoming good at predicting the large labels. If this is the case, you can either collect more small labels or train more often on the small labels you have.

Another possibility is that your labels are not evenly distributed. For example, the gaps between large data points may be larger than between small data points, so the small data points all look similar to the model. You can plot a histogram of your labels to find out and then transform your labels (e.g. using log) to space them out more evenly.

The general rules for training artificial neural networks apply. For example, normalize your inputs and outputs by subtracting the mean and dividing by the standard deviation, estimated across the training set. It also seems that your network is likely too deep and not wide enough. I'd try 4x100 rather than 8x20 and add a small amount of weight decay when you see overfitting.
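
For example, a minimal standardization sketch with NumPy (statistics estimated on the training set and then reused for any other data and for mapping predictions back):

```python
import numpy as np

def fit_standardizer(x_train):
    """Estimate per-feature mean and standard deviation on the training set."""
    mean = x_train.mean(axis=0)
    std = x_train.std(axis=0) + 1e-8  # avoid division by zero
    return mean, std

def standardize(x, mean, std):
    return (x - mean) / std

def unstandardize(y_scaled, mean, std):
    """Map the network's predictions back to the original label scale."""
    return y_scaled * std + mean

# Hypothetical training labels with a wide range of magnitudes.
y_train = np.array([[1.2], [350.0], [12.5], [890.0]])
mean, std = fit_standardizer(y_train)
print(standardize(y_train, mean, std))
```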

",30627,,,,,4/28/2020 3:35,,,,0,,,,CC BY-SA 4.0 20710,2,,20686,4/28/2020 3:46,,1,,"

There is no benefit to using ReLU as the output activation of a neural network. As you said, the network will ignore training labels below zero and it will train on labels above zero as if no output activation were present. However, the problem you're describing can also occur for individual units of hidden layers, where ReLU activations are common. This is known as the dead ReLU problem. In practice, it is rarely a problem, but it can be avoided with smooth rectifiers like ELU and Swish. Another interesting idea is CReLU, which concatenates both the positive and negative parts of the pre-activation, resulting in twice as many outputs, half of which always receive a non-zero gradient.

",30627,,30627,,4/28/2020 4:07,4/28/2020 4:07,,,,0,,,,CC BY-SA 4.0 20711,2,,20703,4/28/2020 3:48,,6,,"

Lowest layer generally refers to the layer closest to the input. This comes from the idea that layers closer to the input represent low-level features such as gradients and edges, while layers closer to the output represent high-level features such as parts and objects.

",30627,,,,,4/28/2020 3:48,,,,1,,,,CC BY-SA 4.0 20712,2,,11285,4/28/2020 4:00,,0,,"

The words latent space and embedding space are often used interchangeably. However, latent space can more specifically refer to the sample space of a stochastic representation, whereas embedding space more often refers to the space of a deterministic representation.

This comes from latent referring to an unobserved random variable, for which we can infer a belief distribution over its plausible values, for example using an encoder network. You can then draw samples of the predicted distribution for further processing. To learn more, you can look into VAEs.

",30627,,,,,4/28/2020 4:00,,,,0,,,,CC BY-SA 4.0 20713,2,,20675,4/28/2020 4:05,,0,,"

No, the activation of the output layer should instead be tailored to the labels you're trying to predict. The network prediction can be seen as a distribution, for example a categorical for classification or a Gaussian (or something more flexible) for regression. The output of your network should predict the sufficient statistics of this distribution. For example, a softmax activation on the last layer ensures that the outputs are positive and sum up to one, as you would expect for a categorical distribution. When you predict a Gaussian with mean and variance, you don't need an activation for the mean but the variance has to be positive, so you could use exp as activation for that part of the output.
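
As a minimal sketch of such an output head in PyTorch (the layer sizes are arbitrary; exp is used to keep the variance positive, as described above):

```python
import torch
import torch.nn as nn

class GaussianHead(nn.Module):
    """Predicts the mean and (positive) variance of a Gaussian over the target."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.mean = nn.Linear(in_features, out_features)     # no activation needed
        self.log_var = nn.Linear(in_features, out_features)  # unconstrained output

    def forward(self, h):
        return self.mean(h), torch.exp(self.log_var(h))      # exp keeps the variance positive

head = GaussianHead(in_features=32, out_features=1)
h = torch.randn(4, 32)   # hypothetical features from the rest of the network
mu, var = head(h)
print(mu.shape, var.shape)
```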

",30627,,,,,4/28/2020 4:05,,,,1,,,,CC BY-SA 4.0 20714,2,,20680,4/28/2020 4:58,,1,,"

Speaking very generally, I would say that with the current state of machine learning, a ""more complicated"" task requires more trainable parameters. You can increase parameter count by either increasing width and also by increasing depth. Again, speaking very generally, I would say that in practice, people have found more success by increasing depth than by increasing width.

However, this depends a lot on what you mean by ""more complicated"". I would argue that generating something is a fundamentally more complicated problem than just identifying something. However, a GAN to generate a 4-pixel image will probably be far more shallow than the shallowest ImageNet network.

One could also make an argument that the definition of complexity of a deep learning task is ""more layers needed == more complicated"", in which case it's obvious that by definition, a more complicated task requires a deeper net.

",27092,,27092,,4/28/2020 5:05,4/28/2020 5:05,,,,0,,,,CC BY-SA 4.0 20715,1,,,4/28/2020 6:05,,1,29,"

There is a theorem that states that, basically, a neural network can approximate any function whatsoever. However, this does not mean that it can solve any equation. I have some notes which state that backpropagation allows us to solve problems of the following kind

$$ F(x_i, t) = y_i $$

Can someone point me to what exactly this means?

",36173,,,,,4/28/2020 6:05,Class of functional equations that backpropagation can solve,,0,0,,,,CC BY-SA 4.0 20717,1,20718,,4/28/2020 6:40,,1,63,"

In the paper SSD: Single Shot MultiBox Detector, under section 2.2 - (4), why do we add an offset of 0.5 to x, y in generating the anchor boxes across feature maps?

",36534,,2444,,4/28/2020 14:21,4/28/2020 14:21,Why do we set offset (0.5) in single shot detector?,,1,0,,,,CC BY-SA 4.0 20718,2,,20717,4/28/2020 8:46,,0,,"

The integer coordinates refer to the top-left corner of a pixel. Adding 0.5 to both x and y gives you the center location of a pixel instead.

",22086,,,,,4/28/2020 8:46,,,,0,,,,CC BY-SA 4.0 20720,2,,20682,4/28/2020 9:16,,1,,"

The tendency in the literature in recent years (at least for computer vision problems) seems to point towards the single-model option (I'll try to remember to come back and add some links to papers mentioning this when I find them), although this IMO is really data- and problem-dependent.

In your case, I would set up a network for the mapping $x$ to $g(x)$, with a training-only auxiliary loss calculated on the mapping $x$ to $f1(x)$ and compare this with a model trained only on ""$x$ to $g(x)$"".

",22086,,,,,4/28/2020 9:16,,,,2,,,,CC BY-SA 4.0 20721,2,,16874,4/28/2020 9:44,,1,,"

From my perspective you should look at the concept of ontologies, which might briefly be described as a set of axioms that formalize concepts such as {Grass, Water, Green} and relations between those like hasProperty(Grass, Green) and needs(Grass, Water). To describe such kind of knowledge the Web Ontology Language was created. The theoretical framework on which it is built are different flavors of description logics, which all are fragments of first order predicate logic, but come with different tradeoffs between expressiveness and computational complexity for automatic reasoning.

As with other AI-topics this kind of stuff can get quite involved ‒ and interesting. I can recommend the open textbook: An introduction to ontology engineering by Maria Keet (University of Cape Town).

",33714,,,,,4/28/2020 9:44,,,,0,,,,CC BY-SA 4.0 20722,1,,,4/28/2020 10:54,,1,28,"

Given some natural language sentences like

I would like to talk to Mr. Smith

I would like to extract entities, like the person "Smith".

I know that frameworks which are capable of doing so (e.g. RASA or spaCy) exist, but I would like to dive deeper and understand the theory behind all this.

At university, I learned a few of the basic models, like CRFs or SVMs, used for this task, but I wonder if there are any good resources (preferably books) about this topic.

",32819,,2444,,2/3/2021 16:43,2/3/2021 16:43,Are there any good resources (preferably books) about techniques used for entity extraction?,,0,0,,,,CC BY-SA 4.0 20726,1,,,4/28/2020 11:26,,1,70,"

I'm having a problem understanding how the MSE should be used when working with a multidimensional target, e.g. 3 dimensions. (My outputs are continuous values, not categorical.)

Let us say I have a batch size of 2, to make it simple; I pass my input through the network and my y_pred would be a 2x3 tensor. The same happens for y_true, which is 2x3 itself.

Now the thing I'm not sure of: I first take the difference, diff = y_true - y_pred; this maintains the dimensions. Then, for MSE, I square diff, obtaining again a 2x3 tensor. Is that right?

Now the tricky part (for me): I have to take the mean. Which one should I consider?

Mean over all the (six) values, thus obtaining a scalar? But in this case I do not understand how backpropagation would improve on specific targets.

Mean by rows, i.e. obtaining a 2x1 tensor, so that I have a mean for each example? Here too I cannot see how the optimization would work.

Mean by columns, i.e. obtaining a 1x3 tensor, where I thus obtain an ""error"" for each target? This seems the most logical to me, but I'm not so sure.

Hope this is clear

",36504,,36504,,4/29/2020 9:40,4/29/2020 9:40,How MSE should be appliead with multi target deep network?,,0,0,,,,CC BY-SA 4.0 20728,1,20915,,4/28/2020 12:58,,2,382,"

In the standard Markov Decision Process (MDP) formalization of the reinforcement-learning (RL) problem (Sutton & Barto, 1998), a decision maker interacts with an environment consisting of finite state and action spaces.

This is an extract from this paper, although it has nothing to do with the paper's content per se (just a small part of the introduction).

Could someone please explain why it makes sense to study finite state and action spaces?

In the real world, we might not be able to restrict ourselves to a finite number of states and actions! Thinking of humans as RL agents, this really doesn't make sense.

",35585,,2444,,4/28/2020 13:12,5/5/2020 21:07,Why does it make sense to study MDPs with finite state and action spaces?,,3,1,,,,CC BY-SA 4.0 20729,1,,,4/28/2020 13:23,,1,18,"

There are 11 objects, of which 4 are ""Bad"" objects. So there are 7 ""Good"" objects. You have to choose as many Good objects as possible before proceeding to another set of objects in a different sequence.

How would you train an AI that predicts the position of the Good objects based on previous sets of data?

",36548,,,,,4/28/2020 13:23,Using AI to find the correct set of object/numbers based on previous data,,0,0,,,,CC BY-SA 4.0 20730,1,,,4/28/2020 13:43,,2,30,"

I suppose that picking an appropriate size for the bottleneck in Autoencoders is neither a trivial nor an intuitive task. After watching this video about VAEs, I've been wondering: Do disentangled VAEs solve this problem?

After all, if the network is trained to use as few latent space variables as possible, I might as well make the bottleneck large enough so that I don't run into any issues during training. Am I wrong somewhere?

",36550,,1671,,4/28/2020 21:50,4/28/2020 21:50,Does bottleneck size matter in Disentangled Variational Autoencoders?,,0,0,,,,CC BY-SA 4.0 20731,1,,,4/28/2020 14:39,,2,161,"

In RL (reinforcement learning) or MARL (multi-agent reinforcement learning), we have the usual tuple:

(state, action, transition_probabilities, reward, next_state)

In MORL (multi-objective reinforcement learning), we have two more additions to the tuple, namely, ""preferences"" and ""preference functions"".

What are they? What do we do with them? Can someone provide an intuitive example?

",25299,,2444,,10/8/2020 14:16,10/8/2020 14:16,What are preferences and preference functions in multi-objective reinforcement learning?,,1,0,,,,CC BY-SA 4.0 20732,1,,,4/28/2020 14:42,,2,78,"

I've been reading the attached paper - which aims to model entities in the world as objects, including the learning agent itself!

To say the least, the goal is to navigate through what seems like a maze (path-planning problem) - and drop off passengers in desired destinations, while avoiding walls in the map of the world (5x5 grid for now). The objects involved are, a taxi, passengers, walls and a destination.

Now, a particular paragraph says the following:

""Whereas in the classical MDP model, the effect of encountering walls is felt as a property of specific locations in the grid, the OO-MDP view is that wall interactions are the same regardless of their location. As such, agents’ experience can transfer gracefully throughout the state space.""

What does this mean? How are the classical MDP and the object-oriented MDP views different?

I can't make sense of the above extract, at all. Any help would be appreciated!

P.S. I did not consider posting parts of the extract as separate questions since my problem has more to do with understanding the extract as a whole which inevitably relies on understanding the parts.

",35585,,2444,,4/28/2020 15:06,4/28/2020 19:08,How are the classical MDP and the object-oriented MDP views different?,,0,4,,,,CC BY-SA 4.0 20733,1,20736,,4/28/2020 14:43,,0,636,"

Let's say we have MDP where we have a state transition matrix.

How is this state transition different from action value in reinforcement learning? Is the state transition in MDP stochastic transition, meaning transition to some other state without taking any action?

",36047,,2444,,4/28/2020 15:03,4/28/2020 15:48,What is the difference between the state transition of an MDP and an action-value?,,1,0,,,,CC BY-SA 4.0 20735,1,,,4/28/2020 15:03,,4,296,"

I'm trying to create a text recognition project using CNN. I need help regarding the text detection task.

I have the training images and bounding box details for them. But I'm unable to figure out how to create the loss function.

Can anyone help me by telling me how to take the output from the CNN model and compare it to the bounding box labels?

",36538,,2444,,4/28/2020 15:50,12/18/2022 8:07,How should I define the loss function for a multi-object detection problem?,,1,0,,,,CC BY-SA 4.0 20736,2,,20733,4/28/2020 15:48,,0,,"

Transition Probabilities: Consider that you are in state $s$ and from that state you take an action $a$. Then there is some probability that you will end up in state $s_{1}'$ or $s_{2}'$ ($s'$ indicates the next states). Those probabilities are called transition probabilities. In this example, the transition matrix is just a 3D array, since it depends on your state and action, i.e. $p(s' \mid s, a)$.
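
For instance, a tiny stochastic transition model $p(s' \mid s, a)$ can be written out explicitly as follows (the states, actions and numbers are made up for illustration):

```python
# p[s][a] maps each possible next state s' to its probability; each row sums to 1.
p = {
    "s0": {
        "left":  {"s0": 0.8, "s1": 0.2},
        "right": {"s0": 0.1, "s1": 0.9},
    },
    "s1": {
        "left":  {"s0": 0.5, "s1": 0.5},
        "right": {"s1": 1.0},  # a deterministic transition
    },
}
print(p["s0"]["right"]["s1"])  # probability of landing in s1 after taking "right" in s0
```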

Action value function $Q_{\pi}(s, a)$: It is the expected total reward you get from state $s$, taking action $a$ and thereafter following the policy, $\pi$.

Is the state transition in MDP stochastic transition, meaning transition to some other state without taking any action?

The environment can be stochastic or deterministic. If the environment is stochastic then those transitions are stochastic. If the environment is deterministic then those transitions are deterministic.

",28048,,,,,4/28/2020 15:48,,,,5,,,,CC BY-SA 4.0 20737,2,,20676,4/28/2020 16:18,,0,,"

The answer to my question is that the values obtained from the convolutions over the different channels are summed together; therefore, 3 channels convolved with one filter give one output channel.

The best explanation is delivered by Andrew Ng: https://www.coursera.org/lecture/convolutional-neural-networks/convolutions-over-volume-ctQZz
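
A quick way to check this, e.g. in PyTorch (the tensor sizes are arbitrary):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=1, kernel_size=3, bias=False)
x = torch.randn(1, 3, 8, 8)  # one image with 3 channels
print(conv(x).shape)         # torch.Size([1, 1, 6, 6]): the 3 channels collapse into 1
print(conv.weight.shape)     # torch.Size([1, 3, 3, 3]): one filter spanning all 3 channels
```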

",22659,,,,,4/28/2020 16:18,,,,1,,,,CC BY-SA 4.0 20738,1,,,4/28/2020 18:04,,1,28,"

I have customer review texts. The data consists of the raw and manually corrected texts of the reviews. I have aligned these pairs by using similarity algorithms and matched the words in them. Since there are some mismatched word pairs, I have eliminated the pairs whose counts fall under a threshold value.

Now there are raw and corrected word pairs. What kind of machine learning model can I build for spellchecking using this data?

",36188,,,,,4/28/2020 18:04,Building a spell check model,,0,0,,,,CC BY-SA 4.0 20739,1,,,4/28/2020 18:20,,2,51,"

I would like to classify a credit-scoring dataset, which is composed of 21 attributes, some of which are numeric and others boolean.

For the output, I want to know if they have good or bad credit based on those attributes, without calculating any numeric value for the credit score.

I am using Weka for this task. However, I am not sure what the best/ideal classifiers are for that kind of dataset.

Can anyone here point me in the right direction?

",36532,,2444,,4/28/2020 18:38,4/28/2020 18:53,What are the best classifiers for this type of data?,,1,0,,,,CC BY-SA 4.0 20740,2,,20739,4/28/2020 18:53,,1,,"

Well, it depends on the structure of the data. The best way is to try all the standard intelligent models, like Naive Bayes, random forest and SVM, with different parameters via grid search. There is no model that works best all the time for classification. However, a neural network (named Multilayer Perceptron in Weka) is supposed to perform better if it is set up correctly.

",36188,,,,,4/28/2020 18:53,,,,0,,,,CC BY-SA 4.0 20748,1,,,4/29/2020 2:08,,2,40,"

A big class of problems that are relevant in today's society is full of uncertainty and is also sometimes computationally intractable. Throughout our lives, we come to realize that we are solving the same type of problem multiple times, sometimes with different strategies and mixed results. I would like to close in on three main problem types: pattern recognition, regression and density estimation.

I am looking for an agent (a computer program or even a human) that identifies the type of a problem and applies a systematic procedure for finding its solution. A solution is understood in the classical sense for each of the problem types; thus, the solution does not have to be a global optimum. This procedure must be implementable.

Bonus points

  1. Uses metadata about the problem itself to 'gain insight' about the nature of the problem.
  2. Verifies that its solution is correct in some sense.
  3. The types or classes of problems can be expanded later on.
  4. Works with very limited resources.
  5. Works with little information about the problem, or with "small data".

So far, I've found Statistical Learning theory and Bayesian Inference as candidates that implement some of those ideas, but I was wondering if there's something else out there or I just need to take the best of both of those worlds.

",36173,,2444,,12/12/2021 13:27,12/12/2021 13:27,Is there a theory that captures the following ideas?,,0,0,,,,CC BY-SA 4.0 20749,1,20753,,4/29/2020 3:06,,1,68,"

My question is actually related to the addition of probabilities. I am reading on computational learning theory from Tom Mitchell's machine learning book.

In chapter 7, when proving the upper bound of probabilities for an $\epsilon$-exhausted version space (theorem 7.1), it says that the probability of at least one hypothesis out of the $k$ hypotheses in the hypothesis space $H$ being consistent with $m$ training examples is at most $k(1- \epsilon)^m$.

I understand that the probability of a hypothesis, $h$, being consistent with $m$ training examples is $(1-\epsilon)^m$. However, why is it possible to add the probabilities for $k$ hypotheses? And might the probability be greater than 1 in this case?

",32780,,2444,,4/29/2020 4:06,4/29/2020 23:17,Why is probability that at least one hypothesis out of $k$ being consistent with $m$ training examples $k(1- \epsilon)^m$?,,1,0,,,,CC BY-SA 4.0 20750,1,,,4/29/2020 3:41,,2,51,"

I have a mix of two deep models, as follows:

if model A is YES --pass to B--> if model B is YES--> result = YES
if model A is NO ---> result = NO

So basically model B validates if A is saying YES. My models are actually the same, but trained on two different feature sets of same inputs.

What is this mix called in machine learning terminology? I just call them master/slave architecture, or primary/secondary model.

",9053,,9053,,5/1/2020 18:37,5/1/2020 18:37,How is an architecture composed of a second model that validates the first one called in machine learning?,,1,0,,,,CC BY-SA 4.0 20751,1,,,4/29/2020 3:43,,0,34,"

I have a data set of 1600 examples. I am using 1280 (80%) for training, 160 (10%) for testing, and 160 (10%) for validation. The training goes one of two ways no matter how I fine-tune the L2 parameter:

1) The validation and training error converge, albeit around 75% error

2) The training error settles to around 0%, but the validation error stays around 75%

I don't think my network is too large either. I have trained networks with two hidden layers, both with the same number of nodes as the input. I also tried dropout layers and that did not seem to help.

Does this just mean that I need to add more training examples? Or how do I know that I have reached the limit of what I can have the network learn?

",14811,,,,,4/29/2020 3:43,Cannot fine-tune L2-regularization parameter,,0,4,,,,CC BY-SA 4.0 20753,2,,20749,4/29/2020 4:26,,1,,"

Let $A$ and $B$ be two events. In general, the probability that either $A$ or $B$ occurs is defined as

$$ P(A \text{ or } B) = P(A) + P(B) - P(A \text{ and } B) $$

If $A$ and $B$ are disjoint, i.e. they cannot happen at the same time, then $P(A \text{ and } B) = 0$, so the above formula becomes

$$ P(A \text{ or } B) = P(A) + P(B) $$

If the probability of one arbitrary hypothesis being consistent with $m$ training examples is $(1-\epsilon)^m$, then, given the rule above and assuming that only one hypothesis is consistent with $m$ training examples, the probability of one or more (i.e. at least one) of the hypotheses being consistent with training examples is the sum of the probabilities, i.e. $k (1-\epsilon)^m$.

This probability can be bigger than one if more than one hypothesis is consistent with $m$ training examples. In that case, you have to take into account the probability that both hypotheses are consistent.
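
For a concrete illustration (with arbitrarily chosen numbers): with $k = 3$ hypotheses, $\epsilon = 0.1$ and $m = 10$ training examples, the bound is $3(1 - 0.1)^{10} = 3 \times 0.9^{10} \approx 3 \times 0.349 \approx 1.05$, which exceeds $1$; so the quantity $k(1-\epsilon)^m$ should be read as an upper bound on the probability, not as the probability itself.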

See e.g. notes General Probability, I: Rules of probability for more details about the union rule and other rules of probability.

",2444,,2444,,4/29/2020 23:17,4/29/2020 23:17,,,,2,,,,CC BY-SA 4.0 20754,2,,20731,4/29/2020 5:50,,3,,"

In MORL the reward component is a vector rather than a scalar, with an element for each objective. So if we are using a multiobjective version of an algorithm like Q-learning, the Q-values stored for each state-action pair will also be vectors.

Q-learning requires the agent to be able to identify the greedy action in any state (the action expected to lead to the highest long-term return). For scalar rewards this is easy, but for vector values it is more complicated as one vector may be higher for objective 1, while another is higher for objective 2, and so on.

We need a means to order the vector values in terms of how well they meet the user's desired trade-offs between the different objectives. That is the role of the preference function and preferences. The function defines a general operation for either converting the vector values to a scalar value so they can be compared, or for performing some sort of ordering of the vectors (some types of orderings such as lexicographic ordering can't readily be defined in terms of scalarisation). So, for example, our preference function might be a weighted sum of the components of the vector. The preferences specify the parameters of the preference function which define a specific ordering (i.e. based on the needs of the current user). So, in the case of a weighted sum for the preference function, the preferences would be specified in terms of the values of the weights.
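
For example, a minimal sketch of a weighted-sum preference function (the vector Q-values and weights below are made-up numbers for illustration; a real agent would store one such vector per state-action pair):

```python
import numpy as np

def scalarise(q_vectors, weights):
    """Weighted-sum preference function: turn vector Q-values into comparable scalars."""
    return q_vectors @ weights

# Hypothetical vector Q-values for 3 actions and 2 objectives.
q_vectors = np.array([[5.0, 1.0],
                      [3.0, 3.0],
                      [1.0, 6.0]])
weights = np.array([0.7, 0.3])  # the user's preferences over the two objectives
greedy_action = np.argmax(scalarise(q_vectors, weights))
print(greedy_action)            # 0 for these weights; changing the weights changes the ordering
```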

The choice of preference function can have implications for the types of solutions which can be found, or for whether additional information needs to be included in the state in order to ensure convergence.

I'd suggest you read the following survey paper for an overview of MORL (disclaimer - I was a co-author on this, but I genuinely think it is a useful introduction to this area)

Roijers, D. M., Vamplew, P., Whiteson, S., & Dazeley, R. (2013). A survey of multi-objective sequential decision-making. Journal of Artificial Intelligence Research, 48, 67-113.

",36575,,2444,,10/8/2020 14:13,10/8/2020 14:13,,,,0,,,,CC BY-SA 4.0 20756,2,,20555,4/29/2020 9:38,,1,,"

It depends on whether the action is part of the input or output of a neural network estimating the Q-value(state, action).

The network on the left has the state as input and outputs one scalar value for each of the categorical actions. It has the advantage of being easy to set up and only needs one network evaluation to predict the Q-values for all actions. If the action space is categorical and single-dimensional, I would use it.

The network on the right has both the state and a representation of the action as input and outputs one single scalar value. This architecture also allows computing the Q-value for multi-dimensional and continuous action spaces.

The action space of tic-tac-toe can be easily represented by a vector of length 9, thus I would recommend the left NN-architecture. However, if your game has continuous-valued variables in the action space (e.g. the position of your mouse pointer), you should use the NN-architecture on the right.
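
A compact sketch of the two architectures in PyTorch (the layer sizes are arbitrary assumptions):

```python
import torch
import torch.nn as nn

state_dim, n_actions, action_dim = 9, 9, 9  # tic-tac-toe-like sizes, for illustration

# Left architecture: state in, one Q-value per discrete action out.
q_left = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))

# Right architecture: state and action representation in, a single Q-value out.
q_right = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, 1))

state = torch.randn(1, state_dim)
action = torch.zeros(1, action_dim)
action[0, 3] = 1.0                                        # one-hot encoded action
print(q_left(state).shape)                                # [1, 9]: all action values at once
print(q_right(torch.cat([state, action], dim=1)).shape)   # [1, 1]: one value per (s, a) pair
```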

The approach to prevent illegal moves is only partially dependent on the choice of the Q-function architecture and covered by another question: How to forbid actions

",36560,,36560,,4/30/2020 18:18,4/30/2020 18:18,,,,1,,,,CC BY-SA 4.0 20758,1,,,4/29/2020 11:16,,2,476,"

I'm interested in using Reinforcement Learning in a setting that might seem more suitable for Supervised Learning. There's a dataset $X$ and for each sample $x$ some decision needs to be made. Supervised Learning can't be used, since there aren't any algorithms to solve or approximate the problem (so I can't solve it on the dataset), but for a given decision it's very easy to decide how good it is (i.e. define a reward).

For example, you can think about the knapsack problem - let's say we have a dataset where each sample $x$ is a list (of let's say size 5) of objects each associated with a weight and a value and we want to decide which objects to choose (of course you can solve the knapsack problem for lists of size 5, but let's imagine that you can't). For each solution the reward is the value of the chosen objects (and if the weight exceeds the allowed weight then the reward is 0 or something). So, we let an agent ""play"" with each sample $M$ times, where play just means choosing some subset and training with the given value.

For the $i$-th sample the step can be adjusted to be: $$\theta = \theta + \alpha \nabla_{\theta}log \pi_{\theta}(a|x^i)v$$ for each ""game"" with ""action"" $a$ and value $v$.

instead of the original step: $$\theta = \theta + \alpha \nabla_{\theta}log \pi_{\theta}(a_t|s_t)v_t$$ Essentially, we replace the state with the sample.

The issue with this is that REINFORCE assumes that an action also leads to some new state, whereas here that is not the case. Anyway, do you think something like this could work?

",36083,,36083,,4/29/2020 13:54,4/29/2020 14:57,"Reinforcement Learning (and specifically REINFORCE algorithm) for one-round ""games""",,4,0,,,,CC BY-SA 4.0 20759,2,,18654,4/29/2020 11:29,,0,,"

Since no one has answered, I will describe the approach I used. It was inspired by the successive halving algorithm. The concept is that we find well-performing hyperparameters for one agent by playing against it manually. Then, we put the agent we want to tune to play against it. We assign different configurations. All of these configurations are tried for a predefined number of games per configuration. We keep the best-performing half, then go to the next round with the half we kept and repeat this process. The last configuration that remains is the best-performing configuration.

",33307,,,,,4/29/2020 11:29,,,,0,,,,CC BY-SA 4.0 20760,1,,,4/29/2020 11:57,,1,47,"

Many Q-learning techniques have been developed to handle discrete state (observation) and action spaces, like a robot in a grid world, and even continuous (state or action) spaces. But I am wondering how we can model the states/space in a time-dependent environment. Please consider the following example:

There is one smartphone (client) and five compute servers that are addressing/serving many clients (smartphones) at the same time. The smartphone transfers some raw data (e.g, sensor data) to one of those five servers (e.g., every t seconds) and gets the results. Suppose the server computes the stress-level of the client in real-time based on the collected data.

Now, a q-learning agent should be deployed to the smartphone to be able to select the best server with minimum response time (i.e., the goal is to minimize the execution/response time). Note that servers are serving different clients and their load is a function of time and varies from time to time.

So, in the above scenario, I am wondering what our ""states"" would be and how we can model the ""environment""?

",36581,,,,,4/29/2020 11:57,Is Q-Learning suitable for time-dependent spaces?,,0,7,,,,CC BY-SA 4.0 20761,1,,,4/29/2020 12:12,,2,339,"

What does the term "easy negatives" exactly mean in the context of machine learning for a classification problem or any problem in general?

From a quick google search, I think it means just negative examples in the training set.

Can someone please elaborate a bit more on why the term "easy" is brought into the picture?

Below, there is a screenshot taken from the paper where I found this term, which is underlined.

",36582,,2444,,9/22/2021 12:41,10/17/2022 15:04,"What is the meaning of ""easy negatives"" in the context of machine learning?",,1,0,,,,CC BY-SA 4.0 20762,2,,20758,4/29/2020 12:15,,1,,"

This seems like a multi-armed bandit problem (no states involved here). I had the same problem some time ago and I was advised to sample the output distribution M times, calculate the rewards and then feed them to the agent; this was also explained in this paper (Algorithm 1, page 3), albeit for a different problem and context. I honestly don't know if this will work for your case. You could also take a look at this example.

",36461,,,,,4/29/2020 12:15,,,,4,,,,CC BY-SA 4.0 20763,2,,20758,4/29/2020 12:21,,0,,"

You should look into contextual bandits, and specifically gradient bandit solvers (see section 13).

Your derivation of the gradient seems correct to me. Instead of a sampled/bootstrapped value function (as in Actor-Critic) or sampled full return (in REINFORCE) you can use the sampled reward. You will probably want to subtract a baseline from $v$, e.g. a rolling average reward for the current policy.
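
As an illustrative sketch of such a one-shot policy-gradient update with a rolling-average baseline (assuming a subset-selection problem with a Bernoulli output per item; the network size and reward function are placeholders):

```python
import torch
import torch.nn as nn

n_items = 5  # e.g. choose a subset of 5 items, one Bernoulli decision each
policy = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, n_items))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
baseline = 0.0  # rolling average reward

def reward_fn(x, choice):
    return choice.sum().item()  # placeholder reward: number of chosen items

for step in range(1000):
    x = torch.randn(10)                       # one problem instance ("context")
    dist = torch.distributions.Bernoulli(logits=policy(x))
    choice = dist.sample()                    # the "action": a 0/1 vector
    r = reward_fn(x, choice)
    baseline = 0.99 * baseline + 0.01 * r     # rolling-average baseline
    loss = -(r - baseline) * dist.log_prob(choice).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```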

I have successfully used a gradient bandit solver for one-shot optimisation problem with 5000 dimension actions. It was not as strong as a custom optimiser or SAT solver, but whether or not that is an issue for you will depend on the problem.

",1847,,1847,,4/29/2020 12:37,4/29/2020 12:37,,,,0,,,,CC BY-SA 4.0 20765,2,,20758,4/29/2020 12:40,,0,,"

Besides the contextual bandit or multi-armed bandit perspective, if you want to use a dataset to train an RL policy, I would recommend Batch RL; it is another RL setting that trains a policy in a supervised-learning-like way.

For your problem, I think you can still use one-state trajectories to train REINFORCE. For example, there is a trajectory $\tau=\{(s, a, r, s^{\prime})\}$, where $s^{\prime}$ is NULL. By using REINFORCE, you can get the update $\theta = \theta + \alpha \nabla_{\theta}\log \pi_{\theta}(a|s)r$, and you do not need $s^{\prime}$ here.

",8415,,8415,,4/29/2020 12:46,4/29/2020 12:46,,,,0,,,,CC BY-SA 4.0 20767,2,,20750,4/29/2020 13:12,,1,,"

Not in terms of models, but there is a term called 'hierarchical learning': if your model's task is to classify a disease, then, if it detects the presence of a disease (disease / no disease), it proceeds to further classify the disease (class A/B/C/...); otherwise, it does not proceed. This technique of hierarchical learning is very common amongst supervised learning tasks.

Now, according to your question, you have two models, and I assume that they have different tasks and provide a binary outcome (yes/no). Here, you can call it 'multitask learning', where the output of task 1 is given to task 2 for processing. If task 1 detects the presence of a disease, then task 2 classifies the disease into various classes, or segments it, localizes it, etc.

",35791,,,,,4/29/2020 13:12,,,,2,,,,CC BY-SA 4.0 20769,1,20872,,4/29/2020 13:57,,1,641,"

...for learning transition dynamics...in the KWIK framework.

The above is part of a paper's conclusion - and I don't really seem to understand what the KWIK framework is. In the details of the paper, there is a brief highlight of the KWIK conditions for a learning algorithm, which go as follows (I paraphrase):

  1. All predictions must be accurate (assuming a valid hypotheses class)
  2. However the learning algorithm may also return $\perp$, which indicates that it cannot yet predict the output for this input.

A quick Google search brought me to this paper from ICML 2008, but it is a little difficult to comprehend without a detailed read.

Could someone please help me understand what the KWIK framework is, and what implication does it have for a learning algorithm to satisfy KWIK conditions? An explanation that starts at simple and goes to fairly advanced discussions is appreciated.

",35585,,2444,,7/16/2021 14:22,7/16/2021 14:22,What is the KWIK framework?,,1,1,,,,CC BY-SA 4.0 20770,1,20776,,4/29/2020 14:04,,3,235,"

We are building an AI to play a board game. Leaving aside the implementation, we noticed that it plays worse when we set an even (2,4,6,...) level of depth. We use a minimax depth-first strategy. Do you have any ideas why it behaves like that?

Edit: for example, if we set up a game between an AI with 5 levels of depth and an AI with 6 levels of depth, the first one usually wins (and this is weird).

",36585,,36585,,4/29/2020 19:57,4/29/2020 19:57,Why does our AI play worse at even levels of depth?,,1,2,,,,CC BY-SA 4.0 20771,2,,5539,4/29/2020 14:21,,2,,"

A random function cannot be learned efficiently by any algorithm, in particular, neural networks. However, if you are looking for a function with (exponentially) smaller description size, I do not know but any function that is conjectured to be average-case hard probably cannot be learned efficiently by neural networks, for example,

",36586,,18758,,1/15/2022 0:12,1/15/2022 0:12,,,,0,,,,CC BY-SA 4.0 20772,2,,20758,4/29/2020 14:57,,1,,"

I think the key to your problem may not be the one-round aspect. Using RL to solve the knapsack problem is closely related to the topic of RL for combinatorial optimization. You can read NEURAL COMBINATORIAL OPTIMIZATION WITH REINFORCEMENT LEARNING to get some ideas and find more related solutions.

",36587,,,,,4/29/2020 14:57,,,,0,,,,CC BY-SA 4.0 20773,1,,,4/29/2020 15:27,,2,974,"

In DDPG, if there is no $\epsilon$-greedy exploration and no action noise, is DDPG an on-policy algorithm?

",8415,,2444,,4/29/2020 15:44,5/9/2021 23:13,Why is DDPG an off-policy RL algorithm?,,2,0,1,,,CC BY-SA 4.0 20774,1,,,4/29/2020 15:39,,1,39,"

Let's say I have a dataset with multiple types of multiple ingredients (salt1, salt2, etc.). Each n-th variation of each ingredient vs flavor may be represented by an n×k matrix where an ingredient corresponds to a particular value of ""flavor"".

A recipe consists of a 1×n vector (where n is the number of ingredients) where each value corresponds to the quantity of ingredient in the recipe.

A particular combination of ingredients, with particular weights, with some transformation, would result in a particular 1×k ""flavor"" profile, in this simple model.

One approach could be to formulate this as a Probabilistic Matrix Factorization problem (I think), with k being the number of flavor parameters. And combining the recipe vector with the flavor matrix might do the trick.

But the problem is, the flavor value of each ingredient (and each variation of the ingredient) in the ingredient-flavor matrix would be very limited. The recipe flavor profile might have a corresponding flavor vector; that too would be limited and would not be available at the beginning. So, in order to capture the relationship between the ingredients and the flavor, the system would be dependent on user-submitted data on recipe/ingredient flavors.

Is there a way I could create clusters of recipes based on user flavor ratings and extrapolate these to the constituent ingredients or vice versa? Could this be done via some unsupervised learning algorithm?

I am quite new to this, I would appreciate some help or some pointers to which mathematical approaches I should be looking at to model this problem.

",36589,,,,,4/29/2020 15:39,How do I approach this problem?,,0,0,,,,CC BY-SA 4.0 20775,2,,20773,4/29/2020 16:39,,0,,"

If there were no action noise, it would probably not explore enough to obtain a good estimate of Q or the policy gradient.

Instead of estimating Q of the target policy, you could estimate Q of the behavior policy, but then you have a stochastic policy and the deterministic policy gradient theorem does not work anymore, as it is a special case of the stochastic policy gradient theorem (see Section 3.3 of the DPG paper, http://proceedings.mlr.press/v32/silver14.pdf). You would have to use the policy gradient theorem of Section 2.2 from the DPG paper.

",36590,,36590,,4/29/2020 17:43,4/29/2020 17:43,,,,11,,,,CC BY-SA 4.0 20776,2,,20770,4/29/2020 17:37,,1,,"

When the number of levels is odd, it means the first player gets to make one extra move on the board. As it is an extensive-form game decided using backward induction, when both the first and the last move in the search tree belong to the first player, the first player can act better than in the situation where the second player makes the last move.

",4446,,,,,4/29/2020 17:37,,,,3,,,,CC BY-SA 4.0 20777,1,20804,,4/29/2020 18:43,,1,77,"

What kinds of techniques do autopilots of autonomous cars (e.g. the ones of Tesla) use? Do they use reinforcement learning? Which types of neural network architecture do they use?

",36107,,2444,,4/30/2020 12:34,4/30/2020 15:15,What kinds of techniques do autopilots of autonomous cars use?,,2,0,,,,CC BY-SA 4.0 20778,1,,,4/29/2020 22:24,,1,264,"

Currently, what are the most popular and effective approaches to leveraging AI for stock price prediction?

It seems like there could be several approaches and problem formulations:

  • Supervised learning:
  • Regression: predict the stock price directly
  • Classification: predict whether the stock price goes up or down
  • Unsupervised learning: find clusters of stocks that move together
  • Reinforcement learning: let the agent directly maximize its stock market return
  • Other AI methods: rules, symbolic systems, etc.

Which are most popular/performant? Are there other ways that people are using machine learning in stock trading (sentiment analysis on financial statements, news, etc.)?

",18086,,2444,,12/11/2021 20:54,12/11/2021 20:54,What are the most popular and effective approaches to leveraging AI for stock price prediction?,,3,0,,,,CC BY-SA 4.0 20779,1,,,4/30/2020 0:00,,2,149,"

I have been recently studying Actor-Critic algorithms, and I ran into the following question.

Let $Q_{\omega}$ be the critic network, and $\pi_{\theta}$ be the actor. It is known that, in order to maximize the objective return $J(\theta)$, we follow the gradient direction, which can be estimated as follows $$\nabla_{\theta}J=\mathbb{E}[Q_{\omega}(s,a)\,\nabla_{\theta}\log \pi_{\theta} (a|s)].$$ But if we were to calculate the gradient of $Q^{\pi}$ with respect to $\theta$, what are the possible approaches to do so?

More generally, say we have a network $\phi_{\omega}$ that is trained on data generated from another neural network, say a stochastic actor $\pi_{\theta}$ like in classic reinforcement learning frameworks. How do we find the gradient of $\phi_{\omega}$ w.r.t. $\theta$?

",36603,,36603,,4/30/2020 2:22,4/30/2020 2:22,What is the gradient of the Q function with respect to the policy's parameters?,,0,3,0,,,CC BY-SA 4.0 20780,2,,7601,4/30/2020 0:34,,0,,"

I have recently watched a podcast by Lex Fridman where he interviews Vladimir Vapnik, who talks about this way of teaching with "predicates" and "invariants".

If you were really able to teach a model with sentences like "a bird is an animal that is able to fly" but "not all birds fly, such as penguins", that would probably represent one of the biggest milestones in machine learning, especially, if the machine was able to learn and apply this learned knowledge as efficiently as humans do. However, we are not quite there yet!

To know more about this new learning paradigm Learning Using Statistical Invariants (LUSI), you probably should read the paper Rethinking statistical learning theory: learning using statistical invariants (2019), by V. Vapnik and R. Izmailov, which will probably be difficult to follow if you have no knowledge of learning theory and your mathematical background is poor. However, there are some sections of this paper that are accessible to everyone. For example, section 6.6

Suppose that the Teacher teaches Student to recognize digits by providing a number of examples and also suggesting the following heuristics: "In order to recognize the digit zero, look at the center of the picture — it is usually light; in order to recognize the digit 2, look at the bottom of the picture - it usually has a dark tail" and so on.

From the theory above, the Teacher wants the Student to construct specific predicates $\psi(x)$ to use them for invariants. However, the Student does not necessarily construct exactly the same predicate that the Teacher had in mind (the Student's understanding of concepts "center of the picture" or "bottom of the picture": can be different). Instead of $\psi(x)$, the Student constructs function $\hat{\psi}(x)$. However, this is acceptable, since any function from $L_2$ can serve as a predicate for an invariant.

",2444,,-1,,6/17/2020 9:57,4/30/2020 0:34,,,,0,,,,CC BY-SA 4.0 20782,1,20874,,4/30/2020 3:53,,2,121,"

""Single-object tracking commonly uses Siamese networks, which can be seen as an RNN unrolled over two time-steps.""

(from the SQAIR paper)

I'm wondering how Siamese networks can be viewed as RNNs, as mentioned above. A diagrammatic explanation, or anything that helps understand the same, would help! Thank you!

",35585,,,,,5/4/2020 7:55,How can Siamese Networks be viewed as RNNs?,,2,0,,,,CC BY-SA 4.0 20783,1,22886,,4/30/2020 4:23,,2,744,"

I'm trying to compare GloVe, fastText and BERT on the basis of the similarity between 2 words using pre-trained models. GloVe and fastText have pre-trained models that can easily be used with gensim's word2vec in Python.

Does BERT have any such models?

Is it possible to check the similarity between two words using BERT?

",30725,,-1,,6/17/2020 9:57,8/6/2020 4:54,Similarity score between 2 words using Pre-trained BERT using Pytorch,,1,0,,,,CC BY-SA 4.0 20784,2,,13627,4/30/2020 4:50,,0,,"

Not sure if you're using TensorFlow-Agents for this, but if you are, there's some nice functionality for this built out using the ActorDistributionRnnNetwork and ValueDistributionRnnNetwork that I've found to be very helpful. The LSTM architecture is used in both the actor and critic networks. See this page for more information - hope this helps!

",36611,,,,,4/30/2020 4:50,,,,0,,,,CC BY-SA 4.0 20787,2,,20761,4/30/2020 6:21,,0,,"

OK, I think I understood what this means.

Hard and easy negatives are the ones that have relatively large and small values for the loss function, respectively.

",36582,,2444,,9/22/2021 12:42,9/22/2021 12:42,,,,0,,,,CC BY-SA 4.0 20789,2,,18701,4/30/2020 6:56,,0,,"

What are the AI technologies currently used to fight the coronavirus pandemic?

You don't define what AI means for you. Let's suppose it means advanced informatics (but perhaps also abstract interpretation).

In France we have polling websites like https://covidnet.fr/ ... Does that count as AI?

And we also have Deep Tech related Covid19 specific call for proposals, e.g. here ... does that count as AI?

Does the French StopCovid platform count as AI? It is an open source digital project (using Bluetooth) to trace potentially Covid-infected people. The current political debate involves intense discussions about its legitimacy and deployment.

Lastly, the development of Covid-specific ventilators (e.g. this open hardware / open software project) involves a lot of embedded software, which could be analyzed by static analyzers such as Frama-C. Does that count as AI?

My employer CEA (a French scaled-down equivalent of the US DoE) is an applied research institution participating in several Covid-related projects.

And several French research institutions (e.g. INSERM, CNRS, INRIA, LIP6, etc.) are participating in Covid-related research programmes, as are major health organizations like AP-HP.

So is the European Union, which is partly funding several of them. Most of the Covid-related research projects have strong digital aspects, in particular related to genome decoding and bioinformatics.

PS. My GitHub helpcovid ongoing project (a free software web application related to Covid19) does not claim to be AI, but certainly is a digital application. See also these slides about RefPerSys (which is an ongoing free software ambitious AI project).

",3335,,2444,,9/29/2020 22:01,9/29/2020 22:01,,,,1,,,,CC BY-SA 4.0 20790,1,20822,,4/30/2020 7:30,,0,86,"

In case I had a prediction model and decided to add a PCA step prior to the model, is it theoretically possible (or impossible) that, even for the best choice of the number of output dimensions, the model with PCA performs worse than the model without PCA?

My question comes from the fact that I want to add a PCA step prior to a model and hyperparameterize the PCA output dimension from 1 to N (N being the number of dimensions in the original dataset), and I wanted to know if there is any theoretical guarantee that performing this previous step can never lead to worse performance than the original model.

In particular, my doubt is whether the best PCA case, over the choice of dimensions from 1 to N, is always at least as good as the best case without PCA.

",22930,,2444,,12/12/2021 13:28,12/12/2021 13:28,Is it theoretically possible (or impossible) that principal component analysis worsens the performance of the model?,,2,1,,,,CC BY-SA 4.0 20791,2,,16863,4/30/2020 7:30,,0,,"

Hey, I am currently studying first-order logic and I think I can answer your question. Others, please correct me if I am wrong.

For the first case, you can generally substitute variables with constants. Hence, you can make the substitution $\theta \leftarrow \{x_1 \leftarrow A, y_1 \leftarrow B \}$. This is used very commonly when you want to infer some query $\alpha$ from your knowledge base. $\alpha$ is usually of the form $P(A,B)$, as you have mentioned. When you unify, you get $\neg L(A) \vee H(A)$, and your substitution has to stay the same throughout the resolution algorithm, i.e., you cannot substitute $x_1$ for another constant / variable.

Regarding the second case, you cannot substitute a constant such as $B$ with a variable.

In general, you can substitute a variable with a constant, or with another variable. You can also substitute a variable with a Skolem function, e.g. $x_1 \leftarrow G(y)$. However, for Skolem functions, you cannot substitute $x \leftarrow F(x)$, in which the variable names are the same.

Hope this helps!

",32780,,,,,4/30/2020 7:30,,,,0,,,,CC BY-SA 4.0 20793,2,,14332,4/30/2020 8:21,,4,,"

Both implementations may be closer than you think.

In short:

PPO has both parts: there is noisiness in draws during training (with learned standard deviation), helping to explore new promising actions/policies. And there is a term added to the loss function aiming to prevent a collapse of the noisiness, to help ensure exploration continues and we don't get stuck at a bad (local) equilibrium.

In fact, for continuous action, the term for entropy in the loss function you describe in Ex. 2 can make sense only when the actions are stochastic, i.e. when the action choice has some standard deviation, the way you describe in Ex. 1.

More detail:

On one hand, PPO (at least for continuous action) trains a central/deterministic value (say the mean policy, or close to the mean) targeting the most profitable action path. On the other hand, along with it, it trains a standard deviation, making the actions a random draw with noise around the deterministic value. This is the part you describe in Example 1. The noise helps explore new paths and update the policy according to the rewards on these sampled paths. Entropy itself is a measure of the noisiness of the draws, and thus also an indirect indicator of the trained standard deviation value(s) of the policy.

Now, entropy tends to decay as training progresses, that is, the random draws become progressively less random. This can be good for reward maximization - really the best draws are taken for reward maximization - but it is bad for further improvements of policy: improvement may halt or slow down as exploration of new action paths fades.

This is where entropy encouragement comes in. PPO foresees the inclusion of entropy in the loss function: we reduce the loss by x * entropy, with x the entropy coefficient (e.g. 0.01), incentivizing the learning network to increase the standard deviations (or, to not let them drop too much). This part is what you describe in Example 2.
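
As a rough sketch of how this entropy term is typically combined with the loss (all numbers below are made up for illustration, not taken from any particular PPO implementation):

    import numpy as np

    # Learned log standard deviations of a diagonal Gaussian policy (4 action dimensions)
    log_std = np.array([-0.5, -0.5, -0.5, -0.5])

    # Entropy of a diagonal Gaussian: sum over dimensions of 0.5*log(2*pi*e) + log(sigma)
    entropy = np.sum(0.5 * np.log(2 * np.pi * np.e) + log_std)

    policy_loss = 0.37        # value of the clipped surrogate loss (made-up number)
    entropy_coef = 0.01       # the coefficient x mentioned above

    # Subtracting the entropy term makes the total loss smaller when the policy is noisier,
    # so the optimiser is discouraged from collapsing the standard deviations to zero
    total_loss = policy_loss - entropy_coef * entropy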

Further notes:

  • During exploitation, we'd typically turn off the noise (implicitly assuming action std = 0) and pick deterministic actions: in normal cases this increases the payoffs, since we're choosing our best action estimate rather than a random value around it.

  • People are not always precise when referring to the model's entropy vs. the entropy coefficient added to the loss function.

  • Other RL algorithms with continuous action tend to use noisy draws with standard deviations/entropy too.

",34179,,34179,,4/30/2020 10:42,4/30/2020 10:42,,,,0,,,,CC BY-SA 4.0 20794,1,,,4/30/2020 8:37,,1,43,"

I've been reading the SQAIR paper lately, and the mathematics involved seems a bit complicated.

Some background, about the paper: SQAIR stands for Sequential Attend, Infer, Repeat - the paper does generative modelling of moving objects. The idea of Attend, Infer, Repeat is to decompose a static scene into constituent objects, where each object is represented by continuous latent variables. The latent variables, $z^{what}$,$z^{where}$ and $z^{pres}$ encode the appearance, position and presence of an object.

Here's a screenshot of the first of many things I'm unable to understand -

Why is $z^{pres,1:n+1}$ a random vector of $n$ ones followed by a zero? Why do we need the zero? How does it help?

Furthermore, an explanation of equation $(2)$ as in the image above, would be great.

P.S. I hope you all find the paper interesting. I'll ask other questions from the paper in separate posts, so as to not crowd one post with too many queries.

",35585,,-1,,6/17/2020 9:57,10/26/2022 1:00,Why is this variable in equation 2 of the SQAIR paper a random vector of $n$ ones followed by a zero?,,1,0,,,,CC BY-SA 4.0 20796,1,,,4/30/2020 9:42,,1,22,"

I want to create an exercise suggester. Each day either has a routine or is a rest day. A routine has 4 slots. For each slot we select an exercise. We constrain the legal exercises, only do upper-body today, etc. We, therefore, restrict the number of available exercises. This seems easy.

I want to know how can I extend this model to take fatigue into account. Fatigue is a state such that every exercise reduces it (specific value for each exercise) and it recovers with time.

Can this problem even be modeled as a constraint satisfaction problem? What's the best way of modeling and solving this problem?

I'd like to suggest all the legal exercises taking fatigue into account.

",23895,,2444,,4/30/2020 15:40,4/30/2020 15:40,How can I design a system that suggests physical exercises to a person while keeping into account the fatigue?,,0,0,,,,CC BY-SA 4.0 20797,1,,,4/30/2020 11:00,,1,24,"

I want to find out scenarios that are useful for examining the performance of intra-group and inter-group cooperation in MARL. Specifically, I would prefer a board game (like sudoku) that is suitable for evaluating cooperation.

But there are some differences between the requirements and Go-like games. Every grid cell on the board should be treated as an agent. The cells are designed to form a situation with both local utility and global utility.

Take sudoku as an example: every grid cell should choose an appropriate value to reach the sudoku solution.

Since I am not familiar with traditional MARL scenarios, it would be a great help if you could show me some keywords or lists.

",36587,,36587,,4/30/2020 16:34,4/30/2020 16:34,Are there any board game appropriate to examine the performance of multiple agents that cooperate both inter-group and intra-group?,,0,2,,,,CC BY-SA 4.0 20798,1,20801,,4/30/2020 11:32,,2,60,"

I'm currently working on a project where I am using a basic cellular automata and a genetic algorithm to create dungeon-like maps. Currently, I'm having an incredibly hard time understanding how exactly crossover works when my output can only be two states: DEAD or ALIVE (1 or 0).

I understand crossover conceptually - you find two fit members of the population and they exchange genetic material, hopefully producing a fitter offspring. I also understand this is usually done by performing k-point crossover on bit strings (but can also be done with real numbers).

However, even if I encode my DEAD/ALIVE cells into bits and cross them over, what do I end up with? The cell can only be DEAD or ALIVE. Will the crossover give me some random value that is outside this range?

And even if I were to work on floating-point numbers, wouldn't I just end up with a 1 or 0 anyway? In that case, it seems like it would be better to just randomly mutate DEAD cells into ALIVE cells, or vice versa.

I've read several papers on the topic, but none seem to explain this particular issue (in a language I can understand, anyway). Intuitively, I thought maybe I can perform crossover on a neighbourhood of cells - so I find 2 fit neighbourhoods, and then they exchange members (for example, neighbourhood A gives 4 of its neighbours to neighbourhood B). However, I have not seen this idea anywhere, which leads me to believe it must be fundamentally wrong.

Any help would be greatly appreciated, I'm really stuck on this one.

",36622,,2444,,4/30/2020 15:36,4/30/2020 15:36,How does the crossover operator work when my output contains only 2 states?,,1,9,,,,CC BY-SA 4.0 20799,1,,,4/30/2020 12:30,,0,141,"

I'm working on a classification problem that needs to detect patterns in a time series. Basically, there's a catch-all class that means ""no pattern detected""; the others are for the specific patterns. The data is imbalanced (ratio 1/10 at least), but I adapted the class weights.

I'm able to overfit successfully on a few days of data, but when I train on 2 years of data, the model seems stuck on class 1 (""no pattern detected"") for a very long time. I've tried several learning rates, but it doesn't make the convergence happen significantly faster.

Is it a better starting point for my training to use the overfitting model's weight as a starting point? Could this allow the model to converge faster?

",36624,,,,,1/15/2023 23:04,Is it a good idea to overfit on a small part of your data for faster model convergence?,,1,0,,,,CC BY-SA 4.0 20800,1,,,4/30/2020 13:00,,1,96,"

In this Stanford lecture (minute 35:47 and 37:00), the professor says that Monte Carlo (MC) linear function approximation does not always converge, and she gives an example. In general, when does MC linear function approximation converge (or not)?

Why do people use that MC linear function approximation if sometimes it doesn't converge?

They also gave the definition of the stationary distribution of a policy, and I am not sure if using it for function approximation converges or not.

",36107,,2444,,4/30/2020 14:31,4/30/2020 14:31,When does Monte Carlo linear function approximation converge?,,0,1,,,,CC BY-SA 4.0 20801,2,,20798,4/30/2020 13:28,,0,,"

How do you calculate fitness for your organism if there are only two states? I would think the solution is to work on combinations of cells:

If you have blocks of, say, 4x4 cells, then each organism would be encoded using 16 bits. A crossover is then like overlaying two 4x4 blocks, picking each cell of the new block from either the first or the second parent. Effectively you are doing this:

0011                   1111                 xx11
0001  combined with    0000  would become   000x
1001                   0000                 x00x
1111                   0000                 xxxx

(this would be a bottom-right corner combined with a top wall segment)

or, as a single bit-string:

0011000110011111 +
1111000000000000 =
xx11000xx00xxxxx

where 'x' can be either 0 or 1, as this is where the two organisms are different.
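
A minimal sketch of this kind of crossover on flattened bit strings (the 50/50 choice per cell is just one possible design):

    import random

    def uniform_crossover(parent_a, parent_b):
        # For each cell, pick the bit from either parent at random;
        # where the parents agree the child keeps that bit, where they
        # differ (the 'x' positions above) the outcome is random.
        return [a if random.random() < 0.5 else b for a, b in zip(parent_a, parent_b)]

    p1 = [0,0,1,1, 0,0,0,1, 1,0,0,1, 1,1,1,1]   # bottom-right corner block
    p2 = [1,1,1,1, 0,0,0,0, 0,0,0,0, 0,0,0,0]   # top wall segment
    child = uniform_crossover(p1, p2)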

",2193,,,,,4/30/2020 13:28,,,,2,,,,CC BY-SA 4.0 20802,2,,20777,4/30/2020 14:34,,0,,"

They definitely don't use RL.

My guess is a mix of neural networks for segmentation, together with lidar, radar and so on for perception.

Once object detection has been performed, they use a trajectory generator and MPC (model predictive control).

There is an autonomous car class on Coursera that goes over this.

",32390,,,,,4/30/2020 14:34,,,,3,,,,CC BY-SA 4.0 20803,1,,,4/30/2020 14:45,,3,152,"

I was trying to understand the implementation of a basic policy gradient (REINFORCE) method using TensorFlow. I think I got almost everything. The only thing that still bothers me is the loss function implementation.

From the theory, we have that after all the manipulation the gradient of the score function is

$$\nabla_{\theta}J(\theta)=\mathop{\mathbb{E}}\left[\nabla_{\theta}(log(\pi(s,a,\theta)))R(\tau) \right]$$

In this Cartpole example the part relative to the loss function is

    neg_log_prob = tf.nn.softmax_cross_entropy_with_logits_v2(logits = NeuralNetworkOutputs, labels = actions)
    loss = tf.reduce_mean(neg_log_prob * discounted_episode_rewards_) 

At this point, I do not understand how the definition from above translates into code.

As far as I understood, the function

tf.nn.softmax_cross_entropy_with_logits_v2(logits = NeuralNetworkOutputs, labels = actions)

returns

-sum(actions * log(softmax(NeuralNetworkOutputs)))

Which is then multiplied by the discounted returns

-sum(actions * log(softmax(NeuralNetworkOutputs))) * discounted_episode_rewards_

Within this expression, I do not understand why we should multiply an expression, which already looks like the loss function we want, by the value of the action.

",36629,,2444,,4/30/2020 15:06,4/30/2020 15:06,Understanding the TensorFlow implementation of the policy gradient method,,0,1,,,,CC BY-SA 4.0 20804,2,,20777,4/30/2020 14:48,,0,,"

I will focus on Tesla's autopilots in this answer (because that's the only specific autopilot you mention).

In their website, they mention the basic technologies underlying the current autopilots, which includes deep neural networks.

To make use of a camera suite this powerful, the new hardware introduces an entirely new and powerful set of vision processing tools developed by Tesla. Built on a deep neural network, Tesla Vision deconstructs the car's environment at greater levels of reliability than those achievable with classical vision processing techniques.

So, they are probably using convolutional neural networks for the computer vision tasks.

There's also a Wikipedia article completely dedicated to Tesla's autopilot.

",2444,,2444,,4/30/2020 15:15,4/30/2020 15:15,,,,2,,,,CC BY-SA 4.0 20806,2,,20799,4/30/2020 15:56,,1,,"

The first thing you have to understand is whether your trained model works efficiently on both the training and the testing data. If yes, then it's not overfitting. There is only one case where overfitting doesn't matter, and that is when your testing data is the same as your training data; otherwise, dealing with it is very important, or it could be disastrous. Therefore, to avoid overfitting you can try several methods, for example the following:

  1. Optimization algorithms like SGD, Adam, RMSprop and Adagrad, which are a few variants of gradient descent.
  2. A loss function like the hinge loss could be useful.
  3. Parameter initialization greatly influences the optimization process.

Overall, large datasets are better for building ML models, but there are some methods for dealing with small ones; there is an interesting article that I would recommend you to read: Article Link

I hope this helps!

",25685,,25685,,4/30/2020 16:56,4/30/2020 16:56,,,,4,,,,CC BY-SA 4.0 20808,1,,,4/30/2020 18:05,,9,4567,"

I would like to ask whether MCTS is usually chosen when the branching factor for the states that we have available is large and not suitable for Minimax. Also, other than the fact that MCTS simulates actions, whereas Minimax actually 'brute-forces' all possible actions, what are some other benefits of using Monte Carlo for adversarial (2-player) games?

",36638,,36638,,4/30/2020 20:26,7/11/2020 11:51,When should Monte Carlo Tree search be chosen over MiniMax?,,1,1,,,,CC BY-SA 4.0 20809,1,,,4/30/2020 18:17,,1,42,"

I am at a very initial stage of my research so I will try to describe what I am trying to achieve:

I want to create an AI model which learns how to navigate the browser's components, like clicking or creating new favorites and tabs, navigating the browser action menu, bookmarking websites, etc. In short, I want to automate browser testing using Selenium and an AI model, so that over time the model learns by itself to navigate the browser and test different functionality, eventually testing functionality it has not seen before. For example: if I feed the AI model how the browser is closed when ""x"" is clicked and minimized when ""-"" is clicked, it can next learn by itself how to maximize the browser.

The initial input could be some recorded videos of navigating the browser using Selenium, which are then fed to the model; with time, the model learns by itself to go to sections of the browser it does not know and still test them.

Is it even possible to combine AI and Selenium to create something like this? If yes, how can I achieve it, and what is the best approach to develop such a model?

Thanks in advance.

",36637,,,,,4/30/2020 18:17,Automating browser actions using AI,,0,0,,,,CC BY-SA 4.0 20810,1,20819,,4/30/2020 18:18,,1,174,"

I'm trying to create a convolutional neural network without frameworks (such as PyTorch, TensorFlow, Keras, and so on) with Python.

Here's a description of CNN taken from the Wikipedia article

In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of deep neural networks, most commonly applied to analyzing visual imagery. They are also known as shift invariant or space invariant artificial neural networks (SIANN), based on their shared-weights architecture and translation invariance characteristics. They have applications in image and video recognition, recommender systems, image classification, medical image analysis, natural language processing, and financial time series.

A CNN has different types of layers, such as convolution, pooling (max or average), flatten and dense (or fully-connected) layers.

I have a few questions.

  1. Should we compute gradients (such as $\frac{\partial L}{\partial A_i}$,$\frac{\partial L}{\partial Z_i}$,$\frac{\partial L}{\partial A_{i-1}}$ and so on) in flatten layer or not?

  2. If not, then how should I compute $\frac{\partial L}{\partial A_i}$ and $\frac{\partial L}{\partial Z_i}$ of the first convolutional layer? With $\frac{\partial L}{[\frac{\partial g(A_i)}{\partial x}]}$ or with $\frac{\partial L}{\partial A_{i+2}}$? (P.S. As you know, the iteration of backpropagation is reversed, so I used $i+n$ to denote the previous layer.)

  3. Or can I compute the derivatives in the flatten layer with $$\frac{\partial L}{\partial A} = W_{i+1}^T Z_{i+1}$$ (where $i+1$ denotes the previous layer in backpropagation) and $$\frac{\partial L}{\partial Z} = \frac{\partial L}{\partial A} * \frac{\partial g(A_i)}{\partial x}$$ and then reshape to the Conv2D shape?

P.S. I found questions like mine (the names are the same), but they don't answer my question, as I am asking about the formula.

",36639,,2444,,5/1/2020 12:38,5/6/2020 10:47,Should I compute the gradients with respect to the flatten layer in a convolutional neural network?,,1,1,,,,CC BY-SA 4.0 20813,2,,20790,5/1/2020 5:19,,1,,"

PCA can make models worse. Imagine data points scattered along two elongated parallel rectangles: the axis with the greatest variation will be parallel to the rectangles, but doesn't provide any benefit in classifying the points.
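
A small synthetic illustration of this (assuming scikit-learn is available):

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    # Two elongated parallel clusters: spread out along x, classes separated only along y
    x = rng.uniform(-10, 10, size=(200, 1))
    y = np.where(np.arange(200) < 100, 1.0, -1.0).reshape(-1, 1) + rng.normal(0, 0.1, (200, 1))
    X = np.hstack([x, y])

    pca = PCA(n_components=1)
    X_reduced = pca.fit_transform(X)
    print(pca.components_)  # roughly [[1, 0]]: the kept axis is parallel to the clusters,
                            # so the class-separating y direction is discarded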

",32390,,,,,5/1/2020 5:19,,,,0,,,,CC BY-SA 4.0 20817,1,,,5/1/2020 6:07,,1,20,"

I am interested in doing some work in classification problems in music information retrieval. I know that there are some formats of datasets (such as MIDI, Spectrogram, Piano-roll, MusicXML, etc.) for this work but have been unable to find any nice large datasets for this. What are the best free datasets for such work? I am mainly looking into western classical music.

",36651,,,,,5/1/2020 6:07,What are the best datasets available for music information retrieval?,,0,0,,,,CC BY-SA 4.0 20819,2,,20810,5/1/2020 7:48,,1,,"

I have found that you should compute the derivatives $\frac{\partial L}{\partial A}$ and $\frac{\partial L}{\partial Z}$ in the flatten layer and then reshape them back to the Conv2D input shape.

",36639,,36639,,5/6/2020 10:47,5/6/2020 10:47,,,,1,,,,CC BY-SA 4.0 20820,1,,,5/1/2020 8:53,,2,416,"

I am trying to understand the mathematics behind the forward and backward propagation of neural nets. To make myself more comfortable, I am testing myself with an arbitrarily chosen neural network. However, I am stuck at some point.

Consider a simple fully connected neural network with two hidden layers. For simplicity, choose a linear activation function (${f(x) = x}$) at all layers. Now consider that this neural network takes two $n$-dimensional inputs $X^{1}$ and $X^{2}$. However, the first hidden layer only takes $X^1$ as the input and produces the output $H^1$. The second hidden layer takes $H^{1}$ and $X^2$ as the input and produces the output $H^{2}$. The output layer takes $H^{2}$ as the input and produces the output $\hat{Y}$. For simplicity, assume we do not have any bias.

So, we can write that, $H^1 = W^{x1}X^{1}$

$H^2 = W^{h}H^{1} + W^{x2}X^{2} = W^{h}W^{x1}X^{1} + W^{x2}X^{2}$ [substituting the value of $H^1$]

$\hat{Y} = W^{y}H^2$

Here, $W^{x1}$, $W^{x2}$, $W^{h}$ and $W^{y}$ are the weight matrices. Now, to make it more interesting, consider a shared weight matrix $W^{x} = W^{x1} = W^{x2}$, which leads to $H^1 = W^{x}X^{1}$ and $H^2 = W^{h}W^{x}X^{1} + W^{x}X^{2}$

I do not have any problem doing forward propagation by hand; however, the problem arises when I try to do backward propagation and update $W^{x}$.

$\frac{\partial loss}{\partial W^{x}} = \frac{\partial loss}{\partial H^{2}} . \frac{\partial H^{2}}{\partial W^{x}}$

Substituting, $\frac{\partial loss}{\partial H^{2}} = \frac{\partial Y}{\partial H^{2}}. \frac{\partial loss}{\partial Y}$ and $H^2 = W^{h}W^{x}X^{1} + W^{x}X^{2}$

$\frac{\partial loss}{\partial W^{x}}= \frac{\partial Y}{\partial H^{2}}. \frac{\partial loss}{\partial Y} . \frac{\partial}{\partial W^{x}} (W^{h}W^{x}X^{1} + W^{x}X^{2})$

Here I understand that, $\frac{\partial Y}{\partial H^{2}} = (W^y)^T$ and $\frac{\partial}{\partial W^{x}} W^{x}X^{2} = (X^{2})^T$ and we can also calculate $\frac{\partial Y}{\partial H^{2}}$, if we know the loss function. But how do we calculate $\frac{\partial}{\partial W^{x}} W^{h}W^{x}X^{1}$?

",33750,,,,,12/13/2022 17:03,Backpropagation of neural nets with shared weight,,3,0,,,,CC BY-SA 4.0 20821,2,,20820,5/1/2020 10:26,,1,,"

If we write $H^2 = W^{h}H^{1} + W^{x}X^{2}$, then the backward propagation step will be easier to understand.

Now,

$\frac{\partial}{\partial W^{x}} W^{h}W^{x}X^{1}$ can be written as: $\frac{\partial H^2}{\partial H^1}\frac{\partial H^1}{\partial W^{x}} $

$\frac{\partial H^2}{\partial H^1} = (W^h)^T$ and $\frac{\partial H^1}{\partial W^{x}} = (X^{1})^T $

Therefore,

$\frac{\partial}{\partial W^{x}} W^{h}W^{x}X^{1} = (W^h)^T(X^{1})^T $

I hope it has solved your problem.

",36657,,,,,5/1/2020 10:26,,,,2,,,,CC BY-SA 4.0 20822,2,,20790,5/1/2020 11:46,,1,,"

PCA works well when the data sample space is linear. If the data sample space is not linear, or the data lies on a manifold, then a model without PCA may perform better than a model using PCA.

In the given image you can see that the data lies on a manifold. For this type of data, PCA, which is based on a projection technique, does not work well. That's why we use manifold learning techniques to handle these cases.
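
As a quick sketch of the difference (assuming scikit-learn is available; the 'swiss roll' dataset is a standard example of manifold data):

    from sklearn.datasets import make_swiss_roll
    from sklearn.decomposition import PCA
    from sklearn.manifold import Isomap

    X, _ = make_swiss_roll(n_samples=1000, random_state=0)

    X_pca = PCA(n_components=2).fit_transform(X)      # linear projection: the roll stays tangled
    X_iso = Isomap(n_components=2).fit_transform(X)   # manifold learning: the roll is unrolled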

",36657,,,,,5/1/2020 11:46,,,,0,,,,CC BY-SA 4.0 20823,2,,18778,5/1/2020 13:06,,0,,"

So this question is due to a misunderstanding I had with how the loss function works in A2C.

Usually loss functions are always positive, but in A2C the loss can be negative as well. I thought that minimizing the loss function means moving it closer to zero, but in fact reducing the loss is indifferent to the sign of the loss value and is just about making the loss value smaller.

In other words, for a disadvantageous action, the loss for a prediction of 0.1 is smaller than the loss for a prediction of 0.9.

When the advantage is positive, $-\log(p) \cdot advantage$ will be smaller if $p$ grows, and when the advantage is negative, $-\log(p) \cdot advantage$ will be smaller if $p$ decreases.
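
A quick numerical check of this (the advantage of $-2$ is just a made-up example):

    import math

    advantage = -2.0                      # a disadvantageous action
    for p in (0.9, 0.1):
        loss = -math.log(p) * advantage
        print(p, loss)
    # p = 0.9 gives loss ~ -0.21, p = 0.1 gives loss ~ -4.61:
    # the loss is smaller (more negative) for the lower probability,
    # so minimizing the loss pushes p down for a negative advantage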

",32950,,,,,5/1/2020 13:06,,,,0,,,,CC BY-SA 4.0 20824,1,,,5/1/2020 13:14,,1,173,"

I want to make a bot which clicks the fire button on the mobile screen upon seeing an enemy's head.

In PUBG Mobile, which is an Android game, you have to control the fire button and the aim, along with many other controls, to kill an enemy or other players. I want to automate the fire button; everything else would be controlled by me. When I aim at a player's head, the bot should click the fire button instantly.

So there are a few problems with this. The first one is: what if it shoots upon seeing my teammate's head? The other is: what if there are two players at once, which one will it shoot? For that, it needs to shoot when my aim (the red crosshair) is on the player's head.

I don't know how to get started. I need to make an image recognition app and an autoclicker and combine them both. How do I get started? Assume that I only know basic Python.

",36664,,,,,5/1/2020 13:14,How do i start building an autoclick bot for pubg mobile?,,0,1,,,,CC BY-SA 4.0 20825,2,,20820,5/1/2020 13:23,,0,,"

The product rule of partial derivative:
$\frac{\partial}{\partial x} f g = g \frac{\partial}{\partial x} f + f \frac{\partial}{\partial x} g$

According to this, $\frac{\partial}{\partial W^{x}} W^{h}W^{x}X^{1} = W^{h}X^{1}$, because the derivative of the other term with respect to $W^{x}$ is zero. (I am not considering the transpose notation, as it depends on how you organize your data.)

However, your assumption of giving $H^{1}$ and $X^{2}$ as input to the second hidden layer is not valid (they are called hidden layers for that reason). The output of the first hidden layer ($H^{1}$) will be fed to the input of the second hidden layer. Your output of the second hidden layer would be $H^{2} = W^{h} * H^{1}$.

You have to feed your inputs $X^{1}, X^{2}$ to your network at once, by means of looping or vectorization.

",36661,,,,,5/1/2020 13:23,,,,2,,,,CC BY-SA 4.0 20826,1,20827,,5/1/2020 13:49,,1,544,"

In the context of Reinforcement Learning, what does it mean to have a multi-dimensional continuous action space?

I came across the following in the COBRA Paper

A method for learning a distribution over a multi-dimensional continuous action space. This learned distribution can be sampled efficiently.

and

During the initial exploration phase it explores its environment, in which it can move objects freely with a continuous action space but is not rewarded for its actions.

So, what do the multi-dimensionality and the continuity of the action space refer to? It'd be great if someone could provide an explanation with examples!

",35585,,2444,,1/9/2021 22:10,1/9/2021 22:23,What is meant by a multi-dimensional continuous action space?,,2,3,,,,CC BY-SA 4.0 20827,2,,20826,5/1/2020 14:19,,2,,"

Let me rephrase it a little: it's a multi-dimensional continuous space of actions. So, you assign each action some vector from $\mathbb{R}^{n}$. For intuition, imagine you have a robot arm with four joints. For every joint, you can apply a rotation force from $[-1, 1]$, and thus you get a 4-D vector of float numbers for each possible action.
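
For example, such an action space could be declared with the OpenAI Gym API (a sketch, assuming gym is installed):

    import numpy as np
    from gym.spaces import Box

    # A 4-dimensional continuous action space: one torque in [-1, 1] per joint
    action_space = Box(low=-1.0, high=1.0, shape=(4,), dtype=np.float32)
    action = action_space.sample()   # e.g. array([ 0.31, -0.87,  0.05,  0.66], dtype=float32)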

",16940,,16940,,5/1/2020 14:28,5/1/2020 14:28,,,,0,,,,CC BY-SA 4.0 20829,2,,20826,5/1/2020 14:42,,1,,"

The question has already been answered by Kirill, but I thought I'll add a good example of a multi-dimensional continuous action space too, namely the one I just encountered in the COBRA paper itself.

In all of our experiments we use a 2-dimensional virtual "touch-screen" environment that contains objects with configurable shape, position, and color. The agent can move any visible object by clicking on the object and clicking in a direction for the object to move. Hence the action space is continuous and 4-dimensional, namely a pair of clicks.

",35585,,2444,,1/9/2021 22:23,1/9/2021 22:23,,,,0,,,,CC BY-SA 4.0 20830,1,,,5/1/2020 14:58,,2,92,"

I'm creating an RL Q-learning agent for a two-player, fully-observable board game and wondered: if I were to train the Q-table using adversarial training, should I let both 'players' use, and update, the same Q-table? Or would this lead to issues?

",27629,,,,,1/17/2023 15:05,Adversarial Q Learning should use the same Q Table?,,1,0,,,,CC BY-SA 4.0 20831,1,,,5/1/2020 16:10,,3,584,"

I'm building an agent for a racing game. In this game, there is a randomized map where there are speed boosts for the player to pick up and obstacles that act to slow the player down. The goal of the game is to reach the finishing line before the opponent.

While working on this problem, I've realized that we can almost forget about the presence of our opponent and just focus on getting the agent to the finish line as quickly as possible.

I started with a simple

  • $-1$ reward for every timestep
  • $+100$ reward for winning, and
  • $-100$ for losing.

When I was experimenting with this, I felt like the rewards may be too sparse, as my agent was converging to pretty poor average returns. I iterated to a function of speed and distance travelled (along with the $+100$ reward), but, after some experimentation, I started feeling like the agent might be able to achieve high returns without necessarily being the fastest to the finish line.

I'm thinking that I return to the first approach and possibly add in some reward for being in the first place (as a function of the opponent's distance behind the agent).

What else could I try? Should I try to spread the positive rewards out more for good behavior? Should I create additional rewards/penalties for, say, hitting obstacles and using boosts, or can I expect the agent to learn the correlation?

",36670,,2444,,10/7/2020 22:49,10/7/2020 22:49,How should I design the reward function for racing game (where the goal is to reach finishing line before the opponent)?,,1,0,0,,,CC BY-SA 4.0 20832,1,,,5/1/2020 17:41,,2,248,"

What is the original source of the TD Advantage Actor-Critic algorithm?

I found this tutorial really helpful for learning the algorithm. However, what is the original source of this algorithm?

",36673,,2444,,1/13/2022 11:57,1/13/2022 11:57,What is the original source of the TD Advantage Actor-Critic algorithm?,,0,0,,,,CC BY-SA 4.0 20834,2,,20830,5/1/2020 18:58,,0,,"

should I let both 'players' use, and update, the same Q Table?

Yes, this works well for zero-sum games, when player 1 wants to maximise a result (often just +1 for ""player 1 wins"") and player 2 wants to minimise a result (score -1 for ""player 2 wins""). That alters algorithms such as Q-learning, because the greedy choice switches between min and max functions over action values - player 1's TD target becomes $R_{t+1} + \gamma \text{min}_{a'} [Q(S_{t+1}, a')]$ because the greedy choice in the next state is taken by player 2.

Alternatively, if the states never overlap between players, then you could learn a Q function that always returns the future expected return according to the current player. This can be forced if necessary by making part of the state record whose turn it is. With this you need a way to convert between player 1 and player 2 scores in order to use Q-learning updates. For zero-sum games, the Q value from a player 2 state is the negative of player 1's value for that same state, and vice versa, so the TD target changes for both players to $R_{t+1} - \gamma \text{max}_{a'} [Q(S_{t+1}, a')]$
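
As a minimal sketch of the first option (a single shared tabular Q with the min/max switch; the helper names and the dict representation here are just illustrative, not from any particular library):

    # q is a dict mapping (state, action) -> value, shared by both players
    def td_target(q, reward, next_state, next_actions, next_player, gamma=0.99):
        if not next_actions:                  # terminal state: no bootstrap term
            return reward
        next_values = [q.get((next_state, a), 0.0) for a in next_actions]
        # player 1 maximises the shared value, player 2 minimises it
        best = max(next_values) if next_player == 1 else min(next_values)
        return reward + gamma * best

    def q_update(q, state, action, target, alpha=0.1):
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (target - old)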

The first option can result in slightly less complexity in the learned function, which matters if you are using function approximation and learning a Q function, e.g. using neural networks instead of a Q-table. That may result in faster learning and generalisation, although it will depend on details of the game.

Or would this lead to issues?

No major issues. I am performing this kind of training - a single Q function estimating a global score which P1 maximises and P2 minimises - for the Kaggle Connect X competition, and it works well.

I can think of a couple of minor things:

  • You may still want to have the ability for each player to be using a different version of the table or learned Q function. This would allow you to have different versions of your agent (e.g. at different stages of learning) compete, to evaluate different agents against each other. To do this, you have to write code that allows for multiple tables or functions in any case.

  • You need to keep track of how both players express and achieve their opposing goals when using the table, as you can already see by the modified TD targets above. This becomes more important when adding look-ahead planning, which is a common addition and can significantly improve the performance of an agent - during look-ahead you must switch between player views on what the best action choice is. It is possible to make off-by-one errors in some part of the code but not others and have the agent learn inconsistently.

",1847,,1847,,5/2/2020 10:33,5/2/2020 10:33,,,,0,,,,CC BY-SA 4.0 20836,1,,,5/1/2020 19:31,,1,29,"

Assume we have a policy $\pi_{\theta}$ in a classic reinforcement learning setting, and a reward function $R^{\pi}(s,a)$ that changes whenever $\pi$ changes, i.e. it is not only predefined by the environment itself. How can we adapt the popular algorithms (e.g. SAC) to this change?

",36603,,,,,5/1/2020 19:31,What if the rewards induced by an environment are related to the policy too?,,0,1,,,,CC BY-SA 4.0 20839,1,,,5/1/2020 23:16,,1,58,"

When using a trained Q-learning algorithm in an actual game, would I just use exploitation and no longer use exploration? Should I use exploration only during the training phase?

",27629,,2444,,5/1/2020 23:23,5/1/2020 23:42,Should I just use exploitation after I have trained the Q agent?,,1,0,,,,CC BY-SA 4.0 20843,2,,20839,5/1/2020 23:35,,1,,"

Once you have estimated the $Q$ function, you can derive the policy from it in different ways. For example, you can act greedily with respect to it (see this answer), which can be formally denoted as

$$ \pi(s) = \operatorname{argmax}_{a^*}Q(s, a), \; \forall s \in \mathcal{S} $$ where $Q(s, a)$ is your estimated value function and $\pi$ the policy greedily derived from it.

This means that you would just exploit your current knowledge of the return. This is probably a good thing to do if you believe your value function is optimal and the dynamics of the environment don't change.

Of course, if your policy is not optimal, you may not want to always execute the greedy action. In that case, you could still perform some form of exploration (e.g. with the $\epsilon$-greedy policy).
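
For illustration, a minimal sketch of deriving both the greedy and the $\epsilon$-greedy behaviour from an estimated Q-table (the table values below are random placeholders):

    import numpy as np

    rng = np.random.default_rng(0)
    Q = rng.random((5, 3))                  # hypothetical Q-table: 5 states, 3 actions

    greedy_policy = np.argmax(Q, axis=1)    # pure exploitation: best action for every state

    def epsilon_greedy(state, epsilon=0.1):
        # with probability epsilon keep exploring, otherwise exploit the current estimate
        if rng.random() < epsilon:
            return int(rng.integers(Q.shape[1]))
        return int(np.argmax(Q[state]))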

Moreover, if the dynamics (e.g. the reward function) of your environment change over time, you could continually train your RL agent. If you are interested in continual RL, the paper Continual Reinforcement Learning with Complex Synapses (2018) could potentially be useful.

",2444,,2444,,5/1/2020 23:42,5/1/2020 23:42,,,,0,,,,CC BY-SA 4.0 20844,1,,,5/2/2020 3:59,,0,97,"

I have a very silly problem at hand. I have implemented 2 methods which give me the mask to separate the objects from the background. What I get from one method is the object encapsulated in a red contour or boundary, and the other one makes the background red.

I am using Keras to classify trash. I wanted to use the output, i.e. the EXTRACTED object, as an input to the CNN model. Now I do not see any difference between the images and the output. All there is is an extra boundary around the object, and I fail to understand how it can help my model.

I could add an extra alpha channel in the second method to make the background transparent, but Keras' ImageDataGenerator does not work with RGBA images. What should I do to improve the model?

",36062,,,,,5/30/2021 22:01,How to use 'Canny/Watershed' algorithm's output as an input for Image Classification Model,,1,2,,,,CC BY-SA 4.0 20845,1,,,5/2/2020 4:41,,1,27,"

To give a little background, I've been reading the COBRA paper, and I've reached the section that talks about the exploration policy, in particular. We figure that a uniformly random policy won't do us any good, since the action space is sparsely populated with objects that the agent must act upon - and a random action is likely to result in no change (an object here occupies only about 1.7% of the space of the screen). Hence we need our agent to learn in the exploration phase a policy that clicks on and moves objects more frequently.

I get that a random policy won't work, but I've difficulty understanding how and why the transition model is trained adversarially. Following is the extract which talks about the same, and I've highlighted parts that I don't completely understand -

""Our approach is to train the transition model adversarially with an exploration policy that learns to take actions on which the transition model has a high error. Such difficult-to-predict actions should be those that move objects (given that others would leave the scene unchanged). In this way the exploration policy and the transition model learn together in a virtuous cycle. This approach is a form of curiosity-driven exploration, as previously described in both the psychology (Gopnik et al., 1999) and the reinforcement learning literature (Pathak et al., 2017; Schmidhuber, 1990a,b).""

  • How does it help to take actions on which the transition model has a high error?
  • I don't exactly see how a virtuous cycle is in action

Could someone please explain? Thanks a lot!

",35585,,,,,5/2/2020 4:41,How can transition models in RL be trained adversarially?,,0,0,,,,CC BY-SA 4.0 20846,1,,,5/2/2020 8:51,,2,177,"

I am wondering how DDPG or DPG can handle a discrete action space. There are some papers saying that using Gumbel-Softmax with DDPG can solve the discrete action problem. However, will the Gumbel-Softmax make the deterministic policy a stochastic one? If not, how can that be achieved?

",36587,,,,,5/2/2020 8:51,How can DDPG handle the discrete action space?,,0,0,,,,CC BY-SA 4.0 20847,1,20849,,5/2/2020 11:56,,1,189,"

What are multi-hop relational paths in the context of knowledge graphs (KGs)?

I tried looking it up online, but didn't find a simple explanation.

",35585,,35585,,5/2/2020 17:58,5/2/2020 18:22,What are multi-hop relational paths?,,1,0,,,,CC BY-SA 4.0 20849,2,,20847,5/2/2020 13:49,,1,,"

Before trying to explain this term in your context, let me briefly describe the term in other contexts.

In computer networking, the term ""hop"" refers to a node (e.g. a router) that a packet goes through before reaching its destination from its source. In a multi-hop situation, you have several nodes involved in the process of sending the packet from the source to the destination.

A knowledge graph is a graph that accumulates and conveys knowledge of the real world, where nodes represent entities of interest and edges relations between those entities.

So, multi-hop relational paths are probably relational paths involving more than one node or edge in the knowledge graph.
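
As a toy illustration (the entities and relations below are made up):

    # A tiny knowledge graph stored as (head, relation, tail) triples
    triples = [
        ('user_A', 'purchased', 'item_1'),
        ('item_1', 'produced_by', 'brand_X'),
        ('brand_X', 'produces', 'item_2'),
    ]

    # Following user_A -> item_1 -> brand_X -> item_2 traverses three edges,
    # so it is a multi-hop (here, 3-hop) relational path connecting user_A to item_2.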

But what do we mean by ""relational""?

If you are familiar with the basics of databases, the word ""relational"" shouldn't be so unfamiliar. In fact, there are the so-called relational databases, relational models and relational algebra. Intuitively, the word ""relational"" is used to denote what you think it denotes, i.e. relations. See also What a relational database is by Oracle.

And what is a path?

In section 2.2.3 of the tutorial Knowledge Graphs, Aidan Hogan et al. provide a description of a path (expressions) in the context of knowledge graphs

Navigational graph patterns. A key feature that distinguishes graph query languages is the ability to include path expressions in queries. A path expression $r$ is a regular expression that allows matching arbitrary-length paths between two nodes, which is expressed as a regular path query $(x,r,y)$, where $x$ and $y$ can be variables or constants (or even the same term).

",2444,,2444,,5/2/2020 18:22,5/2/2020 18:22,,,,0,,,,CC BY-SA 4.0 20850,1,20861,,5/2/2020 14:42,,5,922,"

If a research paper uses multi-armed bandits (either in their standard or contextual form) to solve a particular task, can we say that they solved this task using a reinforcement learning approach? Or should we distinguish between the two and use the RL term only when it is associated with an MDP formulation?

In fact, each RL course/textbook usually contains a section about bandits (especially when dealing with the exploration-exploitation tradeoff). Additionally, bandits also have the concept of actions and rewards.

I just want to make sure what the right terminology should be, when describing either approach.

",34010,,,,,5/3/2020 15:06,Are bandits considered an RL approach?,,2,0,,,,CC BY-SA 4.0 20852,1,20863,,5/2/2020 15:09,,1,180,"

I've been reading this paper on recommendation systems using reinforcement learning (RL) and knowledge graphs (KGs).

To give some background, the graph has several (finitely many) entities, of which some are user entities and others are item entities. The goal is to recommend items to users, i.e. to find a recommendation set of items for every user such that the user and the corresponding items are connected by one reasoning path.

I'm attaching an example of such a graph for more clarity (from the paper itself) -

In the paper above, they say

First, we do not have pre-defined targeted items for any user, so it is not applicable to use a binary reward indicating whether the user interacts with the item or not. A better design of the reward function is to incorporate the uncertainty of how an item is relevant to a user based on the rich heterogeneous information given by the knowledge graph.

I'm not able to understand the above extract, which talks about the reward function to use - binary, or something else. A detailed explanation of what the author is trying to convey in the above extract would really help.

",35585,,2444,,12/26/2021 13:44,12/26/2021 13:44,Which reward function works for recommendation systems using knowledge graphs?,,1,4,,,,CC BY-SA 4.0 20858,1,20859,,5/2/2020 20:18,,1,114,"

I am trying to apply Eligibility Traces to a currently working Q-Learning algorithm.

The reference code for the Q-Learning algorithm was taken from this great blog by DeepLizard, but does not include Eligibility Traces. Link to the code on Google Colab.

I wish to add the Eligibility Traces by implementing this pseudocode:

Initialize Q(s,a) arbitrarily and e(s,a) = 0, for all s,a
Repeat (for each episode):
    Initialize s,a
    Repeat (for each step of episode):
        Take action a, observe r,s’
        Choose a’ from s’ using policy derived from Q (e.g., ϵ-greedy)
        δ ← r + γ Q(s’,a’) – Q(s,a)
        e(s,a) ← e(s,a) + 1
        For all s,a:
            Q(s,a) ← Q(s,a) + α δ e(s,a)
            e(s,a) ← γ λ e(s,a)
        s ← s’ ; a ← a’
    until s is terminal

Taken from HERE

This is my code as I have implemented the pseudo-code - Link

The part that needs to be improved is here:

#Q learning algorithem
for episode in range(num_episodes):
  state = env.reset()
  et_table = np.zeros((state_space_size,action_space_size))
  done = False
  reward_current_episode = 0

  for steps in range(max_steps_per_episode):
    #Exploration-Explotation trade-off
    exploration_rate_thresh = random.uniform(0,1)
    if exploration_rate_thresh > exploration_rate:
      action = np.argmax(q_table[state,:])
    else:
      action = env.action_space.sample()

    new_state, reward, done, info = env.step(action)

    #Update Q-table and Eligibility table
    delta = reward + discount_rate * np.max(q_table[new_state,:]) - q_table[state,action]
    et_table[state, action] = et_table[state, action] + 1

    for update_state in range(state_space_size):
      for update_action in range(action_space_size):
        q_table[update_state, update_action] = q_table[update_state, update_action] + learning_rate * delta * et_table[update_state, update_action]
        et_table[update_state, update_action] = discount_rate * gamma * et_table[update_state, update_action]

    state = new_state
    reward_current_episode = reward

    if done==True:
      break

  #Exploration rate decay
  exploration_rate = min_exploration_rate + (max_exploration_rate - min_exploration_rate) * np.exp(-exploration_decay_rate*episode)

  rewards_all_episodes.append(reward_current_episode)

For a while, I was getting poor results (avg. rewards for 1000 episodes were around 0.14, while the original NON-ET algorithm was averaging 0.69 on the last 1000 episodes), but now I get these errors:

/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:27: RuntimeWarning: overflow encountered in double_scalars
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:22: RuntimeWarning: invalid value encountered in double_scalars
",36702,,,,,5/2/2020 20:35,Applying Eligibility Traces to Q-Learning algorithm does not improve results (And might not function well),,1,0,,,,CC BY-SA 4.0 20859,2,,20858,5/2/2020 20:35,,1,,"

The thing is that, while I was posting the question, I tried tweaking the parameters, and it seems that my discount rate (set to 0.99) was causing these errors.

Also, it seems that the is_slippery argument passed to the environment (True: the agent's action will be fulfilled 33% of the time, the rest of the time it moves in a random direction; False: the agent's action will be fulfilled 100% of the time) is crucial for eligibility traces. The results improve from 0.39 to 0.993. My assumption is that the randomness of the ""Slippery"" part of the environment is severely hurting the eligibility traces, because they rely on the actions that were taken (assuming they were fulfilled and not randomly changed).

After changing it to 0.9 I receive these results:

*** Average award per 1000 episodes ***
1000 :  0.2640000000000002
2000 :  0.7560000000000006
3000 :  0.9140000000000007
4000 :  0.9600000000000007
5000 :  0.9820000000000008
6000 :  0.9870000000000008
7000 :  0.9860000000000008
8000 :  0.9830000000000008
9000 :  0.9930000000000008
10000 :  0.9930000000000008


******* Q-Table *******

[[0.53144572 0.59049007 0.59046218 0.53144103]
 [0.53292711 0.         0.65610941 0.5864814 ]
 [0.58892359 0.729037   0.59954581 0.6546992 ]
 [0.65324248 0.         0.53612877 0.55540024]
 [0.59049001 0.65610007 0.         0.53143728]
 [0.         0.         0.         0.        ]
 [0.         0.8099945  0.         0.65015572]
 [0.         0.         0.         0.        ]
 [0.65610004 0.         0.72900006 0.59049002]
 [0.65610024 0.81       0.80996651 0.        ]
 [0.73156036 0.9        0.         0.72830918]
 [0.         0.         0.         0.        ]
 [0.         0.         0.         0.        ]
 [0.         0.8100008  0.9        0.72900011]
 [0.81000015 0.89999998 1.         0.80998688]
 [0.         0.         0.         0.        ]]

Comparing it to the original Q-Learning algorithm by DeepLizard (LINK):

*** Average award per 1000 episodes ***
1000 :  0.22900000000000018
2000 :  0.7250000000000005
3000 :  0.8990000000000007
4000 :  0.9610000000000007
5000 :  0.9860000000000008
6000 :  0.9900000000000008
7000 :  0.9870000000000008
8000 :  0.9970000000000008
9000 :  0.9870000000000008
10000 :  0.9890000000000008


******* Q-Table *******

[[0.94148015 0.95099005 0.93206533 0.94148015]
 [0.94148015 0.         0.71778349 0.84678222]
 [0.88367728 0.43945604 0.0084652  0.27374571]
 [0.15097126 0.         0.         0.        ]
 [0.95099005 0.96059601 0.         0.94148015]
 [0.         0.         0.         0.        ]
 [0.         0.98009092 0.         0.36957636]
 [0.         0.         0.         0.        ]
 [0.96059599 0.         0.970299   0.95099004]
 [0.96059598 0.98009932 0.9801     0.        ]
 [0.97029894 0.99       0.         0.97021962]
 [0.         0.         0.         0.        ]
 [0.         0.         0.         0.        ]
 [0.         0.89329522 0.99       0.91080933]
 [0.98009934 0.98999961 1.         0.98009962]
 [0.         0.         0.         0.        ]]

We see slightly better results (we can see that by the 10K-th episode our agent is taking the right path 99.3% of the time, as opposed to 98.9% in the original algorithm).

The bottom line is that the Eligibility Traces implementation can be done using these formulas:

a. create the et_table:

et_table = np.zeros((state_space_size,action_space_size))

b. Set gamma for the decaying of the Eligibility Traces:

gamma = 0.9

c. At each step - apply these calculations:

#Update Q-table and Eligibility table
    delta = reward + discount_rate * np.max(q_table[new_state,:]) - q_table[state,action]
    et_table[state, action] = et_table[state, action] + 1

    for update_state in range(state_space_size):
      for update_action in range(action_space_size):
        q_table[update_state, update_action] = q_table[update_state, update_action] + learning_rate * delta * et_table[update_state, update_action]
        et_table[update_state, update_action] = discount_rate * gamma * et_table[update_state, update_action]

d. After each episode - reset the et_table to zeros.

Full code can be found here:

import numpy as np
import gym
import random
import time
from IPython.display import clear_output

env = gym.make(""FrozenLake-v0"", is_slippery=False)

action_space_size = env.action_space.n
state_space_size = env.observation_space.n

q_table = np.zeros((state_space_size,action_space_size))
et_table = np.zeros((state_space_size,action_space_size))
print(""Q Table:\n"", q_table)
print(""Eligibility Traces:\n"", et_table)

num_episodes = 10000
max_steps_per_episode = 100

learning_rate = 0.1
discount_rate = 0.9
gamma = 0.9

exploration_rate = 1
max_exploration_rate = 1
min_exploration_rate = 0.01
exploration_decay_rate = 0.001

rewards_all_episodes = []
q_table = np.zeros((state_space_size,action_space_size))

#Q learning algorithem
for episode in range(num_episodes):
  state = env.reset()
  et_table = np.zeros((state_space_size,action_space_size))
  done = False
  reward_current_episode = 0

  for steps in range(max_steps_per_episode):
    #Exploration-Explotation trade-off
    exploration_rate_thresh = random.uniform(0,1)
    if exploration_rate_thresh > exploration_rate:
      action = np.argmax(q_table[state,:])
    else:
      action = env.action_space.sample()

    new_state, reward, done, info = env.step(action)

    #Update Q-table and Eligibility table
    delta = reward + discount_rate * np.max(q_table[new_state,:]) - q_table[state,action]
    et_table[state, action] = et_table[state, action] + 1

    for update_state in range(state_space_size):
      for update_action in range(action_space_size):
        q_table[update_state, update_action] = q_table[update_state, update_action] + learning_rate * delta * et_table[update_state, update_action]
        et_table[update_state, update_action] = discount_rate * gamma * et_table[update_state, update_action]

    state = new_state
    reward_current_episode = reward

    if done==True:
      break

  #Exploration rate decay
  exploration_rate = min_exploration_rate + (max_exploration_rate - min_exploration_rate) * np.exp(-exploration_decay_rate*episode)

  rewards_all_episodes.append(reward_current_episode)

#Print average reward per thousend episodes
reward_per_thousend_episodes = np.split(np.array(rewards_all_episodes),num_episodes/1000)
count = 1000
print(""*** Average award per 1000 episodes ***"")
for r in reward_per_thousend_episodes:
  print(count, "": "", str(sum(r/1000)))
  count+=1000

#Print Q-Table
print(""\n\n******* Q-Table *******\n"")
print(q_table)

print(""\n\n******* ET-Table *******\n"")
print(et_table)
",36702,,,,,5/2/2020 20:35,,,,0,,,,CC BY-SA 4.0 20861,2,,20850,5/2/2020 22:55,,5,,"

Several important researchers distinguish between bandit problems and the general reinforcement learning problem.

The book Reinforcement learning: an introduction by Sutton and Barto describes bandit problems as a special case of the general RL problem.

The first chapter of this part of the book describes solution methods for the special case of the reinforcement learning problem in which there is only a single state, called bandit problems. The second chapter describes the general problem formulation that we treat throughout the rest of the book — finite Markov decision processes — and its main ideas including Bellman equations and value functions.

This means that you can represent your bandit problem as an MDP with a single state and possibly multiple actions.
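
For instance, here is a minimal sketch of an $\epsilon$-greedy agent for a 3-armed Bernoulli bandit, which is exactly such a single-state problem (the arm probabilities below are made up):

    import numpy as np

    rng = np.random.default_rng(0)
    true_means = [0.2, 0.5, 0.8]          # hypothetical success probability of each arm
    q = np.zeros(3)                       # action-value estimates (only one state, so a 1-D array)
    counts = np.zeros(3)

    for t in range(1000):
        # epsilon-greedy action selection over the single state
        a = int(rng.integers(3)) if rng.random() < 0.1 else int(np.argmax(q))
        r = float(rng.random() < true_means[a])        # Bernoulli reward
        counts[a] += 1
        q[a] += (r - q[a]) / counts[a]                 # incremental sample mean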

In section 1.1.2 of the book Bandit Algorithms (2020), Szepesvari and Lattimore describe the differences between bandits and reinforcement learning

One of the distinguishing features of all bandit problems studied in this book is that the learner never needs to plan for the future. More precisely, we will invariably make the assumption that the learner's available choices and rewards tomorrow are not affected by their decisions today. Problems that do require this kind of long-term planning fall into the realm of reinforcement learning

This definition is different than the one by Sutton and Barto. In this case, only bandit problems where the learner doesn't need to plan for the future are considered.

In any case, bandit problems and RL problems have a lot of similarities. For example, both attempt to deal with the exploration-exploitation trade-off and, in both cases, the underlying problem can be formulated as a Markov decision process.

",2444,,2444,,5/3/2020 0:28,5/3/2020 0:28,,,,1,,,,CC BY-SA 4.0 20863,2,,20852,5/3/2020 5:50,,2,,"

I found the answer further into the paper (section 3.2 Formulation as Markov Decision Process)! I'll post it here for everyone.

Given any user, there is no pre-known targeted item in the KGRE-Rec problem, so it is unfeasible to consider binary rewards indicating whether the agent has reached a target or not. Instead, the agent is encouraged to explore as many “good” paths as possible. Intuitively, in the context of recommendations, a “good” path is one that leads to an item that a user will interact with, with high probability. To this end, we consider to give a soft reward only for the terminal state $s_{T}=\left(u, e_{T}, h_{T}\right)$ based on another scoring function $f(u, i)$. The terminal reward $R_{T}$ is defined as

$$ R_{T}= \begin{cases}\max \left(0, \frac{f\left(u, e_{T}\right)}{\max _{i \in I} f(u, i)}\right), & \text { if } e_{T} \in \mathcal{I} \\ 0, & \text { otherwise }\end{cases} $$

where the value of $R_{T}$ is normalized to the range of $[0,1]$. $f(u, i)$ is also introduced in the next section.

Details about $f(u,i)$, the scoring function, can be found in the paper.
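
For concreteness, this is roughly how I would compute that terminal reward in code, assuming you already have a scoring function f(u, i) with positive scores; the function and variable names below are my own, not from the paper:

def terminal_reward(f, user, terminal_entity, item_set):
    # Soft reward from the quoted formula: only item entities get a non-zero
    # reward, normalized by the best score over all items (assumed > 0).
    if terminal_entity not in item_set:
        return 0.0
    best = max(f(user, i) for i in item_set)
    return max(0.0, f(user, terminal_entity) / best)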

",35585,,2444,,12/26/2021 13:44,12/26/2021 13:44,,,,1,,,,CC BY-SA 4.0 20864,1,20867,,5/3/2020 6:11,,3,49,"

I am learning RL for the first time. It may be naive, but I find this idea a bit odd to grasp: if the goal of RL is to maximize the expected return, then shouldn't the expected return be calculated for some faraway time in the future ($t+n$) instead of the current time $t$? After all, we are building our system for the future using current information. (I am coming from a machine learning background, and that view makes more sense to me.)

Generally, the expected return is: $$\mathbb{E}[G_t] = \mathbb{E}[ R_{t+1} + R_{t+2} + R_{t+3} + \dots + R_{t+n}]$$

However, shouldn't the expected return be: $$\mathbb{E}[G_{t+n}] = \mathbb{E}[R_{t+1} + R_{t+2} + R_{t+3} + \dots + R_{t+n-1}]$$

",36710,,2444,,5/3/2020 12:00,5/4/2020 7:15,Shouldn't expected return be calculated for some faraway time in the future $t+n$ instead of current time $t$?,,1,0,,,,CC BY-SA 4.0 20866,1,,,5/3/2020 7:13,,1,57,"

I have been trying to write code to implement a plain neural net, without convolutions, from scratch. I took some help online here and added my code to my GitHub account.

I don't understand why the predictions made by my code are only 88%-90% accurate after the 1st epoch, whereas his code is 95% accurate after the 1st epoch with the same parameters (the same Xavier initialization for the weights, biases not initialized, the same number of hidden-layer neurons). While his architecture uses 2 hidden layers, my code performed worse with 2 hidden layers. With 1 hidden layer, his code performs similarly (~96%).

",33029,,,,,5/3/2020 7:13,MNIST Classification code performing with 88%-90% whereas other codes online perform 95% on first epoch,,0,3,,,,CC BY-SA 4.0 20867,2,,20864,5/3/2020 8:08,,1,,"

shouldn't the expected return be calculated for some faraway time in the future (𝑡+𝑛) instead of current time $t$?

This is partly a notation issue, but $G_t$ is already the future sum of rewards as seen by the first (and correct) equation in your question. You don't actually know the value of any individual return $g_t$* until after $t+n$. However, you can predict the expected value $\mathbb{E}[G_t]$ provided the environment and policy remain consistent.

Your second equation is pretty similar to the first one, which is why I say it is partly a notation issue. However, the point of calculating the forward-looking expectation for the return is to allow for assessment at time $t$. This is to either predict likely outcome for future rewards knowing the system is in state $s_t$, or in control scenarios to choose an action $a_t$.

In addition, in both prediction and control scenarios, the past (from time steps $0$ to $t-1$) has already happened. Measurements of how well a system did in terms of gathering reward, looking backwards, might be useful metrics for e.g. ""how good is this agent?"". However, they are not in general a guide to the future. In many sparse environments (e.g. a board game scoring +1 for a win), this data is basically useless for predicting the future and all you want to know is summarised by the current state and the acting policy.

Regardless of when you get to finally calculate a return, the start time step - the part of the trajectory that the return is calculated from, is a key parameter. The end time step is a practical concern for implementation, but can often be considered to be at infinity for theoretical purposes (i.e. we are interested in measuring or optimising all future rewards). So if you are only going to display/use one parameter in the notation, the start time $t$ is the one to use.

There are variants of notation, to show how the return is calculated, where the calculation horizon is made explicit, e.g. $G_{t:t+n}$ for a truncated return or $G_{t:t+1}$ when calculating a one step temporal difference target. All the ones I have seen still maintain the forward view that defines a value associated with a current timestep $t$, for the same reason as explained above - it is at timestep $t$ where this value is of most interest as a prediction.
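
To make the indexing concrete, here is a small sketch (my own, not from any particular text) of computing a discounted truncated return $G_{t:t+n}$ from a list of observed rewards, where rewards[k] stores $R_{k+1}$:

def truncated_return(rewards, t, n, gamma=0.99):
    # G_{t:t+n} = R_{t+1} + gamma * R_{t+2} + ... + gamma^(n-1) * R_{t+n}
    g = 0.0
    for k in range(n):
        g += (gamma ** k) * rewards[t + k]
    return g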

In practice during training you often wait until $t+n$ before you know the correct value of $g_t$* to apply as a training value - which is then used to update value estimates for $\hat{v}(s_t)$ or $\hat{q}(s_t, a_t)$. It is possible to make partial updates before that ending time step using techniques such as eligibility traces.


* Using notation of uppercase $G$ for random variable, and lowercase $g$ for a measured value.

",1847,,1847,,5/4/2020 7:15,5/4/2020 7:15,,,,0,,,,CC BY-SA 4.0 20868,1,,,5/3/2020 8:15,,3,83,"

Recently, I was reading a paper that proposes a new evaluation metric, SIMILE. In one section, a validation loss comparison is made between SIMILE and BLEU: the plot shows the expected BLEU cost when training with BLEU and when training with SIMILE.

What I'm unable to understand is what is meant by the expected BLEU cost when training with BLEU and SIMILE. Are there separate cost functions defined for these scores?

I'm attaching the image of the graph.

",36711,,2212,,5/3/2020 20:26,1/19/2023 3:01,What is meant by the expected BLEU cost when training with BLEU and SIMILE?,,1,0,,,,CC BY-SA 4.0 20870,1,21089,,5/3/2020 12:54,,1,87,"

I'm looking at the Bernoulli naïve Bayes classifier on Wikipedia, and I understand Bayes' theorem along with Gaussian naïve Bayes. However, when looking at how $P(x|c_k)$ is calculated, I don't understand it. The Wikipedia page says it's calculated as follows

$$P(x|c_k) = \prod^{n}_{i=1} p^{x_i}_{ki} (1-p_{ki})^{(1-x_i)}. $$

They mention that $p_{ki}$ is the probability of class $c_k$ generating the term $x_i$. Does that mean $P(x|c_k)$? If so, that doesn't make sense, since to calculate that we would need to have calculated it already. So what is $p_{ki}$?

And in the first part, after the product symbol, are they raising this probability to the power of $x_i$, or does that again just mean 'probability of class $c_k$ generating the term $x_i$'?

I also don't understand the intuition behind why or how this calculates $P(x|c_k)$.

",20736,,2444,,12/13/2021 9:12,12/13/2021 9:13,Understanding how to calculate $P(x|c_k)$ for the Bernoulli naïve Bayes classifier,,1,6,,,,CC BY-SA 4.0 20871,1,20900,,5/3/2020 13:09,,4,463,"
 make_env = lambda: ptan.common.wrappers.wrap_dqn(gym.make(""PongNoFrameskip-v4""))
 envs = [make_env() for _ in range(NUM_ENVS)]

Here is a code you can look at.

The two above lines of code create multiple environments for the game of Atari Pong with the A2C algorithm.

I understand why it is very useful to have multiple agents working on different instances of the same environment as it is presented in A3C (i.e. an asynchronous version of A2C). However, in the above code, it has a single agent working on different instances of the same environment.

What is the advantage of using more than one environment with a single agent?

UPDATE

import gym

class GymEnvVec:
    def __init__(self, name, n_envs, seed):
        self.envs = [gym.make(name) for i in range(n_envs)]
        [env.seed(seed + 10 * i) for i, env in enumerate(self.envs)]

    def reset(self):
        return [env.reset() for env in self.envs]

    def step(self, actions):
        return list(zip(*[env.step(a) for env, a in zip(self.envs, actions)]))
",35626,,35626,,5/11/2020 14:26,5/11/2020 14:26,What is the advantage of using more than one environment with the advantage actor-critic?,,1,0,,,,CC BY-SA 4.0 20872,2,,20769,5/3/2020 13:12,,1,,"

Based on the presentation by Praveen Venkateswaran on the same paper, the KWIK framework covers any learning algorithm that behaves as follows:

  1. if it already knows the answer (from what it has learned so far), it outputs that answer;
  2. if it is uncertain, it outputs the special ""I don't know"" mark;
  3. the number of ""I don't know"" marks has an upper bound, which depends on the accuracy/error parameters of the algorithm.

The idea is that the algorithm has a trainer who knows the correct answers, as in any machine learning setting, but the difference is that the learning agent should know when it has enough knowledge to give the result right away and when it is time to ask and learn. In the learning phase, the trainer does not compute errors; it only gives the right answer when asked.

I see two implications: the algorithm should know when it does not have all the necessary data to answer (i.e. when to give the ""I don't know"" answer), and it needs a mechanism to be sure it is correct when it does answer, e.g. when the answer is 1 or 0 in the binary case.

How to put this in mathematics varies with the chosen algorithm. There is also a formal set of equations for this, but I leave only this informal definition here to give the basic idea.

",11810,,11810,,5/3/2020 13:44,5/3/2020 13:44,,,,1,,,,CC BY-SA 4.0 20873,2,,20850,5/3/2020 15:06,,3,,"

Let's have a look at the introduction of Chapter 2: Multi-armed Bandits in the Reinforcement Learning: An Introduction by Sutton, Barto

The most important feature distinguishing reinforcement learning from other types of learning is that it uses training information that evaluates the actions taken rather than instructs by giving correct actions. This is what creates the need for active exploration, for an explicit search for good behavior. Purely evaluative feedback indicates how good the action taken was, but not whether it was the best or the worst action possible. Purely instructive feedback, on the other hand, indicates the correct action to take, independently of the action actually taken. This kind of feedback is the basis of supervised learning, which includes large parts of pattern classification, artificial neural networks, and system identification. In their pure forms, these two kinds of feedback are quite distinct: evaluative feedback depends entirely on the action taken, whereas instructive feedback is independent of the action taken. In this chapter we study the evaluative aspect of reinforcement learning in a simplified setting, one that does not involve learning to act in more than one situation. This nonassociative setting is the one in which most prior work involving evaluative feedback has been done, and it avoids much of the complexity of the full reinforcement learning problem. Studying this case enables us to see most clearly how evaluative feedback differs from, and yet can be combined with, instructive feedback. The particular nonassociative, evaluative feedback problem that we explore is a simple version of the k-armed bandit problem. We use this problem to introduce a number of basic learning methods which we extend in later chapters to apply to the full reinforcement learning problem. At the end of this chapter, we take a step closer to the full reinforcement learning problem by discussing what happens when the bandit problem becomes associative, that is, when actions are taken in more than one situation.

Since bandits involve evaluative feedback they are indeed a type of a (simplified) reinforcement learning problem.

",22835,,-1,,6/17/2020 9:57,5/3/2020 15:06,,,,1,,,,CC BY-SA 4.0 20874,2,,20782,5/3/2020 16:18,,2,,"

Well, in the picture we have the unrolled (unfolded) RNN on the right side. The Siamese-like structure appears when the RNN is ""unrolled over two time-steps"": take the part with the first two iterations of the RNN and, yes, you have a kind of Siamese network.

One relevant remark from the source of the image:

Unlike a traditional deep neural network, which uses different parameters at each layer, a RNN shares the same parameters (U, V, W above) across all steps. This reflects the fact that we are performing the same task at each step, just with different inputs. This greatly reduces the total number of parameters we need to learn.

This sounds similar to a Siamese network used for single-object tracking: there we take two signals (the image and the tracked object), drive them through identical paths and do some maths to get the result. The RNN does something analogous, just to values separated in time!

As further evidence of the similarity, here is an illustration from a site where Siamese networks are nicely explained:

Side note: I don't know how closely the two relate in the real world (whether a Siamese network could in any way be an RNN or vice versa), but supposedly they do, since the researcher makes the comparison. At the diagrammatic level, at least, there is no problem with the analogy.

",11810,,,,,5/3/2020 16:18,,,,0,,,,CC BY-SA 4.0 20877,2,,20868,5/3/2020 19:38,,0,,"

It looks like the method they use for training takes a set of candidate hypotheses $\mathcal{U}(x)$, along with associated probabilities, and then minimizes the expected value of the cost function over that distribution. Section 3 has the loss function being minimized:

$ \mathcal{L}_{Risk} = \sum\limits_{u \in \mathcal{U}(x)} cost(t, u) \frac{p(u|x)}{\sum_{u' \in \mathcal{U}(x)} p(u'|x)} $.

One of the cost functions used is $1 - \texttt{BLEU}(t, h)$, where $t$ is the target and $h$ is the generated hypothesis. I'm not sure where $p(u|x)$ is coming from, but $1 - \mathcal{L}_{Risk}$ with the BLEU cost function is probably what they're referring to when they mention Expected BLEU.
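
As a rough sketch of how I read that expression (assuming you already have a cost function such as $1 - \texttt{BLEU}$ and per-hypothesis model scores; none of this is taken from the paper's code):

def expected_risk(hypotheses, probs, target, cost):
    # probs[i] is p(u_i | x) for hypothesis u_i, renormalized over the
    # candidate set U(x) before taking the expectation.
    z = sum(probs)
    return sum(cost(target, u) * p / z for u, p in zip(hypotheses, probs))

# Expected BLEU over the candidate set would then roughly be
# 1 - expected_risk(hypotheses, probs, target, cost=lambda t, h: 1 - bleu(t, h))
# where bleu is whatever sentence-level BLEU implementation you use.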

",2212,,,,,5/3/2020 19:38,,,,0,,,,CC BY-SA 4.0 20878,1,,,5/3/2020 22:05,,3,354,"

I've created a Q Learning algorithm to play Connect Four against an opponent who just chooses a random free column. My Q Agent is currently only winning about 49% of its games on average over 30,000 episodes. Will my Q Agent actually learn from these episodes, seeing as its opponent isn't 'trying' to beat it, as there's no strategy behind its random choices? Or should this not matter – if the Q Agent is playing enough games, it doesn't matter how good/bad its opponent is?

",27629,,,,,5/4/2020 7:39,Does Q Learning learn from an opponent playing random moves?,,1,2,,,,CC BY-SA 4.0 20879,1,,,5/3/2020 22:44,,1,110,"

Is there any application of topology (as in math discipline) to deep learning? If so, what are some examples?

",32621,,2444,,12/12/2021 13:29,12/12/2021 13:29,Is there any application of topology to deep learning?,,0,1,,,,CC BY-SA 4.0 20881,1,20887,,5/4/2020 2:41,,3,424,"

I have some gaps in my understanding regarding the performing of the gradient descent in Deep - Q networks. The original deep q network for Atari performs a gradient descent step to minimise $y_j - Q(s_j,a_j,\theta)$, where $y_j = r_j + \gamma max_aQ(s',a',\theta)$.

In the example where I sample a single experience $(s_1,a_2,r_1,s_2)$ and I try to conduct a single gradient descent step, then feeding in $s_1$ to the neural network outputs an array of $Q(s_1,a_0), Q(s_1,a_1), Q(s_1,a_2), \dots$ values.

When doing gradient descent update for this single example, should the target output to set for the network be equivalent to $Q(s_1,a_0), Q(s_1,a_1), r_1 + \gamma max_{a'}Q(s_2,a',\theta), Q(s_1,a_3), \dots$ ?

I know the inputs to the neural network are $s_j$, which give the corresponding Q values. However, I cannot pin down the target values towards which the network should be optimized.

",32780,,32780,,5/15/2020 12:49,5/15/2020 12:49,What should the target be when the neural network outputs multiple Q values in deep Q-learning?,,2,0,,,,CC BY-SA 4.0 20884,2,,20881,5/4/2020 7:18,,1,,"

You are looking for the parameters that minimize the loss function. You sample a batch from the memory buffer uniformly and define a loss function based on that batch. The memory buffer consists of transitions; each transition consists of a state and the action taken in that state, which results in a next state and an immediate reward. If a transition is denoted by $(s,a,r,s')$, the loss for this single transition is simply defined as $\left(r + \gamma \max_{a'} Q(s',a',w^-)-Q(s,a,w)\right)^2$.

The minus sign above the parameters means that you should fix the target parameters (a periodically updated copy) to ensure the stability of learning. So the loss function for a whole batch is $L(w) = \mathbb{E}_{(s,a,r,s')\sim U(D)}\left[\left(r + \gamma \max_{a'} Q(s',a',w^-)-Q(s,a,w)\right)^2\right]$.
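
A minimal sketch of that batch loss with a frozen target network, written with PyTorch (this is my own illustration with made-up variable names, not code from any specific paper):

import torch

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    # batch: states, actions (int64), rewards, next_states, dones (0/1 floats)
    states, actions, rewards, next_states, dones = batch
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():                      # target parameters w^- are fixed
        max_next_q = target_net(next_states).max(dim=1).values
        targets = rewards + gamma * (1.0 - dones) * max_next_q
    return torch.nn.functional.mse_loss(q_sa, targets)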

",35633,,,,,5/4/2020 7:18,,,,0,,,,CC BY-SA 4.0 20885,2,,20878,5/4/2020 7:33,,2,,"

It should be possible to train an agent using some variant of DQN to beat a random agent around 100% of the time within a few thousand games.

It may require one or two more advanced techniques to get the learning time down to a low number of thousands. However, if your agent is winning ~50% of games against a random agent, something has gone wrong, since that is the performance you would expect of another random agent. Even simple policies, such as always play in same column, will beat a random agent a significant fraction of the time.

First thing to consider is that there are too many states in Connect 4 to use tabular Q learning. You have to use some variant of DQN. As a grid-based board game where winning patterns can repeat, some form of convolutional neural network (CNN) for the Q function is probably a good start.

I think for a first step, you should double-check that you have implemented DQN correctly. Check the TD target formula is correct, and that you have implemented experience replay. Ideally you will also have a delayed-update target network for calculating the TD targets.
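
As a reference point, a minimal experience replay buffer does not need to be much more than this (a generic sketch, not tuned for Connect 4 specifically):

import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=50000):
        self.buffer = deque(maxlen=capacity)   # old transitions are dropped automatically

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)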

As a second step, try some variations of hyper-parameters. The learning rate, exploration rate, size of replay table, number of games to play before starting learning etc. A discount factor $\gamma$ slightly below 1 can help (despite this being an episodic problem) - it makes the agent forget more of the initial bias for early time steps.

Or should this not matter – if the Q Agent is playing enough games, it doesn't matter how good/bad its opponent is?

Up to a point this is true. It is hard to learn against a perfect agent in Connect 4, because it always wins as player one, which means all policies are equally good and there is nothing to learn. Other than that, if there is a way to win, eventually a Q learning agent with exploration should find it.

Against a random agent, you should be seeing some improvement if your agent is correctly set up for the problem, after a few thousand games. As it happens I am currently training Connect 4 agents using variants of DQN for a Kaggle competition, and they consistently beat random agents with 100% measured success rate after 10,000 training games. I have added a few extras to my agents in order to achieve this - there are some discussions of approaches in the forums at https://www.kaggle.com/c/connectx

",1847,,1847,,5/4/2020 7:39,5/4/2020 7:39,,,,2,,,,CC BY-SA 4.0 20886,2,,20782,5/4/2020 7:55,,0,,"

Single-object tracking using a Siamese network is a detect-and-compare approach: an object of interest is detected, and the object to be tracked is passed through a Siamese network together with the next consecutive frame to get a correlation between them. If you look at the related reference, it is correlation-based tracking, i.e. correlation between objects detected in one frame and the ones detected in the next frame, which you can imagine as samples considered across 2 time steps, or an RNN unrolled to 2 time steps.

",27875,,,,,5/4/2020 7:55,,,,0,,,,CC BY-SA 4.0 20887,2,,20881,5/4/2020 8:10,,3,,"

When doing gradient descent update for this single example, should the target output to set for the network be equivalent to $Q(s_1,a_0), Q(s_1,a_1), r_2 + \gamma max_aQ(s',a',\theta) , Q(s_1,a_3),...$ ?

Other than what looks like a couple of small typos, then yes.

This is an implementation issue for DQN, where you have decided to create a function that outputs multiple Q functions at once. There is nothing about this in Q learning theory, so you need to figure out what will generate the correct error (and therefore gradients) for an update step.

You don't know the TD targets for actions that were not taken, and cannot make any update for them, so the gradients for these actions must be zero. One way to achieve that is to feed back the network's own output for those actions. This is common practice because you can use built-in functions from neural network libraries to handle minibatches*.
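
Concretely, building the training targets for a minibatch often looks roughly like this (a numpy sketch; q_net.predict and target_net.predict are assumed Keras-style calls that return all action values for a batch of states):

import numpy as np

def build_targets(q_net, target_net, states, actions, rewards, next_states, dones, gamma=0.99):
    targets = q_net.predict(states)                       # start from the network's own outputs
    next_q = target_net.predict(next_states).max(axis=1)  # frozen copy used for the TD target
    td = rewards + gamma * next_q * (1.0 - dones)
    targets[np.arange(len(actions)), actions] = td        # substitute only the taken actions
    return targets                                        # other outputs get zero error, hence zero gradient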

There are some details worth clarifying:

  • You have substituted the third entry in the array with the calculated TD target because the action from experience replay is $a_2$. In general you substitute for the action taken. Looks like you have this correct.

  • You have $r_1$ in your experience replay table, but put $r_2$ in your TD target formula. Looks like a typo. Another typo is that you maximise over $a$ but reference $a'$. Also, you reference $s'$ but don't define it anywhere. Fixing these issues gives $r_1 + \gamma \text{max}_{a'}Q(s_2,a',\theta)$

  • For the TD target it is often worth using a dedicated target network that every N steps is copied from the learning network. It helps with stability. This can be noted as a ""frozen copy"" of $\theta$ noted $\theta^-$, and the neural network approximate Q function often noted $\hat{q}$ giving formula of $r_1 + \gamma \text{max}_{a'}\hat{q}(s_2,a',\theta^-)$ for your example.


* If you want you can also calculate the gradient more directly from the single action that was taken, and back propagate from there, knowing that all the other outputs will have a gradient component of zero. That requires implementing at least some of the back propagation yourself.

",1847,,1847,,5/4/2020 8:16,5/4/2020 8:16,,,,0,,,,CC BY-SA 4.0 20888,1,20909,,5/4/2020 8:20,,1,192,"

In Appendix B of MuZero, they say

In two-player zero-sum games the value functions are assumed to be bounded within the $[0, 1]$ interval.

I'm confused about the boundary: Shouldn't the value/utility function be in the range of [-1,1] for two-player zero-sum games?

",8689,,2444,,5/4/2020 12:07,6/3/2020 18:00,"Shouldn't the utility function of two-player zero-sum games be in the range $[-1, 1]$?",,1,0,,,,CC BY-SA 4.0 20889,1,20892,,5/4/2020 8:34,,1,274,"

I'm new to reinforcement learning and trying to understand it.

If you train an agent using a reinforcement learning algorithm (discrete or continuous) on an environment (real or simulated), then how do you know if the agent has learnt its environment? Should it reach its goal on every run (episode)? (Any literature references are also welcome)

Is this related to the reward threshold defined in the environment?

What happens if you continue training after the agent has learnt the environment? Will it perform by reaching its goal every time or will there be failed episodes?

",32237,,,,,5/4/2020 12:09,How do you know if an agent has learnt its environment in reinforcement learning?,,1,0,,,,CC BY-SA 4.0 20891,1,20895,,5/4/2020 11:56,,4,375,"

I was watching a video in my online course where I'm learning about A.I. I am a very beginner in it.

At one point in the course, the instructor says that reinforcement learning (RL) needs a deep learning model (NN) to perform an action. But for that, we need expected results in our model for the NN to learn how to predict the Q-values.

Nevertheless, at the beginning of the course, they said to me that RL is an unsupervised learning approach because the agent performs the action, receives the response from the environment, and finally takes the more likely action, that is, with the highest Q value.

But if I'm using deep learning in RL, for me, RL looks like a supervised learning approach. I'm a little confused about these things, could someone give me clarifications about them?

",36735,,2444,,5/4/2020 12:11,5/5/2020 15:44,How can reinforcement learning be unsupervised learning if it uses deep learning?,,2,0,,,,CC BY-SA 4.0 20892,2,,20889,5/4/2020 12:09,,0,,"

This depends on the complexity of the environment being learned, and the purpose for learning it. There is no general answer.

For the simple environments used to teach reinforcement learning (RL), often the optimal solution is obvious, or can be calculated and proven optimal. For instance, any environment that can be solved using policy iteration will have a known optimal policy and optimal value function. The goals of these environments are to teach, or to check correctness of agents - it helps in these cases to have a well known correct answer.

At the next level up in terms of complexity are well-studied environments which can have achievable targets set for learning agents. The goals of these environments include getting useful metrics for learning agents, such as how many episodes it takes a particular implementation to learn it well enough. Defining ""well enough"" is a matter of experience with existing agents.

Getting more complex still, in general it is not possible to know whether an agent has fully optimised against its environment. The subject area of sequential decision making that includes RL agents can cover scenarios such as driving a car or playing a computer game. We don't know when any agent, whether it is based on RL or some other approach, has fully learned an environment, and instead must construct tests of behaviour - e.g. make an agent simulate driving in a set of scenarios, and expect at least safe behaviour in each of them, essentially a driving test similar to one a person might take. In these environments, often the tests are based on ""good enough to use"" goals. We can say an agent has learned to drive if it drives more safely than an average human.

In the special case of competitive games, we can score agents against each other or against human players. You might say that an agent has learned its environment if it beats some standard player, but also you can rank agents against each other and declare a particular agent as the current best.

It is possible to mix and match these ideas. The Atari games learning suite has benchmark scores to reach that count as ""standard human"", and recently agents have been published that beat all of those scores.

What happens if you continue training after the agent has learnt the environment? Will it perform by reaching its goal every time or will there be failed episodes?

If you include training episodes, then RL learns mainly by ""trial and error"". So you should expect an agent to make deliberate mistakes as it tests to see what happens. In some environments these could be critical mistakes leading to failed episodes.

If you ignore the training episodes and are interested only in performance without exploration - e.g. testing every few hundred episodes - then you can expect performance to vary depending on the type of agent and environment. Some agents even exhibit ""catastrophic forgetting"", which as the name implies causes performance to drop significantly - this can be caused by a successful agent over-fitting to all the recent successful episodes without errors it just experienced, and losing the ability to predict the true lower value of incorrect actions.
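
For that kind of measurement, a periodic greedy evaluation can be as simple as the sketch below (the env and agent interfaces here are assumed to be gym-like, not from any specific library):

def evaluate(env, agent, episodes=20):
    # Run the current greedy policy with no exploration and no learning,
    # and report the mean undiscounted return per episode.
    total = 0.0
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            action = agent.greedy_action(state)   # assumed: argmax over Q, i.e. epsilon = 0
            state, reward, done, _ = env.step(action)
            total += reward
    return total / episodes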

Neither failed episodes during training nor catastrophic forgetting are inevitable. It depends on the environment and type of agent.

",1847,,,,,5/4/2020 12:09,,,,3,,,,CC BY-SA 4.0 20893,1,20974,,5/4/2020 12:31,,3,189,"

In page 125 of Sutton and Barto (second last paragraph) the proof for equality of $v_{\pi}$ and $v_*$ for $\epsilon$ soft policies is given. But I could not understand the statement explaining the proof:

Consider a new environment that is just like the original environment, except with the requirement that policies be $\epsilon$-soft “moved inside” the environment. The new environment has the same action and state set as the original and behaves as follows. If in state $s$ and taking action $a$, then with probability $1 - \epsilon$ the new environment behaves exactly like the old environment. With probability $\epsilon$ it repicks the action at random, with equal probabilities, and then behaves like the old environment with the new, random action. The best one can do in this new environment with general policies is the same as the best one could do in the original environment with $\epsilon$-soft policies.

What is the meaning of environment here? And what is this new thing/argument (provided above) the authors are describing to arrive at the proof?

",,user9947,1641,,2/14/2022 14:08,2/14/2022 14:08,Doubt regarding the proof of convergence of $\epsilon$ soft policies without exploring starts,,1,0,,,,CC BY-SA 4.0 20894,1,20899,,5/4/2020 12:54,,1,1691,"

XOR is a non-linear dataset. It cannot be solved with any number of plain perceptrons, but when the perceptrons use a sigmoid activation function, we can solve the XOR dataset.

But I came across a source where the following statement is stated as False

A two layer (one input layer; one output layer; no hidden layer) neural network can represent the XOR function.

However, I have trained a model with no hidden layers, which gives the following result:

[INFO] data=[0 0],ground-truth=0, pred=0.5161, step=1
[INFO] data=[0 1],ground-truth=1, pred=0.5000, step=1
[INFO] data=[1 0],ground-truth=1, pred=0.4839, step=0
[INFO] data=[1 1],ground-truth=0, pred=0.4678, step=0

So, if I apply a softmax classifier, I can separate the XOR dataset with a neural network without any hidden layer. This makes the statement incorrect.

Is it true that we cannot separate a non-linear dataset without any hidden layers in a neural network? If yes, where am I wrong in my reasoning, given the training of the NN I have done above?

",35616,,,,,5/4/2020 13:36,solving xor function using a neural network with no hidden layers,,1,4,,,,CC BY-SA 4.0 20895,2,,20891,5/4/2020 13:06,,3,,"

Supervised learning

The supervised learning (SL) problem is formulated as follows.

You are given a dataset $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^N$, which is assumed to be drawn i.i.d. from an unknown joint probability distribution $p(x, y)$, where $x_i$ represents the $i$th input and $y_i$ is the corresponding label. You choose a loss function $\mathcal{L}: V \times U \rightarrow \mathbb{R}$. Then your goal is to minimize the so-called empirical risk

$$R_{\mathcal{D}}[f]=\frac{1}{N} \sum_{i=1}^N \mathcal{L}(y_i, f(x_i)) \tag{0}\label{0}$$

with respect to $f$. In other words, you want to find the $f$ that minimizes the average above, which can also be formally written as $$ f^* = \operatorname{argmin}_f R[f] \tag{1}\label{1} $$ The problem \ref{1} is called the empirical risk minimization because it is a proxy problem for the expected risk minimization (but you can ignore this for now).
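
For instance, with a squared loss, the empirical risk in equation \ref{0} is just an average over the dataset (a quick numpy sketch):

import numpy as np

def empirical_risk(f, X, y):
    # Average squared loss of the predictor f over the dataset D = {(x_i, y_i)}
    predictions = np.array([f(x) for x in X])
    return np.mean((y - predictions) ** 2)

# e.g. empirical_risk(lambda x: 2.0 * x, np.array([1.0, 2.0]), np.array([2.1, 3.9]))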

Reinforcement learning

In reinforcement learning, you typically imagine that there's an agent that interacts, in time steps, with an environment by taking actions. At each time step $t$, the agent takes action $a_t$ in the state $s_t$, receives a reward $r_t$ from the environment, and the agent and the environment move to another state $s_{t+1}$.

The goal of the agent is to maximize the expected return

$$\mathbb{E}\left[ G_t \right] = \mathbb{E}\left[ \sum_{i=t+1}^\infty R_i \right]$$

where $t$ is the current time step (so we don't care about the past), $R_i$ is a random variable that represents the probable reward at time step $i$, and $G_t = \sum_{i=t+1}^\infty R_i $ is the so-called return (i.e. a sum of future rewards, in this case, starting from time step $t$), which is also a random variable.

In this context, the most important job of the programmer is to define a function $\mathcal{R}(s, a)$, the reward function, which provides the reinforcement (or reward) signal to the RL agent. $\mathcal{R}(s, a)$ will deterministically or stochastically determine the reward that the agent receives every time it takes action $a$ in the state $s$. (Note that $\mathcal{R}$ is different from $R_i$, which is a random variable that represents the reward at time step $i$).

What is the difference between SL and RL?

In RL, you (the programmer) need to define the reward function $\mathcal{R}$ and you want to maximize the expected return. On the other hand, in SL you are given (or you collect) a dataset $\mathcal{D}$, you choose $\mathcal{L}$ in \ref{0}, and the goal is to find the function $f^*$ that minimizes the empirical risk. So, these have different settings and goals, so they are different!

However, every SL problem can be cast as an RL problem. See this answer. Similarly, in certain cases, you can formulate an RL problem as an SL problem. So, although the approaches are different, they are related.

Is RL an unsupervised learning approach?

In RL, you do not tell the agent what action it needs to take. You only say that the action that was taken was ""bad"", ""good"" or ""so so"". The agent needs to figure out which actions to take based on your feedback. In SL, you explicitly say that, for this input $x_i$, the output should be $y_i$.

Some people may consider RL to be an unsupervised learning approach, but I think this is wrong, because, in RL, the programmer still needs to define the reward function, so RL isn't totally unsupervised and it's also not totally supervised. For this reason, many people consider RL an approach that sits between UL and SL.

What is deep learning?

The term/expression deep learning (DL) refers to the use of deep neural networks (i.e. neural networks with many layers, where ""many"" can refer to more than 1 or 1000, i.e. it depends on the context) in machine learning, either supervised, unsupervised, or reinforcement learning. So, you can apply deep learning to SL, RL and UL. So, DL is not only restricted to SL.

",2444,,2444,,5/5/2020 14:49,5/5/2020 14:49,,,,0,,,,CC BY-SA 4.0 20896,2,,20891,5/4/2020 13:06,,2,,"

In supervised learning, the goal is to learn a mapping from points in a feature space to labels, so that for any new input data point we are able to predict its label. In unsupervised learning, the data set is composed only of points in a feature space, i.e. there are no labels, and the goal is to learn some inner structure or organization of the feature space itself.

Reinforcement Learning is basically concerned with learning a policy in a sequential decision problem. There are some components in RL that are “unsupervised” and some that are “supervised”, but it is not a combination of “unsupervised learning” and “supervised learning”, since those are terms used for very particular settings, and typically not used at all for sequential decision problems.

In reinforcement learning, we have something called a reward function that the agent aims to maximize. During the learning process, one typical intermediate step is to learn to predict the reward obtained for a specific policy.

In a nutshell, we can say reinforcement learning puts a model in an environment where it learns everything on its own, from data collection to model evaluation. It is about taking suitable actions to maximize reward in a particular situation. There is no given answer; the reinforcement learning agent decides what to do to perform the given task. In the absence of a training dataset, it is bound to learn from its own experience.

To better understand, let’s look at an analogy.

Suppose you have a dog that is not so well trained. Every time the dog messes up the living room, you reduce the amount of tasty food you give it (punishment), and every time it behaves well, you double the tasty snacks (reward). What will the dog eventually learn? Well, that messing up the living room is bad.

This simple concept is powerful. The dog is the agent, the living room the environment, you are the source of the reward signal (tasty snacks).

To learn more about reinforcement learning, please check this reinforcement learning lecture series that is freely available on YouTube, given by someone who leads the reinforcement learning research group at DeepMind and was also a lead researcher on AlphaGo and AlphaZero.

RL Course by David Silver: https://www.youtube.com/watch?v=2pWv7GOvuf0&list=PLqYmG7hTraZBiG_XpjnPrSNw-1XQaM_gB

",36737,,36737,,5/5/2020 15:44,5/5/2020 15:44,,,,0,,,,CC BY-SA 4.0 20898,1,,,5/4/2020 13:18,,5,1920,"

I just read an article about the minimax algorithm. When you design the algorithm, you assume that your opponent is a perfect player, i.e. it plays optimally.

Let's consider the game of chess. What happens if the opponent plays irrationally or sub-optimally? Do you still have a guarantee that you are going to win?

",36107,,2444,,5/4/2020 13:53,6/3/2020 15:08,What happens if the opponent doesn't play optimally in minimax?,,1,0,,,,CC BY-SA 4.0 20899,2,,20894,5/4/2020 13:31,,3,,"

Softmax gives a probability distribution over multiple classes that are not independent: the probability for class $i$ is $\exp(x_i)/\sum_j \exp(x_j)$, where $x_i$ is the score of one output neuron. So softmax only makes sense if you have more than 1 output neuron; for just 1 neuron (as in this case), the output of softmax will always be 1, since $\exp(x_i)/\exp(x_i) = 1$.

Now let's visualize your dataset, i.e. plot the points of XOR as a function of X, Y in coordinate space (src: https://www.researchgate.net/figure/The-exclusive-or-XOR-function-is-a-nonlinear-function-that-returns-0-when-its-two_fig4_322048911).

Now, whatever you do, you cannot separate the points with a single straight line $wx+b$; after passing $wx+b$ to a sigmoid, you just squeeze the values, i.e. you still have a clipped $wx+b$.

So the decision boundary is still linear. When you add another layer, a hidden one, you can operate again on the first output; if you squeeze it between 0 and 1, or use something like a ReLU activation, this produces some non-linearity. Otherwise you just get $w_2(w_1 x + b_1)+b_2$, which again is a linear function, not able to separate the classes 0 and 1.

So after adding a hidden layer and passing it through an activation function, you get a non-linear output. For example, with a sigmoid you get $w_2 \cdot \frac{1}{1+\exp(-(w_1 x+b_1))}+b_2$, which has a fairly non-linear term in $x$; after passing it through a sigmoid again, you are able to squeeze the outputs (otherwise they could be higher than 1 or lower than 0, but the XOR outputs are binary), and after fitting the curve, the two classes are separated.
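
As a concrete check (hand-picked weights, not trained), two sigmoid hidden units that roughly implement OR and AND are enough to reproduce XOR, which no single $wx+b$ unit can do:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    h1 = sigmoid(20 * x1 + 20 * x2 - 10)   # roughly OR
    h2 = sigmoid(20 * x1 + 20 * x2 - 30)   # roughly AND
    y = sigmoid(20 * h1 - 20 * h2 - 10)    # OR and not AND = XOR
    print(x1, x2, round(y))                # prints 0, 1, 1, 0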

Now, food for thought: what will happen if you use 2 output neurons for XOR (representing 0 as [1,0] and 1 as [0,1])?

",27875,,27875,,5/4/2020 13:36,5/4/2020 13:36,,,,1,,,,CC BY-SA 4.0 20900,2,,20871,5/4/2020 14:14,,1,,"

What is the advantage of using more than one environment with a single agent?

There are two main advantages to this approach:

  • The dataset for training is closer to the independent, identically distributed (i.i.d.) ideal, important for theoretical and practical reasons when training a neural network. Samples taken from a single trajectory are not independent, but instead are correlated due to rules of the environment - so using a single trajectory is furthest from i.i.d. This is a similar motivation to use of experience replay tables for DQN variants of Q-learning. However, experience replay is inherently off-policy, so not a good fit to A2C or A3C that need samples taken when acting under the current policy.

  • Collecting experience is often a major bottleneck in training RL agents. Being able to do so in parallel in a distributed environment can significantly speed up the training process.

",1847,,1847,,5/11/2020 6:50,5/11/2020 6:50,,,,0,,,,CC BY-SA 4.0 20901,2,,20898,5/4/2020 14:27,,3,,"

What happens if the opponent plays irrationally or sub-optimally? Do you still have a guarantee that you are going to win?

If your search is deep enough to guarantee optimal play in all cases, then yes. Optimal play is such that the opponent's decision causes least impact to your agent. In fact, if the opponent makes a mistake, often that can make the search easier/faster, and the agent will win more convincingly.

What it may mean is that optimisations you may have taken - e.g. pruning game tree segments that lead to non-optimal decisions by either player - might not be as useful. This might impact decision time if you keep some partial game tree or cache branch evaluations between moves to help speed up the agent.

Actual optimal agents for games as complex as chess are not possible. In these games, you will not have a truly optimal agent, but approximately optimal. You will be relying on some heuristic to guide the minimax search when it cannot force an end game win. If the opponent manages to control play into a state where the heuristics are not accurate, they could cause minimax search to fail and misdirect your agent into making mistakes.

A combination of effects is also possible if you have implemented a tree caching mechanism for performance improvements and made the system playable by limiting planning time - e.g. you limit computer search time to 3 seconds max - an irrational opponent may cause your agent's performance to degrade to the point where it too starts to make mistakes. Whether or not this is enough for a smart opponent to take advantage of it and beat an agent which is capable of otherwise playing a ""perfect"" game depends on details of the game, and how narrow the agent's measure of ""perfect"" is.

An extreme case might be an agent that has memorised a single perfect game (by scoring high heuristics for any state on a single trajectory through the game that has been pre-calculated for perfect moves by both players), and has poor heuristics otherwise - once the state moves away from what it can evaluate directly, the agent will be limited by how well it can search for completed game wins, and can easily be manipulated by a more generally smart opponent into a losing position that is beyond the depth of its search.

In practice, well-coded agents will not suffer too much from this effect. If you make a mistake, or try a random probably bad move in an attempt to confuse the agent when playing against Stockfish, you will lose.

",1847,,1847,,5/4/2020 14:33,5/4/2020 14:33,,,,0,,,,CC BY-SA 4.0 20902,1,,,5/4/2020 14:29,,5,135,"

Apart from the vanishing or exploding gradient problems, what are other problems or pitfalls that we could face when training neural networks?

",36740,,2444,,5/4/2020 20:59,5/4/2020 21:14,What are the common pitfalls that we could face when training neural networks?,,2,0,,,,CC BY-SA 4.0 20903,1,20957,,5/4/2020 14:39,,4,3108,"

In reinforcement learning (RL), what is the difference between training and testing an algorithm/agent? If I understood correctly, testing is also referred to as evaluation.

As I see it, both imply the same procedure: select an action, apply it to the environment, get a reward and the next state, and so on. But I've seen that, e.g., the Tensorforce RL framework allows running with or without evaluation.

",32237,,2444,,5/4/2020 15:05,9/29/2021 7:09,What is the difference between training and testing in reinforcement learning?,,4,3,,,,CC BY-SA 4.0 20907,2,,20902,5/4/2020 15:45,,2,,"

I can't say that it is the biggest problem of deep neural networks, but it is one of the big problems with them.

Another issue which happens a lot is overfitting on the training data, so the network behaves badly on the test set; this can be addressed using regularization. You have to make sure that the network generalises well enough.

If the network performs classification, the choice of loss function depends on whether the output classes are mutually exclusive or not: categorical cross-entropy (with a softmax output) for mutually exclusive classes, and binary cross-entropy (with per-output sigmoids) when the classes are not mutually exclusive.

Another thing I can think of is all-zero initialization. Initializing the network weights to zero can lead to identical gradient computations for all neurons in a layer, due to which the network does not learn most of the time.

",33835,,33835,,5/4/2020 21:14,5/4/2020 21:14,,,,1,,,,CC BY-SA 4.0 20909,2,,20888,5/4/2020 16:39,,2,,"

It can be either. If you consider the lack of reward as a ""penalty"", then getting 0 reward is bad.

If you use a value estimator implemented as a neural network, the range of the values will dictate the squashing function you use for the output layer.

",36518,,,,,5/4/2020 16:39,,,,19,,,,CC BY-SA 4.0 20910,1,20937,,5/4/2020 16:52,,7,384,"

I am asking for a book (or any other online resource) where we can solve exercises related to neural networks, similar to the books or online resources dedicated to mathematics where we can solve mathematical exercises.

",4211,,2444,,5/4/2020 17:59,5/5/2020 18:56,What are some resources with exercises related to neural networks?,,3,0,,,,CC BY-SA 4.0 20911,1,,,5/4/2020 17:14,,3,96,"

Is there a good and modern book that focuses on word embeddings and their applications? It would also be ok to provide the name of a paper that provides a good overview of word embeddings.

",36055,,2444,,5/22/2020 18:04,5/22/2020 18:04,Is there a good book or paper on word embeddings?,,0,0,,,,CC BY-SA 4.0 20913,1,,,5/4/2020 19:01,,2,38,"

From Neural Architecture Search: A Survey, first published in 2018:

Moreover, common search spaces are also based on predefined building blocks, such as different kinds of convolutions and pooling, but do not allow identifying novel building blocks on this level; going beyond this limitation might substantially increase the power of NAS.

Has anyone tried that? If not, do you have any thoughts about the feasibility of this idea?

",36632,,2444,,5/4/2020 20:30,5/4/2020 20:30,Can operations like convolution and pooling be discovered with a neural architecture search approach?,,0,0,,,,CC BY-SA 4.0 20914,1,,,5/4/2020 19:26,,2,139,"

I have taken some reference implementations of the PPO algorithm and am trying to create an agent which can play Space Invaders. Unfortunately, from the 2nd trial onwards (after training the actor and critic neural networks for the first time), the probability distribution of the actions converges onto only one action, and the PPO loss and the critic loss converge to only one value.

I wanted to understand the probable reasons why this might occur. I really can't keep running the code in my cloud VMs without being sure that I am not missing anything, as the VMs are very costly to use. I would appreciate any help or advice in this regard; if required, I can post the code as well. The hyperparameters used are as follows:

clipping_val = 0.2
critic_discount = 0.5
entropy_beta = 0.01
gamma = 0.99
lambda = 0.95

Code repo: github.com/superchiku/ReinforcementLearning

",36749,,,,,5/4/2020 19:26,PPO algorithm converges on only one action,,0,0,,,,CC BY-SA 4.0 20915,2,,20728,5/4/2020 19:41,,1,,"

In addition to the reason outlined in the comment, also note that if the state-space and action-space are both finite and of feasible size, tabular methods can be used, and there are some advantages to them (like the existence of convergence guarantees and generally a smaller number of hyperparameters to tune).

",36748,,,,,5/4/2020 19:41,,,,0,,,,CC BY-SA 4.0 20916,1,,,5/4/2020 19:44,,1,270,"

I am trying to reproduce the recommender task experiment from this paper. The paper suggests embedding discrete actions into a continuous action space and then using the proposed Wolpertinger agent. The Wolpertinger agent works as follows:

DDPG produces a so-called proto action $f(s)$, then KNN finds the $k$ nearest embeddings of discrete actions to this proto action, and we choose the one of these $k$ embeddings that has the highest Q-function value. The whole procedure is the full policy, $\pi(\cdot)$.
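
In code, the action selection I am describing looks roughly like this (my own sketch; knn.query is an assumed helper returning the $k$ nearest action embeddings):

import numpy as np

def wolpertinger_action(state, actor, critic, knn, k=10):
    proto = actor(state)                          # proto action f(s) in embedding space
    candidates = knn.query(proto, k)              # k nearest embedded discrete actions
    q_values = [critic(state, a) for a in candidates]
    return candidates[int(np.argmax(q_values))]   # full policy pi(s)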

While training we optimize the critic using only the full policy (DDPG + a choice of a neighbour). The actor is optimized using proto action output in order to be differentiable, $Q(s, f_{\theta}(s)) \rightarrow \max_{\theta}$.

The problem is that the critic does not know that it is used to optimize the continuous output of the algorithm; it is trained only to value the embedded actions. As I understand it, we hope that the continuity of the critic will help with this, but what I observe is that proto actions constantly end up in corners with no real actions, where the Q-function has unreasonably large values (because it is simply untrained in such regions). The DDPG output is normalized to match the embedding bounds, to make these empty spaces not so large.

It seems to me that there should be a way to make the embeddings more appropriate for the task and achieve a higher reward. However, when I use $k = |\mathcal{A}|$, proto actions are effectively not considered and the algorithm works pretty well. Usually I use $|\mathcal{A}| = 100$ and $k = 10$. I have trained the embeddings with skip-gram, based on the users' history.

Below are 2D projections of my embeddings onto the first 10 axes (the embeddings are from $\mathbb{R}^{20}$). Independently of the state, the proto actions are about the same. The blue point is a proto action for some fixed state. With a state fixed, the value $Q(s, f(s))$ is always higher than $Q(s, a)$ for any $a \in \mathcal{A}$.

I would be glad to get any help, especially from people familiar with this algorithm. Do I need to make the embeddings fill the proto-action range (some hyperrectangle, in the case of a tanh activation in the actor)? What is the way to fill such a domain with embeddings?

",36690,,36690,,5/4/2020 19:50,5/4/2020 19:50,A question about the Wolpertinger algorithm (Deep RL in Large Discrete Action Spaces paper),,0,0,,,,CC BY-SA 4.0 20917,2,,20902,5/4/2020 20:42,,2,,"

There are several pitfalls or issues that require your attention when or before training or using neural networks. I will list some of them below, along with some questions you need to ask yourself before or while using neural networks.

  • Over-fitting and under-fitting problems, and the related generalization problem. Is your neural network generalizing to unseen data?

  • Availability of training and test data

    • Do you have enough data to train your neural network so that it generalizes well (i.e. it neither over-fits or under-fits)?
    • Is your test dataset big enough to assess the generalization ability of your neural network?
    • Is your data representative of the problem you are trying to solve?
    • Do you need to augment or normalize your data?
    • Do you need to use cross-validation?
    • Is your data independent and identically distributed (i.i.d.)? If your data is correlated, training could be unstable. Shuffling your data may be a feasible solution when your data is initially correlated.
  • Do you have enough computational resources (i.e. GPUs) for training and testing your neural network?

  • Are you solving a regression or classification problem? The type of the outputs and the loss function will typically be different in both cases

  • Do you need explainability and transparency? If yes, neural networks aren't probably the best model to use, as the connections between the neurons are quite obscure and don't really represent any meaningful interaction. That's why neural networks are called black-boxes.

  • Do you need uncertainty estimation? If yes, you may want to try Bayesian neural networks. Typical neural networks are not very appropriate for uncertainty estimation!

  • If you use a neural network for function approximation (e.g. in reinforcement learning), you will lose certain convergence guarantees.

",2444,,2444,,5/4/2020 20:49,5/4/2020 20:49,,,,0,,,,CC BY-SA 4.0 20918,2,,20910,5/4/2020 22:44,,4,,"

There are actually quite a few. Personally I would say these courses have high quality and strong focus on practice:

  • Stanford computer vision course CS231n. Check the assignment materials on this page. This course has good explanations/exercises of how neural nets and backprop work in general.
  • Fastai course notebooks. You can listen to the lectures as well, but the notebooks are quite self-contained.
  • Practical reinforcement learning course, if you are interested in NN applications to RL.
",16940,,16940,,5/4/2020 22:49,5/4/2020 22:49,,,,0,,,,CC BY-SA 4.0 20919,2,,20910,5/4/2020 23:11,,3,,"

One of the most famous books dedicated to neural networks is Neural Networks - A Systematic Introduction (1996) by Raul Rojas. Most chapters end with a series of exercises that test your understanding of the material. For example, in chapter 14 Stochastic Networks, one of the exercises is

Solve the eight queens problem using a Boltzmann machine. Define the network's weights by hand.

This should give you a sense of the type of exercise that you will find in this book.

",2444,,,,,5/4/2020 23:11,,,,0,,,,CC BY-SA 4.0 20921,1,,,5/5/2020 3:44,,3,96,"

I was following a tutorial about Feed-Forward Networks and wrote this code for a simple FFN:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, accuracy_score
from tqdm import tqdm_notebook

class FirstFFNetwork:
  
  #intialize the parameters
  def __init__(self):
    self.w1 = np.random.randn()
    self.w2 = np.random.randn()
    self.w3 = np.random.randn()
    self.w4 = np.random.randn()
    self.w5 = np.random.randn()
    self.w6 = np.random.randn()
    self.b1 = 0
    self.b2 = 0
    self.b3 = 0
  
  def sigmoid(self, x):
    return 1.0/(1.0 + np.exp(-x))
  
  def forward_pass(self, x):
    #forward pass - preactivation and activation
    self.x1, self.x2 = x
    self.a1 = self.w1*self.x1 + self.w2*self.x2 + self.b1
    self.h1 = self.sigmoid(self.a1)
    self.a2 = self.w3*self.x1 + self.w4*self.x2 + self.b2
    self.h2 = self.sigmoid(self.a2)
    self.a3 = self.w5*self.h1 + self.w6*self.h2 + self.b3
    self.h3 = self.sigmoid(self.a3)
    return self.h3
  
  def grad(self, x, y):
    #back propagation
    self.forward_pass(x)
    
    self.dw5 = (self.h3-y) * self.h3*(1-self.h3) * self.h1
    self.dw6 = (self.h3-y) * self.h3*(1-self.h3) * self.h2
    self.db3 = (self.h3-y) * self.h3*(1-self.h3)
    
    self.dw1 = (self.h3-y) * self.h3*(1-self.h3) * self.w5 * self.h1*(1-self.h1) * self.x1
    self.dw2 = (self.h3-y) * self.h3*(1-self.h3) * self.w5 * self.h1*(1-self.h1) * self.x2
    self.db1 = (self.h3-y) * self.h3*(1-self.h3) * self.w5 * self.h1*(1-self.h1)
  
    self.dw3 = (self.h3-y) * self.h3*(1-self.h3) * self.w6 * self.h2*(1-self.h2) * self.x1
    self.dw4 = (self.h3-y) * self.h3*(1-self.h3) * self.w6 * self.h2*(1-self.h2) * self.x2
    self.db2 = (self.h3-y) * self.h3*(1-self.h3) * self.w6 * self.h2*(1-self.h2)
    
  
  def fit(self, X, Y, epochs=1, learning_rate=1, initialise=True, display_loss=False):
    
    # initialise w, b
    if initialise:
      self.w1 = np.random.randn()
      self.w2 = np.random.randn()
      self.w3 = np.random.randn()
      self.w4 = np.random.randn()
      self.w5 = np.random.randn()
      self.w6 = np.random.randn()
      self.b1 = 0
      self.b2 = 0
      self.b3 = 0
      
    if display_loss:
      loss = {}
    
    for i in tqdm_notebook(range(epochs), total=epochs, unit="epoch"):
      dw1, dw2, dw3, dw4, dw5, dw6, db1, db2, db3 = [0]*9
      for x, y in zip(X, Y):
        self.grad(x, y)
        dw1 += self.dw1
        dw2 += self.dw2
        dw3 += self.dw3
        dw4 += self.dw4
        dw5 += self.dw5
        dw6 += self.dw6
        db1 += self.db1
        db2 += self.db2
        db3 += self.db3
        
      m = X.shape[1]
      self.w1 -= learning_rate * dw1 / m
      self.w2 -= learning_rate * dw2 / m
      self.w3 -= learning_rate * dw3 / m
      self.w4 -= learning_rate * dw4 / m
      self.w5 -= learning_rate * dw5 / m
      self.w6 -= learning_rate * dw6 / m
      self.b1 -= learning_rate * db1 / m
      self.b2 -= learning_rate * db2 / m
      self.b3 -= learning_rate * db3 / m
      
      if display_loss:
        Y_pred = self.predict(X)
        loss[i] = mean_squared_error(Y_pred, Y)
    
    if display_loss:
      plt.plot(loss.values())
      plt.xlabel('Epochs')
      plt.ylabel('Mean Squared Error')
      plt.show()
      
  def predict(self, X):
    #predicting the results on unseen data
    Y_pred = []
    for x in X:
      y_pred = self.forward_pass(x)
      Y_pred.append(y_pred)
    return np.array(Y_pred)

The data was generated as follows :

data, labels = make_blobs(n_samples=1000, centers=4, n_features=2, random_state=0)
labels_orig = labels
labels = np.mod(labels_orig, 2)
X_train, X_val, Y_train, Y_val = train_test_split(data, labels, stratify=labels, random_state=0)

When I ran the program yesterday, I had gotten a training accuracy of about 98% and a test accuracy of 94%. But when I ran it today, suddenly the accuracy dropped to 60-70%. I tried to scatter plot the result, and it looked like it behaved as if it were a single sigmoid instead of the Feed-Forward Network.

ffn = FirstFFNetwork()
#train the model on the data
ffn.fit(X_train, Y_train, epochs=2000, learning_rate=.01, display_loss=False)
#predictions
Y_pred_train = ffn.predict(X_train)
Y_pred_binarised_train = (Y_pred_train >= 0.5).astype("int").ravel()
Y_pred_val = ffn.predict(X_val)
Y_pred_binarised_val = (Y_pred_val >= 0.5).astype("int").ravel()
accuracy_train_1 = accuracy_score(Y_pred_binarised_train, Y_train)
accuracy_val_1 = accuracy_score(Y_pred_binarised_val, Y_val)
#model performance
print("Training accuracy", round(accuracy_train_1, 2))
print("Validation accuracy", round(accuracy_val_1, 2)

I do not understand how this happened and cannot figure it out.

",36756,,32410,,10/2/2021 18:59,10/2/2021 18:59,Accuracy dropped when I ran the program the second time,,1,1,,,,CC BY-SA 4.0 20925,1,,,5/5/2020 6:35,,1,315,"

I've been trying to figure out how to compute the number of FLOPs in the backward pass of ResNet. For the forward pass, it seems straightforward: apply the conv filters to the input for each layer. But how does one count the FLOPs for the gradient computation and the update of all the weights during the backward pass?
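
For reference, this is roughly how I count the forward-pass FLOPs of a single conv layer (a rough sketch; the layer shape below is made up, and I count one multiply-accumulate as 2 FLOPs):

def conv_forward_flops(c_in, c_out, k, h_out, w_out):
    # each output element needs c_in * k * k multiply-accumulates
    return 2 * c_in * k * k * c_out * h_out * w_out

print(conv_forward_flops(c_in=64, c_out=64, k=3, h_out=56, w_out=56))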

Specifically,

  • How do I compute the FLOPs of the gradient computations for each layer?

  • Which gradients need to be computed, so that the FLOPs for each of them can be counted?

  • How many FLOPs are needed to compute the gradients for the Pool, BatchNorm, and ReLU layers?

I understand the chain rule for gradient computation, but I'm having a hard time formulating how it applies to the weight filters in the conv layers of ResNet and how many FLOPs each of those would take. Any comments about a method to compute the total FLOPs for the backward pass would be very useful. Thanks

",33580,,,,,5/5/2020 6:35,Backward pass of CNN like Resnet: how to manually compute flops during backprop?,,0,2,,,,CC BY-SA 4.0 20926,1,,,5/5/2020 7:24,,1,92,"

What would happen when an artificial general intelligence can improve itself over a long time, with limited resources?

The assumption is that it has a large but finite amount of computing power, and cannot escape that limit to find more resources. Let's assume the limit to be on the order of 100 human brains or so - the exact limit is not important, only that it is limited.

Now, we let it run to self-improve as much as it can with the given resources, until it goes to some stable or periodic state.

What it can do is at least one of the following:

  • converge to some state
  • oscillate between two or more states
  • go into a chaotic state
  • stop doing anything
  • disable itself in some other way than just stop doing anything

There are certainly more complex behaviors possible:

  • become maximally happy, and then periodically redefine what happy means
  • hacking its reward function in some other way

I would expect it to converge to a single state - but that seems naive.

Are there any ideas on how it would end up?

",2317,,2444,,5/5/2020 20:06,5/5/2020 21:09,What does a self-improving artificial general intelligence with finite resources and infinite time do?,,0,3,,,,CC BY-SA 4.0 20929,2,,3494,5/5/2020 8:04,,1,,"

Many people who are interested in machine learning aren't professional programmers. For example, there are mathematicians who work on differential equations and physicists who work on stochastic processes. These people aren't programmers, so using a language like C++, which is hard to learn, is only detrimental to their work. Also, creating a model in Python is much easier than in C++ or Java. You have to use C++ when you want to create a game engine, because the graphics are directly tied to the hardware, and if you want to be a professional Android programmer you have to learn Java. But what are the benefits of choosing C++ or Java over Python when your work mainly consists of linear algebra and statistics?

",35633,,,,,5/5/2020 8:04,,,,1,,,,CC BY-SA 4.0 20930,1,,,5/5/2020 8:16,,2,56,"

Let's say I want to teach a neural network to classify images, and, for some reason, I insist on using reinforcement learning rather than supervised learning.

I have a dataset of images and their matching classes. Then, for each image, I could define a reward function which is $1$ for classifying it right and $-1$ for classifying it wrong (or perhaps even define a more complicated reward function where some mistakes are less costly than others). For each image $x^i$, I can loop through each class $c$ and use a vanilla REINFORCE step: $\theta = \theta + \alpha \nabla_{\theta} \log \pi_{\theta}(c \mid x^i) r$.
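
Concretely, this is roughly what I have in mind (a rough PyTorch sketch: the tiny linear model and the fake batch are just placeholders, and here I sample a class instead of looping over all of them):

import torch
import torch.nn as nn

model = nn.Linear(8, 3)                 # placeholder for a real image classifier
x = torch.randn(16, 8)                  # fake "images"
y = torch.randint(0, 3, (16,))          # true classes

logits = model(x)
dist = torch.distributions.Categorical(logits=logits)
c = dist.sample()                       # sampled class per example
r = (c == y).float() * 2 - 1            # reward: +1 if correct, -1 otherwise
loss = -(dist.log_prob(c) * r).mean()   # REINFORCE: ascend E[r * log pi(c|x)]
loss.backward()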

Would that be different than using standard supervised learning methods (for example, the cross-entropy loss)? Should I expect different results?

This method actually seems better, since I could define a custom reward for each misclassification, but I've never seen anyone use something like that.

",36083,,2444,,5/5/2020 11:03,5/5/2020 11:03,Can a typical supervised learning problem be solved with reinforcement learning methods?,,0,1,,,,CC BY-SA 4.0 20933,2,,20921,5/5/2020 10:22,,3,,"
  • It is common during the training of neural networks for the accuracy to improve for a while and then get worse -- in general, this is caused by over-fitting. It's also fairly common for the neural network to get unlucky and get knocked into a bad sector of parameter space corresponding to a sudden decrease in accuracy -- sometimes it can recover from this quickly, but sometimes not.

  • In general, lowering your learning rate is a good approach to this kind of problem. Also, setting a learning rate schedule like FactorScheduler can help you achieve more stable convergence by lowering the learning rate every few epochs. In fact, this can sometimes cover up mistakes in picking an initial learning rate that is too high.

  • You can try using mini-batches.

  • The error function (cross-entropy), which involves logarithms, must be implemented carefully.


",36737,,32410,,9/26/2021 8:38,9/26/2021 8:38,,,,0,,,,CC BY-SA 4.0 20935,1,,,5/5/2020 12:25,,2,430,"

I'm trying to make a bot for the famous "Icy Tower" game. I rebuilt the game using pygame and I'm trying to build the bot using Python-NEAT.

Every generation, a population of 70 characters tries to jump to the next platform and increase their fitness. Right now, the fitness is the number of platforms they jumped on; each platform gives +10.

The problem I'm facing is that the bot isn't learning well enough: after 1000 generations, the best score was around 200 (it can reach 200 even in the first few generations by accident; 200 means 20 platforms, which is not a lot).

When I watch the characters jumping, it looks like they just always jump and go left or right, not deliberately aiming for the next platform.

I tried several input configurations to make the bot perform better, but nothing really helped.

These are the inputs I tried to mess around with:

  • pos.x, pos.y
  • velocity.x, velocity.y
  • isOnPlatform (bool)
  • [plat.x, plat.y, plat.width] (list of the 3-7 next platforms locations, also tried distance from character in x,y)
  • [prev.x, prev.y] (2-6 previous character positions)

I'm not very proficient with neuroevolution and I'm probably doing something wrong. I'd be glad if you could explain what's causing the bot to be so bad or what's preventing it from learning properly.

Although I think the fitness function and the inputs should be the only problem, I'm attaching the Python-NEAT config file.

[NEAT]
fitness_criterion     = max
fitness_threshold     = 10000
pop_size              = 70
reset_on_extinction   = False

[DefaultGenome]
# node activation options
activation_default      = tanh
activation_mutate_rate  = 0.0
activation_options      = tanh

# node aggregation options
aggregation_default     = sum
aggregation_mutate_rate = 0.0
aggregation_options     = sum

# node bias options
bias_init_mean          = 0.0
bias_init_stdev         = 1.0
bias_max_value          = 30.0
bias_min_value          = -30.0
bias_mutate_power       = 0.5
bias_mutate_rate        = 0.7
bias_replace_rate       = 0.1

# genome compatibility options
compatibility_disjoint_coefficient = 1.0
compatibility_weight_coefficient   = 0.5

# connection add/remove rates
conn_add_prob           = 0.5
conn_delete_prob        = 0.5

# connection enable options
enabled_default         = True
enabled_mutate_rate     = 0.01

feed_forward            = True
initial_connection      = full

# node add/remove rates
node_add_prob           = 0.2
node_delete_prob        = 0.2

# network parameters
num_hidden              = 6
num_inputs              = 11
num_outputs             = 3

# node response options
response_init_mean      = 1.0
response_init_stdev     = 0.0
response_max_value      = 30.0
response_min_value      = -30.0
response_mutate_power   = 0.0
response_mutate_rate    = 0.0
response_replace_rate   = 0.0

# connection weight options
weight_init_mean        = 0.0
weight_init_stdev       = 1.0
weight_max_value        = 30
weight_min_value        = -30
weight_mutate_power     = 0.5
weight_mutate_rate      = 0.8
weight_replace_rate     = 0.1

[DefaultSpeciesSet]
compatibility_threshold = 3.0

[DefaultStagnation]
species_fitness_func = max
max_stagnation       = 3
species_elitism      = 2

[DefaultReproduction]
elitism            = 3
survival_threshold = 0.2
  • Note: the previous character positions are the positions in the previous frames, and since the game runs at 60 fps, the previous position is not that different from the current one...

  • Note 2: the game score is a bit more complex than just jumping on platforms; the bot should also be rewarded for combos that let it jump higher. The combo system is already implemented, but I first want to see the bot aiming for the next platform before it learns to jump combos.

",32953,,32953,,3/18/2021 6:30,3/18/2021 6:30,How to select good inputs and fitness function to achive good results with NEAT for Icy Tower bot,,0,6,,,,CC BY-SA 4.0 20937,2,,20910,5/5/2020 13:58,,4,,"

The book Grokking Deep Learning, by Andrew Trask (a PhD student at Oxford University and a research scientist at DeepMind), offers a wonderful, clean, plain-English discussion of the basic mechanics that go on under the hood of neural networks - from data flow to the updating of weights. It is written without a slant toward the usually wonky math; the concepts are presented and then advanced at a digestible pace for anyone.

Here are a few more possibly useful resources.

  1. Neural Networks: Playground Exercises

  2. Getting Started With Deep Learning: Convolutional Neural Networks

  3. Deep learning focuses on practical aspects of deep learning

  4. First lab assignment in Deep learning

  5. Deeplearning stanford

",36737,,36737,,5/5/2020 18:56,5/5/2020 18:56,,,,0,,,,CC BY-SA 4.0 20939,1,,,5/5/2020 14:49,,3,482,"

I am training an autoencoder on (general) image data.

I use the binary cross-entropy loss function, but it is not very informative when I want to evaluate the performance of my autoencoder.

An obvious performance metric would be pixel-wise MSE, but it has its own downsides, shown on some toy examples in an image from the paper by Pihlgren et al.

In the same paper, the authors suggest using perceptual loss, but it seems complicated and not well-studied.

I found some other instances of this question, but there doesn't seem to be a consensus.

I understand that it depends on the application, but I want to know if there are some general guidelines as to which performance metric to use when training autoencoders on image data.

",36769,,,,,7/24/2020 13:02,How to evaluate the performance of an autoencoder trained on image data?,,1,0,,,,CC BY-SA 4.0 20940,2,,20903,5/5/2020 15:07,,1,,"

If you want, you can do training and testing in RL, with exactly the same meaning: training for building up a policy, and testing for evaluation.

In supervised learning, if you use test data in training, it is like cheating. You cannot trust the evaluation. That's why we separate train and test data.

The objective of RL is a little different. RL tries to find the optimal policy. Since RL collects information by doing, while the agent explores the environment (to gather more information), there might be a loss in the objective. But this might be inevitable for a better future gain.

Multi-armed bandit example: suppose there are 10 slot machines. They return random amounts of money and have different expected returns. I want to find the best way to maximize my gain. Easy: I have to find the machine with the greatest expected return and use only that machine. But how do I find the best machine?

Suppose we have training and testing periods. For example, I give you an hour of training, during which it doesn't matter how much you lose or earn. Then, in the testing period, I evaluate your performance.

What would you do? In the training period, you would try as much as possible, without considering the performance/gain. And in the testing period, you would use only the best machine you found.

This is not a typical RL situation. RL tries to find the best way by learning while doing, and all the results obtained while doing are taken into account.

Suppose I tried all 10 machines once each, and machine no. 3 gave me the most money. I am still not sure it is the best machine, because all the machines pay out random amounts. Keeping to machine no. 3 might be a good idea, because according to the information so far it is the best one. However, due to randomness, I might miss a better machine if I don't try the others. But if I try other machines, I might lose the opportunity to earn more money. What should I do? This is the well-known exploration-exploitation trade-off in RL.
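
To make the trade-off concrete, here is a minimal epsilon-greedy sketch of this 10-machine example (the payout distributions are made up):

import numpy as np

rng = np.random.default_rng(0)
true_means = rng.normal(0, 1, size=10)   # unknown to the player
estimates = np.zeros(10)
counts = np.zeros(10)
epsilon = 0.1                            # fraction of the time we explore

for t in range(1000):
    if rng.random() < epsilon:
        a = rng.integers(10)             # explore: try a random machine
    else:
        a = int(np.argmax(estimates))    # exploit: use the best machine so far
    payout = rng.normal(true_means[a], 1)
    counts[a] += 1
    estimates[a] += (payout - estimates[a]) / counts[a]  # running average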

RL tries to maximize the gain, including the gains right now and the gains in the future. In other words, performance during training also counts as performance. That's why RL is neither unsupervised nor supervised learning.

However, in some situations, you might want to separate training and testing. RL is designed for an agent that interacts with the environment, but in some cases, rather than having an interactive playground, you only have data of past interactions. The formulation is a little different in that case.

",23788,,32410,,9/29/2021 7:09,9/29/2021 7:09,,,,0,,,,CC BY-SA 4.0 20941,1,26839,,5/5/2020 15:59,,1,1711,"

I'm trying to design an OpenAI Gym environment in which multiple users/players perform actions over time. It's round based and each user needs to take an action before the round is evaluated and the next round starts. The action for one user can be modeled as a gym.spaces.Discrete(5) space. I want my RL agent to make decisions for all users. I'm wondering how to take multiple actions before progressing time and calculating the reward.

Basically, what I want is:

obs = env.reset()
user_actions = []
for _ in range(num_users):  # num_users: however many users there are
    user_actions.append(agent.predict(obs))
obs, reward, done, _ = env.step(user_actions)

So the problem is that I don't immediately know the reward after getting an action since I need to collect all actions before evaluating the round.

I could of course extend the action space to include the actions of all users in one go. But this would be problematic if I have a really large number of users, or if that number even changes over time, right?
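
For example, here's what I mean by extending the action space (a sketch; num_users is just an example value):

import gym

num_users = 10                                        # example value
action_space = gym.spaces.MultiDiscrete([5] * num_users)
user_actions = action_space.sample()                  # one Discrete(5) choice per user
print(user_actions.shape)                             # (10,)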

I found these two (1, 2) related questions, but they didn't solve my problem.

",19928,,,,,4/18/2021 11:21,OpenAI Gym: Multiple actions in one step,,1,2,,,,CC BY-SA 4.0 20942,1,,,5/5/2020 16:17,,3,46,"

I had the same question when I am reading the RL textbook from Sutton Bartol as posted here.

Why do we update $W$ with $\frac{1}{\mu (A_t | S_t)}$ instead of $\frac{\pi (A_t | S_t)}{\mu (A_t | S_t)}$?

It seems that, with the updating rule from the textbook, whatever action $\mu$ decides to choose, we automatically assume that $\pi$ will choose it with 100% probability. But $\pi$ is greedy with respect to Q. How does this assumption make sense?

",36120,,2444,,5/5/2020 17:22,5/5/2020 17:22,Why do we update $W$ with $\frac{1}{\mu (A_t | S_t)}$ instead of $\frac{\pi (A_t | S_t)}{\mu (A_t | S_t)}$ in off-policy Monte Carlo control?,,0,2,,,,CC BY-SA 4.0 20944,2,,20903,5/5/2020 16:23,,5,,"

Reinforcement Learning Workflow

The general workflow for using and applying reinforcement learning to solve a task is the following.

  1. Create the Environment
  2. Define the Reward
  3. Create the Agent
  4. Train and Validate the Agent
  5. Deploy the Policy

Training

  • Training in Reinforcement learning employs a system of rewards and penalties to compel the computer to solve a problem by itself.

  • Human involvement is limited to changing the environment and tweaking the system of rewards and penalties.

  • As the computer maximizes the reward, it is prone to seeking unexpected ways of doing it.

  • Human involvement is focused on preventing it from exploiting the system and motivating the machine to perform the task in the way expected.

  • Reinforcement learning is useful when there is no “proper way” to perform a task, yet there are rules the model has to follow to perform its duties correctly.

  • Example: By tweaking and seeking the optimal policy for deep reinforcement learning, we built an agent that in just 20 minutes reached a superhuman level in playing Atari games.

  • Similar algorithms, in principle, can be used to build AI for an autonomous car.

Testing

  • Debugging RL algorithms is very hard. Everything runs and you are not sure where the problem is.

  • To test whether training worked well, i.e. whether the trained agent is good at what it was trained for, you take your trained model and apply it to the situation it was trained for.

  • If it’s something like chess or Go, you could benchmark it against other engines (say stockfish for chess) or human players.

  • You can also define metrics for performance, i.e. ways of measuring the quality of the agent's decisions - a minimal evaluation sketch is given after this list.

  • In some settings (e.g a Reinforcement Learning Pacman player), the game score literally defines the target outcome, so you can just evaluate your model’s performance based on that metric.
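
For instance, a minimal way to score a trained agent (a sketch only; it assumes a Gym-style environment and a policy function that maps an observation to an action) is to average the episode return over several rollouts:

import numpy as np

def evaluate(env, policy, episodes=20):
    # average undiscounted return of `policy` over several episodes
    returns = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            obs, reward, done, _ = env.step(policy(obs))
            total += reward
        returns.append(total)
    return np.mean(returns)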

",36737,,2444,,5/5/2020 22:08,5/5/2020 22:08,,,,0,,,,CC BY-SA 4.0 20945,2,,20903,5/5/2020 16:47,,1,,"

The goal of reinforcement learning (RL) is to use data obtained via interaction with the environment to solve the underlying Markov Decision Process (MDP). ""Solving the MDP"" is tantamount to finding the optimal policy (with respect to the MDP's underlying dynamics, which are usually assumed to be stationary).

Training is the process of using data in order to find the optimal policy. Testing is the process of evaluating the (final) policy obtained by training.

Note that, since we're generally testing the policy on the same MDP we used for training, the distinction between the training dataset and the testing set is no longer as important as it is in the case of, say, supervised learning. Consequently, classical notions of overfitting and generalization should be approached from a different angle as well.

",36748,,,,,5/5/2020 16:47,,,,0,,,,CC BY-SA 4.0 20946,1,20953,,5/5/2020 16:48,,2,93,"

In pattern recognition systems, when no labeled data is available, what are some common unsupervised learning algorithms for pattern recognition that can be used?

",36776,,36776,,5/5/2020 18:36,5/5/2020 18:49,"When labelled data is not available, what are some common unsupervised learning algorithms for pattern recognition that can be used?",,1,0,,,,CC BY-SA 4.0 20948,1,20958,,5/5/2020 17:01,,2,2561,"

Let's consider this scenario. I have two conceptually different video datasets, for example a dataset A composed of videos about cats and a dataset B composed of videos about houses. Now, I'm able to extract feature vectors from the samples of both datasets A and B, and I know that each sample in dataset A is related to one and only one sample in dataset B, and together they belong to a specific class (there are only 2 classes).

For example:

Sample x1 AND sample y1 ---> Class 1
Sample x2 AND sample y2 ---> Class 2
Sample x3 AND sample y3 ---> Class 1
and so on...

If I extract the feature vectors from the samples in both datasets, which is the best way to combine them in order to give a correct input to the classifier (for example, a neural network)?

feature vector v1 extracted from x1 + feature vector v1' extracted from y1 ---> input for classifier

I ask this because I suspect that neural networks only take one vector as input, while I have to combine two vectors.

",36363,,36363,,5/5/2020 17:06,5/6/2020 10:13,Combine two feature vectors for a correct input of a neural network,,2,0,,,,CC BY-SA 4.0 20950,2,,20728,5/5/2020 17:48,,1,,"

Note: I assume that by 'finite' you mean countable action and state sets.

MDPs are not exclusive to finite spaces. They can be used with continuous/uncountable sets of actions and states too.

A Markov Decision Process (MDP) is a tuple $(\mathcal S, \mathcal A, \mathcal P^a_s, \mathcal R^a_{ss'}, \gamma, \mathcal S_o)$ where $\mathcal S$ is a set of states, $\mathcal A$ is the set of actions, and $\mathcal P_{s}^a: \mathcal A \times \mathcal S \rightarrow [0, 1]$ is a function that denotes the probability distribution over the states if action $a$ is executed at state $s$. [1][2]

Where, Q-function is defined as:

$$ Q^\pi (s,a) = \mathbb E_\pi \left [ \sum \limits_{t=0}^{+\infty} \gamma^t r_t | s_o = s, a_o = a \right] \tag{*}$$

Note that $r_t$ is just a realization of the reward function $\mathcal R^a_{ss'}$.

Now, if the states and actions are discrete, then the Q-table method [3], which stores a state-action matrix, helps us to evaluate the $Q$ function and optimize it efficiently.
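
For illustration, the core of the tabular method is just one array and one update rule (a sketch; the state/action counts and the transition below are made up):

import numpy as np

n_states, n_actions = 10, 4
Q = np.zeros((n_states, n_actions))      # the Q-table
alpha, gamma = 0.1, 0.99

def q_update(s, a, r, s_next):
    # standard tabular Q-learning update
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

q_update(s=0, a=2, r=1.0, s_next=3)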

In cases where the state/action sets are infinite or continuous, deep networks are preferred to approximate the $Q$ function [4].

Q-learning is an off-policy method and doesn't require an explicit policy function $\pi$.


References:

  1. R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
  2. Alborz Geramifard, Thomas J. Walsh, Stefanie Tellex, Girish Chowdhary, Nicholas Roy and Jonathan P. How. A Tutorial on Linear Function Approximators for Dynamic Programming and Reinforcement Learning. Foundations and Trends (R) in Machine Learning Vol. 6, No. 4 (2013) 375–454
  3. Andre Violante. Simple Reinforcement Learning: Q-learning, Create a q-table, https://towardsdatascience.com, 2019.
  4. Alind Gupta. Deep Q-Learning, Deep Q-Learning, https://www.geeksforgeeks.org/deep-q-learning/, 2020.

Edit: I'd like to thank @nbro for editing suggestions.

",34312,,34312,,5/5/2020 18:11,5/5/2020 18:11,,,,0,,,,CC BY-SA 4.0 20952,1,,,5/5/2020 18:42,,1,131,"

I want a model that outputs the pixel coordinates of the tip of my forefinger, and whether it's touching something or not. Those would be 3 output neurons: 2 for the X-Y coordinates and 1, with a sigmoid activation, which predicts the probability of whether it's touching or not.

What do I need to change in the squeezenet model in order to do this?

(PS: the trained model needs to be the fastest possible (in latency), that's why I wanted to use SqueezeNet)

",32751,,32751,,5/6/2020 6:58,5/6/2020 6:58,Can SqueezeNet be used for regression?,,0,2,,,,CC BY-SA 4.0 20953,2,,20946,5/5/2020 18:49,,2,,"

There are some unsupervised learning algorithms that can be used for pattern recognition (i.e. the discovery of patterns in data). The most notable one is probably k-means, which is a clustering algorithm. In k-means, you cluster your unlabeled data into groups (or clusters) based on the distance (or similarity) between them. When a new data point arrives, you'll associate it with the most similar cluster. In this sense, you are performing pattern recognition in an unsupervised way.
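
For instance, here is a minimal scikit-learn sketch of this idea (the data is random, just to show the fit and assignment steps):

import numpy as np
from sklearn.cluster import KMeans

X = np.random.randn(100, 2)              # unlabeled data
km = KMeans(n_clusters=3, n_init=10).fit(X)
new_point = np.array([[0.5, -0.2]])
print(km.predict(new_point))             # index of the most similar cluster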

Here's an excerpt from the famous book Pattern Recognition and Machine Learning (2006) by C. Bishop

In other pattern recognition problems, the training data consists of a set of input vectors $x$ without any corresponding target values. The goal in such unsupervised learning problems may be to discover groups of similar examples within the data, where it is called clustering, or to determine the distribution of data within the input space, known as density estimation, or to project the data from a high-dimensional space down to two or three dimensions for the purpose of visualization.

So, there are other problems, apart from the problem of clustering, that you may want to solve with unsupervised learning algorithms for the purpose of pattern recognition, such as density estimation (see e.g. mixture models) or dimensionality reduction (see e.g. PCA).

",2444,,,,,5/5/2020 18:49,,,,2,,,,CC BY-SA 4.0 20954,2,,20844,5/5/2020 19:52,,1,,"

First and foremost, I have to say that this could (and likely will) be a very hard task. Neural networks (NNs) have excelled at computer vision tasks identifying everything from textures to complex objects but what you are trying to do goes beyond that. We (humans) identify trash using the context as much as the object. An object on a table and the same object in a trash bin can look identical but the context tells us which one is trash. Also, anything can be trash. Trash is not an object, its a state.

Having said all that, it sounds like an interesting project and it would be a very useful model so I'll do my best to explain how you would go about trying this. As per your comment, you have outlined the two steps you need.

  1. Identify and extract the trash object from the image.
  2. Classify the type of waste

This could be accomplished using a single model but I'll explain how to do it with two models to make clear what is being done.

Identify and extract trash

The objective of the first model is to identify and extract the region of interest (the part of the image containing the trash). This is an image segmentation task typically accomplished using an R-CNN - some guides explaining how they work can be found here, here or here. These methods use supervised learning which requires masks delineating the object of interest to be used as the ground truth. A mask is a binary image the same size as your input image where each pixel in the image representing your positive class (trash) is set to 1 and all others are set to 0.

The output of your Canny/Watershed algorithms could be used as the masks to train the model, however your model will only ever be as good as your masks. Therefore, you might as well use your Canny/Watershed algorithms to accomplish task 1. If the masks generated by your algorithm are not of sufficient accuracy you will need to find another way to generate your masks - maybe even doing it manually.

An approximate rule of thumb for object detection with NNs is that you need 1,000 representative images per class, where class implies a specific object. In this case, unless you are attempting to identify very specific items of trash, you would likely need 100s of thousands of images to obtain a high degree of accuracy.

Classify the type of waste

This task should be easier, making the assumption that the segmented image from the first step is always trash. By assigning a label to each segmented image (paper, metal, glass, cardboard, etc) it becomes a normal multi-class classification task which are well documented online with lots of explainations and tutorials.

Single model

These tasks could be combined into a single model by modifying the masks in step 1. Instead of using a single binary mask, you could create a multichannel mask of shape m x n x o where m x n is your input image size and o is the number of kinds of waste you are attempting to identify. Each channel is a binary mask for a given type of waste. Therefore, it becomes a matter of not just identifying and segmenting trash but of segmenting each type of trash separately. Needless to say the complexity of the model would be a lot higher and so would the process to create the masks.

",31980,,,,,5/5/2020 19:52,,,,0,,,,CC BY-SA 4.0 20955,1,,,5/5/2020 20:12,,1,66,"

I'm trying to train an RL agent on a custom, highly stochastic environment (MDP). In order to do so, I'm using existing implementations of state-of-the-art RL algorithms as provided by Stable Baselines. However, no matter what algorithm I try out, and despite extensive hyperparameter tuning, I'm failing to obtain any meaningful result. More precisely, a trivial ""always perform the same action (0.7,0.7) each time"" strategy works better than any of the obtained policies. The environment is highly stochastic (a model of a financial market). How likely is it that the environment is simply ""too stochastic"" for any meaningful learning to take place? If interested, here's the environment code:

class environment1(gym.Env):
    def __init__(self):
        self.t = 0.0 # initial time
        self.s = 100.0 # initial midprice value
        self.T = 1.0 # trading period length
        self.sigma = 2 # volatility constant
        self.dt = 0.005 # time step
        self.q = 0.0 # initial inventory
        self.oldq = 0 # initial old inventory
        self.x = 0 # initial wealth/cash
        self.gamma = 0.1 # risk aversion parameter
        self.k = 1.5 # intensity of arrivals of orders
        self.A = 140 # constant
        self.done = False
        self.info = []
        high = np.array([np.finfo(np.float32).max,
                         np.finfo(np.float32).max,
                         np.finfo(np.float32).max],
                        dtype=np.float32)
        self.action_space = spaces.Discrete(100)
        self.observation_space = spaces.Box(-high, high, dtype=np.float32)
        self.seed()
        self.state = None

    def seed(self, seed=None):
        self.np_random, seed = seeding.np_random(seed)
        return [seed]

    def step(self, action):
        old_x, old_q, old_s = self.x, self.q, self.s # current becomes old
        self.t += 0.005 # time increment
        P1 = self.dt*self.A*np.exp(-self.k*(action//10)/10) # probabilities of execution
        P2 = self.dt*self.A*np.exp(-self.k*(action%10)/10)
        if random.random() < P1: # decrease inventory increase cash
            self.q -= 1
            self.x += self.s + (action//10)/10
        if random.random() < P2: # increase inventory decrease cash
            self.q += 1
            self.x -= self.s - (action%10)/10
        if random.random() < 0.5:
            self.s += np.sqrt(0.005)*self.sigma
        else:
            self.s -= np.sqrt(0.005)*self.sigma
        self.state = np.array([self.s-100,(self.q-34)/25,(self.t-0.5)/0.29011491975882037])
        reward = self.x+self.q*self.s-(self.oldx+self.oldq*self.olds)
        if np.isclose(self.t, self.T):
            self.done = True
        self.oldq = self.q
        self.oldx = self.x
        self.olds = self.s
        return self.state, reward, self.done, {}

    def reset(self):
        self.t = 0.0 # initial time
        self.s = 100.0 # initial midprice value
        self.T = 1.0 # trading period length
        self.sigma = 2 # volatility constant
        self.dt = 0.005 # time step
        self.q = 0.0 # initial inventory
        self.oldq = 0.0
        self.oldx = 0.0
        self.olds = 100.0
        self.x = 0.0 # initial wealth/cash
        self.gamma = 0.1 # risk aversion parameter
        self.k = 1.5 # intensity of arrivals of orders
        self.A = 140 # constant
        self.done = False
        self.info = []
        self.state = np.array([self.s-100,(self.q-34)/25,(self.t-0.5)/0.29011491975882037])
        return self.state

The state space is mostly normalized. The action space consists of 100 possible discrete actions (integers from 0 to 99, which are then transformed to (0.0,0.0), (0.0,0.1), ..., (1.0,1.0)). The reward is simply given by the change in the portfolio value (cash + stock).

Note: I've also tried transforming the action space into a continuous one in order to use DDPG, but to no avail.

",26195,,,,,5/5/2020 20:12,State-of-the-art algorithms not working on a custom RL environment,,0,1,,,,CC BY-SA 4.0 20956,2,,20728,5/5/2020 21:07,,0,,"

To my knowledge you can't compute or solve an uncountably large MDP numerically. It will need to be discretized in some capacity. The same applies for classic control: you can't optimize over the true functional so you use a discrete approximation to the system and solve that.

",32390,,,,,5/5/2020 21:07,,,,1,,,,CC BY-SA 4.0 20957,2,,20903,5/5/2020 23:40,,7,,"

What is reinforcement learning?

In reinforcement learning (RL), you typically imagine that there's an agent that interacts, in time steps, with an environment by taking actions. On each time step $t$, the agent takes the action $a_t \in \mathcal{A}$ in the state $s_t \in \mathcal{S}$, receives a reward (or reinforcement) signal $r_t \in \mathbb{R}$ from the environment and the agent and the environment move to another state $s_{t+1} \in \mathcal{S}$, where $\mathcal{A}$ is the action space and $\mathcal{S}$ is the state space of the environment, which is typically assumed to be a Markov decision process (MDP).

What is the goal in RL?

The goal is to find a policy that maximizes the expected return (i.e. a sum of rewards starting from the current time step). The policy that maximizes the expected return is called the optimal policy.

Policies

A policy is a function that maps states to actions. Intuitively, the policy is the strategy that implements the behavior of the RL agent while interacting with the environment.

A policy can be deterministic or stochastic. A deterministic policy can be denoted as $\pi : \mathcal{S} \rightarrow \mathcal{A}$. So, a deterministic policy maps a state $s$ to an action $a$ with probability $1$. A stochastic policy maps states to a probability distribution over actions. A stochastic policy can thus be denoted as $\pi(a \mid s)$ to indicate that it is a conditional probability distribution of an action $a$ given that the agent is in the state $s$.

Expected return

The expected return can be formally written as

$$\mathbb{E}\left[ G_t \right] = \mathbb{E}\left[ \sum_{i=t+1}^\infty R_i \right]$$

where $t$ is the current time step (so we don't care about the past), $R_i$ is a random variable that represents the probable reward at time step $i$, and $G_t = \sum_{i=t+1}^\infty R_i $ is the so-called return (i.e. a sum of future rewards, in this case, starting from time step $t$), which is also a random variable.

Reward function

In this context, the most important job of the human programmer is to define a function $\mathcal{R}: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$, the reward function, which provides the reinforcement (or reward) signal to the RL agent while interacting with the environment. $\mathcal{R}$ will deterministically or stochastically determine the reward that the agent receives every time it takes action $a$ in the state $s$. The reward function $R$ is also part of the environment (i.e. the MDP).

Note that $\mathcal{R}$, the reward function, is different from $R_i$, which is a random variable that represents the reward at time step $i$. However, clearly, the two are very related. In fact, the reward function will determine the actual realizations of the random variables $R_i$ and thus of the return $G_i$.

How to estimate the optimal policy?

To estimate the optimal policy, you typically design optimization algorithms.

Q-learning

The most famous RL algorithm is probably Q-learning, which is also a numerical and iterative algorithm. Q-learning implements the interaction between an RL agent and the environment (described above). More concretely, it attempts to estimate a function that is closely related to the policy and from which the policy can be derived. This function is called the value function, and, in the case of Q-learning, it's a function of the form $Q : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$. The name $Q$-learning derives from this function, which is often denoted as $Q$.

Q-learning doesn't necessarily find the optimal policy, but there are cases where it is guaranteed to find the optimal policy (but I won't dive into the details).

Of course, I cannot describe all the details of Q-learning in this answer. Just keep in mind that, to estimate a policy, in RL, you will typically use a numerical and iterative optimization algorithm (e.g. Q-learning).

What is training in RL?

In RL, training (also known as learning) generally refers to the use of RL algorithms, such as Q-learning, to estimate the optimal policy (or a value function)

Of course, as in any other machine learning problem (such as supervised learning), there are many practical considerations related to the implementation of these RL algorithms, such as

  • Which RL algorithm to use?
  • Which programming language, library, or framework to use?

These and other details (which, of course, I cannot list exhaustively) can actually affect the policy that you obtain. However, the basic goal during the learning or training phase in RL is to find a policy (possibly, optimal, but this is almost never the case).

What is evaluation (or testing) in RL?

During learning (or training), you may not be able to find the optimal policy, so how can you be sure that the learned policy to solve the actual real-world problem is good enough? This question needs to be answered, ideally before deploying your RL algorithm.

The evaluation phase of an RL algorithm is the assessment of the quality of the learned policy and how much reward the agent obtains if it follows that policy. So, a typical metric that can be used to assess the quality of the policy is to plot the sum of all rewards received so far (i.e. cumulative reward or return) as a function of the number of steps. One RL algorithm dominates another if its plot is consistently above the other. You should note that the evaluation phase can actually occur during the training phase too. Moreover, you could also assess the generalization of your learned policy by evaluating it (as just described) in different (but similar) environments to the training environment [1].
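
As a minimal illustration (the rewards below are fake stand-ins for those collected during evaluation), such a plot can be produced as follows:

import numpy as np
import matplotlib.pyplot as plt

rewards = np.random.randn(1000) + 0.1    # placeholder per-step rewards
plt.plot(np.cumsum(rewards))             # cumulative reward vs. number of steps
plt.xlabel("Time step")
plt.ylabel("Cumulative reward")
plt.show()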

The section 12.6 Evaluating Reinforcement Learning Algorithms of the book Artificial Intelligence: Foundations of Computational Agents (2017) by Poole and Mackworth provides more details about the evaluation phase in reinforcement learning, so you should probably read it.

Apart from evaluating the learned policy, you can also evaluate your RL algorithm, in terms of

  • resources used (such as CPU and memory), and/or
  • experience/data/samples needed to converge to a certain level of performance (i.e. you can evaluate the data/sample efficiency of your RL algorithm)
  • robustness/sensitivity (i.e., how the RL algorithm behaves if you change certain hyper-parameters); this is also important because RL algorithms can be very sensitive (from my experience)

What is the difference between training and evaluation?

During training, you want to find the policy. During the evaluation, you want to assess the quality of the learned policy (or RL algorithm). You can perform the evaluation even during training.

",2444,,2444,,11/23/2020 15:42,11/23/2020 15:42,,,,1,,,,CC BY-SA 4.0 20958,2,,20948,5/6/2020 0:27,,3,,"

The easiest way can be the concatenation of the feature vectors to create a single feature vector for each sample.

Assume the first sample is made of the pair $X_1$ and $Y_1$. Let the corresponding feature vectors for $X_1$ and $Y_1$ be $\textbf{v}_1$ and $\textbf{v}_2$, respectively.

$$ \textbf{v}_1 = [f_1, f_2, \ldots , f_n],\\ \textbf{v}_2 = [g_1, g_2, \ldots , g_m]. $$ Then, the first sample's feature can be defined as $$ \textbf{v} = [f_1, f_2, \ldots , f_n, g_1, g_2, \ldots , g_m]. $$ Eventually, when you pass the latter feature vector to a machine learning model, it will try to capture the dependencies among all of these features, to learn a solution for your task of interest (i.e. classification).
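
In code, this is simply a concatenation (a sketch with made-up feature sizes):

import numpy as np

v1 = np.random.randn(128)        # features extracted from x1 (dataset A)
v2 = np.random.randn(64)         # features extracted from y1 (dataset B)
v = np.concatenate([v1, v2])     # single input vector of length 192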

",36785,,,,,5/6/2020 0:27,,,,2,,,,CC BY-SA 4.0 20959,1,,,5/6/2020 5:16,,1,194,"

This is an experiment to understand the working of the Q-table and Q-learning.

I have the states as

states = [0,1,2,3]

I have an arbitrary value for each of these states as shown below (assume index-based mapping) -

arbitrary_values_for_states = [39.9,47.52,32.92,37.6]

I want to find the state which will give me the minimum value. So I have complemented the values as 50 minus the arbitrary value.

inverse_values_for_states = [50-x for x in arbitrary_values_for_states]

Therefore, I defined reward function as -

def reward(s,a,s_dash):
    if inverse_values_for_states[s]<inverse_values_for_states[s_dash]:
        return 1
    elif inverse_values_for_states[s]>inverse_values_for_states[s_dash]:
        return -1
    else:
        return 0

Q table is initialized as - Q = np.zeros((4,4)) (np is numpy)

The learning is carried out as -

episodes = 5
steps = 10
for episode in range(episodes):
    s = np.random.randint(0,4)
    alpha0 = 0.05
    decay = 0.005
    gamma = 0.6
    for step in range(steps):
        a = np.random.randint(0,4)
        action.append(a)
        s_dash = a
        alpha = alpha0/(1+step*decay)
        Q[s][a] = (1-alpha)*Q[s][a]+alpha*(reward(s,a,s_dash)+gamma*np.max(Q[s_dash]))

        s = s_dash

The problem is, the table doesn't converge.

Example. For the above scenario -

np.argmax(Q[0]) gives 3
np.argmax(Q[1]) gives 2
np.argmax(Q[2]) gives 2
np.argmax(Q[3]) gives 2

All of the states should give argmax as 2 (which is actually the index[state] of the minimum value).

Another example,

when I increase steps to 1000 and episodes to 50,

np.argmax(Q[0]) gives 3
np.argmax(Q[1]) gives 0
np.argmax(Q[2]) gives 1
np.argmax(Q[3]) gives 2

More steps and episodes should ensure convergence, but this is not what I observe.

I need help figuring out where I am going wrong.

PS: This little experiment is needed to make Q-learning applicable to a larger combinatorial problem. Unless I understand this, I don't think I will be able to do that right. Also, there is no terminal state, because this is an optimization problem. (And I have heard that Q-learning doesn't necessarily need a terminal state.)

",21983,,,,,2/10/2022 22:04,Q table not converging for an arbitrary experiment,,1,3,,,,CC BY-SA 4.0 20960,2,,12274,5/6/2020 5:46,,2,,"

Deterministic Policy :

It means that for every state you have a clearly defined action you will take.

For example: we know with 100% certainty that we will take action A from state X.

Stochastic Policy :

It means that for every state you do not have a single clearly defined action to take, but rather a probability distribution over the actions to take from that state.

For example, there is a 10% chance of taking action A from state S, a 20% chance of taking action B from state S, and a 70% chance of taking action C from state S. So we don't have one clearly defined action to take, but we have some probability of taking each action.
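
To illustrate the difference (a sketch using the probabilities from the example above):

import numpy as np

deterministic_policy = {"S": "A"}                          # always take A in state S
stochastic_policy = {"S": {"A": 0.1, "B": 0.2, "C": 0.7}}  # a distribution over actions

actions, probs = zip(*stochastic_policy["S"].items())
a = np.random.choice(actions, p=probs)                     # sample an action from state S
print(a)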

",36104,,-1,,6/17/2020 9:57,5/6/2020 6:00,,,,0,,,,CC BY-SA 4.0 20961,2,,12274,5/6/2020 8:11,,1,,"

Apart from the answers above,


Stochastic Policy function: $\pi (s_1s_2 \dots s_n, a_1 a_2 \dots a_n): \mathcal S \times \mathcal A \rightarrow [0,1]$ is the probability distribution function, that, tells the probability that action sequence $a_1a_2 \dots a_n$ may be chosen in state sequence $s_1 s_2 \dots s_n$[2][3].

In Markov Decision Process (MDP), it's only $\pi (s, a)$ following the assumptions[1]: $$ \mathbb P(\omega_{t+1}| \omega_t, a_t) = \mathbb P(\omega_{t+1}| \omega_t,a_t, \dots \omega_o,a_o)$$ Where $\omega \in \Omega$ which is the set of Observations. $\mathcal A, \mathcal S$ denote the set of actions and states respectively. Since, the next observation is dependent only on present states and not the past, the policy function only needs the present state and action as parameter.

The next action is chosen as[2]: $$ a^* = \arg \max_a \pi(s_{t+1}, a) \quad\forall a \in \mathcal A $$

Deterministic Policy function [3]: is a special case of Stochastic Policy function where for particular $a_o \in \mathcal A$, $\pi(s, a_n) = \delta^o_n$ for all $a_n \in \mathcal A$. Here, we are totally certain to choose particular action $a_o$ in some arbitrary state $s$ and no other. Here $\delta$ is Kronecker delta. Since, the probability distribution here is discrete, it's often written in the form of $\pi(s): \mathcal S \rightarrow \mathcal A$, where the function takes arbitrary state $s$ and maps it to an action $a$ which is 100% probable.

IMPORTANT

The Stochastic Policy function is not meant to be confused with the Transition Function[2] (which is also a Probability Distribution Function), $T(s_t, a_t, s_{t+1}): \mathcal S \times \mathcal A \times \mathcal S \rightarrow [0, 1]$ which tells the probability that - at state $s_t$, the action $a_t$ will lead us to next state $s_{t+1}$.


References:

  1. https://ocw.mit.edu. 6.825 Techniques in Artificial Intelligence. https://ocw.mit.edu. Page Number - 6. Web. 6 May 2020

  2. Simonini, Thomas. https://www.freecodecamp.org .An introduction to Policy Gradients with Cartpole and Doom. 9 May 2018. Web. 6 May 2020.

  3. https://www.computing.dcu.ie/. Reinforcement Learning. 2.1.1 Special case - Deterministic worlds. Web. 6 May 2020

",34312,,34312,,5/6/2020 8:34,5/6/2020 8:34,,,,0,,,,CC BY-SA 4.0 20965,1,20969,,5/6/2020 10:06,,3,83,"

Is there any research on machine learning models that provide uncertainty estimation?

If I train a denoising autoencoder on words and feed it a noised word, I'd like it to return a measure of certainty that its output is correct, given the distribution of data it has been trained on.

Ways of answering such questions and metrics for uncertainty are both things I am curious about - in general, ways for a model to say ""I'm not sure"" when it receives something far outside the inputs it has been trained to approximate.

",30885,,2444,,1/17/2021 12:15,1/17/2021 12:15,Is there any research on models that provide uncertainty estimation?,,1,0,,,,CC BY-SA 4.0 20966,2,,20948,5/6/2020 10:13,,1,,"

$^*$Note - Question is bit unclear, in case the answer doesn't addresses the question, please ask for edit/delete Request.


GENERALIZATION

Suppose there are multiple datasets denoted by $A_i$. Datasets contain a set of Vectors $x_{j} $. Mathematically $A_i = \{ x_j\}_{j=0}^n$. We've to find an estimator function $\hat f$, such that $\hat f( \vec r) = y, \, \vec r \in X$ where $X $ is a special dataset created by combining all $A_i$ which helps in classification into $y \in Y$ which is the set of classes.

.

As @Amir Mentioned out, linearly separable feature can be easily separated by straight combination of vectors i.e. if $x_u \in A_i, w_v \in A_j \dots$, then $r = [x_1 \,x_2 \, \dots \, x_u \, w_1 \, \dots w_v \dots]$. Where, $r \in X$ which is the required dataset.

There are cases where the features are not linearly separable, We use basis expansion methods[1] to make required shape of hyperplane to separate the features. We create a new dataset combining $A_i \, \forall i \in C \subset \mathbb N$. Suppose that the new dataset is $X$, then $r \in X$ and $r = [r_0, r_1, \dots r_n].$

Then,

$$r_1 = u_1^2v_1^2 \\ r_2 = \sin(u_2)\sin(v_2) \\ r_3 = ae^{u_3 + v_3} \\ r_4 = a v_4 v_4 + a_2 u_4^2 v_4^2 + \dots \\ \dots$$

Here $u_p \in A_i; \, v_q \in A_j$

Here you can use all the creativity to set $r = [r_1, r_2, \dots , r_n]$ and make a new dataset. What equations and what functions you chose fully depends on the kind of hyperplane shape you want to obtain. Basis expansion is just one of the methods for feature extraction is certainly one of the most flexible too.

Now, you feed the newly created vectors into your trained estimator functions (which is Neural Net) which can classify things much easily now.

In case of Regression/Classification without Neural Net needs some extra treatment to train the model[2].


[2]Note: There is also a big role of encoding. For example, if you encode colors by numbers $1, 2, 3$ for RGB or $10,01, 11$ fully changes everything and your features too. In such cases, You may even need different equations to make your required dataset $X$ and vectors $r$.


REFERENCES:

  1. Oleszak, Michal. https://towardsdatascience.com. Non-linear regression: basis expansion, polynomials & splines. Sep 30, 2019. Web. 6 May 2020.
  2. Sangarshanan. https://medium.com. Improve your classification models using Mean /Target Encoding. Jun 23, 2018. Web. 6 May 2020.
",34312,,,,,5/6/2020 10:13,,,,16,,,,CC BY-SA 4.0 20968,1,,,5/6/2020 11:04,,3,324,"

Can you explain policy gradient methods and what it means for the policy to be parameterised? I am reading the Sutton and Barto book on reinforcement learning and didn't understand it well. Can you give some examples?

",36107,,36821,,5/14/2020 12:10,5/14/2020 14:19,What does it mean to parameterise a policy in policy gradient methods?,,1,0,,,,CC BY-SA 4.0 20969,2,,20965,5/6/2020 12:14,,2,,"

Yes, there is some research on this topic. It's often called Bayesian machine learning or Bayesian deep learning (but I don't think this is a good name because there are models that aren't really based on a direct application of Bayesian statistics). Some ML/DL models that provide some kind of uncertainty estimation are, for example, Monte Carlo Dropout (MC dropout) or Bayesian neural networks. In theory, these techniques look promising. In practice, I don't think they are the ultimate solution to the problem of uncertainty estimation in deep learning. In fact, e.g. in the case of Bayesian neural networks, they have some disadvantages, such as more parameters to tweak and save.
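
For intuition, here is a minimal MC dropout sketch (the model and data are made up): dropout is kept active at prediction time, and the spread over repeated stochastic forward passes serves as a crude uncertainty estimate.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Dropout(0.5), nn.Linear(32, 1))
x = torch.randn(4, 8)

model.train()                                    # keeps the Dropout layer stochastic
with torch.no_grad():
    preds = torch.stack([model(x) for _ in range(50)])
mean, std = preds.mean(dim=0), preds.std(dim=0)  # std ~ per-input uncertainty
print(std.squeeze())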

",2444,,,,,5/6/2020 12:14,,,,2,,,,CC BY-SA 4.0 20970,1,,,5/6/2020 13:31,,1,34,"

How can I reduce the captured movement data of a person in such a way that I have filtered out the main features of the movement? Or how can I detect patterns/main features in that data set?

I captured some data with a device (inertial sensor and motion capturing (x,y,z)) attached to a human body, so I have a huge data set. I want to prepare the data set so that I have no noise and no ""unnecessary"" data.

In the end, I just want to know, for example, that a certain movement behaviour hints at a clueless person or a person under stress.

I first thought approaches from association analysis could be useful, but, based on what I have read about the application of such algorithms, I think they are not suitable for this data set.

",27777,,,,,5/6/2020 13:31,How can raw data from a motion sensor (like an IMU) reduced to the main points of the data,,0,0,,,,CC BY-SA 4.0 20971,1,,,5/6/2020 15:24,,0,117,"

What is actually meant by 3D face recognition? In normal cases, we extract face encodings from a 2D image, right? Is 3D face recognition used for liveness detection? How is that possible?

",31576,,,,,12/23/2022 3:07,What is 3D face recognition? and how we can check liveness of a face image?,,1,2,,,,CC BY-SA 4.0 20974,2,,20893,5/6/2020 17:00,,2,,"

Let's first clarify a couple of details:

  1. The policy $\pi$ we're talking about is an $\epsilon$-soft policy (defined to mean that $\pi(a \vert s) \geq \frac{\epsilon}{\vert \mathcal{A}(s) \vert}$ for all states and all actions).
  2. We're not trying to prove equality of $v_{\pi}$ and $v_*$, but of $v_{\pi}$ and $\tilde{v}_*$, where $\tilde{v}_*$ denotes the optimal value function in this ""new environment"" that we're constructing.

So, ""environment"" is basically the ""world"" that our agent ""lives"" and acts in. You can think of it as the ""rules"" that we ""play"" by. So, you could think of the definitions of our complete state and action spaces as part of the environment, and the function that tells us which successor state $s'$ we end up in whenever we pick an action $a$ in a state $s$ (i.e. the state transition dynamics) are a part of the environment. And the function that tells us what Rewards we'll obtain in what situations is also a part of the environment. The policy $\pi$ is not a part of the environment; this is the ""brain"" of the agent itself.

Now, recall that here we're not interested in proving that $v_{\pi}$ moves towards the true optimal value function $v_*$ of the ""real"" environment. We know for a fact that we won't ever become completely equal to that, because we're forcing our policies to have exploratory behaviour by requiring them to be $\epsilon$-soft, so it would be hopeless to prove such a thing. Instead, we're interested in proving that $v_{\pi}$ will move towards whatever value function is the best one that we could possibly achieve under the restriction that we must have an $\epsilon$-soft policy.

What we do in the book here is that we slightly ""transform"" our environment into a new environment (i.e. we change the rules that we play by just a little bit). This is done in a clever way, such that the thing I just described that we want to prove becomes mathematically equivalent to just proving that $v_{\pi}$ moves towards (or becomes equal to) $\tilde{v}_*$. Now, if we can prove this for the new environment, we'll automatically have proven the thing that we actually wanted to prove for the ""real"" environment.

",1641,,,,,5/6/2020 17:00,,,,4,,,,CC BY-SA 4.0 20975,1,,,5/6/2020 18:20,,2,726,"
class AtariA2C(nn.Module):
    def __init__(self, input_shape, n_actions):
        super(AtariA2C, self).__init__()

        self.conv = nn.Sequential(
            nn.Conv2d(input_shape[0], 32, kernel_size=8, stride=4),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1),
            nn.ReLU(),
        )

        conv_output_size = self._get_conv_out(input_shape)

        self.policy = nn.Sequential(
            nn.Linear(conv_output_size, 512),
            nn.ReLU(),
            nn.Linear(512, n_actions),
        )

        self.value = nn.Sequential(
            nn.Linear(conv_output_size, 512),
            nn.ReLU(),
            nn.Linear(512, 1),
        )

    def _get_conv_out(self, shape):
        o = self.conv(T.zeros(1, *shape))
        return int(np.prod(o.shape))

    def forward(self, x):
        x = x.float() / 256
        conv_out = self.conv(x).view(x.size()[0], -1)
        return self.policy(conv_out), self.value(conv_out)

In Maxim Lapan's book Deep Reinforcement Learning Hands-on, after implementing the above network model, it says

The forward pass through the network returns a tuple of two tensors: policy and value. Now we have a large and important function, which takes the batch of environment transitions and returns three tensors: batch of states, batch of actions taken, and batch of Q-values calculated using the formula $$Q(s,a) = \sum_{i=0}^{N-1} \gamma^i r_i + \gamma^N V(s_N)$$ This Q_value will be used in two places: to calculate mean squared error (MSE) loss to improve the value approximation, in the same way as DQN, and to calculate the advantage of the action.

I am very confused about a single thing: how and why do we calculate the mean squared error loss to improve the value approximation in the Advantage Actor-Critic algorithm?
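
For reference, this is how I read the $Q(s,a)$ formula above (a rough sketch, not the book's code):

def n_step_q(rewards, last_value, gamma):
    # rewards = [r_0, ..., r_{N-1}], last_value = V(s_N)
    q = last_value
    for r in reversed(rewards):
        q = r + gamma * q
    return q

print(n_step_q([1.0, 0.0, 1.0], last_value=0.5, gamma=0.99))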

",35626,,35626,,5/6/2020 19:40,5/7/2020 15:18,Why do we calculate the mean squared error loss to improve the value approximation in Advantage Actor-Critic Algorithm?,,1,0,,,,CC BY-SA 4.0 20977,1,,,5/6/2020 19:43,,1,55,"

My task is to classify some texts. I have used word2vec to represent text words and I pass them to an LSTM as input. Taking into account that texts do not contain the same number of words, is it a good idea to create text features of fixed dimension using the word2vec word representations of the text and then classify the text using these features as an input of a neural network? And in general is it a good idea to create text features using this method?

",36055,,,,,5/6/2020 19:43,Creating Text Features using word2vec,,0,0,,,,CC BY-SA 4.0 20978,2,,6082,5/6/2020 23:39,,0,,"

I think a nice back-of-the envelope calculation is the intuition for exploding/vanishing gradients in RNNs:

Simplifications

  • diagonalisable weights $U$ and $W$
  • no non-linearities
  • 1 layer

This gives a hidden state $h_t$ at timestep $t$ for input $x_t$: $h_t = W\cdot h_{t-1} + U\cdot x_t$

Let $L_t$ be the loss at timestep $t$ and the total loss $L = \sum_t L_t$. Then (eq. 3 -> 5 in the paper)

$$ \frac{\partial L_t}{\partial W} \sim = \sum_{k=1}^{t} \frac{\partial L_t}{\partial h_t} \frac{\partial h_t}{\partial h_k} \frac{\partial h_k}{\partial W} = \sum_{k=1}^{t}\frac{\partial h_t}{\partial h_k}\times\alpha_{t, k} $$

Let's not care about terms regrouped in $\alpha_{t, k}$:

$$ \frac{\partial h_t}{\partial h_k} = \prod_{k<i\leq t} \frac{\partial h_i}{\partial h_{i-1}} = \prod_{k<i\leq t} W = \prod_{k<i\leq t} PDP^\top = PD^{t-k}P^{\top} $$

So you can easily see$^1$ that if the eigen values of $W$ (in the diagonal matrix $D$) are larger than $1$, the gradient will explode with time, and if they are smaller than $1$, it will vanish.

More detailed derivations in On the difficulty of training recurrent neural networks


$^1$ remember $\lim_{n \to +\infty}|x^n| = +\infty$ if $|x|>1$ and $=0$ for $|x| < 1$

",11351,,11351,,10/5/2020 5:38,10/5/2020 5:38,,,,2,,,,CC BY-SA 4.0 20979,1,,,5/7/2020 5:33,,1,51,"

My girlfriend has a master's degree in linguistics and would like to create an AI chatbot personal project to show potential employers her linguistics skills, since she is struggling to find a job.

Unfortunately she doesn't know how to program, apart from extremely basic Python skills. She has been searching for weeks for tools to create a chatbot without needing to program, and she refuses to ask on forums for help, so I'm asking the StackExchange community. What she needs is something like a plugin/widget for Slack, Facebook Messenger or a website that you can just install and then concentrate on the workflow/data/conversational design, similar to programming in Scratch or Node-RED.

I know nothing about NLP, neural networks or anything like that, and I can't understand for the life of me what exactly a linguist needs to do in AI. Conversely, she doesn't have the computer knowledge to understand how an API works, or how some sort of service and a chat interface are needed to bootstrap her conversational designs with some code somewhere.

So my question is: is there a way for a linguist to create a chatbot all by themselves, without knowing programming or going in too deep? We looked at tools like hubspot.com, where the chatbot design is limited to multiple-choice questions with predefined answers, or which offer expensive paid solutions for companies. I'm sure there are free educational or community open-source platforms doing this.

",21790,,,,,5/7/2020 5:33,"Designing a chatbot personal project with zero coding experience, using an existing platform",,0,2,,,,CC BY-SA 4.0 20980,1,21207,,5/7/2020 7:25,,2,125,"

According to Reinforcement Knowledge Graph Reasoning for Explainable Recommendation

pure KG embedding methods lack the ability to discover multi-hop relational paths.

Why is it so?

",35585,,2444,,12/26/2021 12:13,12/26/2021 12:13,Why can't pure KG embedding methods discover multi-hop relations paths?,,1,0,,,,CC BY-SA 4.0 20981,1,,,5/7/2020 9:09,,1,46,"

So I have a dataset of rendered 2D images of a 3D object, and along with that, I have the image projection coordinates (X, Y) of all the voxels that are in the camera perspective in that image.

The rendered image

The voxel camera projections (this is just for visualization purposes)

I want to build a CNN which takes as input a rendered image and outputs all the voxel camera projection coordinates (X, Y). I was thinking of trying an encoder-decoder based network (like a U-Net), but shouldn't the image obtained after decoding have the same dimensions as the input image? I want my decoder to output the tensor of coordinates, but I am having a hard time thinking about how it would do so.

",36274,,36274,,5/7/2020 9:16,5/7/2020 9:16,An Encoder-Decoder based CNN to predict a tensor of points,,0,0,,,,CC BY-SA 4.0 20982,1,20985,,5/7/2020 9:58,,2,205,"

I have been reading the Sutton and Barto textbook and going through David Silver's UCL lecture videos on YouTube, and I have a question on the equivalence of two forms of the state-action value function written in terms of the value function.

From Question 3.13 of the textbook I am able to write the state-action value function as $$q_{\pi}(s,a) = \sum_{s',r}p(s',r|s,a)(r + \gamma v_\pi(s')) = \mathbb{E}[r + \gamma v_\pi(s')|s,a]\;.$$ Note that the expectation is not taken with respect to $\pi$ as $\pi$ is the conditional probability of taking action $a$ in state $s$. Now, in David Silver's slides for the Actor-Critic methods of the Policy Gradient lectures, he says that $$\mathbb{E}_{\pi_\theta}[r + \gamma v_{\pi_\theta}(s')|s,a] = q_{\pi_\theta}(s,a)\;.$$

Are these two definitions equivalent (in expectation)?

",36821,,2444,,5/7/2020 16:51,1/24/2022 13:20,Are these two definitions of the state-action value function equivalent?,,1,0,,,,CC BY-SA 4.0 20983,1,21007,,5/7/2020 11:09,,3,159,"

I was going through the AlphaGo Zero paper and I was trying to understand everything, but I just can't figure out this one formula:

$$ \pi(a \mid s_0) = \frac{N(s_0, a)^{\frac{1}{\tau}}}{\sum_b N(s_0, b)^{\frac{1}{\tau}}} $$

Could someone decode how the policy makes decisions following this formula? I pretty much understood all of the other parts of the paper, and also the temperature parameter is clear to me.

It might be a simple question, but can't figure it out.

",36825,,2444,,5/12/2020 16:52,2/7/2021 11:50,How does the AlphaGo Zero policy decide what move to execute?,,2,0,,,,CC BY-SA 4.0 20984,1,,,5/7/2020 12:30,,2,81,"

I have a question about heuristic search with multiple agents. I know how heuristic search works with one agent (e.g. one Pacman), but I don't really understand it with multiple agents. Let's say we have this problem where Worm A has to get to its goal state A and Worm B to B, knowing that the agents can move only vertically and horizontally:

If we had only Worm B, the optimal cost from starting position to the goal position would be 9, since one action costs 1 and it'd follow the path RIGHT-RIGHT-RIGHT-RIGHT-RIGHT-RIGHT-UP-UP-UP.

My question is, if we have two worms, like in the picture, the optimal cost would be 9 + optimal cost for Worm A?

Also, strictly for this problem with 2 agents, if we use Manhattan distance as a heuristic for one agent, would it be admissible if we take the average of Worm A and B heuristics for a problem with two agents?

Another question, I know for a fact that sum of two admissible heuristics won't be admissible for one agent but would it be for the problem with two agents?

These two worms are dependent on each other. How? If one worm moves from position X to Y, position X is marked as a wall and is no longer an available field to move into. So, if one worm has been in a specific position, that position is no longer free to move into.

For example, if we have something like B^^^X^^^, where B is Worm B, ^ is an available field and X is a wall, after one RIGHT action it'll look like XB^^X^^^, after one more RIGHT: XXB^X^^^, etc.

",36830,,,,,5/7/2020 12:30,How does heuristic work with multiple agents?,,0,3,,,,CC BY-SA 4.0 20985,2,,20982,5/7/2020 13:13,,3,,"

The definition of the state-action value function is always the same.

Your definition is correct, as $q_{\pi}(s,a)$ is conditioned on $a$, so you don't need to write $q_{\pi}(s,a)$ as a conditional expectation that depends on $\pi$. In fact, the conditional expectation is taken with respect to the probability distribution $p(s',r|s,a)$. However, you need the subscript $\pi$ in $v_\pi(s')$ because $v_\pi(s')$ is defined as the expected return obtained by following $\pi$ starting in $s'$.

If you didn't write $q_{\pi}(s,a)$ in terms of $v_\pi(s')$, then you could write $q_{\pi}(s,a)$ as an expectation that depends on $\pi$, because, in that case, $q_{\pi}(s,a)$, is defined as an expectation of the return (after having taken $a$ in $s$), which depends on $\pi$ (see equation 3.3 of Sutton & Barto book, p. 58). Of course, this way of writing $q_{\pi}(s,a)$ is equivalent to writing it in terms of $v_\pi(s')$.
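
For completeness, here is a short derivation (my own addition, following the book's standard definitions) showing why the two forms coincide; the third line uses the Markov property:

$$
\begin{align}
q_\pi(s,a) &\doteq \mathbb{E}[G_t \mid S_t = s, A_t = a] \\
&= \mathbb{E}[R_{t+1} + \gamma G_{t+1} \mid S_t = s, A_t = a] \\
&= \sum_{s',r} p(s',r|s,a)\left(r + \gamma\, \mathbb{E}_\pi[G_{t+1} \mid S_{t+1} = s']\right) \\
&= \sum_{s',r} p(s',r|s,a)\left(r + \gamma v_\pi(s')\right) = \mathbb{E}[r + \gamma v_\pi(s') \mid s, a].
\end{align}
$$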

I think David Silver's notation might be an abuse of notation. In his equation, the policy is parametrized by $\theta$, so I think he wants to emphasize that you will estimate the state-action value function (critic) based on $\pi_\theta$ (actor). Alternatively, he uses $\pi_\theta$ as a subscript of $\mathbb{E}_{\pi_\theta}$ to emphasize that the future return starting in $s'$, after having taken action $a$ in $s$, still depends on $\pi_\theta$.

",2444,,2444,,1/24/2022 13:20,1/24/2022 13:20,,,,0,,,,CC BY-SA 4.0 20986,2,,20975,5/7/2020 15:04,,1,,"

I believe that the author is referring to how the networks are trained in Deep RL. Consider Deep Q-Learning where the $Q(s,a)$ is approximated using a neural network. Then the loss function used to train the network is $$\mathbb{E}[(r + \gamma \max_{a'} Q(s',a') - Q(s,a))^2]\;.$$ Here, $r + \gamma \max_{a'} Q(s',a')$ is your target, what you want your network to aim towards, and $Q(s,a)$ is what your network predicted. (Note that I have left off some details that can be found in the Nature paper for simplicity).

As for actor-critic methods, most popular actor-critic methods will use the value function to 'replace' the action-value function by using the following relationship: $$\mathbb{E}[r + \gamma v_\pi(s') \mid s, a] = Q_\pi(s,a)\;.$$ This relationship can be proved by looking at exercise 3.13 (or somewhere around there) in the Sutton and Barto textbook. This looks like what the author is doing in the textbook you are reading.

Based on what I said at the start regarding how state-action value functions are trained, it is analogous to train a critic network that approximates the value function in the same way.
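
As a minimal sketch (my own illustration, not from the book you are reading; names and tensor shapes are assumptions), the critic update in an actor-critic method typically looks like this in PyTorch:

    import torch
    import torch.nn.functional as F

    # critic: an nn.Module mapping states to scalar values V(s)
    # (hypothetical names; a batch of transitions (s, r, s', done) is assumed given)
    def critic_update(critic, optimizer, states, rewards, next_states, dones, gamma=0.99):
        with torch.no_grad():
            # bootstrap target: r + gamma * V(s'), zeroed at terminal states
            targets = rewards + gamma * critic(next_states).squeeze(-1) * (1 - dones)
        values = critic(states).squeeze(-1)
        loss = F.mse_loss(values, targets)   # the MSE loss discussed above
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()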


",36821,,36821,,5/7/2020 15:18,5/7/2020 15:18,,,,0,,,,CC BY-SA 4.0 20987,1,21010,,5/7/2020 15:14,,1,265,"

If I have a convolutional neural network, and I convolve my input tensor with a kernel, the output is a feature map. Is an activation function then applied to this feature map?

If it's an image that is a 2D tensor, would the activation function change every single value of this image?

",35615,,2444,,5/7/2020 17:16,5/8/2020 17:23,Are activation functions applied to feature maps?,,1,0,,,,CC BY-SA 4.0 20988,1,,,5/7/2020 15:18,,1,32,"

What would be the best way to create a vector representation of roadmap-like scans? The goal I am trying to achieve is illustrated below. The left side represents the source image, the right side the output in the form of three vectors. The fuzziness on the left is a simulation, not the actual source image:

The actual source image would look more like:

Currently I am looking at a combination of skeletonization and Hough transform. The result is rather messy though, and seems to warrant quite some extra engineering. Any other suggestions?

",36619,,,,,7/7/2020 7:42,How to create vector representation of roadmap like scans,,1,0,,,,CC BY-SA 4.0 20989,2,,9396,5/7/2020 16:28,,1,,"

I will try to answer the question in a less mathematical (and hopefully correct) way.

NOTE: I have used $V_{\pi}$ and $v_{\pi}$ interchangeably.

We start from LHS:

$$\max_s \Bigl\lvert \mathbb{E}_{\pi} \left[ G_{t:t+n} \mid S_t = s \right] - v_{\pi}(s) \Bigr\rvert$$

This can be written in terms of trajectories. Say the probability of observing an $n$-step trajectory (for an $n$-step return) is $p_j^{s}$ from state $S_t = s$. Thus we can write the expected return as the sum of returns from all trajectories, each multiplied by the probability of the trajectory:

$$\mathbb{E}_{\pi} [G_{t:t+n}|S_t = s] = \sum_j p_j^sG_{t:t+n}^j = \sum_j p_j^s [R_{t+1}^j + \gamma R_{t+2}^j + \dots + \gamma^{n-1}R_{t+n}^j + \gamma^n V_{t+n-1}(S_{t + n})^j]$$

We use @Dennis's terminology for the $n$-step rewards, i.e.

$R_{t:t+n}^j \doteq R_{t + 1}^j + \gamma R_{t + 2}^j + \dots + \gamma^{n - 1} R_{t + n}^j$.

Now we know $v_{\pi}(s)$ is nothing but $\mathbb{E}_{\pi} [G_{t:t+n}^{\pi}|S_t = s]$, where I have used $G_{t:t+n}^{\pi}$ to denote the actual returns if we have evaluated the policy completely (the value functions are consistent with the policy) for every state (using infinite episodes, maybe), i.e. $G_{t:t+n}^{\pi} = R_{t:t+n} + \gamma^n V_{\pi}(S_{t + n})$.

So now if we evaluate the equation:

$$\max_s \Bigl\lvert \mathbb{E}_{\pi} \left[ G_{t:t+n} \mid S_t = s \right] - v_{\pi}(s) \Bigr\rvert = \max_s \Bigl\lvert \mathbb{E}_{\pi} \left[ G_{t:t+n} \mid S_t = s \right] - \mathbb{E}_{\pi} [G_{t:t+n}^{\pi}|S_t = s]\Bigr\rvert$$

which can be further written in the form of trajectory probabilities (for easier comprehension):

$$\max_s \Bigl\lvert \sum_j p_j^s(R_{t:t+n}^j+\gamma^n V_{t+n-1}^j(S_{t + n})) - \sum_j p_j^s(R_{t:t+n}^j+\gamma^n V_{\pi}^j(S_{t + n})) \Bigr\rvert$$

Now the equation can be simplified by cancelling the reward terms ($R_{t:t+n}^j$), as they are the same for a given trajectory and thus common to both terms; we get: $$\max_s \Bigl\lvert \gamma^n\sum_j p_j^s( V_{t+n-1}^j(S_{t + n})-V_{\pi}^j(S_{t + n}))\Bigr\rvert$$

This is basically the expectation of the deviation of each state at the $n$-th step from its actual value $V_{\pi}$, starting from $S_t = s$, multiplied by a discount factor.

Now using the identity $E[X] \leq \max X$ we get:

$$\max_s \Bigl\lvert \gamma^n\sum_j p_j^s( V_{t+n-1}^j(S_{t + n})-V_{\pi}^j(S_{t + n}))\Bigr\rvert \leq \max \Bigl\lvert \gamma^n( V_{t+n-1}^j(S_{t + n})-V_{\pi}^j(S_{t + n}))\Bigr\rvert$$

Which can finally be written as: $$\max_s \Bigl\lvert \mathbb{E}_{\pi} \left[ G_{t:t+n} \mid S_t = s \right] - v_{\pi}(s) \Bigr\rvert \leq \max \Bigl\lvert \gamma^n( V_{t+n-1}^j(S_{t + n})-V_{\pi}^j(S_{t + n}))\Bigr\rvert$$

Now the RHS holds only for those states reachable from $S_t = s$ via a trajectory, but since it involves a maximization, we can include the whole state space in the $\max$ operation without any problem, to finally write:

$$\max_s \Bigl\lvert \mathbb{E}_{\pi} \left[ G_{t:t+n} \mid S_t = s \right] - v_{\pi}(s) \Bigr\rvert \leq \max_s \Bigl\lvert \gamma^n( V_{t+n-1}(s)-V_{\pi}(s))\Bigr\rvert$$

which concludes the proof.

",,user9947,,user9947,5/8/2020 4:07,5/8/2020 4:07,,,,0,,,,CC BY-SA 4.0 20991,1,,,5/7/2020 16:47,,1,67,"

I'm trying to process product data for an e-commerce platform. The goal is to understand products' size.

Just to show you some examples on how messy product dimension description is:

Overall Dimensions: 66 in W x 41 in D x 36 in H
Overall: 59 in W x 28.75 in D x 30.75 in H
92w 37d 32h
86.6 in W x 33.9 in D x 24 in H
W: 95.75" D: 36.5" H: 28.75"
W: 96" D: 39.25" H: 32"
118"W x 35"D x 33"T.
28 L x 95 W x 41 H
95" W x 26.5" H x 34.75" D
98"W x 39"D x 29"H
28" High x 80" Wide x 32" Deep
Now, assuming that the product dimension description is short (< 60 characters), I trained a two-layer bidirectional LSTM, which can handle this task perfectly.

But the problem is that the above dimension text is usually embedded in a long context (as part of the product description). How can I extract the useful information from the long context and understand it? My LSTM can only accept a context size of 60.

What language model is more suitable for this?

",33082,,,,,12/4/2021 18:17,Which NLP model to use to handle long context?,,1,2,,,,CC BY-SA 4.0 20993,1,,,5/7/2020 17:57,,3,131,"

If I have the fitness of each genome, how do I determine which genome will crossover with which, and so on, so that I get a new population?

Unfortunately, I can't find anything about it in the original paper, so I'm asking here.

",36844,,2444,,5/7/2020 18:02,5/25/2021 17:07,How do I determine the genomes to use for crossover in NEAT?,,2,1,,,,CC BY-SA 4.0 20994,1,,,5/7/2020 18:30,,1,151,"

I am fairly new to reinforcement learning (RL) and deep RL. I have been trying to create my first agent (using A3C) that selects an optimal path, with the reward being the associated completion time (the more optimal the path, the less time packets take to be delivered).

However, in each episode/epoch, I do not have a certain batch size for updating my NN's parameters.

To make it more clear, let's say that, in my environment, on each step, I need to perform a request to the servers, and I have to select the optimal path.

Now, each execution in my environment does not contain the same number of files. For instance, I might have 3 requests in one run, and then 5 requests in the next one.

The A3C code I have at hand has a batch size of a 100. In my case, that batch size is not known a priori.

Is that going to affect my training? Should I find a way to keep it fixed somehow? And how can one define an optimal batch size for updating?

",35978,,2444,,5/20/2020 12:16,5/20/2020 12:16,How should I deal with variable batch size in A3C?,,0,0,,,,CC BY-SA 4.0 20995,1,21156,,5/7/2020 20:03,,1,318,"

On-Policy Algorithms like PPO directly maximize the performance objective or an approximation of it. They tend to be quite stable and reliable but are often sample inefficient. Off-Policy Algorithms like TD3 improve the sample inefficiency by reusing data collected with previous policies, but they tend to be less stable. (Source: Kinds of RL Algorithms - Spinning up - OpenAI)

Looking at learning curves comparing SOTA algorithms, we see that off-policy algorithms quickly improve performance at the training's beginning. Here an example:

Can we start training off-policy and, after some time, use the learned and quickly improved policy to initialize the policy network of an on-policy algorithm?

",35821,,,,,5/14/2020 13:41,Can we combine Off-Policy with On-Policy Algorithms?,,1,0,,,,CC BY-SA 4.0 20997,1,,,5/7/2020 21:14,,0,631,"

I wanted to train a model that recognizes sign language. I found a dataset for this and was able to create a model that gets 94% accuracy on the test set. I have trained models before, and my main goal is not to have the best model (I know the 94% could easily be tuned up). However, these models were always for class exercises and thus were never used on 'real' new data.

So I took a new picture of my hand that I know I wanted to be a certain letter (let's assume A).

Since my model was trained on 28x28 images, I needed to resize my own image because it was larger. After that, I fed this image to my model, only to get a wrong classification.

https://imgur.com/a/QE6snTa

These are my pictures (upper-left = my own image (expected class A), upper-right = an image of class A (that my model correctly classifies as A), bottom = picture of class Z (the class my image was classified as)).

You can clearly see that my own image looks far more like the image of class A (that I wanted my model to predict) than the class it did predict.

What could be the reasons that my model does not work on real-life images? (If code is wanted, I can provide it, of course, but since I don't know where I go wrong, it seemed out of place to copy all the code.)

",34359,,,,,6/8/2020 15:56,"My CNN model performs bad on new (self-created) pictures, what are possible reasons?",,3,0,,,,CC BY-SA 4.0 20998,2,,20997,5/8/2020 1:03,,1,,"

I'm assuming that you used LeNet (or some other model with a small number of parameters) since your training image size is 28x28. Note that LeNet doesn't generalize well to new images. I think it performs fine (>90%) on MNIST but not so well on CIFAR10 (>60%), although both datasets contain similarly sized images. (Just trying to remember the performance from PyTorch implementations.) It's more about whether the model has the capacity to learn the complexity of the dataset. CIFAR10 is more complex and harder to model than MNIST.

LeNet is a small image classification model (in terms of capacity) so it cannot nicely learn the correlation between pixels of input images well and therefore doesn’t perform well on unseen images.

In your case it seems like your model has overfit to the training examples. It might perform well on test images because both training and test subsets are sampled from the same data-generating distribution, but the real-world images it encounters in the future might be different (like your own hand). If it doesn't perform well on unseen images, we say it has not generalized well, which looks like the case in your situation. You need a validation set to check that your model generalizes to unseen images. If you have one, then you should use it with the early-stopping regularization technique. You can also add other regularizers to your model (the simplest one is weight decay).

But instead of inventing your own network architecture, why don't you use a model like ResNet? Just fine-tune a pre-trained ResNet on your own dataset. I'd prefer to fine-tune in this situation because the data distribution it was trained on (ImageNet) is pretty different from your hand-sign dataset. In the other case, if your dataset contained nature and surroundings images, I'd rather freeze the parameters of the fixed feature-extractor layers and train only the last few layers of ResNet (or a similar model).
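
A minimal sketch of what fine-tuning a pre-trained ResNet could look like in PyTorch (the number of classes is an assumption; adjust it to your sign-language dataset):

    import torch
    import torch.nn as nn
    from torchvision import models

    num_classes = 24                      # assumption: set to the number of sign classes in your data
    model = models.resnet18(pretrained=True)

    # Option 1: fine-tune everything (keep all parameters trainable).
    # Option 2: use it as a fixed feature extractor by freezing the backbone:
    # for param in model.parameters():
    #     param.requires_grad = False

    # Replace the final fully-connected layer with one matching your classes.
    model.fc = nn.Linear(model.fc.in_features, num_classes)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()
    # ...then train with your usual loop; note that ResNet expects 3-channel images
    # resized to around 224x224, so the 28x28 inputs need upscaling.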

I hope this helps!

",5260,,,,,5/8/2020 1:03,,,,0,,,,CC BY-SA 4.0 20999,1,,,5/8/2020 1:25,,1,63,"

Are there any general guidelines for dealing with imbalanced data through upsampling/downsampling?

This Google developer guide suggests performing downsampling with upweighting, but for the most part I've found upsampling usually works better in practice (some corroboration).

Is there any clear consensus or empirical study of what works in practice, or when to use which? Does it matter which classification algorithm you use?

",18086,,11539,,5/9/2020 22:17,5/9/2020 22:17,Are there any general guidelines for dealing with imbalanced data through upsampling or downsampling?,,0,0,,,,CC BY-SA 4.0 21000,1,,,5/8/2020 2:56,,0,1156,"

I'm looking to implement a AI for the turn-based game Mastermind in Node.JS, using Google's Tensorflow library. Basically the AI needs to predict the 4D input for the optimal 2D output [0,4] with a given list of 4D inputs and 2D outputs from previous turns in the form of [input][output].

The optimal output would be [0,4], which would be the winning output. The training data looks like this:

[1,2,3,4][0,1] [0,5,2,6][3,1] [0,2,5,6][2,2] [6,5,2,0][4,0] [5,2,0,6][0,4]

So given these previous turns

[1,2,3,4][0,1] [0,5,2,6][3,1] [0,2,5,6][2,2] [6,5,2,0][4,0]

the AI would predict an input of [5,2,0,6] for the output [0,4]. I've looked at this post, but it talks only about inferring an input for an output, without any context. In Mastermind, the context of previous guesses and the results from them is critical.

My algorithm would need to use the information from previous turns to determine the best input for the winning output ([0,4]).

So my question is: How can I implement AI for Mastermind?

",36853,,1847,,5/9/2020 11:48,6/9/2020 16:07,How to implement AI strategy for Mastermind,,2,4,,,,CC BY-SA 4.0 21002,1,21015,,5/8/2020 8:14,,4,196,"

We have all heard about how beneficial AI can be in health. There are plenty of papers and research about confronting diseases like cancer. However, in 2020, COVID-19 is one of the most serious health problems, having caused thousands of deaths worldwide.

Is AI already being used in the drug industry to combat the COVID-19? If yes, can you, please, provide a reference?

",36055,,32410,,9/25/2021 12:55,9/26/2021 8:38,Is AI already being used in the drug industry to combat the COVID-19?,,2,1,,,,CC BY-SA 4.0 21003,1,,,5/8/2020 11:15,,2,202,"

How long should the state-dependent baseline be trained at each iteration? Or what baseline loss should we target at each iteration for use with policy gradient methods?

I'm using this equation to compute the policy gradient:

$$ \nabla_{\theta} J\left(\pi_{\theta}\right)=\underset{\tau \sim \pi_{\theta}}{\mathrm{E}}\left[\sum_{t=0}^{T} \nabla_{\theta} \log \pi_{\theta}\left(a_{t} | s_{t}\right)\left(\sum_{t^{\prime}=t}^{T} R\left(s_{t^{\prime}}, a_{t^{\prime}}, s_{t^{\prime}+1}\right)-b\left(s_{t}\right)\right)\right] $$

Here it is mentioned to use one or more gradient steps, so is it a hyper-parameter to be found using random search?

Is there some way we can use an adaptive method to find out when to stop?

In an experiment to train Cartpole-v2 using a policy gradient with baseline, I found the results are better when applying 5 updates than when only a single update was applied.

Note: I am referring to the number of updates to take on a single batch of q values encountered across trajectories collected using current policy.

",36861,,2444,,5/18/2020 12:27,10/5/2022 19:00,How long should the state-dependent baseline for policy gradient methods be trained at each iteration?,,1,0,,,,CC BY-SA 4.0 21005,2,,20988,5/8/2020 13:32,,1,,"

It turned out that my intuition was not far off. The skeletonization is a good step. The Hough transform though is not a good way to create a graph of the roadmap. It seems that the Ramer–Douglas–Peucker algorithm can help out here. This algorithm first takes all the skeleton pixels as input, and sees this as a starting graph. The algorithm then proceeds to remove intermediary pixels that do not add information to the shape of the graph.
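
A rough sketch of how this could be chained together (my own illustration, assuming a binary road mask, OpenCV 4 and scikit-image as example libraries; for a true centreline graph of a branching network you would walk the skeleton pixels instead of using contours):

    import cv2
    import numpy as np
    from skimage.morphology import skeletonize

    # road_mask: binary image where road pixels are foreground (assumed to exist already)
    road_mask = cv2.imread("road_mask.png", cv2.IMREAD_GRAYSCALE) > 127

    skeleton = skeletonize(road_mask).astype(np.uint8)

    # Extract the skeleton pixel chains as contours, then simplify each with Douglas-Peucker
    contours, _ = cv2.findContours(skeleton, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    polylines = [cv2.approxPolyDP(c, 2.0, False) for c in contours]
    # Each polyline is a reduced set of vertices approximating one road segment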

",36619,,36619,,7/7/2020 7:42,7/7/2020 7:42,,,,0,,,,CC BY-SA 4.0 21006,2,,20997,5/8/2020 14:06,,1,,"

This is not an uncommon situation. The data set your model is trained on represents a certain probability distribution. Your test set is most likely a good representation from that distribution so your test results will be good. However when you use real world images they may or may not have a similar distribution. Typically if the training set is large and diverse it is a good representation of the distribution and when the model is used to classify a real world image it will do so correctly. I think I know of the data set you are working with and if I recall it is fairly large. So the problem may be that your model is not complex enough to fully capture the complexity of the data. You can test that fairly simply by using transfer learning with a model that is known to be effective for image classification. I recommend using the MobileNet model. It contains only about 4 million parameters but is about as accurate as larger models containing 10 times as many parameters. So MobileNet is not computationally expensive to train. Documentation can be found here.

",33976,,,,,5/8/2020 14:06,,,,0,,,,CC BY-SA 4.0 21007,2,,20983,5/8/2020 14:14,,1,,"

The formula in question uses a function N(state, action) that defines a visit count of a state-action pair (introduced on page 3). To describe how it is used, let's first describe the steps of AlphaGo Zero as a whole.

There are 4 ""phases"" to the Monte-Carlo tree search in AlphaGo Zero as depicted in Figure 2. The first 3 expand and update the tree and together are the ""search"" in Monte-Carlo tree ""search"" in AlphaGo Zero.

  1. Select an edge (action) in the tree with maximum action-value Q (plus upper confidence bound U)

  2. Expand and evaluate the leaf node using the network

  3. Backup - Action-values Q are updated to track the evaluations of the value in the subtree.

  4. Play - After the ""search"" is complete, the search probabilities are returned proportional to the visitation count of the nodes of the tree.*

* This is where the formula in question comes into play (pun intended). During the ""search"", nodes that looked good were expanded on and thus their visitation counts were updated. So the formula is essentially describing this logic:

Good nodes have higher counts, so choose the nodes with higher counts often

But what if there is a really good node that hasn't been visited a lot, so it's not chosen?

This is where the temperature parameter comes in:

  • If the temperature is 1, this selects moves proportionally to their visit counts.
  • If the temperature is 0 (not actually but rather in infinitesimal that approaches 0), and with some added noise, it ""ensures that all moves may be tried, but the search may still overrule bad moves.""

So altogether, the formula is saying: Pick good things most of the time

The mathematical evaluation of the formula is described below:

AlphaGo Zero defines the probability of each action (aka the policy) by that formula. If there are 3 nodes, A, B and C, and they have each been visited 10, 70, and 20 times (100 times in total), respectively, then the probability of taking those actions is:

  • P(A) = 10/100 = .10
  • P(B) = 70/100 = .7
  • P(C) = 20/100 = .2
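
A small sketch of the formula in code (the visit counts are the hypothetical ones from the example above, just to show the effect of the temperature):

    import numpy as np

    def policy_from_visit_counts(counts, tau):
        # pi(a|s0) = N(s0,a)^(1/tau) / sum_b N(s0,b)^(1/tau)
        scaled = np.power(np.asarray(counts, dtype=float), 1.0 / tau)
        return scaled / scaled.sum()

    counts = [10, 70, 20]
    print(policy_from_visit_counts(counts, tau=1.0))   # [0.1, 0.7, 0.2] - proportional to visits
    print(policy_from_visit_counts(counts, tau=0.1))   # almost all mass on the most-visited move
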
",4398,,4398,,5/12/2020 15:02,5/12/2020 15:02,,,,1,,,,CC BY-SA 4.0 21009,1,,,5/8/2020 14:41,,3,67,"

I was reading about the average reward setting for continuous tasks in Rich Sutton's book (page 202, 2nd edition). There he performs a simplification of the expected reward under the limit as time approaches infinity. I marked this point in this picture:

The book does not clearly explain the steps to simplify the above expression. I searched the web for the solution, but there is no clear explanation of it. Can anyone explain the marked point?

",28048,,,,,10/6/2020 1:05,Simplification of expected reward under the limit in continuous tasks,,0,0,,,,CC BY-SA 4.0 21010,2,,20987,5/8/2020 17:23,,1,,"

Yes, it is applied element-wise on every single value of the feature map. Assuming ReLU as your nonlinearity function, all negative values of the image feature map are set to zero, and the rest of the elements stay unchanged.
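
For instance (a minimal numpy illustration):

    import numpy as np

    feature_map = np.array([[ 1.5, -0.3],
                            [-2.0,  0.7]])
    relu = np.maximum(feature_map, 0)   # applied to every single value of the feature map
    print(relu)
    # [[1.5 0. ]
    #  [0.  0.7]]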

",36785,,,,,5/8/2020 17:23,,,,0,,,,CC BY-SA 4.0 21011,1,,,5/8/2020 19:35,,1,44,"

I have, hopefully, a fundamental question about whether I understand things right. (Thank you in advance, and sorry for my English, which might not be so good.)

1 - Preamble 1: I know that if we have 2 independent variables, both of a continuous type, it is OK to represent them as a 2D plane in a 3D space:

2 - Preamble 2: I have seen that many times, when we have to deal with continuous and categorical variables (male\female, for example), we represent them like this (note the lines are parallel):

3 - Assumption: In the beginning, I assumed that it is a 2D representation of this 3D case:

4 - Discussion 1: But if my assumption above was right, why do they always ""picture"" it with parallel lines? After all, this is a very specific situation. In most cases, both regression lines will not be parallel; furthermore, they may have different slope directions (one negative and another positive). For example:

5 - Discussion 2: On the other hand, parallel models may be explained in this way: if we add a regression hyperplane which ""somehow"" fits both groups (male and female), we will get the parallel lines:

6 - Finally, my questions are quite simple.

Question 5.1: Did I understand correctly the nature of the parallel lines, as I showed above (in Discussion 2)?

Question 5.2: If I was right in 5.1, I assume that in such cases hyperplane regression is quite a bad predictor. Am I right?

",36453,,,,,5/8/2020 19:35,3d representation of a regression with two independent variables one of them is categorical and another is continuous,,0,0,,,,CC BY-SA 4.0 21012,1,,,5/8/2020 19:38,,4,208,"

Modern artificial neural networks use a lot more functions than just the classic sigmoid, to the point I'm having a hard time really seeing what classifies something as a ""neural network"" over other function approximators (such as Fourier series, Bernstein polynomials, Chebyshev polynomials or splines).

So, what makes something an artificial neural network? Is there a subset of theorems that apply only to neural networks?

Backpropagation is classic, but that is the multi-variable chain rule, what else is unique to neural networks over other function approximators?

",32390,,2444,,5/9/2020 14:56,6/15/2020 23:05,What are the differences between artificial neural networks and other function approximators?,,1,0,,,,CC BY-SA 4.0 21014,2,,21002,5/8/2020 20:20,,2,,"

I'm not sure if it is being used directly in the industry, but here is an interesting article on research being done by 3 UK universities using AI.

",36821,,,,,5/8/2020 20:20,,,,0,,,,CC BY-SA 4.0 21015,2,,21002,5/8/2020 21:09,,5,,"

A global race is underway to discover a vaccine, drug, or combination of treatments that can disrupt the SARS-CoV-2 virus.

The problem is, there are more than a billion such molecules. A researcher would conceivably want to test each one against the two dozen or so proteins in SARS-CoV-2 to see their effects. Such a project could use every wet lab in the world and still not be completed for centuries.

Computer modelling is a common approach used by academic researchers and pharmaceutical companies as a preliminary, filtering step in drug discovery. However, in this case, even every supercomputer on Earth could not test those billion molecules in a reasonable amount of time.

Folding@home is a distributed computing project run by Stanford University. The aim of the project is to examine how proteins fold, and it does this using spare computing power. However, there is a lot of research in progress that is harnessing the potential of artificial intelligence to develop potential treatments to combat COVID-19.

Check this recent article by Tyler Orton in BIV that focuses on how artificial intelligence is used to accelerate the process of drug discovery: Drug research turns to artificial intelligence in COVID-19 fight

Here is a list of some companies that are using an AI-driven approach for drug discovery.

The Hong Kong-based company Insilico Medicine is a developer of the comprehensive drug discovery and biomarker development platform GENTRL, and a pioneer in the application of generative adversarial networks (GANs) to drug discovery.

Insilico Medicine published a paper in September last year, titled ""Deep learning enables rapid identification of potent DDR1 kinase inhibitors"", in the highly reputed journal Nature Biotechnology. The paper describes a timed challenge, where the new artificial intelligence system called Generative Tensorial Reinforcement Learning (GENTRL) designed six novel inhibitors of DDR1, a kinase target implicated in fibrosis and other diseases, in 21 days.

Four compounds were active in biochemical assays, and two were validated in cell-based assays. One lead candidate was tested and demonstrated favorable pharmacokinetics in mice.

",36737,,32410,,9/26/2021 8:38,9/26/2021 8:38,,,,0,,,,CC BY-SA 4.0 21016,2,,20645,5/8/2020 22:19,,2,,"

In the, presumably final, printed version, the last two equal signs are approximations. This is because, over a large number of weight updates where you have been sampling, the expectation will be approximated by Monte Carlo.

",36821,,,,,5/8/2020 22:19,,,,2,,,,CC BY-SA 4.0 21018,1,21025,,5/8/2020 23:08,,4,157,"

From what I can find, reinforcement learning algorithms work on a grid or 2-dimensional environment. How would I set up the problem for an approximate solution when I have a 1-dimensional signal from a light sensor? The sensor sits some distance away from a lighthouse. The intent would be to take the reading from the sensor and determine the orientation of the lighthouse beam.

The environment would be a lighthouse beam, the state would be the brightness seen at the sensor for a given orientation, and the agent would be the approximate brightness/orientation? What would the reward be? What reinforcement learning algorithm would I use to approximate the lighthouse orientation given sensor brightnesses?

",36877,,,,,5/9/2020 14:57,Is there 1-dimensional reinforcement learning?,,1,0,,,,CC BY-SA 4.0 21019,1,,,5/9/2020 2:44,,2,394,"

I've implemented the Monte Carlo tree search (MCTS) algorithm for a connect four game I've built. The MCTS agent beats a random choice agent 90-100% of the time, but I’m still able to beat it pretty easily. It even misses obvious three in a row opportunities where it just needs to add one more token to win (but places it elsewhere instead).

Is this normal behavior, or should the MCTS agent be able to beat me consistently too? I'm allowing it to grow its tree for 2 seconds before getting it to return its chosen action - could it be that it needs longer to think?

",27629,,2444,,5/9/2020 11:17,5/9/2020 11:17,Should Monte Carlo tree search be able to consistently beat me in the connect four game?,,1,10,,,,CC BY-SA 4.0 21020,1,,,5/9/2020 4:44,,2,245,"

I am a beginner in machine learning and neural networks. I have only used neural networks for classification problems. My aim is to modify it so that it can work for polynomial regression as well. In my problem, I have three inputs and three outputs. My aim is to predict these three outputs based on these three inputs. The outputs are real-valued, and can take positive and negative values.

How should I choose the activation functions? I have only used sigmoid.

",36881,,2444,,5/9/2020 11:19,5/9/2020 11:19,Which activation functions should I use for polynomial regression?,,0,3,,,,CC BY-SA 4.0 21021,1,21026,,5/9/2020 5:05,,2,1206,"

I am reading Sutton and Barto's reinforcement learning textbook and have come across the finite Markov decision process (MDP) example of the blackjack game (Example 5.1).

Isn't the environment constantly changing in this game? How would the transition probabilities be fixed in such an environment, when both you and the dealer draw cards?

",32780,,2444,,5/9/2020 11:21,5/9/2020 11:37,How can blackjack be formulated as a Markov decision process?,,1,0,,,,CC BY-SA 4.0 21022,2,,21019,5/9/2020 9:13,,1,,"

You should not let the tree grow for only two seconds; rather, you should use a fixed number of simulations, e.g. 1000 or so. I used 10000 simulations for making a single move in a tic-tac-toe game, and it was working fine for me. Also, after the agent has chosen the move, you do not have to restart the statistics (N = visit count, V = expected reward, U = UCT score) from scratch: you can reuse the current statistics and replace the root node with the chosen node.
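
A small sketch of the subtree-reuse idea (hypothetical node structure with a children dict keyed by action; adapt to your own tree implementation):

    def advance_root(root, chosen_action, opponent_action):
        """Reuse the statistics gathered so far instead of rebuilding the tree each move."""
        node = root.children[chosen_action]          # keep N, V, U of the chosen subtree
        node = node.children.get(opponent_action)    # descend again after the opponent moves
        if node is None:
            return None                              # unexplored branch: fall back to a fresh root
        node.parent = None                           # detach so it becomes the new root
        return node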

",28048,,28048,,5/9/2020 9:42,5/9/2020 9:42,,,,4,,,,CC BY-SA 4.0 21025,2,,21018,5/9/2020 9:56,,0,,"

From what I can find, reinforcement algorithms work on a grid or 2-dimensional environment.

A lot of teaching materials use a ""grid world"" presentation to demonstrate basic reinforcement learning (RL). However, the underlying Markov Decision Process (MDP) theory works on an arbitrary graph of connected states. This graph could be based on subdividing a metric space of any dimensions into a grid of the same dimensions (and using tiles of any shape that worked in that dimension). However, it is not limited to that, the state space does not need to be a metric that represents distances or physical properties.

In practice, the set of states can be arbitrary objects, connected via state transitions in any consistent way. Provided the transition probability function $p(s'|s,a)$ is consistent, the environment could be used in a RL problem.

A very common state description is that the state is a vector of numbers that capture all the variables relevant to the problem. The environment can then be measurements taken in the real world of those variables, or the same quantities provided by a simulation. That state vector can be of any size, and have arbitrary constraints on individual components. This is no different from numerical representations of other machine learning problems, such as the inputs allowed to a neural network.

The environment would be a lighthouse beam, the state would be the brightness seen at the sensor for a given orientation, and the agent would be the approximate brightness/orientation?

Something not quite right about the description there. There does not seem to be any action that the agent takes.

What would the reward be?

It would be whatever measure of reaching a goal or maintaining a ""good"" result that is appropriate for the problem. You do not give any information about the goal in your description.

If your goal is to light up a moving sensor with the highest brightness, then the brightness measured at the sensor, transformed into suitable units, would seem to be a good candidate for a reward function (you would also need the state to give information about the target - where it had been seen last for instance). Assuming the problem is continuous, you would also need a discount factor.

What reinforcement learning algorithm would I use to approximate the lighthouse orientation given sensor brightnesses?

Generally RL algorithms estimate rewards, or generate policies. If the lighthouse orientation is the action you wanted to take, then pretty much all RL algorithms can do one or the other to allow you to do this. The differences are in things like complexity or speed of the algorithm, what approximations you are willing to take etc.

You don't give enough information about the problem to even nearly suggest a ""best"" algorithm. Before you start, you will need to determine a more thorough description of state, action and rewards, that will define the problem. Once you have a more formal description of the problem, that may suggest which algorithms would be good starting points.

",1847,,1847,,5/9/2020 14:57,5/9/2020 14:57,,,,1,,,,CC BY-SA 4.0 21026,2,,21021,5/9/2020 11:37,,2,,"

Isn't the environment constantly changing in this game?

The current state of the agent and the environment is constantly changing as you play, but not necessarily the transition probabilities. For simplicity, you may assume that the transition probabilities do not change (e.g. if the dealer and the deck are the same every time you play).

How would the transition probabilities be fixed in such an environment, when both you and the dealer draw cards?

The actions of the dealer will be incorporated into the transition probabilities of the environment. Whenever the player (or RL agent) takes an action, then it will receive a reward, according to the reward function of the environment (the rules of the game), and the agent and the environment will be moved to the next state, according to the transition probabilities, which do not have to change for stochasticity to occur in the environment. In fact, these transition probabilities already incorporate this stochasticity.

Also, even if the environment was changing, you could still model your problem as an MDP, but the MDP would change accordingly.

The example 5.1 of the book (that you mention) actually explains in detail how to formulate this game as a finite MDP.
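
To make this concrete, here is a minimal sketch of how the state and a stochastic transition might look in code (my own simplification of the book's formulation, not a full implementation):

    import random

    # State as in Sutton & Barto's Example 5.1: (player sum, dealer's showing card, usable ace).
    # Actions: 0 = stick, 1 = hit.
    def draw_card():
        # infinite-deck assumption: cards 1-10, face cards count as 10
        return min(random.randint(1, 13), 10)

    def step_hit(player_sum, usable_ace):
        """One 'hit': the randomness of the deck is what the transition probabilities encode."""
        card = draw_card()
        player_sum += card
        if player_sum > 21 and usable_ace:
            player_sum -= 10
            usable_ace = False
        done = player_sum > 21          # bust: reward -1, episode ends
        reward = -1 if done else 0
        return (player_sum, usable_ace), reward, done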

",2444,,,,,5/9/2020 11:37,,,,1,,,,CC BY-SA 4.0 21028,2,,21012,5/9/2020 14:48,,2,,"

First of all, neural networks are not (just) defined by the fact that they are typically trained with gradient descent and back-propagation. In fact, there are other ways of training neural networks, such as evolutionary algorithms and the Hebb's rule (e.g. Hopfield networks are typically associated with this Hebbian learning rule).

The first difference between neural networks and other function approximators is conceptual. In neural networks, you typically imagine that there are one or more computational units (often called neurons) that are connected in different and often complex ways. The human can choose these connections (or they could also be learned) and the functions that these units compute given the inputs. So, there's a great deal of flexibility and complexity, but, often, also a lack of rigorousness (from the mathematical point of view) while using and designing neuron networks.

The other difference is that neural networks were originally inspired by the biological counterparts. See A logical calculus of the ideas immanent in nervous activity (1943) by Warren McCulloch and Walter Pitts, who proposed, inspired by neuroscience, the first mathematical model of an artificial neuron.

There are other technical differences. For example, the Taylor expansion of a function is typically done only at a single value of the domain, it assumes that the function to be approximated is differentiable multiple times, and it makes uses of the derivatives of such a function. Fourier series typically approximate functions with a weighted sum of sinusoids. Given appropriate weights, the Fourier series can be used to approximate an arbitrary function in a certain interval or the entire function (if the function you want to approximate is also periodic). On the other hand, neural networks attempt to approximate functions of the form $f: [0, 1]^n \rightarrow \mathbb{R}$ (at least, this is the setup in the famous paper that proved the universality of neural networks) in many different ways (for example, weighted sums followed by sigmoids).

To conclude, neural networks are quite different from other function approximation techniques (such as Taylor or Fourier series) in the way they approximate functions and their purpose (i.e. which functions they were supposed to approximate and in which context).

",2444,,2444,,5/16/2020 22:56,5/16/2020 22:56,,,,0,,,,CC BY-SA 4.0 21030,1,,,5/9/2020 15:06,,1,48,"

I have a dataset A of videos. I've extracted the feature vector of each video (with a convolutional neural network, via transfer learning) creating a dataset B. Now, every vector of the dataset B has a high dimension (about 16000), and I would like to classify these vectors using an RBF-ANN (there are only 2 possible classes).

Is the high dimensionality of input vectors a problem for a radial basis function ANN? If yes, is there any way to deal with it?

",36363,,2444,,5/9/2020 22:26,5/9/2020 22:26,Is the high dimensionality of input vectors a problem for a radial basis function neural network?,,0,0,,,,CC BY-SA 4.0 21032,1,,,5/9/2020 18:03,,2,93,"

Why can't DQN and similar RL algorithms be used for self-driving cars?

The reason why I am curious is that these algorithms successfully play Go and other multi-state games.

",36107,,2444,,11/23/2020 13:30,11/23/2020 13:30,Why can't DQN be used for self-driving cars?,,1,0,,,,CC BY-SA 4.0 21033,1,,,5/9/2020 19:49,,1,64,"

I am reading a paper implementing a deep deterministic policy gradient algorithm for portfolio management. My question is about a specific neural network implementation they depict in this picture (paper, picture is on page 14).

The first three steps are convolutions. Once they have reduced the initial tensor into a vector, they add that little yellow square entry to the vector, called the cash bias, and then they do a softmax operation.

The paper does not go into any detail about what this bias term could be, they just say that they add this bias before the softmax. This makes me think that perhaps this is a standard step? But I don't know if this is a learnable parameter, or just a scalar constant they concatenate to the vector prior to the softmax.

I have two questions:

1) When they write softmax, is it safe to assume that this is just a softmax function, with no learnable parameters? Or is this meant to depict a fully connected linear layer, with a softmax activation?

2) If it's the latter, then I can interpret the cash bias as being a constant term they concatenate to the vector before the fully connected layer, just to add one more feature for the cash assets. However, if softmax means just a function, then what is this cash bias? It must be a constant that they implement, but I don't see what the use of that would be, how can you pick a constant scalar that you are confident will have the intended impact on the softmax output to bias the network to put some weight on that feature (cash)?

Any comments/interpretations are appreciated!

",35809,,,,,12/3/2022 12:07,What do the authors of this paper mean by the bias term in this picture of a neural network implementation?,,1,0,,,,CC BY-SA 4.0 21034,2,,18341,5/9/2020 20:04,,1,,"

Probably to as many as possible. The average acceptance rate of papers is around 20%. You can find the best conferences on AI & ML Event.

",36893,,,,,5/9/2020 20:04,,,,0,,,,CC BY-SA 4.0 21040,1,,,5/10/2020 5:53,,1,30,"

I am trying to train an agent for a card game called Callbreak. I tried inputs like all the opponents' discarded cards, all hands, everything a human can see and reason about with ""common sense"". I fed this to the agent, but it's not learning the way it should: after 100M steps it has only learned to play as basically as any new/bad player would. I have never tried to train a card game before, so I don't know what I should feed it and how much to feed it; any help will be appreciated.

I have trained for 100M+ steps before; it just keeps repeating these drops and never gets more rewards.

config I am using is:

default:
    trainer: ppo
    batch_size: 1024
    beta: 5.0e-3
    buffer_size: 10240
    epsilon: 0.2
    hidden_units: 256
    lambd: 0.95
    learning_rate: 3.0e-4
    learning_rate_schedule: linear
    max_steps: 100.0e5
    memory_size: 128
    normalize: false
    num_epoch: 3
    num_layers: 3
    time_horizon: 64
    sequence_length: 64
    summary_freq: 100000
    use_recurrent: false
    vis_encode_type: simple
    reward_signals:
        extrinsic:
            strength: 1.0
            gamma: 0.99

Tensorboard

New Training Results

default:
    trainer: ppo
    batch_size: 1024
    beta: 5.0e-3
    buffer_size: 10240
    epsilon: 0.2
    hidden_units: 512
    lambd: 0.95
    learning_rate: 3.0e-4
    learning_rate_schedule: linear
    max_steps: 500.0e5
    memory_size: 128
    normalize: false
    num_epoch: 3
    num_layers: 3
    time_horizon: 64
    sequence_length: 64
    summary_freq: 100000
    use_recurrent: false
    vis_encode_type: simple
    reward_signals:
        extrinsic:
            strength: 1.0
            gamma: 0.99
        curiosity:
            strength: 0.02
            gamma: 0.99
            encoding_size: 256

",36902,,36902,,5/11/2020 4:21,5/11/2020 4:21,Trying to Train Cards Game with RL,,0,0,,,,CC BY-SA 4.0 21042,1,21048,,5/10/2020 10:32,,3,3469,"

I want to understand the process of finding a homography matrix given 4 points in both images. I am able to do that in python OpenCV, but I wonder how it works behind the scenes.

Suppose I have points $p_1, p_2, p_3, p_4$ in the first image and $p'_1, p'_2, p'_3, p'_4$ in the second. How am I going to generate the homography matrix given these points?

",36484,,2444,,12/25/2021 11:03,12/25/2021 11:03,How do you find the homography matrix given 4 points in both images?,,1,0,,,,CC BY-SA 4.0 21043,2,,21000,5/10/2020 10:32,,1,,"

You could possibly apply neural networks or reinforcement learning to summarise the results of previous choices (what you are calling context) and use score predictions to suggest the next turn's guess. However, the game of Mastermind has a small search space, and it is possible to process this ""context"" more directly by refining a set of guesses. This will be much more efficient and simpler to understand than a neural network approach. It would be very hard to make a neural network variant which was as efficient, either in terms of CPU time or in terms of the number of turns it takes to find a solution.

In practice, a Mastermind solver is much like a Hangman solver, or a Guess Who? solver. You have an initial large set of all possible answers, and need to narrow it down to a single correct answer. You do this by processing after each guess to reduce the set of answers that meet all the constraints that the game has given you so far.

The agent needs to know the score function that compares a target value with the guess and returns the score. Let's call that score(guess, target)

The algorithm looks like this:

(Opponent sets unknown_target)
Initialise possible_answers as list of all valid targets

For each turn:
  Select one of possible_answers as this turn's guess
  Ask opponent for gscore = score(guess, unknown_target)
  If gscore is [0,4] then exit(win)
  If it was last guess then exit(lose)
  For each possible_answer in possible_answers:
    pscore = score(guess, possible_answer)
    If pscore != gscore, then remove possible_answer from possible_answers

You can finesse this for the stage Select one of possible_answers as this turn's guess by trying to optimise either by a psychological model of the opponent or by trying to find choices that are likely to cause the best reduction in the size of possible_answers. However, a simple random choice should do quite well.

Also worth noting is that the algorithm does not depend on the exact nature of the scoring function, so it is applicable for many variations of guessing games. It does rely on the score providing information that will reduce the remaining set of guesses. In some games that may mean taking more care about the precise nature of a guess, in order to maximise this effect.

Out of interest, I implemented this algorithm and tested it when there were 10 choices at each position (i.e. digits 0 to 9), maximum of 10 guesses allowed, and the target to guess was set randomly. Using random guesses and the algorithm exactly as written, the above approach guessed correctly 9,996 times out of 10,000, and on average the guesser won the game in 6.2 turns.
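
For reference, here is a minimal Python sketch of the algorithm above (my own illustration, not necessarily the implementation used for the figures; the score function returns (exact-position, colour-only) pegs, so adapt the convention to match the game's [x, y] output if needed):

    import itertools
    import random

    def score(guess, target):
        """Standard Mastermind feedback: (exact-position matches, colour-only matches)."""
        exact = sum(g == t for g, t in zip(guess, target))
        common = sum(min(guess.count(c), target.count(c)) for c in set(guess))
        return exact, common - exact

    def solve(target, colours=10, positions=4, max_turns=10):
        possible = list(itertools.product(range(colours), repeat=positions))
        for turn in range(1, max_turns + 1):
            guess = random.choice(possible)              # simple selection strategy
            feedback = score(guess, target)
            if feedback == (positions, 0):
                return turn                              # solved on this turn
            # keep only answers consistent with every observed score so far
            possible = [p for p in possible if score(guess, p) == feedback]
        return None                                      # failed within max_turns

    print(solve(target=(3, 1, 4, 1)))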

",1847,,1847,,5/10/2020 12:40,5/10/2020 12:40,,,,0,,,,CC BY-SA 4.0 21044,1,,,5/10/2020 10:52,,11,1412,"

I'm currently studying reinforcement learning and I'm having difficulties with question 6.12 in Sutton and Barto's book.

Suppose action selection is greedy. Is Q-learning then exactly the same algorithm as SARSA? Will they make exactly the same action selections and weight updates?

I think it's true, because the main difference between the two is when the agent explores, and following the greedy policy it never explores, but I am not sure.

",36105,,2444,,12/4/2020 18:40,12/6/2020 12:26,Are Q-learning and SARSA the same when action selection is greedy?,,1,2,,,,CC BY-SA 4.0 21045,1,,,5/10/2020 12:01,,5,222,"

I read so many articles and the Fast R-CNN paper, but I'm still confused about how the region proposal method works in Fast R-CNN.

As you can see in the image below, they say they used a proposal method, but it is not specified how it works.

What confuses me is, for example, in VGGNet, the output of the last convolution layer is a set of feature maps of shape 14x14x512, but what algorithm is used to propose the regions, and how does it propose them from the feature maps?

",36913,,2444,,2/3/2021 10:22,2/4/2021 10:46,How does the region proposal method work in Fast R-CNN?,,1,0,,,,CC BY-SA 4.0 21046,1,,,5/10/2020 13:01,,2,141,"

I am starting to get my head around convolutional neural networks, and I have been working with the CIFAR-10 dataset and some research papers that use it. In one of these papers, they mention a network architecture notation for a CNN, and I am not sure how to interpret it exactly in terms of how many layers there are and how many neurons are in each.

This is an image of their structure notation.

  1. Can some give me an explanation as to what exactly this structure looks like?

  2. In the CIFAR-10 dataset, each image is $32 \times 32$ pixels, represented by 3072 integers indicating the red, green, blue values for each pixel.

    Does that not mean that my input layer has to be of size 3072? Or is there some way to group the inputs into matrices and then feed them into the network?

",32477,,2444,,6/22/2020 16:37,7/12/2022 22:00,Can you explain me this CNN architecture?,,1,1,,,,CC BY-SA 4.0 21048,2,,21042,5/10/2020 13:30,,7,,"

To understand homographies and how to find them, you will need a good dose of projective geometry. I will briefly describe some preliminary concepts that you need to know before trying to find the homography, but don't expect to understand all these concepts with one reading iteration and only by reading this answer, if you are not familiar with them, especially, if you don't even know what homogenous coordinates are. For more details, I suggest you read the book Multiple view geometry in computer vision (2004) by Richard Hartley and Andrew Zisserman, in particular, chapter 4.

The projective space $\mathbb{P}^2$

$\mathbb{P}^2$ is the projective space of $\mathbb{R}^2$, so it is $\mathbb{R}^2$ augmented with lines and points at infinity. All points and lines of $\mathbb{P}^2$ actually belong to $\mathbb{R}^3$, i.e. they are vectors of three components, because they are homogenous representations of the counterparts in $\mathbb{R}^2$. To emphasize, in $\mathbb{P}^2$, both points and lines can be represented by a vector in $\mathbb{R}^3$, which is the homogenous representation of the counterpart vector in $\mathbb{R}^2$ (if it exists, e.g. points at infinity do not exist in $\mathbb{R}^2$).

What is a homography?

A homography (aka projectivity, collineation or projective transformation) is an invertible map $h$ from the projective space $\mathbb{P}^2$ to itself, $$h: \mathbb{P}^2 \rightarrow \mathbb{P}^2,$$ such that $\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3$ lie on the same straight-line if and only if $h(\mathbf{x}_1), h(\mathbf{x}_2), h(\mathbf{x}_3)$ do.

This property is called collinearity. In practice, this property means that projective transformations can map straight-lines in one image to straight-line to another image, but it cannot map e.g. straight-lines to parabolas (and vice-versa). So, if you have a distorted image, a homography cannot convert it to a non-distorted image.

All and only linear maps in $\mathbb{P}^2$ are homographies

All linear maps in $\mathbb{P}^2$ are homographies and only linear maps can be homographies. Hence, when you want to find a homography, you are looking for a linear map in $\mathbb{P}^2$.

A homography can be represented as $3 \times 3$ matrix

Given that homographies are linear maps they can be represented as an invertible matrix

$$\mathbf{H} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \in \mathbb{R}^{3 \times 3}, $$ such that, $\forall \mathbf{x} \in \mathbb{P}^2$, the following equation holds $$ h(\mathbf{x}) = \mathbf{H}\mathbf{x} $$

Homographies map points in $\mathbb{P}^2$ to points in $\mathbb{P}^2$

Given that homographies map points in $\mathbb{P}^2$ to other points in $\mathbb{P}^2$ and there's a matrix $\mathbf{H} \in \mathbb{R}^{3 \times 3}$ for each homography $h$, then it follows that, $\forall \mathbf{x} \in \mathbb{P}^2$, \begin{align} \mathbf{x}' &= \mathbf{H}\mathbf{x} \\ \begin{bmatrix} \mathbf{x}'_1 \\ \mathbf{x}'_2 \\ \mathbf{x}'_3 \end{bmatrix} &= \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{bmatrix} \mathbf{x}_1 \\ \mathbf{x}_2 \\ \mathbf{x}_3 \end{bmatrix} \tag{1}\label{1} \end{align} where $\mathbf{x}' \in \mathbb{P}^2$, for some homography $\mathbf{H} \in \mathbb{R}^{3 \times 3}$.

In practice, this means that you can transform points in an image $I_1$ to points in image $I_2$ by matrix multiplication.

A homography has $8$ degrees of freedom

Although the matrix $\mathbf{H}$ has $9$ entries, it has only $8$ degrees of freedom, which means that, in practice, there are only $8$ variables that you need to find. This property comes from the fact that only the ratio between the elements of $\mathbf{H}$ actually counts. This means that equation \ref{1} can actually be written as \begin{align} \mathbf{x}' = \lambda \mathbf{H}\mathbf{x} \tag{2}\label{2} \end{align} for all $\lambda \in \mathbb{R} \setminus \{0 \}$.

How to estimate a homography from point correspondences?

We want to estimate a homography $\mathbf{H} \in \mathbb{R}^{3 \times 3}$ from point-to-point correspondences $\{(\mathbf{x}^i, \mathbf{x}'^i) \}_{i=1}^N$, where $N \geq 4$, such that $\mathbf{x}'^i = \mathbf{H}\mathbf{x}^i, \forall i$.

First, recall that the matrix $\mathbf{H}$ has $8$ degrees of freedom (i.e. variables we want to find).

Each point-to-point correspondence represents $2$ constraints

Each point-to-point correspondence $(\mathbf{x}^i, \mathbf{x}'^i)$ accounts for $2$ constraints, i.e. $\mathbf{H}\mathbf{x}^i$ maps to the point $\mathbf{x}'^i$, which has $2$ degrees of freedom (even if it's defined by 3 components) because it's represented in homogeneous coordinates and so, as for equation \ref{2}, it's defined ""up to scale"". In other words, these $2$ degrees of freedom represent $2$ constraints.

$4$ point-to-point correspondences are necessary to estimate a homography

Given that $1$ point-to-point correspondence represents $2$ constraints, then $4$ point-to-point correspondences corresponds to $8$ constraints. Given this and given that homographies have $8$ degrees of freedom, at least $4$ point-to-point correspondences are necessary to estimate a homography.

The equation \ref{2} can actually be written as

$$ \mathbf{x}' \times \lambda \mathbf{H}\mathbf{x} = \mathbf{0} \label{3} \tag{3} $$ where $\times$ is the cross-product.

Direct linear transformation

If you manipulate equation \ref{3}, then you will end up with another equation (if you ignore the scaling factor $\lambda$, which we can ignore because the homography converts points independently of their magnitude)

$$ \mathbf{A}_i \mathbf{h} = \mathbf{0} $$ where $\mathbf{A}_i \in \mathbb{R}^{2 \times 9}$ is the design matrix (i.e. the matrix that contains the input data to estimate the homography) that contains the elements of the point-to-point correspondence $(\mathbf{x}^i, \mathbf{x}'^i)$ and $\mathbf{h} \in \mathbb{R}^{9}$ is a vector that contains the unknown elements of $\mathbf{H}$. The details of this manipulation can be found in section 4.1 of the book Multiple view geometry in computer vision.

Now, the idea is that you can vertically stack $N \geq 4$ equations of the form $\mathbf{A}_i \mathbf{h} = \mathbf{0}$ to build the final linear system that you would like to solve $$\mathbf{A} \mathbf{h} = \mathbf{0} \tag{4}\label{4}$$ where $\mathbf{A} \in \mathbb{R}^{2N \times 9}$.

The solution to this system is the vector $\mathbf{h} \in \mathbb{R}^{9}$, that is, your homography!

If you know something about linear algebra, you know that the solutions to $\mathbf{A} \mathbf{h} = \mathbf{0}$ are elements of the null space of $\mathbf{A}$.

Then, to find $\mathbf{h}$, you will typically use singular value decomposition (SVD). See e.g. How is the null space related to singular value decomposition? for a possible explanation of why you can use SVD to find an element of the null space.

This algorithm (which is described on page 91 of the cited book, i.e. Algorithm 4.1) is called direct linear transformation.
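A minimal numpy sketch of this DLT procedure (the four point correspondences below are made up for illustration; a production implementation would also normalise the points, as the book recommends):

import numpy as np

def estimate_homography(points_src, points_dst):
    # points_src, points_dst: arrays of shape (N, 2) with N >= 4 correspondences.
    A = []
    for (x, y), (xp, yp) in zip(points_src, points_dst):
        # Each correspondence contributes 2 rows of the design matrix (section 4.1 of the book).
        A.append([0, 0, 0, -x, -y, -1, yp * x, yp * y, yp])
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp])
    A = np.array(A)                      # shape (2N, 9)
    # h is the right singular vector of A associated with the smallest singular value,
    # i.e. (an approximation of) an element of the null space of A.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                   # fix the scale (valid here, since H[2, 2] != 0)

src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
dst = np.array([[2, 1], [4, 1], [4, 3], [2, 3]], dtype=float)
print(np.round(estimate_homography(src, dst), 3))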

",2444,,2444,,5/10/2020 22:17,5/10/2020 22:17,,,,1,,,,CC BY-SA 4.0 21049,1,,,5/10/2020 13:44,,2,809,"

I have this problem.

A basic wooden railway set contains the pieces shown in Figure 3.32.

The task is to connect these pieces into a railway that has no overlapping tracks and no loose ends where a train could run off onto the floor.

a. Suppose that the pieces fit together exactly with no slack. Give a precise formulation of the task as a search problem.

b. Identify a suitable uninformed search algorithm for this task and explain your choice.

I know I have to use DFS for this problem. But how do I check that all the pieces are connected, and that the last one and the first one are connected?

Can someone help me with some tips on how to solve this problem and implement it (in Python)?

",36886,,-1,,6/17/2020 9:57,5/10/2020 14:02,Wooden railway search problem AI,,0,7,,,,CC BY-SA 4.0 21050,1,,,5/10/2020 14:27,,1,83,"

I am currently learning policy gradient methods from the Deep RL boot camp by Pieter Abbeel in which he explains the actor-critic algorithm derivation.

At around minute 39, he explains that the sum of the rewards from time step $t$ onwards is actually an estimation of $Q^\pi(s,u)$. I understand the definition of $Q^\pi(s,u)$ but I'm not sure why this is the case here. Is the reward following after time step $t+1$ collected based on current policy?

",32780,,2444,,5/10/2020 18:41,5/10/2020 18:41,Is the reward following after time step $t+1$ collected based on current policy?,,0,7,,,,CC BY-SA 4.0 21051,1,,,5/10/2020 15:39,,3,829,"

I have seen this happening in implementations of state-of-the-art RL algorithms where the model converges to a single action over time after multiple training iterations. Are there some general loopholes or reasons why this kind of behavior is exhibited?

",36749,,2444,,5/10/2020 15:48,5/10/2020 18:34,Why do RL implementations converge on one action?,,1,6,,,,CC BY-SA 4.0 21052,2,,17047,5/10/2020 15:53,,1,,"

For regression, you can use a hidden layer with a sigmoid activation, followed by a LINEAR output layer, where the weighted sum goes straight through without modification.

This way, your output is not restricted to the range 0-1.

",36518,,,,,5/10/2020 15:53,,,,1,,,,CC BY-SA 4.0 21053,1,,,5/10/2020 16:01,,3,1052,"

Why is it hard to prove the convergence of the DQN algorithm? We know that the tabular Q-learning algorithm converges to the optimal Q-values, and with a linear approximator convergence is proved.

The main difference of DQN compared to Q-Learning with linear approximator is using DNN, the experience replay memory, and the target network. Which of these components causes the issue and why?

",16912,,2444,,3/23/2021 9:08,3/24/2021 11:02,Why is it hard to prove the convergence of the deep Q-learning algorithm?,,0,2,,,,CC BY-SA 4.0 21054,2,,21000,5/10/2020 16:02,,0,,"

Although I have seen RL solutions to this problem, those (that I saw) fail to realize that the state of Mastermind is not observable, as there is the ""secret"" we're trying to guess.

Mastermind is best approached as a constraint satisfaction problem, along the lines described by Neil Slater. The whole trick is to realize that you can eliminate options from the ""current possible alternatives"" set by treating the latest guess as the target and eliminating any combinations that don't agree with it, e.g. (using 3 digits for clarity):

last guess=123, scores [2,0] i.e. 2 white, 0 black

then the current alternatives are eliminated if they don't score [2,0] against the last guess 123:

124 [0,2] 214 [2,0] 215 [2,0] 321 [2,1]

Let's say the secret is 215; you can see that our method of elimination is correct, even though we don't know the secret!

I have seen lots of different approaches (genetic algorithms, information theory, etc.), but the plain truth is that a 50-line MATLAB piece of code with random guessing gives a winning strategy that averages about 4.3 guesses for the standard Mastermind game (1296 alternatives).
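A minimal Python sketch of this elimination strategy for the 3-digit illustration above (the scoring function is a simplification that assumes no repeated digits, and the digit range is an arbitrary choice):

from itertools import product
import random

def score(guess, secret):
    # black = right digit in the right place, white = right digit in the wrong place.
    black = sum(g == s for g, s in zip(guess, secret))
    white = len(set(guess) & set(secret)) - black
    return (white, black)

candidates = [''.join(p) for p in product('123456', repeat=3)]
secret = '215'

guesses = 0
while True:
    guess = random.choice(candidates)   # random guessing among the remaining alternatives
    guesses += 1
    feedback = score(guess, secret)
    if feedback == (0, 3):              # all black: solved
        break
    # Keep only the alternatives that score the same against the last guess.
    candidates = [c for c in candidates if score(c, guess) == feedback]

print('Solved', secret, 'in', guesses, 'guesses')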

",36518,,,,,5/10/2020 16:02,,,,0,,,,CC BY-SA 4.0 21055,1,,,5/10/2020 16:12,,2,39,"

The reservoir of the Liquid State Machine is an array of random numbers connected to each other with a probability depending on the distance between each other. Because of this connection with each other it apparently has ""recurrence"". The reservoir is followed by a readout stage where the actual weight training is done.

The hidden layer of a FF-NN can be an array of random weights, which is exactly what an Extreme Learning Machine is. ELM has a closed form solution of the second-stage weight (Beta) calculation i.e. only the Beta needs to be trained.

So in both cases you have a second-stage layer or readout layer where weights are trained.

My question is: if the reservoir's random weights are very much like the ELM's random weights, and neither needs to be trained, how are they any different from each other? In other words, both have a set of untrained random weights, so in the LSM where exactly is the recurrence happening if the weights are just random? Can't the LSM be reduced to a FF-NN?

",36919,,,,,5/10/2020 16:12,Reservoir of LSM vs. FF-NN or ELM,,0,0,,,,CC BY-SA 4.0 21056,2,,21051,5/10/2020 16:33,,2,,"

Why do RL implementations converge on one action?

If the optimal policy shouldn't always select the same action in the same state, i.e., if the optimal policy isn't deterministic (e.g., in the case of rock-paper-scissors, the optimal policy cannot be deterministic because any intelligent player would easily memorize your deterministic policy, so, after a while, you would always lose against that player), then there are a few things that you can do to make your policy more stochastic

  1. Change the reward function. If your agent ends up selecting always the same action and you don't want that, it's probably because you're not giving it the right reinforcement signal (given that the agent selects the action that apparently will give it the highest reward in the long run).

  2. Try to explore more during training. So, if you're using a behavior policy like $\epsilon$-greedy, you may want to increase your $\epsilon$ (i.e. probability of selecting a random action).

  3. If you estimated the state-action value function (e.g. with Q-learning), maybe you derived the policy from it by selecting the best action, but, of course, that will make your policy deterministic. You may want to use e.g. softmax to derive the policy from the state-action value function (i.e. the probability of selecting an action is proportional to its value), although Q-learning assumes that your target policy is greedy with respect to the state-action value function.
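As a minimal illustration of point 3, here is a sketch (with made-up Q-values) of deriving a stochastic policy from the state-action value function via softmax instead of argmax:

import numpy as np

def softmax_policy(q_values, temperature=1.0):
    # Lower temperature -> closer to greedy; higher temperature -> closer to uniform.
    prefs = np.array(q_values) / temperature
    prefs -= prefs.max()                      # for numerical stability
    return np.exp(prefs) / np.exp(prefs).sum()

q_values_for_state = [1.0, 2.0, 0.5]          # made-up Q(s, a) for three actions
probs = softmax_policy(q_values_for_state, temperature=0.5)
action = np.random.choice(len(probs), p=probs)
print(probs, action)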

If the optimal policy is supposed to be deterministic, then, if you find the optimal policy (which isn't probably the case), you will end up with an agent that always selects the same action. In that case, obviously, it's not a problem that the RL agent selects always the same optimal action.

",2444,,2444,,5/10/2020 18:34,5/10/2020 18:34,,,,2,,,,CC BY-SA 4.0 21058,1,,,5/10/2020 20:12,,2,71,"

When should discretization be done to decrease the state/action space in RL? Can you give me some references where such a technique is used?

",36055,,2444,,1/4/2021 18:00,1/4/2021 18:00,When to do discretization to decrease the state/action space in RL?,,0,0,,,,CC BY-SA 4.0 21062,1,21064,,5/11/2020 4:30,,1,154,"

AI is supposed to do anything a human or a traditional computer can do; that is what we expect AI to be.

So 'generating a random value' is also a task included in the scope of what AI should be able to do.

I'm trying to generate random values using a single neuron, but the outcome isn't very good. Any suggestions?

PS.
Random weight initialisation is allowed because the weights are constants at the start.
Using a 'random' function is forbidden anywhere else.

",2844,,,,,5/11/2020 8:07,Random value generator using a single neuron or DNN,,1,0,,,,CC BY-SA 4.0 21063,1,,,5/11/2020 6:44,,1,31,"

Consider that we want to create a very big neural network. If we consider to use dense layers, we might face some challenges. Now consider that we use sparse layers instead of dense layers. When using a really sparse model, we would have much less parameters and could create much bigger neural networks. Is there an efficient algorithm to parallelize the training of such networks?

",35633,,,,,5/11/2020 6:44,Is there a parallelizable algorithm for training sparse neural networks?,,0,0,,,,CC BY-SA 4.0 21064,2,,21062,5/11/2020 7:47,,1,,"

AI is supposed to do anything a human or a traditional computer can do; that is what we expect AI to be.

Technically you would need AGI (Artifical General Intelligence) to do anything a human can do. This is not a technology that exists, but a goal of some AI research to perform more and more general tasks.

So 'generating a random value' is also a task included in the scope of what AI should be able to do.

Humans are actually very bad at generating random values directly. In fact it is possible to identify a person by getting them to generate random numbers from their mind. Indirectly, a human could pick up a die, roll it, and read the number - this is also something that is within capabilities of modern AI, but not something you could implement with a small neural network.

Computer software cannot generate random numbers directly, only pseudo-random numbers that are deterministic but follow a statistical pattern very similar to theoretical randomness. When combined with frequently changing seed data from the environment, this becomes very close to an ideal random source, but it does require some hardware input. Modern computer chips include an internal source of randomness to provide this seed data, but it is important to note that it is a piece of specialised hardware, not something that can be coded or simulated using a simple neural network.

I'm trying to generate random values using a single neuron, but the outcome isn't very good. Any suggestions?

This is going to be much harder than you assumed. Random number generators are always more complex than a single function, and require at least a little bit of architecture. The simplest architecture would be to add a state value to use as the next input to the generator, and which is somehow affected by the previous output. A simple feed-forward neural network does not have any architecture like this, you need to add it.

My first suggestion would be to learn how some simple software pseudo-random number generators (PRNGs) work. A good place to start might be linear congruential generators which are considered very poor quality PRNGs nowadays, but are the sort found in early computer systems.

It should be possible to create a rough approximation to a PRNG from a single neuron. Picking the weight, bias and non-linear functions carefully should allow you to produce something similar to the linear congruential generator, where it generates the next pseudo-random number when you input the previous one. A slightly more sophisticated approach would be to make this a simple recurrent neural network (RNN) so that the neural network has an internal state - such a network could still have a single neuron.

An example neuron that works like this might have a weight of 3, bias of 1 (to generate a logit from input $x$ of $z= 3x+1$) and have activation function $y = 1000\text{sin}(z) \text{ mod } 1$. This is not a normal activation function that you would use to train a neural network - in fact this neural network probably could not be trained at all. It has been specially designed to achieve your goal of generating (pseudo-)random numbers using a single neuron. It is a valid neural network though.

I created the above example neuron, started it with a seed of 0, and got the sequence 0.4710, 0.8543, 0.1536, 0.9479, 0.1142, 0.0683, 0.7741, 0.2985, 0.7315, 0.0777, 0.5078 by feeding back previous values - you can see it is working, although it is probably a very bad random number generator even compared to linear congruential generators. Possibly better than a human though.
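A minimal sketch of that single-neuron generator (the exact digits you get depend on floating-point precision, but the sequence should look like the one above):

import math

def neuron(x):
    z = 3 * x + 1                    # weight 3, bias 1
    return (1000 * math.sin(z)) % 1  # the special-purpose activation function

value = 0.0                          # seed
for _ in range(11):
    value = neuron(value)            # feed the previous output back in
    print(round(value, 4))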

",1847,,1847,,5/11/2020 8:07,5/11/2020 8:07,,,,0,,,,CC BY-SA 4.0 21065,1,21083,,5/11/2020 8:58,,1,70,"

What are some conferences for publishing papers on graph convolutional networks?

",31672,,2444,,5/11/2020 20:18,5/11/2020 20:18,What are some conferences for publishing papers on graph convolutional networks?,,1,0,,,,CC BY-SA 4.0 21067,1,,,5/11/2020 9:35,,1,134,"

I have a convolutional neural network, such as U-Net, for a segmentation task and, as input, images with the same spatial resolution (256x256) but different pixel sizes due to the acquisition process. Specifically, every image has a FoV of 370x370 mm and 256x256 pixels, but a different zoom; for example, one image might have 2.7/1.8 px/mm and another image 2.4/1.7 px/mm. Considering the FoV, the pixel size should be 370/256 = 1.44 mm per pixel, but with a zoom of 2.7/1.8 px/mm, what is the pixel size in this case? I thought 1.8/2.7 = 0.67 mm, but I am not sure. Why should I have the same in-plane resolution (pixel size) for each image when I train my CNN, and not only the same spatial resolution (256x256 px)?

",36939,,,,,5/11/2020 9:35,How to manage the different pixel size for a CNN?,,0,0,,,,CC BY-SA 4.0 21068,1,21122,,5/11/2020 11:59,,0,416,"

I am working on an app that generates a heat/thermal map given a picture. I have been able to get what I expected using the OpenCV built-in function cv2.applyColorMap(img, cv2.COLORMAP_JET) in Python. Everything works exactly as expected, but I want to understand how applyColorMap works at the back end. I am aware of how several image filters (like blur and edge filters) work by convolution/cross-correlation with appropriate kernels, but I can't seem to apply the same concept to color maps. For this question, let's consider a color map where we want:

Brightest ones: RED

Medium intensity ones: YELLOW

Low intensity ones: BLUE

What I have done:

I tried dividing the pixels into 3 categories and replaced each pixel with one of the colors (RED, YELLOW, BLUE) depending upon its value in the grayscale image (0-255). The problem with this approach is that there are 3 solid colors in the image with no variation in the intensity of each individual color, while in a good heat map there is a blend of colors (it decreases or increases) based upon the intensity. I want to achieve that effect. I would appreciate any help or any lead to understand how heat maps work.
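For reference, here is a minimal sketch of the three-bucket replacement I described (the thresholds and BGR colors are arbitrary choices, and input.jpg is a placeholder file name):

import cv2
import numpy as np

gray = cv2.imread('input.jpg', cv2.IMREAD_GRAYSCALE)

heat = np.zeros((gray.shape[0], gray.shape[1], 3), dtype=np.uint8)  # BGR output
heat[gray < 85] = (255, 0, 0)                                       # low intensity  -> blue
heat[(gray >= 85) & (gray < 170)] = (0, 255, 255)                   # medium         -> yellow
heat[gray >= 170] = (0, 0, 255)                                     # bright         -> red

cv2.imwrite('three_bucket_heatmap.jpg', heat)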

",36484,,,,,5/13/2020 9:56,How does the math behind heat map filters work?,,1,0,,,,CC BY-SA 4.0 21069,1,,,5/11/2020 13:08,,2,358,"

I have 2 different models with each model doing a separate function and have been trained with different weights. Is there any way I can merge these two models to get a single model.

If it can be merged

  • How should I go about it? Will the number of layers remain the same?
  • Will it give me any performance gain? (Intuitively speaking, I should get a higher performance.)
  • Will the hardware requirements change when using the new model?
  • Will I need to retrain the model? Can I somehow merge the trained weights?

If the models cannot be merged

  • Why so? After all, convolution is finding the correct pattern in data.
  • Also, if CNN's cannot be merged, then how do skip-connections like ResNet50 work?

EDIT:

Representation:

What I currently have

Image ---(model A) ---> Temporary image ---(Model B)---> Output image

What I want:

Image ---(model C) ---> Output image

",36944,,36944,,5/11/2020 14:33,5/11/2020 14:33,Merge two different CNN models into one,,0,2,,,,CC BY-SA 4.0 21070,2,,21044,5/11/2020 13:31,,8,,"

If we write the pseudo-code for the SARSA algorithm we first initialise our hyper-parameters etc. and then initialise $S_t$, which we use to choose $A_t$ from our policy $\pi(a|s)$. Then for each $t$ in the episode we do the following:

  1. Take action $A_t$ and observe $R_{t+1}$, $S_{t+1}$
  2. Choose $A_{t+1}$ using $S_{t+1}$ in our policy
  3. $Q(S_t, A_t) = Q(S_t, A_t) + \alpha [R_{t+1} + \gamma Q(S_{t+1},A_{t+1}) - Q(S_t, A_t)]$

Now, in Q-learning we replace $Q(S_{t+1},A_{t+1})$ in line 3 with $\max_aQ(S_{t+1},a)$. Recall that in SARSA we chose our $A_{t+1}$ using our policy $\pi$ - if our policy is greedy with respect to the action value function then this simply means the policy is $\pi(a|s) = \text{argmax}_aQ(s,a)$, which is exactly how the update target is chosen in Q-learning.

To answer the question - no, they are not always the same algorithm.

Consider where we transition from $s$ to $s'$ where $s'=s$. I will outline the updates for SARSA and Q-learning indexing the $Q$ functions with $t$ to demonstrate the difference.

For each case, I will assume we are at the start of the episode, as this is the easiest way to illustrate the difference. Note that actions denoted by $A_i$ are for actions taken explicitly in the environment -- in the Q-Learning update the $\max$ action that is chosen for the update is not executed in the environment, the action taken in the environment is chosen by the policy after the update has happened.

SARSA

  1. We initialise $S_0 = s$ and choose $A_0 = \text{argmax}_a Q_0(s,a)$
  2. Take action $A_0$ and observe $R_{1}$ and $S_{1} = s' = s$.
  3. Choose action $A_{1} = \text{argmax}_aQ_{0}(s,a)$
  4. $Q_{1}(S_0,A_0) = Q_0(S_0,A_0) + \alpha [R_{1} + \gamma Q_0(s,A_1) - Q_0(S_0,A_0)]$

Q-Learning

  1. Initialise $S_0 = s$
  2. Choose action $A_0 = \text{argmax}_aQ_0(s,a)$, observe $R_{1}, S_{1} = s' = s$
  3. $Q_{1}(S_0,A_0) = Q_0(S_0,A_0) + \alpha [R_{1} + \gamma \max_aQ_0(s,a) - Q_0(S_0,A_0)]$
  4. Choose action $A_1 = \text{argmax}_aQ_1(s,a)$

As you can see the next action for the updates in SARSA (line 4) and Q-learning (line 3) are taken with respect to the same $Q$ function, but the key difference is that the actual next action taken in $Q$-learning is taken with respect to the updated $Q$-function.

The key for understanding this edge case is that when we transition into the same state, the Q-Learning update will update the Q-function before choosing $A_1$. I have indexed actions and Q-functions by the episode step - hopefully, it makes sense why I have done this for the Q-functions as, usually, this would not make sense, but, because we have two successive states that are the same, it is okay.
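To make the difference concrete, here is a small sketch of one step of each update for this self-transition edge case (the Q-table, learning rate, discount and reward are made-up numbers):

import numpy as np

alpha, gamma, r = 0.5, 0.9, 1.0
s = 0                                     # the single state s, with s' = s

# SARSA: A0 and A1 are both chosen from the same (old) Q-table, then we update.
Q = np.array([[1.0, 0.0]])                # Q[s, a] for two actions
a0 = np.argmax(Q[s])
a1 = np.argmax(Q[s])                      # next action chosen BEFORE the update
Q[s, a0] += alpha * (r + gamma * Q[s, a1] - Q[s, a0])
print('SARSA:', Q, 'next action:', a1)

# Q-learning: update first (with the max over actions), THEN choose the next action
# from the updated Q-table.
Q = np.array([[1.0, 0.0]])
a0 = np.argmax(Q[s])
Q[s, a0] += alpha * (r + gamma * np.max(Q[s]) - Q[s, a0])
a1 = np.argmax(Q[s])                      # next action chosen AFTER the update
print('Q-learning:', Q, 'next action:', a1)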

",36821,,2444,,12/6/2020 12:26,12/6/2020 12:26,,,,1,,,,CC BY-SA 4.0 21071,2,,20588,5/11/2020 15:20,,0,,"

Well... simply put a .split() at the end of your first two lines:

a = ""This is a dog."".split()
b = ""This is a cat."".split()

Your algorithm works with the iterables, and without the split the string is broken into its characters. If you do the split, a and b will be lists of words, and your algorithm works at the word level.

Output on your example:

[[0. 1. 2. 3. 4.]
 [1. 0. 1. 2. 3.]
 [2. 1. 0. 1. 2.]
 [3. 2. 1. 0. 1.]
 [4. 3. 2. 1. 1.]]

1.0
",6258,,,,,5/11/2020 15:20,,,,0,,,,CC BY-SA 4.0 21072,2,,20588,5/11/2020 15:29,,0,,"

Maybe try this:

from functools import lru_cache
from itertools import product

@lru_cache(maxsize=4095)
def ld(s, t):
    """"""
    Levenshtein distance memoized implementation from Rosetta code:
    https://rosettacode.org/wiki/Levenshtein_distance#Python
    """"""
    if not s: return len(t)
    if not t: return len(s)
    if s[0] == t[0]: return ld(s[1:], t[1:])
    l1 = ld(s, t[1:])      # Deletion.
    l2 = ld(s[1:], t)      # Insertion.
    l3 = ld(s[1:], t[1:])  # Substitution.
    return 1 + min(l1, l2, l3)


a = ""this is a sentence"".split()
b = ""yet another cat thing"".split()

# To get the triplets.
for i, j in product(a, b):
    print((i, j, ld(i, j)))

To get a matrix:

from scipy.sparse import coo_matrix
import numpy as np

a = ""this is a sentence"".split()
b = ""yet another cat thing , yes"".split()

triplets = np.array([(i, j, ld(w1, w2)) for (i, w1) , (j, w2) in product(enumerate(a), enumerate(b))])
row, col, data = [np.squeeze(splt) for splt in np.hsplit(triplets, triplets.shape[-1])]
coo_matrix((data, (row, col))).toarray()

[out]:

array([[4, 5, 4, 2, 4, 3],
       [3, 7, 3, 4, 2, 2],
       [3, 6, 2, 5, 1, 3],
       [6, 7, 7, 7, 8, 7]])
",36951,,36951,,5/11/2020 15:37,5/11/2020 15:37,,,,0,,,,CC BY-SA 4.0 21073,1,21107,,5/11/2020 15:41,,1,184,"

My main purpose right now is to train an agent using the A2C algorithm to solve the Atari Breakout game. So far I have succeeded to create that code with a single agent and environment. To break the correlation between samples (i.i.d), I need to have an agent interacting with several environments.

class GymEnvVec():

    def __init__(self, env_name, n_envs, seed=0):
        make_env = lambda: gym.make(env_name)
        self.envs = [make_env() for _ in range(n_envs)]
        [env.seed(seed + 10 * i) for i, env in enumerate(self.envs)]

    def reset(self):
        return [env.reset() for env in self.envs]

    def step(self, actions):
        return list(zip(*[env.step(a) for env, a in zip(self.envs, actions)]))

I can use the class GymEnvVec to vectorize my environment.

So I can set my environments with

envs = GymEnvVec(env_name=""Breakout-v0"", n_envs=50)

I can get my first observations with

observations = envs.reset()

Pick some actions with

actions = agent.choose_actions(observations)

The choose_actions method might look like

def choose_actions(self, states):
        assert isinstance(states, (list, tuple))

        actions = []
        for state in states:
            probabilities  = F.softmax(self.network(state)[0])
            action_probs = T.distributions.Categorical(probabilities)
            actions.append(action_probs.sample())

        return [action.item() for action in actions] 

Finally, the environments will spit the next_states, rewards and if it is done with

next_states, rewards, dones, _ = env.step(actions)

It is at this point I am a bit confused. I think I need to gather immediate experiences, batch altogether and forward it to the agent. My problem is probably with the ""gather immediate experiences"".

I propose a solution, but I am far from being sure it is a good answer. At each iteration, I think I must take a random number with

nb = random.randint(0, n_envs - 1)

and put the experience in history with

history.append(Experience(states[nb], actions[nb], rewards[nb], dones[nb]))

Am I wrong? Can you tell me what I should do?

",35626,,35626,,5/11/2020 18:25,5/18/2020 9:58,"Once the environments are vectorized, how do I have to gather immediate experiences for the agent?",,1,5,,,,CC BY-SA 4.0 21074,2,,17047,5/11/2020 15:58,,2,,"

Let's mock up some data.

""100 numbers, each one is a parameter, they together define a number X(also given)""

# i.e. size of X_train -> [n x d]
# i.e. size of X_train -> [??? x 100]  , when d = 100

# ""I have 20000 instances for training""
# i.e. size of X_train -> [20000 x 100], when n = 20000

import torch
import numpy as np

X_train = torch.rand((20000, 100))
X_train = np.random.rand(20000, 100) # Or using numpy

But what is your Y?

# Since the definition of a regression task, 
# loosely means to predict an output real number
# given an input of d dimension

# So the appropriate Y_train would 
# be of dimension [n x 1] 
# and look like this:

y_train = torch.rand((20000, 1))

y_train = np.random.rand(20000, 1) # Or using numpy

What is a linear perceptron?

Taking definition from this tutorial

Thus, in picture:

Next we need to define training routine,

For now take it as biblical truth that this is an okay routine to train a neural net model (this isn't the only way but easiest or supervised learning):

In code:

import math
import numpy as np
np.random.seed(0)

def sigmoid(x): # Squashes each value into the range (0, 1).
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(sx): 
    # See https://math.stackexchange.com/a/1225116
    # Hint: let sx = sigmoid(x)
    return sx * (1 - sx)

def cost(predicted, truth):
    # Signed error; its sign tells the update which direction to move the weights.
    return truth - predicted


num_epochs = 10000 # No. of times to iterate.
learning_rate = 0.03 # How large a step to take per iteration.

# Let's standardize and call our inputs X and outputs Y
# (mock random targets here, since we only have mock inputs).
X = np.random.rand(20000, 100)
Y = np.random.rand(20000, 1)

# Initialise the weight vector, one weight per input dimension.
W = np.random.rand(100, 1)

for _ in range(num_epochs):
    layer0 = X

    # Step 2a: Multiply the weights vector with the inputs, sum the products, i.e. s
    # Step 2b: Put the sum through the sigmoid, i.e. f()
    # Inside the perceptron, Step 2. 
    layer1 = sigmoid(np.dot(X, W))

    # Back propagation.
    # Step 3a: Compute the errors, i.e. difference between expected output and predictions
    # How much did we miss?
    layer1_error = cost(layer1, Y)

    # Step 3b: Multiply the error with the derivatives to get the delta
    # multiply how much we missed by the slope of the sigmoid at the values in layer1
    layer1_delta = layer1_error * sigmoid_derivative(layer1)

    # Step 3c: Multiply the delta vector with the inputs, sum the product (use np.dot)
    # Step 4: Multiply the learning rate with the output of Step 3c.
    W +=  learning_rate * np.dot(layer0.T, layer1_delta)

Now that we learn the model, i.e. the W.

When we see the data points that we need to use the model on, we apply the same forward propagation step, i.e. layer1 = sigmoid(np.dot(X, W))

Since we have:

I have 5000 lines given, each containing the 100 numbers as parameters.My task is to predict the number X for these 5000 instances.

And in code:

# If we mock up the data,
# it should be the same internal dimension. 
X_test = np.random.rand(5000, 100)

# The desired output just needs to pass through the W and the activation:
# the shape of `output` -> [5000 x 1] , 
# where there's 1 output value for each input.
output = sigmoid(np.dot(X_test, W))
",36951,,,,,5/11/2020 15:58,,,,2,,,,CC BY-SA 4.0 21075,2,,16599,5/11/2020 16:00,,3,,"

Kaggle recently started adding 'Simulation' competitions, which are well-suited for reinforcement learning.

The first competition that's live (no prizes) is ConnectX, like a generalised Connect Four.

The first competition with prize money is likely to be the next iteration of TwoSigma's Halite. There's a page for it here, but it hasn't been launched yet: https://www.kaggle.com/c/halite/overview

I created a site that lists ongoing machine learning competitions including Reinforcement Learning competitions - you can also sign up to the email list in case you want to get emails (roughly monthly) when new competitions launch. As of right now (May 2020) there are a few live RL competitions on there - the KDD cup, and AWS DeepRacer.

",36952,,36952,,5/15/2020 15:13,5/15/2020 15:13,,,,0,,,,CC BY-SA 4.0 21076,1,,,5/11/2020 16:04,,1,52,"

I finished working on a new algorithm in Reinforcement Learning, I need to compare it to some well-known algorithms. That's why I need to know the step-by-step procedures that RL researchers usually take in order to get their results and compare them to other papers' results. (e.g. running the algorithm multiple times with different random seeds, saving results to .csv files, plotting).

Anyone can help?

(I am working on Pytorch, PyBullet Environments.)

",36603,,2444,,5/11/2020 17:39,5/11/2020 17:39,What are the procedures to get RL paper results?,,0,2,,5/17/2020 11:09,,CC BY-SA 4.0 21077,1,,,5/11/2020 16:30,,2,135,"

During the training of DQN, I noticed that the model with prioritized experience replay (PER) had a smaller loss in general compared to a DQN without PER. The mean squared loss was on the order of $10^{-5}$ for the DQN with PER, whereas it was on the order of $10^{-2}$ for the DQN without PER.

Do the smaller training errors have any effect on executing the final policy learned by the DQN?

",32780,,2444,,5/11/2020 17:42,7/1/2022 4:10,Do smaller loss values during DQN training produce better policies?,,1,0,,,,CC BY-SA 4.0 21078,2,,17047,5/11/2020 18:32,,2,,"

The quick answer is that you want to use an activation function on the output layer that does not compress values to $(0,1)$. Depending on your software, this might be called ""linear"" or ""identity"". It looks like Keras just wants you to leave off the activation function: model.add(Dense(1)).
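For example, a minimal Keras sketch of such a regression head (the layer sizes are arbitrary):

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(32, activation='sigmoid', input_shape=(100,)),
    keras.layers.Dense(1)          # no activation, i.e. a linear/identity output
])
model.compile(optimizer='adam', loss='mse')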

The typical way of thinking of a neural network as a classifier (let's say a binary classifier) is just extending a logistic regression. In fact, when you use a sigmoid activation function on the output node, you're (sort of) running logistic regression on the final hidden layer.

A logistic regression is one type of generalized linear model. The gist of GLM is that some transformation of the value of interest is a linear function of the feature space.

Let $X$ be the data matrix for the feature space. Let $\beta$ be a parameter vector. Then $\hat{y} = \mathbb{E}[y] = X\beta$ is the linear model, and $g(\mathbb{E}[y]) = X\beta$ is the generalized linear model (vectorized, so apply $g$ to each $y_i$).

But we could extend this to a nonlinear transformation, and when a neural network is a binary classifier, this is precisely what we're doing. Instead of the transformation of $X$ being given by $\beta$ and thus linear, we apply some nonlinear transformation $f$ and get $g(\mathbb{E}[y]) = f(X)$.

The terminology in GLM is ""link function"", but that is essentially the activation function on the final node(s) of the neural network. Consequently, all of the GLM link functions are in play, and one of those link functions is the identity function. For a GLM, that's just linear regression. For your neural network, it will be a neural network (nonlinear) regression, which sounds like what you want.

",25529,,25529,,5/11/2020 18:41,5/11/2020 18:41,,,,0,,,,CC BY-SA 4.0 21079,2,,20681,5/11/2020 18:59,,3,,"

Did you mean:

How do you use a pre-trained BERT model in a feature-based setting to get pre-trained word contextual embeddings?

Here is the BERT paper. I highly recommend you read it.

Firstly, by sentences, we mean a sequence of word embedding representations of the words (or tokens) in the sentence. Word embeddings are the vectors that you mentioned, and so a (usually fixed) sequence of such vectors represent the sentence input. (We don't need the input to always be divided to individual sentences)

There are mainly two ways to train the modern neural network models for sequence modeling tasks such as language modelling, machine translation and question answering:

1. The pretrain - finetune based approach (also called Transfer Learning)

In this approach, a model is first pre-trained on some auxiliary dataset whose domain usually overlaps with the domain of the dataset corresponding to the downstream task, that is, the actual task for which we need to build the model.

In the case of Natural Language Processing (NLP) tasks, such as machine translation and question answering, the pre-training that is done is usually unsupervised (no labels in the dataset) where our training objective is to maximize the log likelihood of the sentences. You can read more about Maximum Likelihood estimation here.

It's almost always a good idea to pre-train models (whether it is for NLP tasks, computer vision tasks, etc.). The more we pre-train a model, the closer its parameters are to the optimal ones, and the less work is needed in the fine-tuning phase. This is why we use large-scale datasets for pre-training.

Later, during the fine-tuning phase, we fine tune the parameters of the model to suit a downstream task, such as a task of translation between two specific languages of text. Since the parameters of the model are already at a position in the error/loss space that is already sort of good enough for the model (because we already pre-trained it with text in, say, 10 languages), the fine-tuning can, in a sense, tailor the model to suit the task a hand.

2. The feature based approach

In this approach, we take an already pre-trained model (any model, e.g. a transformer based neural net such as BERT, which has been pre-trained as described in the previous approach) and then we extract the activations from one or more layers of this pre-trained model.

This is done by simply inputting the word embedding sequence corresponding to a sentence to the pre-trained model and then extracting the activations from one or more layers (one or more of the last layers, since the features associated to the activations in these layers are far more complex and include more context) of this pre-trained model.

These activations (also called contextual embeddings) are used as input (similar to the word embeddings described earlier) to another model, such as an LSTM or even another BERT. That is, fixed features are extracted from the pre-trained model.

Usually the size of these layers is the same as the size of the input layer, i.e. they are also a sequence, but a sequence of far more complex features instead of word embeddings. If the word embedding dimension of the input to this model is $D$ and the length of the sentence which is input to the model is $L$, then each of the hidden activation layers has the same length $L$, but the hidden size of these vectors is $H \gg D$, i.e. far higher, to incorporate more complex features.

Even though the dimensionality of the hidden layers is far higher, these layers are equal in length to the input layer, and so we have a one to one correspondence between the input tokens in the sentence and each of these hidden layer activations.

The bonus in using these hidden representations is that we get to use representations of the input sentence that also incorporate CONTEXT, dependency between the words of the sentence, an important feature that is to be modeled by all models used for NLP tasks. Models such as BERT, are based on Transformer models that use an attention mechanism in order to model these contextual features.

This is a superb article describing to the last detail the attention mechanism and the Transformer model based on which models like BERT operate. For more about contextual embeddings I recommend taking a look at this article. The figure used here is from this article.

Models such as BERT make use of one half of these Transformer models, called the encoder, since the Transformer is a seq2seq or encoder-decoder model. Here is the link to the Transformer paper.

Here is a great blog on extracting contextual word embeddings from BERT using Tensorflow and Keras.

I'll also provide a link to a Kaggle Python Notebook on using Pipelines functionality from the HuggingFace community repo on github that also is used for feature extraction (contextual embeddings).
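As a minimal illustration of the feature-based approach described above, here is a sketch assuming a recent version of the HuggingFace transformers library (the model name and example sentence are arbitrary):

from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModel.from_pretrained('bert-base-uncased', output_hidden_states=True)

inputs = tokenizer('The cat sat on the mat.', return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)

# One contextual embedding per input token, taken from the last encoder layer.
contextual_embeddings = outputs.last_hidden_state   # shape: (1, sequence_length, 768)
# outputs.hidden_states is a tuple with the activations of every layer, in case
# you want to extract (or concatenate) several of the last layers instead.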

Please let me know if you have any questions, happy to help!

",26392,,26392,,5/11/2020 21:08,5/11/2020 21:08,,,,0,,,,CC BY-SA 4.0 21080,1,,,5/11/2020 19:40,,2,74,"

The first neural net I wrote was a classifier. After that, I learned that neural nets can be used for regression tasks, even quantile regression.

It has become clear to me that the usual games with extensions of OLS linear regression can be applied to neural networks.

What work has been done with Poisson-style regression via neural networks with log link functions (exponential activation function)?

",25529,,2444,,6/27/2020 23:46,6/27/2020 23:46,What work has been done with Poisson-style regression via neural networks with exponential activation function?,,0,0,,,,CC BY-SA 4.0 21082,2,,21077,5/11/2020 19:43,,1,,"

I think it says something about the training progress. Another thing you can check is the gradient norm: sometimes the training loss is really noisy, while the gradient norm is much clearer.

",36787,,,,,5/11/2020 19:43,,,,2,,,,CC BY-SA 4.0 21083,2,,21065,5/11/2020 20:13,,2,,"

Based on past publications, here are some journals and conferences where you can possibly publish or present a research paper on geometric deep learning or graph neural networks

The website http://geometricdeeplearning.com/ also provides some information about the topic and links you to several workshops, papers, and tutorials.

Here are some links to some of the past workshops on GDL.

",2444,,,,,5/11/2020 20:13,,,,0,,,,CC BY-SA 4.0 21084,1,21094,,5/11/2020 20:36,,0,209,"

For some environments, taking an action may not update the environment state. For example, a trading RL agent may take an action to buy shares of s. The state at time t, which is the time of investing, is represented as the window of the 5 previous prices of s. At t+1, the share price has changed, but it may not be as a result of the action taken. Does this affect RL learning, and if so, how? Is it required that the state is updated as a result of taking actions for agent learning to occur?

In gaming environments, it is clear how actions affect the environment. Can some rules of RL break down if no ""noticeable"" environment change takes place as a result of actions?

Update:

""actions influence the state transitions"", is my understanding correct: If transitioning to a new state is governed by epsilon greedy and epsilon is set to .1 then with .1 probability the agent will choose an action from the q table which has max reward reward for the given state. Otherwise the agent randomly chooses and performs an action then updates the q table with discounted reward received from the environment for the given action.

I've not explicitly modeled an MDP and just defined the environment and let the agent determine best actions over multiple episodes of choosing either a random action or the best action for the given state, the selection is governed by epsilon greedy.

But perhaps I've not understood something fundamental in RL. I'm ignoring MDP in large part as I'm not modeling the environment explicitly. I don't set the probabilities of moving from each state to other states.

",12964,,12964,,5/12/2020 11:10,9/18/2022 9:39,Is it required that taking an action updates the state?,,2,0,,,,CC BY-SA 4.0 21086,1,21090,,5/11/2020 21:53,,1,70,"

Are there any reference papers where it is used a KMeans-like algorithm in state space quantization in Reinforcement Learning instead of range buckets?

",36055,,,,,5/11/2020 23:11,Can I do state space quantization using a KMeans-like algorithm instead of range buckets?,,1,0,,,,CC BY-SA 4.0 21089,2,,20870,5/11/2020 22:59,,1,,"

Bernoulli naïve Bayes

$P(x \mid c_k) = \prod^{n}_{i=1} p^{x_i}_{ki} (1-p_{ki})^{(1-x_i)}$

Let's examine the example of document classification.
Let there be $K$ different text classes and $n$ different terms in our vocabulary. $x_i$ are boolean variables (0, 1) expressing whether the $i^{th}$ term exists in document $\mathbf{x}$. $\mathbf{x}$ is a vector of dimension $n$.

$P(x \mid c_k)$ is the probability that, given the class $k$, document $\mathbf{x}$ is generated. The equation uses a common trick to represent a multivariate Bernoulli event model, taking into account that when $x_i = 1$, then $1 - x_i = 0$ and inversely. In other words, for each term, it takes the probability that the document does contain this term or that it does not.

$p_{ki}$ is the probability of class $c_k$ generating the term $x_i$, that is, it could be the prior probability that a document belonging to class $k$ contains this term of the vocabulary.
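A minimal numpy sketch of this likelihood for one document (the per-class term probabilities are made-up numbers):

import numpy as np

# p[k, i]: estimated probability that a document of class k contains term i.
p = np.array([[0.9, 0.1, 0.4],    # class 0 (made-up values)
              [0.2, 0.7, 0.5]])   # class 1

x = np.array([1, 0, 1])           # document: contains terms 0 and 2, not term 1

# P(x | c_k) = prod_i p_ki^x_i * (1 - p_ki)^(1 - x_i), computed for every class at once.
likelihoods = np.prod(p**x * (1 - p)**(1 - x), axis=1)
print(likelihoods)                # one likelihood per class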

",36055,,2444,,12/13/2021 9:13,12/13/2021 9:13,,,,0,,,,CC BY-SA 4.0 21090,2,,21086,5/11/2020 23:11,,2,,"

There is this paper Representation and Reinforcement Learning for Personalized Glycemic Control in Septic Patients, presented in the Machine Learning for Health Workshop in NIPS 2017. Here is a quote from the paper where the authors describe the clustering approach:

After we generated the state representation, we used the k-means clustering algorithm to categorize millions of patient states into 500 clusters such that similar clinical states can collapse into the same cluster.

",34010,,,,,5/11/2020 23:11,,,,0,,,,CC BY-SA 4.0 21091,2,,17287,5/11/2020 23:36,,3,,"

The Problem of Overfitting

In most cases, when you increase the number of epochs a lot, your model eventually overfits. This is because your model reaches the point where it does not learn anymore but tries to remember what it has seen before. This is overfitting. So there is often a trade-off between the number of epochs and overfitting. In general, a good way to avoid overfitting, besides fine-tuning, regularization, dropout, etc., is to understand what you have from the learning curve. In most cases, overfitting happens after some epochs have passed, and, as a result, the training error still decreases, whereas the validation error fluctuates or increases. If so, you should keep only the learning updates from before overfitting appears and/or where the validation error is minimal.
Methods: early stopping, checkpointing. Useful link: https://machinelearningmastery.com/learning-curves-for-diagnosing-machine-learning-model-performance/

",36055,,18758,,5/29/2022 4:52,5/29/2022 4:52,,,,0,,,,CC BY-SA 4.0 21092,1,,,5/12/2020 0:37,,1,322,"

I'm using 9 pictures to predict the last picture,
so (40,40,9) -> unet -> (40,40,1)

but as you see the predict picture

It's not just a mask (0 or 1), it's a float,
so which loss function should I define to achieve the best U-Net result, and why?

",27529,,,,,5/12/2020 0:37,what will be the best loss function for unet to predict the each pixel values?,,0,0,,,,CC BY-SA 4.0 21093,1,,,5/12/2020 2:21,,3,89,"

Suppose $x_{t+1} \sim \mathbb{P}(\cdot | x_t, a_t)$ denotes the state transition dynamics in a reinforcement learning (RL) problem. Let $y_{t+1} = \mathbb{P}(\cdot | x_{t+1})$ denote the noisy observation or the imperfect state information. Let $H_{t}$ denote the history of actions and observations $H_{t+1} = \{b_0,y_0,a_0,\cdots,y_{t+1}\}$.

For the RL Partially Observed Markov Decision Process (RL-POMDP), the summary of the history is contained in the ""belief state"" $b_{t+1}(i) = \mathbb{P}(x_{t+1} = i | H_{t+1})$, which is the posterior distribution over the states conditioned on the history.

Now, suppose the model is NOT known. Clearly, the belief state can't be computed.

Can we use a Gaussian Process (GP) to approximate the belief distribution $b_{t}$ at every instant $t$?

Can Variational GP be adapted to such a situation? Can universal approximation property of GP be invoked here?

Are there such results in the literature?

Any references and insights into this problem would be much appreciated.

",36970,,2444,,5/12/2020 9:49,5/12/2020 9:58,Can we use a Gaussian process to approximate the belief distribution at every instant in a POMDP?,,0,3,,,,CC BY-SA 4.0 21094,2,,21084,5/12/2020 2:42,,1,,"

A very vague question. What's the objective?

Reinforcement Learning (RL) typically uses the Markov Decision Process framework, which is a sequential decision making framework. In this framework, actions influence the state transitions. In other words, RL deals with controlling (via actions) a Markov chain. The objective in RL is figure out how to take actions in an optimal (in some sense) way!

If, in the application you mentioned, the actions don't influence the state transitions and the objective is to predict states, RL is not required. It's just a regression/ time-series problem.

",36970,,,,,5/12/2020 2:42,,,,4,,,,CC BY-SA 4.0 21096,1,22042,,5/12/2020 6:55,,2,283,"

MuZero seems to use two different methods to encode actions into planes for Atari games:

  1. For the input action to the representation function, MuZero encodes historical actions as simple bias planes, scaled as $a/18$, where $18$ is the total number of valid actions in Atari.(from the appendix E of the paper)
  2. For the input action to the dynamics function, Muzero encode an action as a one-hot vector, which is tiled appropriately into planes(from the appendix F of the paper)

I'm not so sure about how to make of the term ""bias plane"".

About the second, my understanding is that, as an example, for action $4$, we first apply one-hot encoding, which gives us a zero vector of length $18$ with one in the $5$-th position(as there are $18$ actions). Then we tile it and get a zero vector of length $36$, with ones in the $5$-th and $23$-rd positions. At last, this vector is reshaped into a $6\times 6$ plane as follows:

$$ 0, 0, 0, 0, 1, 0\\ 0, 0, 0, 0, 0, 0\\ 0, 0, 0, 0, 0, 0\\ 0, 0, 0, 0, 1, 0\\ 0, 0, 0, 0, 0, 0\\ 0, 0, 0, 0, 0, 0 $$
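For concreteness, here is a small numpy sketch of the tiling and reshaping I just described (it only encodes my own interpretation, which is what I am asking about):

import numpy as np

action, num_actions = 4, 18
one_hot = np.zeros(num_actions)
one_hot[action] = 1                        # one in the 5-th position

plane = np.tile(one_hot, 2).reshape(6, 6)  # tile to length 36, reshape to a 6x6 plane
print(plane)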

",8689,,8689,,5/12/2020 12:37,6/20/2020 16:28,How's the action represented in MuZero for Atari?,,1,0,,,,CC BY-SA 4.0 21097,1,21099,,5/12/2020 6:59,,0,78,"

I'm Spanish and I don't understand the meaning of ""non-held-out"". I have tried Google Translator and online dictionaries like Longman but I can't find a suitable translation for this term.

You can find this term using this Google Search, and in articles like the following:

  1. ""computing SVD on the non-held-out data"" from here.
  2. ""The training set consists all the images and annotations containing non-held-out classes while held-out classes are masked as background during the training"" from Few-Shot Semantic Segmentation with Prototype Learning.
  3. ""A cross-validation procedure is that non held out data (meaning after holding out the test set) is splitted in k folds/sets"" from here.

What is non-held-out data and held-out data or classes?

",4920,,4920,,5/12/2020 9:33,5/12/2020 9:33,What are non-held-out data or non-held-out classes?,,1,0,,,,CC BY-SA 4.0 21099,2,,21097,5/12/2020 7:47,,1,,"

Held-out simply means ""not included"" particularly in the sense of:

This part of the data was not included in this specific training run.

Depending on the context, in all of these texts non-held-out data/classes means the data that actually was included in a particular modeling exercise.

Consider this excerpt from your first example:

For instance, Owen and Perry (2009) show a method for holding out data, computing SVD on the non-held-out data, and selecting k so as to minimize the reconstruction error between the held-out data and its SVD approximation.

It actually means:

For instance, Owen and Perry (2009) show a method for excluding data, computing SVD on the remaining data, and selecting k so as to minimize the reconstruction error between the excluded data and its SVD approximation.

So it simply talks about a particular way of train-test-validation splitting the data.
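For example, with scikit-learn the held-out part is simply the test split (the array sizes below are arbitrary):

import numpy as np
from sklearn.model_selection import train_test_split

X, y = np.random.rand(100, 5), np.random.rand(100)

# X_test / y_test is the held-out data; X_train / y_train is the non-held-out data.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)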

",27665,,,,,5/12/2020 7:47,,,,0,,,,CC BY-SA 4.0 21101,1,21104,,5/12/2020 9:51,,1,265,"

Pieter Abbeel in his deep rl bootcamp policy gradient lecture derived the gradient of the utility function with respect to $\theta$ as $\nabla U(\theta) \approx \hat{g} = 1/m\sum_{i=1}^m \nabla_\theta logP(\tau^{(i)}; \theta)R(\tau^{(i)})$, where $m$ is the number of rollouts, and $\tau$ represents the trajectory of $s_0,u_0, ..., s_H, u_H$ state action sequences.

He also explains that the gradient increases the log probabilities of trajectories that have positive reward and decreases the log probabilities of trajectories with negative reward, as seen in the picture. From the equation, however, I don't see how the gradient tries to increase the probabilities of the path with positive R?

From the equation, what I understand is that we would want to update $\theta$ in a way that moves in the direction of $\nabla U(\theta)$ so that the overall utility is maximised, and this entails computing the gradient log probability of a trajectory.

Also, why is $\theta$ omitted in $R(\tau^{(i)})$, since $\tau$ depends on the policy which is dependent on $\theta$ ?

",32780,,2444,,5/12/2020 9:53,5/13/2020 3:06,How does the gradient increase the probabilities of the path with a positive reward in policy gradient?,,2,0,,,,CC BY-SA 4.0 21103,1,,,5/12/2020 10:11,,1,48,"

I can't figure out what preprocessing of the image is needed before feeding it into the convolutional neural network. For example, I want to recognize circles on a 1000 by 1000 px photo. The learning process of a neural network occurs on 100 by 100 px (https://www.kaggle.com/smeschke/four-shapes/data). I'm having a little difficulty wrapping my head around the situation when the circle in the input image is much larger (or smaller) than 100x100 px. How then the convolution neural network determines that circle if it was learned on a dataset of a different picture's size.

For clarity, I want to submit a 454 by 430 px image to the network input:

Example of the dataset for the learning process (100 by 100 px):

Finally, I want to recognize all the circles on the input image:

",19347,,,,,5/12/2020 10:11,What pre-processing of the image is needed before feeding it into the convolutional neural network?,,0,0,,,,CC BY-SA 4.0 21104,2,,21101,5/12/2020 10:14,,0,,"

Think of a surface where the z-axis is $U$, and the x and y axes are $\theta_{1}$ and $\theta_{2}$, respectively. Since you are following the gradient direction with respect to the $\theta$ vector, you are moving in the direction that increases $U$. If $R(\tau)$ is positive, you are moving in the uphill direction, and vice-versa. More formally, you would say the following:

In the policy gradient algorithm, our update step is:

$ \theta_{new} = \theta_{old} + \alpha \nabla_{\theta}U(\theta) $

So, if we select a very bad trajectory the sum of all rewards ($R(\tau)$) will be negative and the following update will shift the $\theta_{new}$ vector away from $\nabla_{\theta}U(\theta)$ vector.

If we select a good trajectory, meaning $R(\tau)$ is positive, the update will shift the $\theta_{new}$ vector towards the $\nabla_{\theta}U(\theta)$ vector. So, it will increase the probability of selecting paths with positive $R$.
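A minimal numpy sketch of this update for a single sampled trajectory (the gradient vector and the return are made-up numbers):

import numpy as np

theta = np.array([0.5, -0.2])
alpha = 0.01

grad_log_prob = np.array([1.0, 2.0])        # estimate of the gradient of log P(tau; theta)
R = -3.0                                    # sum of rewards of the sampled trajectory

theta = theta + alpha * grad_log_prob * R   # a negative R pushes theta away from this direction
print(theta)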

",28048,,28048,,5/12/2020 10:22,5/12/2020 10:22,,,,9,,,,CC BY-SA 4.0 21105,1,,,5/12/2020 13:22,,3,39,"

How do reinforcement learning and collaborative learning overlap? What are the differences and similarities between these fields?

I feel like the results I get via google do not make the distinction clear.

",36978,,2444,,5/12/2020 14:02,5/12/2020 14:02,How do reinforcement learning and collaborative learning overlap?,,0,0,,,,CC BY-SA 4.0 21106,1,,,5/12/2020 14:33,,2,64,"

The Deep RL bootcamp on policy gradient techniques gives the update equation for the policy network in A3C as

$\theta_{i+1} = \theta_i + \alpha \times 1/m \sum_{k=1}^m\sum_{t=0}^{H-1}\nabla_{\theta}log\pi_{\theta_i}(u_t^{(k)} | s_t^{(k)})(Q(s_t^{(k)},u_t^{(k)}) - V_{\Phi_i}^\pi(s_t^{(k)})) $

However, in the actual A3C paper, the gradient update is based on a single trajectory and there is no averaging of the gradient over $m$ trajectories as defined in the video? The simple action-value actor-critic algorithm also does not seem to require an averaging over $m$ trajectories.

",32780,,,,,2/7/2021 18:07,Why a single trajectory can be used to update the policy network $\theta$ in A3C?,,1,0,,,,CC BY-SA 4.0 21107,2,,21073,5/12/2020 14:43,,0,,"
from collections import deque, namedtuple

# Assumption: Experience is just a simple container for one transition.
Experience = namedtuple('Experience', ['state', 'action', 'reward', 'done'])


class ExperienceSource():
    def __init__(self, env, agent, reward_steps):
        self.env = env
        self.agent = agent
        self.reward_steps = reward_steps

    def __iter__(self):
        histories = [deque(maxlen=self.reward_steps) for i in range(len(self.env.envs))]
        current_rewards = [0.0 for _ in range(len(self.env.envs))]
        states = self.env.reset()

        while True:

            for idx, env in enumerate(self.env.envs):
                action = self.agent.choose_action(states[idx])
                state, reward, done, _ = env.step(action)
                states[idx] = state                 # keep the latest observation per environment

                current_rewards[idx] += reward
                histories[idx].append(Experience(state, action, reward, done))

                if len(histories[idx]) == self.reward_steps:
                    yield tuple(histories[idx])

                if done:
                    yield tuple(histories[idx])
                    states[idx] = env.reset()       # restart only the finished environment
                    histories[idx].clear()
                    current_rewards[idx] = 0.0

Be aware that self.reward_steps is simply the value defined by N-1 in the following formula $$Q(s,a) = \sum_{i=0}^{N-1} \gamma^i r_i + \gamma^N V(s_N)$$ and self.env is simply an instance of GymEnvVec class from the question.
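A hedged usage sketch (it assumes the GymEnvVec and agent objects from your question, that Experience is a simple namedtuple, and agent.learn is a placeholder for your own update step):

envs = GymEnvVec(env_name='Breakout-v0', n_envs=50)
exp_source = ExperienceSource(envs, agent, reward_steps=4)

train_batch, batch_size = [], 32
for history in exp_source:
    # history is a tuple of up to reward_steps consecutive Experience tuples
    # coming from one of the environments; batch several of them for the A2C update.
    train_batch.append(history)
    if len(train_batch) >= batch_size:
        agent.learn(train_batch)   # placeholder update method
        train_batch.clear()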

",35626,,3012,,5/18/2020 9:58,5/18/2020 9:58,,,,0,,,,CC BY-SA 4.0 21108,1,,,5/12/2020 14:54,,1,60,"

In the paper ""Residual Energy-Based Models for Text Generation"" (arXiv), on page 5, they write that equation 5 is an instance of importance sampling.

Equation 5 is:

$$ P(x_t \mid x_{<t}) = P_{LM}(x_t \mid x_{<t}) \, \frac{\mathbb{E}_{x'_{>t} \sim P_{LM}(\cdot \mid x_{\leq t})}[\exp(-E_\theta (x_{<t}, \, x_t, \, x'_{>t}))]}{\mathbb{E}_{x'_{\geq t} \sim P_{LM}(\cdot \mid x_{\leq t-1})}[\exp(-E_\theta (x_{<t}, \, x'_t, \, x'_{>t}))]} \ \ .$$

The goal is to approximate sampling from a distribution from which sampling is intractable $P_\theta(Y \mid X) = P_{LM}(Y \mid X) \, \frac{\exp(-E_\theta (X, Y))}{Z_\theta(X)}$, by sampling from $P_{LM}$, from which sampling is cheaper.

I understand that they are marginalizing over $>t$ in eq. 5, and I understand the basic idea of importance sampling to change $\mathbb{E}_{x \sim p}[f(x)]$ into $\mathbb{E}_{x \sim q}[f(x) \frac{p(x)}{q(x)}]$. However, eq. 5 is not a mean or aggregate, it is a probability.

What is happening? I don't see how eq. 5 fits in the importance sampling scheme (or a self-normalizing importance sampling scheme, link). Thanks in advance!

",22545,,,,,5/12/2020 14:54,"Importance sampling eq. 5 in paper ""Residual Energy-based Models for Text Generation""",,0,0,,,,CC BY-SA 4.0 21109,1,21186,,5/12/2020 15:17,,5,663,"

One of the approaches to improving the stability of the Policy Gradient family of methods is to use multiple environments in parallel. The reason behind this is the fundamental problem we discussed in Chapter 6, Deep Q-Network, when we talked about the correlation between samples, which breaks the independent and identically distributed (i.i.d) assumption, which is critical for Stochastic Gradient Descent (SDG) optimization. The negative consequence of such correlation is very high variance in gradients, which means that our training batch contains very similar examples, all of them pushing our network in the same direction. However, this may be totally the wrong direction in the global sense, as all those examples could be from one single lucky or unlucky episode. With our Deep Q-Network (DQN), we solved the issue by storing a large amount of previous states in the replay buffer and sampling our training batch from this buffer. If the buffer is large enough, the random sample from it is much better representation of the states distribution at large. Unfortunately, this solution won't work for PG methods, at most of them are on-policy, which means that we have to train on samples generated by our current policy, so, remembering old transitions is not possible anymore.

The above excerpt is from Maxim Lapan in the book Deep Reinforcement Learning Hands-on page 284.

How does being on-policy prevent us from using the replay buffer with the policy gradients? Can you explain to me mathematically why we can't use replay buffer with A3C for instance?

",35626,,2444,,10/12/2020 12:55,10/12/2020 12:55,How does being on-policy prevent us from using the replay buffer with the policy gradients?,,1,1,,,,CC BY-SA 4.0 21110,1,,,5/12/2020 17:08,,3,129,"

I'm struggling with calculating accuracy when I do cross-validation for a deep learning model. I have two candidates for doing this:

  1. Train a model with 10 different folds, take the best accuracy of each fold (so I get 10 best accuracies), and average them.
  2. Train a model with 10 different folds and get 10 accuracy learning curves. Then average these learning curves by taking the mean of the 10 accuracies at each epoch. This gives one averaged accuracy learning curve, from which I take the highest accuracy.

Which of these two candidates is correct?

",36987,,,,,5/12/2020 21:50,Calculating accuracy for cross validation,,2,0,,,,CC BY-SA 4.0 21111,1,,,5/12/2020 19:31,,2,96,"

I have a neural network that connects $N$ input variables to $M$ output variables (qoi). By default, neural networks just give out point estimations.

Now, I want to plot some of the quantity of interests and produce also a prediction interval. To calculate the model uncertainty, I use the bootstrap method.

$$\sigma_{model}^2=\frac{1}{B-1}\sum_{b=1}^B(\hat{y}_i^b-\hat{y}_i)^2\qquad \text{with}\quad\hat{y}_i = \frac{1}{B}\sum_{b=1}^B\hat{y}_i^b$$ The $B$ training datasets are resampled from the original dataset with replacement. $\hat{y}_i^b$ is the prediction for the $i$th sample generated by the $b$th bootstrap model.
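
For concreteness, here is a minimal sketch of that model-variance estimate (my own illustration; the prediction array below is random toy data standing in for the outputs of B bootstrap-trained networks):

import numpy as np

def bootstrap_model_variance(preds):
    # preds has shape (B, n_points): preds[b, i] is \hat{y}_i^b from the b-th bootstrap model
    B = preds.shape[0]
    mean_pred = preds.mean(axis=0)                            # \hat{y}_i
    return ((preds - mean_pred) ** 2).sum(axis=0) / (B - 1)   # \sigma_model^2 per query point

preds = np.random.default_rng(0).normal(size=(50, 10))        # 50 bootstrap models, 10 query points
print(bootstrap_model_variance(preds))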

If I understood it correctly, the model uncertainty (or epistemic uncertainty) is enough to create a confidence interval.

But for the PI I also need the irreducible error $\sigma_{noise,\epsilon}^2$. $$\sigma_y^2= \sigma_{model}^2+\sigma_{noise,\epsilon}^2$$

The aleatoric uncertainty is explained in the following picture:

Is there a procedure to calculate this aleatoric uncertainty?

I read the paper High-Quality Prediction Intervals for Deep Learning and watched the corresponding YouTube video. And I read the paper Neural Network-Based Prediction Intervals.

EDIT I suggest the following algorithm to estimate the noise variance, but I am not sure if this makes sense:

",36989,,36989,,5/19/2020 19:55,5/19/2020 19:55,How to calculate the data noise variance for a prediction interval?,,0,0,,,,CC BY-SA 4.0 21112,2,,21110,5/12/2020 21:13,,2,,"

I guess you could train your model with 10 different folds and, in each fold, calculate the accuracy on the held-out fold. So you would have 10 values - one corresponding to each fold. You would then take the mean of all of them to get the average accuracy of your model.

Your first option doesn't seem great because you take the highest accuracy among folds. If, for some reason, the variance between accuracies is high for a fold, this would bias your numbers. Taking the mean or maybe the median of the accuracies might be more reasonable.

Does that help?

",36074,,,,,5/12/2020 21:13,,,,2,,,,CC BY-SA 4.0 21113,2,,21032,5/12/2020 21:49,,3,,"

I'm not familiar with the ins and outs of self-driving cars, but I imagine that the action space is not discrete. For instance, the car may want to decide what angle it needs to turn (rather than left or right). The update in Q-Learning involves taking $\max_aQ(s, a)$; this is theoretically possible for a continuous action space, but it would itself require some expensive optimisation at each time step to find the maximum. It is more likely that if RL were to be applied to self-driving cars it would be through a method that easily allows for a continuous action space, like the methods detailed in this paper.

I found this survey of Deep RL for autonomous driving that you may want to look at.

",36821,,2444,,11/23/2020 13:30,11/23/2020 13:30,,,,0,,,,CC BY-SA 4.0 21114,2,,21110,5/12/2020 21:50,,0,,"

In most cases, we take the mean of the k accuracies from k-fold cross-validation; that is, each fold in turn is used as the validation set, and once every fold has been used as the validation set, we compute the mean of the resulting accuracies.

",36055,,,,,5/12/2020 21:50,,,,0,,,,CC BY-SA 4.0 21115,2,,21046,5/12/2020 23:31,,1,,"

While it would certainly help if the link to the paper could also be posted, I will give it a shot based on what I understand from this picture.

1) For any convolutional layer, there are a few important things to configure, namely the kernel (or filter) size, the number of kernels, and the stride. Padding is also important, but it is generally defined to be zero unless mentioned otherwise. Let us consider the picture block-by-block.

The first block contains 3 convolutional layers: (i) 2 conv layers with 96 filters each, where the size of each filter is $ 3 \times 3$ (and stride $=1$ by default, since it is not mentioned), and (ii) another conv layer with the same configuration as above but with stride $=2$.

The second block is pretty much the same as the previous except the number of filters is increased to 192 for each layer that is defined.

The only considerable change in the third block is the introduction of $ 1 \times 1$ convolutional filters instead of $3 \times 3$.

And finally, a global average pooling layer is used (instead of a fully connected layer).
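
Putting it together, here is a rough PyTorch sketch of the architecture as described above (the input channels, paddings, and number of classes are my assumptions based on this description, not values taken from the paper):

import torch.nn as nn

model = nn.Sequential(
    # Block 1: two 3x3 convs with 96 filters, then a 3x3 conv with stride 2
    nn.Conv2d(3, 96, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(96, 96, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(96, 96, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    # Block 2: same structure with 192 filters
    nn.Conv2d(96, 192, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(192, 192, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(192, 192, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    # Block 3: a 3x3 conv followed by 1x1 convs, the last one mapping to the class count (10 assumed)
    nn.Conv2d(192, 192, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(192, 192, kernel_size=1), nn.ReLU(),
    nn.Conv2d(192, 10, kernel_size=1),
    # Global average pooling instead of a fully connected layer
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)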

2) As for your analysis, it is exactly the case in fully connected layers, wherein the number of units in the input layer must match the vectorized dimensions of the input data. But, in the case of CNN, we give the images directly as the input to the network. The whole idea of a CNN is to understand the spatial structure of the data by analyzing patches of the image at a time (which is what the filter size defines). This PyTorch tutorial should give an idea as to how exactly the input is given to CNN.

",36971,,,,,5/12/2020 23:31,,,,0,,,,CC BY-SA 4.0 21116,1,21132,,5/13/2020 2:18,,4,256,"

I came across these slides Natural Language Processing with Deep Learning CS224N/Ling284, in the context of natural language processing, which talk about the Jacobian as a generalization of the gradient.

I know there is a lot of topic regarding this on the internet, and trust me, I've googled it. But things are getting more and more confused for me.

In simple words, how is the Jacobian a generalization of the gradient? How can it be used in gradient descent?

",30725,,2444,,5/13/2020 11:06,5/13/2020 18:08,How is the Jacobian a generalisation of the gradient?,,1,0,,,,CC BY-SA 4.0 21117,2,,21101,5/13/2020 3:06,,0,,"

The grad log probability of the trajectory parameterised by $\theta$ tells us the direction $\theta$ should move to increase the probability of that trajectory $P(\tau;\theta)$ the most.

If the reward is positive, $\nabla U(\theta)$ tells us how much we want to increase/decrease the probability of that path $\tau$. The scalar quantity $R(\tau)$ determines the magnitude and direction of the shift. If $R(\tau)$ is positive, and $\theta$ is updated based on the equation $\theta_{new}$ = $\theta_{old} + \alpha\nabla_{\theta}U(\theta)$, then $\theta$ will move in the direction of its steepest increase, leading to an increase in the probability of $\tau$. If $R(\tau)$ is negative, then $\theta$ moves in the direction of steepest decrease, leading to a decrease in the probability of $\tau$.

",32780,,,,,5/13/2020 3:06,,,,0,,,,CC BY-SA 4.0 21118,1,,,5/13/2020 5:53,,3,224,"

According to Brian Cantwell Smith

no calculation without representation

Therefore, computers depend on models. So, we can say that AI is limited internally by the model and externally by the environment. This problem is discussed here in a previous question I have asked.

Now, consider Gödel's second incompleteness theorem

a coherent theory does not demonstrate its own coherence

Can we say that Gödel's second incompleteness theorem puts a limitation on artificial intelligence? How could AI bypass Gödel's second incompleteness theorem?

",21644,,2444,,12/11/2020 11:26,12/11/2020 18:12,Does Gödel's second incompleteness theorem put a limitation on artificial intelligence systems?,,1,2,,,,CC BY-SA 4.0 21119,1,,,5/13/2020 7:18,,3,53,"

I am trying to build a model for extractive text summarization using keras sequential layers. I am having a hard time trying to understand how to input my x data. Should it be an array of documents with each document containing an array of sentences? or should I further break it down to each sentence containing an array of words?

The y input is basically a binary classification of each sentence to check whether or not they belong to the summary of the document.

The first layer is an embedding layer and I'm using 100d Glove word embedding.

P.s: I am new to machine learning.

",37006,,,,,10/10/2020 10:04,What should the dimension of the input be for text summarization?,,1,0,,,,CC BY-SA 4.0 21120,1,21123,,5/13/2020 8:55,,3,87,"

When papers talk about the ""test time"", does this mean the phase when the model is passed with new data instances to derive the accuracy of the test data set? Or is ""test time"" the phase when the model is fully trained and launched for real-world input data?

",32092,,2444,,5/13/2020 10:54,5/13/2020 11:19,Is the test time the phase when the model's accuracy is calculated with test data set?,,1,0,,,,CC BY-SA 4.0 21121,2,,21119,5/13/2020 9:28,,1,,"

Briefly:

I think what you are looking for is an RNN (either an LSTM or a GRU) with a many-to-many topology.

Explanation:

Clearly, your input is the sentences (or, to be more precise, an embedding of your sentences, because you cannot feed raw text to the network). Then, for each sentence, you want to assign a value, which means that for n inputs you need n outputs. This is the many-to-many architecture.

Moreover, you might want to check the Bi-directional LSTM for your study. Not relevant to your question, but worth mentioning.
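
A minimal Keras sketch of such a many-to-many (Bi-)LSTM setup (the number of sentences per document and the sentence-embedding size are my assumptions; in practice you would feed pre-computed sentence vectors, e.g. averaged GloVe embeddings):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Bidirectional, LSTM, TimeDistributed, Dense

MAX_SENTENCES = 50   # sentences per document (assumption; pad/truncate to this length)
SENT_DIM = 100       # size of a pre-computed sentence embedding (assumption)

model = Sequential([
    Input(shape=(MAX_SENTENCES, SENT_DIM)),
    Bidirectional(LSTM(64, return_sequences=True)),   # one hidden state per sentence
    TimeDistributed(Dense(1, activation="sigmoid")),  # in-summary probability per sentence
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])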

For more information, refer to this

",6258,,,,,5/13/2020 9:28,,,,0,,,,CC BY-SA 4.0 21122,2,,21068,5/13/2020 9:56,,1,,"

About your question concerning ColorMaps: a cv2 ColorMap is basically just a lookup table which directly maps the intensity values of the input image to a predefined RGB color. In its essence, it is exactly what you did by categorizing the values and associating each category with a specific color value.

Most of the cv2 ColorMaps just have a little more detail: most of them have either 64 (""Rainbow"", ""Hot"", ...) or 256 (""Jet"", ""Magma"", ...) steps, and some, like ""Winter"", have 11. They come from the GNU Octave or Matlab color palettes.

If you really want to build this lookup table yourself, the easiest way to achieve a high level of granularity for the individual steps is to use the HSV color space:

import numpy as np
import cv2

N_STEPS = 20
h = np.linspace(0, 180, N_STEPS, endpoint=False) # cv2 Hue range is [0, 179]  
s = np.ones_like(h)*255   # cv2 saturation range is from [0, 255]; adjust it to your liking
v = np.ones_like(h)*255   # cv2 value range is from [0, 255]; adjust it to your liking
hsv_colormap = np.dstack([h,s,v])

rgb_colormap = cv2.cvtColor(np.uint8(hsv_colormap), cv2.COLOR_HSV2RGB)

IMG_SIZE = 100
#intensities = np.random.random((IMG_SIZE,IMG_SIZE)) # from 0 to 1
intensities = np.array([np.linspace(0,1, IMG_SIZE, endpoint=False), ]*IMG_SIZE) # from 0 to 1
intensity_indices = np.uint8(N_STEPS*intensities) # map insity ranges to discretized intervals 

color_mapped_intensities = rgb_colormap[0,intensity_indices,:] 
cv2.imshow('colormap', color_mapped_intensities)

cv2.waitKey(0)

You can play around with the Hue value range that you feed into the color map.

Happy color mapping to you !

",3132,,,,,5/13/2020 9:56,,,,0,,,,CC BY-SA 4.0 21123,2,,21120,5/13/2020 10:06,,2,,"

If it is not defined otherwise, testing is the phase where the model is passed new data instances to derive its score on the test set. It should not be confused with the validation set.

A validation dataset is a sample of data held back from the training of your model that is used to give an estimate of model skill while tuning the model's hyperparameters during training. There are a lot of validation methods, with k-fold cross-validation being one of the most popular.

In k-fold cross-validation, the original training set is randomly partitioned into k equally sized subsamples. Of the k subsamples, a single subsample is retained as the validation data for validating the model, and the remaining k − 1 subsamples are used as training data. The cross-validation process is then repeated k times, with each of the k subsamples used exactly once as the validation data. The k results can then be averaged to produce a single estimate. The advantage of this method over repeated random sub-sampling is that all observations are used for both training and validation, and each observation is used for validation exactly once.
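
For example, a quick sketch of k-fold cross-validation with scikit-learn (toy data and a simple classifier, just for illustration):

from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
import numpy as np

X = np.random.rand(100, 5)          # toy features
y = np.random.randint(0, 2, 100)    # toy binary labels
scores = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[val_idx], y[val_idx]))   # validation accuracy of this fold
print(np.mean(scores))                                   # averaged estimate of model skill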

The validation dataset is different from the test set that is also held back from the training of the model, but is instead used to give an unbiased estimate of the skill of the final tuned model when comparing or selecting between final models.

",36055,,36055,,5/13/2020 11:19,5/13/2020 11:19,,,,0,,,,CC BY-SA 4.0 21124,1,,,5/13/2020 10:32,,4,580,"

One way of understanding the difference between value function approaches, policy approaches and actor-critic approaches in reinforcement learning is the following:

  • A critic explicitly models a value function for a policy.
  • An actor explicitly models a policy.

Value function approaches, such as Q-learning, only keep track of a value function, and the policy is directly derived from that (e.g. greedily or epsilon-greedily). Therefore, these approaches can be classified as a ""critic-only"" approach.

Some policy search/gradient approaches, such as REINFORCE, only use a policy representation, therefore, I would argue that this approach can be classified as an ""actor-only"" approach.

Of course, many policy search/gradient approaches also use value models in addition to a policy model. These algorithms are commonly referred to as ""actor-critic"" approaches (well-known ones are A2C / A3C).

Keeping this taxonomy intact for model-based dynamic programming algorithms, I would argue that value iteration is an actor-only approach, and policy iteration is an actor-critic approach. However, not many people discuss the term actor-critic when referring to policy iteration. How come?

Also, I am not familiar with any model-based/dynamic-programming-like actor-only approaches. Do these exist? If not, what prevents this from happening?

",34351,,2444,,5/13/2020 10:58,12/20/2022 6:45,Would you categorize policy iteration as an actor-critic reinforcement learning approach?,,2,0,,,,CC BY-SA 4.0 21127,2,,17790,5/13/2020 10:59,,0,,"

I don't know if this comment will be helpful, but shouldn't the sum of the log determinant of the Jacobian (LDJ) have opposite signs in the forward and inverse passes? I'm not talking about the LDJ being the sum of the positive scaling function in the forward pass and the sum of the negative of the scaling function in the inverse pass; I'm talking about the LDJ itself.

For example, from the change of variables formula, $$p_z(z) \ dz = p_x(x) \ dx$$ dividing both sides by $dz$ gives $$ p_z(z) = p_x(x) \left|\frac{dx}{dz}\right|$$ and taking the log of both sides gives $$ \ln(p_z(z)) = \ln(p_x(x)) + \ln\left(\left|\frac{dx}{dz}\right|\right)$$ Now, if we repeat this process but divide by $dx$ instead, $$ p_x(x) = p_z(z) \left|\frac{dz}{dx}\right|$$ and taking the log and rearranging for $\ln(p_z(z))$ gives $$ \ln(p_z(z)) = \ln(p_x(x)) - \ln\left(\left|\frac{dz}{dx}\right|\right)$$

A change of sign occurs when the Jacobian is reversed. Could this be your problem? Also, as @chris-cundy said, you can have a probability density greater than 1 at a single point. Remember, it is the integral over all space that cannot be greater than one; an individual point being greater than 1 is fine.

",36829,,,,,5/13/2020 10:59,,,,0,,,,CC BY-SA 4.0 21128,2,,21124,5/13/2020 11:34,,3,,"

Keeping this taxonomy intact for model-based Dynamic programming algorithms, I would argue that value iteration is an Actor-only approach, and policy iteration is an Actor-Critic approach. However, not many people discuss the term Actor-Critic when referring to Policy Iteration. How come?

Both policy iteration and value iteration are value-based approaches. The policy in policy iteration is either arbitrary or derived from a value table. It is not modelled separately.

To count as an Actor, the policy function needs to be modelled directly as a parametric function of the state, not indirectly via a value assessment. You cannot use policy gradient methods to adjust an Actor's policy function unless it is possible to derive the gradient of the policy function with respect to parameters that control the relationship between state and action. An Actor policy might be denoted as $\pi(a|s,\theta)$, and the parameters $\theta$ are what make it possible to learn improvements.

Policy iteration often generates an explicit policy, from the current value estimates. This is not a representation that can be directly manipulated, instead it is a consequence of measuring values, and there are no parameters that can be learned. Therefore the policy seen in policy iteration cannot be used as an actor in Actor-Critic or related methods.

Another way to state this is that the policy and value functions in DP are not separate enough to be considered as an actor/critic pair. Instead they are both views of the same measurement, with the value function being closer to raw measurements and policy being a mapping of the value function to policy space.

Also, I am not familiar with any model-based/dynamic programming like actor only approaches? Do these exist? If not, what prevents this from happening?

The main difference between model-based dynamic programming and model-free methods like Q-learning, or SARSA, is that the dynamic programming methods directly use the full distribution model (which can be expressed as $p(r, s'|s,a)$) to calculate expected bootstrapped returns.

There is nothing in principle stopping you substituting expected returns calculated in this way into REINFORCE or Actor-Critic methods. However, it may be computationally hard to do so - these methods are often chosen when action space is large for instance.

Basic REINFORCE using model-based expectations would be especially hard as you need an expected value calculated over all possible trajectories from each starting state - if you are going to expand the tree of all possible results to that degree, then a simple tree search algorithm would perform better, and the algorithm then resolves to a one-off planning exhaustive tree search.

Actor-Critic using dynamic programming methods for the Critic should be viable, and I expect you could find examples of it being done in some situations. It may work well for some card or board games, if the combined action space and state space is not too large - it would behave a little like using Expected SARSA for the Critic component, except also run expectations over the state transition dynamics (whilst Expected SARSA only runs expectations over policy). You could vary the depth of this too, getting better estimates theoretically at the expense of extra computation (potentially a lot of extra computation if there is a large branching factor)

",1847,,1847,,5/13/2020 13:41,5/13/2020 13:41,,,,7,,,,CC BY-SA 4.0 21129,1,,,5/13/2020 13:32,,2,29,"

I have a quick question regarding the use of different latent spaces to represent a distribution. Why is it that a Gaussian is usually used to represent the latent space of the generative model rather than say a hypercube? Is it because a Gaussian has most of its distribution centred around the origin rather than a uniform distribution which uniformly places points in a bounded region?

I've tried modelling different distributions using a generative model with both a Gaussian and Uniform distribution in the latent space and the Uniform is always slightly restrictive when compared with a Gaussian. Is there a mathematical reason behind this?

Thanks in advance!

",36829,,,,,5/13/2020 13:32,Why do hypercube latent spaces perform poorer than Gaussian latent spaces in generative neural networks?,,0,0,,,,CC BY-SA 4.0 21130,2,,21106,5/13/2020 15:14,,1,,"

I guess the gradient of the expectation of the utility function in policy gradient methods, $\nabla_{\theta}J(\theta) = E_{\tau \sim p(\tau ; \theta)}[r(\tau)\nabla_{\theta}\log p(\tau;\theta)]$, can be approximated using a single sample trajectory, as shown in a deep reinforcement learning lecture by Stanford, where $\nabla_{\theta}J(\theta) \approx \sum_{t > 0}r(\tau)\nabla_{\theta}\log\pi_{\theta}(a_t |s_t)$, so an average over sampled trajectories is not needed to compute the gradient used to update $\theta$ in the direction of the gradient.

",32780,,,,,5/13/2020 15:14,,,,0,,,,CC BY-SA 4.0 21131,1,,,5/13/2020 15:24,,2,73,"

In the decision tree algorithm, why do we use a weighted average of child entropies when we calculate information gain? What is wrong about using the arithmetic mean of entropies?

",37017,,2444,,5/13/2020 20:22,5/13/2020 20:24,Why do we use a weighted average of child entropies when we calculate information gain?,,0,2,,,,CC BY-SA 4.0 21132,2,,21116,5/13/2020 15:38,,1,,"

In short, the Jacobian matrix is a generalization of the gradient for vector-valued functions.

Recall that the gradient is a vector of partial derivatives of a multi-variable function. So, consider a multi-variable function of the form $f: \mathcal{X}_1 \times \mathcal{X}_2 \times \dots \times \mathcal{X}_N \rightarrow \mathcal{Y}$. The output of this function is $f(x_1, x_2, \dots, x_N) = y$, where $x_i \in \mathcal{X}_i$, for $i=1, \dots, N$ and $y \in \mathcal{Y}$. And the gradient is $\nabla f = \left[ \frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_N} \right] \in \mathbb{R}^N$.

A vector-valued function is a function whose output is a vector, i.e. a function of the form $f: \mathcal{X} \rightarrow \mathcal{Y}_1 \times \mathcal{Y}_2 \times \dots \times \mathcal{Y}_M$ (I am not sure if this notation is rigorous enough!), so the output of this function is a vector $f(x) = [y_1, y_2, \dots, y_M]$, where $x \in \mathcal{X}$ and $y_i \in \mathcal{Y}_i$, for $i = 1, \dots, M$. You can also view a vector-valued function $f$ as a vector of scalar-valued functions $[f_1, f_2, \dots, f_M]$, where $f_i: \mathcal{X} \rightarrow \mathcal{Y}_i$, for all $i$.

You can also have multi-variable vector-valued functions, i.e. functions of the form

$$f: \mathcal{X}_1 \times \mathcal{X}_2 \times \dots \times \mathcal{X}_N \rightarrow \mathcal{Y}_1 \times \mathcal{Y}_2 \times \dots \times \mathcal{Y}_M.$$

The Jacobian matrix is an $M \times N$ matrix (using the common convention of one row per output component $f_i$ and one column per input variable), with one partial derivative for each combination of output and input.
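
As a small concrete illustration (my own example, not from the linked slides), take $f(x_1, x_2) = (x_1^2 x_2, \; 5x_1 + \sin x_2)$. Each row of the Jacobian is the gradient of one component $f_i$:

$$J_f(x_1, x_2) = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} \end{bmatrix} = \begin{bmatrix} 2 x_1 x_2 & x_1^2 \\ 5 & \cos x_2 \end{bmatrix}.$$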

If you want to optimize a multi-variable vector-valued function, you can make use of the Jacobian, in a similar way that you make use of the gradient in the case of multi-variable functions, but, although I've seen it in the past, I can't provide now a concrete example of an application of the Jacobian (but the linked slides probably do that).

",2444,,2444,,5/13/2020 18:08,5/13/2020 18:08,,,,0,,,,CC BY-SA 4.0 21134,2,,13544,5/13/2020 19:48,,0,,"

It does not seem very useful to apply a local-minimum search (such as SGD) to another local-minimum search. Existing successful solutions combine global-minimum search techniques with local-minimum search.

For example, it's beneficial to combine simulated annealing with SGD to optimize its learning rate and/or Nesterov momentum. In this case, you don't even need to spawn a population of SGD optimizers. But you can also try population-based algorithms like evolutionary programming.

The idea of optimizing optimizers is very curious, but it's rather more useful to try it on global optimization algorithms.

",36779,,,,,5/13/2020 19:48,,,,1,,,,CC BY-SA 4.0 21135,1,,,5/13/2020 20:17,,2,87,"

I understand that L1 and L2 regularization helps to prevent overfitting. My question is then, does that mean they also help a neural network learn faster as a result?

The way I'm thinking is that since the regularization techniques reduce weights (to 0 or close to 0, depending on whether it's L1 or L2) that are not important to the neural network, this would, in turn, result in "better values" for the output neurons, right? Or perhaps I am completely wrong.

For example, suppose I have a neural network that is to train a snake to move around an NxN environment. With regularization, will the snake learn faster, in terms of surviving longer in the game?

",33579,,2444,,1/29/2021 23:12,2/24/2022 2:03,Does L1/L2 Regularization help reach an optimum result faster?,,1,0,,,,CC BY-SA 4.0 21140,1,,,5/14/2020 1:46,,1,70,"

I have retrospective data for a sort of ""behaviour policy"" which I will use to train a deep q network to learn a target greedy policy. After learning the Q values for this target policy, can we make the conclusion that because the Q value for the target policy, $Q(s,\pi_e(s))$ is higher than the Q values for the behaviour policy, $Q(s,\pi_b(s))$ at all states encountered, where $\pi_e$ is the policy output by deep Q-learning and $\pi_b$ is the behaviour policy, then this target policy has better performance than the behaviour policy?

I know the proper way is to run the policy and do an empirical comparison of some sort. However, that is not possible in my case.

",32780,,2444,,5/14/2020 10:40,2/8/2021 11:04,Is it possible to prove that the target policy is better than the behavioural policy based on learned Q values?,,1,0,,,,CC BY-SA 4.0 21142,1,25979,,5/14/2020 5:40,,6,836,"

Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind such as learning and problem-solving.

According to the Wikipedia article on swarm intelligence

Swarm intelligence (SI) is the collective behavior of decentralized, self-organized systems, natural or artificial. The concept is employed in work on artificial intelligence.

The application of swarm principles to robots is called swarm robotics, while 'swarm intelligence' refers to the more general set of algorithms.

SI systems consist typically of a population of simple agents or boids interacting locally with one another and with their environment. The inspiration often comes from nature, especially biological systems.

These two terms seem to be related, especially in their application in computer science and software engineering. Is one a subset of another? Is one tool (SI) is used to build a system for the other(AI)? What are their differences and why are they significant?

",30725,,-1,,6/17/2020 9:57,1/14/2022 17:56,What is the difference between artificial intelligence and swarm intelligence?,,3,0,,,,CC BY-SA 4.0 21143,2,,21140,5/14/2020 5:59,,-1,,"

No, mainly because these are all stochastic approximations and may not represent the true values.

Almost nothing good can be said about NN approximations to value and Q functions (at least according to a professor I have had).

",32390,,,,,5/14/2020 5:59,,,,1,,,,CC BY-SA 4.0 21144,2,,21142,5/14/2020 6:38,,2,,"

Well, one of the simpler definitions for SI sounds like this:

“The emergent collective intelligence of groups of simple agents.” (Bonabeau et al., 1999)

So, in order to get to SI, you have to use some kind of algorithms/AI to get simple intelligent agents. It's just cooperative intelligence, or cooperative AI if you wish. SI just uses today's AI/ML techniques to build the swarm, in the same manner as reinforcement learning uses AI/ML techniques to make agents that can behave reasonably in large spaces by approximating value functions V(S) and policies pi(S). I hope this helps a little.

So AI/ML is kind of a tool plugged into SI, as SI is a field with its own algorithm definitions and theory.

",18676,,,,,5/14/2020 6:38,,,,1,,,,CC BY-SA 4.0 21145,1,,,5/14/2020 7:55,,2,53,"

I have a dataset with MRI of patients with a specific disease that affects the brain and another dataset with MRI of healthy patients.

I want to create a classifier (using neural networks) to classify if the MRI of a new patient shows the presence of the illness or not.

First of all, I extracted the brain from all the MRIs (the so-called skull stripping) using the BET tool found in FSL.

I have three questions for you

  1. As the input to the training phase, I want to give the whole extracted brains (possibly in the nii format). What kind of preprocessing steps do I need to apply once I've extracted the brains (before passing it to the classifier)?

  2. Do you know any better tool for skull stripping?

  3. Do you know a tool (or library) that takes as input a nii files and allows me to create a classifier that uses neural networks?

",36363,,2444,,5/14/2020 10:28,5/14/2020 12:18,Are my steps correct for a proper classification of a sick brain?,,1,1,,,,CC BY-SA 4.0 21146,2,,8607,5/14/2020 8:06,,0,,"

Your network without regularization does not appear to be overfitting, but rather it appears to be converging to a minimum. I am actually a bit surprised it is doing as well as it is, given that your data set is small. So you don't need regularization. If you want to improve the accuracy, you might try using an adjustable learning rate. The Keras callback ReduceLROnPlateau can be used for this. Documentation is here. Also use the callback ModelCheckpoint to save the model with the lowest validation loss. Documentation is here. It would help a lot if you posted your model code. I have found that, if you do encounter overfitting, dropout works more effectively than regularization.
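
A minimal sketch of how those two callbacks are typically wired up (the file name and the specific hyperparameter values are just placeholders, and x_train/x_val etc. stand in for your own data):

from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint

# halve the learning rate when the validation loss stops improving
reduce_lr = ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=3, verbose=1)
# keep only the weights with the lowest validation loss seen so far
checkpoint = ModelCheckpoint("best_model.h5", monitor="val_loss", save_best_only=True, verbose=1)

# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=100,
#           callbacks=[reduce_lr, checkpoint])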

",33976,,,,,5/14/2020 8:06,,,,0,,,,CC BY-SA 4.0 21147,1,,,5/14/2020 8:21,,1,74,"

So far I have seen TD3 and DDPG benchmarks on Pybullet environments, but I am looking for SAC benchmarks on Pybullet too, anyone can help?

",36603,,,,,5/14/2020 8:21,Benchmarking SAC on Pybullet,,0,0,,,,CC BY-SA 4.0 21148,1,,,5/14/2020 8:24,,3,94,"

I have, like you see, just a general question about the combination of fuzziness and neural networks. I understood it as follows

  1. Fuzzy neural networks as a hybrid system: the neural network helps me to find the optimal parameters related to the fuzzy system, for example, the rules or the membership function

  2. Adaptive neural fuzzy inference systems (ANFIS): the NN helps me to find the optimal parameters related to the fuzzy inference system. What are some examples here?

I cannot intuitively grasp the difference between these two.

",37039,,2444,,5/14/2020 10:07,5/14/2020 10:07,What is the difference between fuzzy neural networks and adaptive neuro fuzzy inference systems?,,0,0,,,,CC BY-SA 4.0 21149,1,21151,,5/14/2020 9:01,,3,409,"

I have implemented a simple Q-learning algorithm to minimize a cost function by setting the reward to the inverse of the cost of the action taken by the agent. The algorithm converges nicely, but there is some difference I get in the global cost convergence for different orders of the reward function. If I use the reward function as:

$$\text{reward} = \frac{1}{(\text{cost}+1)^2}$$

the algorithm converges better (lower global cost, which is the objective of the process) than when I use the reward as:

$$\text{reward} = \frac{1}{(\text{cost}+1)}$$

What could be the explanation for this difference? Is it the issue of optimism in the face of uncertainty?

",27231,,2444,,10/8/2020 11:46,10/8/2020 11:46,Why is the reward function $\text{reward} = 1/{(\text{cost}+1)^2}$ better than $\text{reward} =1/(\text{cost}+1)$?,,1,0,,,,CC BY-SA 4.0 21150,2,,7390,5/14/2020 9:11,,0,,"

Though the word ""Advantage"" in the actor-critic realm has been used to refer to the difference between the state-action value and the state value, A2C brings in the ideas of A3C. In A3C, several worker networks interact with different copies of the environment (asynchronous learning) and update a master network after a set of steps. This was meant to solve instability issues associated with both the temporal-difference update method and the correlations between the neural network's predicted and target values. However, it was noticed by OpenAI that there was no need for the asynchrony, i.e. there was no practical benefit to having different worker networks. Instead, they had the same copy of the network interact with different copies of the environment (one working from the beginning, another working backwards from the end), and the updates happen at once, without the master lagging behind as in A3C. The removal of asynchrony gave rise to A2C.

",27231,,,,,5/14/2020 9:11,,,,0,,,,CC BY-SA 4.0 21151,2,,21149,5/14/2020 9:21,,3,,"

Reinforcement learning (RL) control maximises the expected sum of rewards. If you change the reward metric, it will change what counts as optimal. Your reward functions are not the same, so will in some cases change the priority of solutions.

As a simple example, consider a choice between trajectories with costs A(0,4,4,4) and B(1,1,1,1). In the original cost formula B is clearly better, with 4 total cost compared with A's cost of 12 - A just has one low cost at the beginning, which I put in deliberately as it exposes the problem with your conversion.

In your two reward formulae:

reward = 1/(cost+1)**2. 
  A: 1.0 + 0.04 + 0.04 + 0.04 = 1.12
  B: 0.25 + 0.25 + 0.25 + 0.25 = 1.0

reward = 1/(cost+1).
  A: 1.0 + 0.2 + 0.2 + 0.2 = 1.6
  B: 0.5 + 0.5 + 0.5 + 0.5 = 2.0

So with this example (numbers carefully chosen), maximising the total reward favours A for sum of inverse squares but B for sum of inverses, whilst B should be the clear preference for minimising sum of costs. It is possible to find examples for both of your formulae where the best sum of rewards does not give you the lowest cost.

In your case, if you truly want to minimise total cost, then your conversion to rewards should be:

reward = -cost

Anything else is technically changing the nature of the problem, and will result in different solutions that may not be optimal with respect to your initial goal.

",1847,,1847,,5/14/2020 9:48,5/14/2020 9:48,,,,2,,,,CC BY-SA 4.0 21152,1,,,5/14/2020 9:37,,2,59,"

I am actually working with the iris dataset from sklearn and trying to understand the ANFIS package for Python. But that does not really matter! I have a more general question.

While thinking about adaptive neuro-fuzzy inference systems (ANFIS), a general question came to my mind. I don't really understand: in general, why is ANFIS necessary?

So, for example, if I want to predict classes for this iris dataset, I can also use a supervised learning method or a neural network and get the result.

In ANFIS, I do nothing other than splitting the input attributes into linguistic terms and assigning membership functions to them. At the end of the day, I will receive ""predictions"" for the input values, which are classes.

But - with the ANFIS package in Python - I cannot see if my membership functions have changed during training or what rules the network constructed. So, I cannot really see why this is useful. Maybe it is just because I usually use the iris dataset for supervised learning.

",37039,,2444,,5/14/2020 10:08,5/14/2020 10:08,Why is ANFIS important in general?,,0,0,,,,CC BY-SA 4.0 21153,1,,,5/14/2020 10:22,,2,39,"

I want to build a reinforcement learning model that takes a camera picture as input and learns online (in the machine-learning sense). Based on the position of an object in the camera image, I want the model to output an action. That action would drive a stepper motor that moves either to the right or to the left. This process would be repeated until a given goal/position is reached.

I can't go to the lab at the moment, so I wrote a virtual environment and let the agent live in that.

I am trying a neural network with the cross-entropy function. For small environments, this works fine. However, when I increase the size of the environment, the computation becomes really really slow and the model needs a lot of data input until it starts to learn. Also, it only learns offline. But what I would rather want is a model that learns online and only takes a few tries until it understands the underlying pattern. That isn't a problem for the virtual environment, since I can easily get thousands of data samples. But in the real environment, it would take ages this way.

  • Is there an online reinforcement algorithm that could help me out (instead of training the neural network with the cross-entropy loss function)?
",37042,,1671,,5/14/2020 22:02,5/14/2020 22:02,Is there an online RL algorithm that receives as input a camera frame and produces an action as output?,,0,1,,,,CC BY-SA 4.0 21154,2,,21145,5/14/2020 11:44,,2,,"

It looks like everything you want is available with the Deep Learning Toolkit (DLTK) for Medical Imaging

There is also a blog: An Introduction to Biomedical Image Analysis with TensorFlow and DLTK

There is a DataCamp course that walks you through most of the process but instead of a classifier they use deep learning to reconstruct brain images. They provide a link to their MNIST classifier example which should be easy to adapt for your purpose. See: Reconstructing Brain MRI Images Using Deep Learning (Convolutional Autoencoder)

ResearchGate has a thread that may help: What is the appropriate way to use Nifti files in deep learning?

",5763,,5763,,5/14/2020 12:18,5/14/2020 12:18,,,,0,,,,CC BY-SA 4.0 21155,1,,,5/14/2020 12:26,,7,1431,"

What is the difference between the prediction (value estimation) and control problems in reinforcement learning?

Are there scenarios in RL where the problem cannot be distinctly categorised into the aforementioned problems and is a mixture of the problems?

Examples where the problem cannot be easily categorised into one of the aforementioned problems would be nice.

",,user9947,2444,,1/2/2022 9:53,1/2/2022 9:53,What is the difference between the prediction and control problems in the context of Reinforcement Learning?,,2,0,,,,CC BY-SA 4.0 21156,2,,20995,5/14/2020 13:41,,1,,"

In the DRL nanodegree in Udacity, the instructor says it is possible to combine on- and off-policy learning and suggests the following paper where this has been done: Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic (ICLR 2017). Citing the paper:

The core idea is to use the first-order Taylor expansion of the critic as a control variate, resulting in an analytical gradient term through the critic and a Monte Carlo policy gradient term consisting of the residuals in advantage approximations. The method helps unify policy gradient and actor-critic methods: it can be seen as using the off-policy critic to reduce variance in policy gradient or using on-policy Monte Carlo returns to correct for bias in the critic gradient.

The authors provide an open source implementation of it in https://github.com/shaneshixiang/rllabplusplus

There is a follow-up paper by the same authors also addressing this problem: Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning (NIPS 2017).

The Related Work section in both papers might be also worth looking at.

",34010,,,,,5/14/2020 13:41,,,,3,,,,CC BY-SA 4.0 21157,2,,21155,5/14/2020 13:47,,6,,"

Prediction is the problem of predicting any feature of the environment. In reinforcement learning, the typical feature is the reward or return, but this doesn't have to be always the case. See Multi-timescale nexting in a reinforcement learning robot (2011) by Joseph Modayil et al.

Control is the problem of estimating a policy. Clearly, the term control is related to control theory. In fact, the term control is often used as a synonym for action. See Is there any difference between a control and an action in reinforcement learning?. Similarly, the term controller is also used as a synonym for agent. For example, in the paper Metacontrol for Adaptive Imagination-Based Optimization (2017) by Jessica B. Hamrick et al. the term meta-controller is used to refer to an agent. A controlled system can also refer to the environment.

Section 14.1 of the book Reinforcement learning: an introduction (2nd edition) provides more details about the distinction between prediction and control and how this distinction is related to psychological concepts.

",2444,,2444,,9/4/2020 22:51,9/4/2020 22:51,,,,0,,,,CC BY-SA 4.0 21158,2,,20968,5/14/2020 14:19,,2,,"

In the context of RL, for a policy to be parameterised it typically means we explicitly model the policy and is common in policy gradient methods.

Consider value based methods such as Q-learning where our policy is usually something like $\epsilon$-greedy where we choose our action using the following policy

\begin{align} \pi(a|s) = \left\{ \begin{array}{ll} \arg \max_a Q(s,a) & \text{with probability } 1-\epsilon\;; \\ \text{random action} & \text{with probability } \epsilon\;. \end{array}\right. \end{align} Here we have parameterised the policy with $\epsilon$ but the learning is done by learning the Q-functions. When we parameterise a policy we will explicitly model $\pi$ by the following: $$\pi(s|a,\boldsymbol{\theta}) = \mathbb{P}(A_t = a | S_t=s, \boldsymbol{\theta}_t = \boldsymbol{\theta})\;.$$ Learning is now done by learning the parameter $\boldsymbol{\theta}$ that maximise some performance measure $J(\boldsymbol{\theta})$ by doing approximate gradient ascent updates of the form $$\boldsymbol{\theta}_{t+1} = \boldsymbol{\theta}_t + \alpha \hat{\Delta J(\boldsymbol{\theta}_t)}. $$

Note that, as per the Sutton and Barto textbook, $\widehat{\nabla J(\boldsymbol{\theta}_t)}$ is a noisy, stochastic estimate of $\nabla J(\boldsymbol{\theta}_t)$, where the former approximates the latter in expectation.

The policy can be parameterised in any way as long as it is differentiable with respect to the parameters. Commonly in Deep RL the policy is parameterised as a neural network so $\boldsymbol{\theta}$ would be the weights of the network.
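
For instance, here is a minimal PyTorch sketch of a parameterised policy and a REINFORCE-style update (my own illustration; the state/action dimensions, the learning rate, and the way the return G is obtained are all assumptions):

import torch
import torch.nn as nn

# pi(a|s, theta): theta is the set of network weights
policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2), nn.Softmax(dim=-1))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reinforce_update(states, actions, G):
    # states: (T, 4) float tensor, actions: (T,) long tensor, G: scalar return of the trajectory
    probs = policy(states)
    log_probs = torch.log(probs.gather(1, actions.unsqueeze(1)).squeeze(1))
    loss = -(G * log_probs).sum()      # minimising this performs gradient *ascent* on J(theta)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()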

",36821,,,,,5/14/2020 14:19,,,,0,,,,CC BY-SA 4.0 21159,1,,,5/14/2020 14:19,,24,4355,"

I'm a novice researcher, and as I started to read papers in the area of deep learning I noticed that the implementation is normally not added and is needed to be searched elsewhere, and my question is how come that's the case? The paper's authors needed to implement their models anyway in order to conduct their experimentations, so why not publish the implementation? Plus, if the implementation is not added and there's no reproducibility, what prevents authors from forging results?

",36083,,34157,,5/17/2020 22:42,1/13/2022 23:51,Why do most deep learning papers not include an implementation?,,3,2,,,,CC BY-SA 4.0 21161,2,,21159,5/14/2020 14:44,,18,,"

The paper's authors needed to implement their models anyway in order to conduct their experimentations, so why not publish the implementation?

Some papers and authors actually provide a link to their own implementation, but most of the papers (that I have read) don't provide it, although some third-party implementations may already be available on Github (or other code-hosting sites) when you are reading the paper. There may be different reasons why the author(s) of a paper don't provide a reference implementation

  • They use some closed-source software or maybe it makes use of other resources that cannot be shared
  • Their implementation is a mess and, so, from a pedagogical point of view, it's quite useless
  • This may encourage other people to try to reproduce their results with different implementations, so it may indirectly encourage people to do research on the same topic (but maybe not providing a reference implementation could actually have the opposite effect!)

Plus, if the implementation is not added and there's no reproducibility, what prevents authors from forging results?

I had some experience as a researcher, but not enough to answer this question precisely.

Nevertheless, from some reviews of papers I have read (e.g. on OpenReview), in most cases, the reviewers are interested in the consistency of the results, the novelty of the work, the clarity and structure of the paper, etc. I think that, in most cases, they probably trust the provided results, also because, often, for reproducibility, researchers are expected to describe their models and parameters in detail, provide plots, etc., but I don't exclude that there are cases of people that try to fool the reviewers. For example, watch this video where Phil Tabor comments on ridiculous attempts to fool people and plagiarism by Siraj Raval.

",2444,,2444,,1/13/2022 23:51,1/13/2022 23:51,,,,0,,,,CC BY-SA 4.0 21163,2,,21159,5/14/2020 15:02,,6,,"

One can argue for some understandable human reasons, but there is a bad trend of falsified results in deep learning research papers that propose some novel solution or even claim to improve on state-of-the-art model performance. And it's not just a few papers that lie; it's a large portion of them. The reason for that is even sadder: most so-called deep learning research papers just describe some empirical experiments, without any math or proof of any theorem, and so it's easy to cheat.

So, objectively, if the only thing you propose in your paper is your empirical results, you must back them up by sharing source code. Otherwise, your work will be ignored.

",36779,,36779,,5/14/2020 15:08,5/14/2020 15:08,,,,0,,,,CC BY-SA 4.0 21165,1,,,5/14/2020 17:24,,1,23,"

I have read several papers about super-resolution with CNNs, where a low-resolution image is reconstructed into a high-resolution image. What I don't understand is why it is necessary to interpolate the low-resolution image at the beginning to a size that matches the high-resolution target.

What is the idea behind that? If I have an image-to-image transformation, what is the benefit, for a neural network, of having the input size match the output size?

",35615,,,,,5/14/2020 17:24,"Super-Resolution with Convolutional Neuronal Networks, why interpolation at the beginning?",,0,0,,,,CC BY-SA 4.0 21167,2,,21159,5/14/2020 18:35,,6,,"
  1. The first reason described in nbro's answer can definitely be an important one; authors may have implemented their software using code that they can't share. There's a lot of research coming out of companies (large and small), and they may use all sorts of proprietary libraries that were built in the company and cannot be distributed outside.

  2. As described in this answer, sometimes researchers prefer to keep the code to themselves because it may give them an ""advantage"" over other researchers fot future work / follow-up research in the same area. I'm not saying that I believe this is a good reason, it definitely doesn't sound like it's good for the overall benefit of science... but it may be understandable in a ""publish or perish"" world where there's quite a bit of pressure to keep publishing frequently if you want your academic career to survive.

  3. Also described in more detail in the answer I linked above, research code is often messy, and not pretty. Nbro also mentioned this, though I personally don't feel like the rationale is ""it's too messy to be useful"", and more often it's more along the lines of ""it's so messy that I'm too embarassed to share it"".

  4. Some researchers, especially in larger teams, do not just work on a single paper at a time. They may have multiple papers they're working on simultaneously, and if they're closely related it can often be convenient to have them all in a single codebase. This is especially the case with longer review times; in the time between submission of a paper -- where it and anything related to it, such as source code, must remain private -- and an acceptance notification, there's plenty of time to start working on a next project. If the code for the previous project is mixed in with the code for the next project, and you can't / don't want to publish the code for the next project either yet... it may be easier to just not release anything.

  5. In some cases, authors may feel it is ""dangerous"" to release their source code (or trained models). This is probably relatively uncommon, but can happen. Consider the situation surrounding OpenAI's GPT-2 language model, for example.

Not directly a response to your question, but it may also be useful to keep in mind that sometimes not all authors of a paper may agree on whether or not to open-source it. Legally, I suppose that usually all the authors (or all contributors to the source code) would be copyright holders, and it can only be released if they all agree to release it. So if one of them feels (based on any of the reasons listed above, or maybe other reasons) that it shouldn't be released, it won't. In practice, I suppose that it would often primarily be the call of the more senior authors on a paper / principal investigators / supervisors.


Plus, if the implementation is not added and there's no reproducibility, what prevents authors from forging results?

Personally I wouldn't be concerned as much about forging results as just... ""accidental"" false positives. Yes, it's possible and it will happen. But the pay-off of successfully forging false results and getting a paper published seem REALLY low compared to the risk of your academic career ending if it gets out. If you really have to forge your results just to get your paper accepted, and it has zero other meaningful contributions (no ""unforgeable"" contributions like theoretical results or really new and useful insights).. it's unlikely to become a really impactful paper, a widely-cited one. The really highly impactful empirical papers only become highly impactful because people will immediately try to re-implement and reproduce it anyway, and if that turns out to be impossible, it will turn into a dead end.

That said, I'm not saying it can't be important to share source code. Especially in deep learning, and especially in deep reinforcement learning, it has indeed been shown that tiny implementation details can be massively important to empirical performance, and these tiny implementation details are rarely all available in papers. There has certainly been a push towards encouraging the publication of source code, and it is important -- but unfortunately it's not always a black-and-white story, and there can sometimes also be good reasons that make it difficult/impossible to do so. If it's good research, I'd personally still rather have it without source code, than not have it at all.

",1641,,,,,5/14/2020 18:35,,,,1,,,,CC BY-SA 4.0 21168,2,,21155,5/14/2020 19:42,,5,,"

Nbro's answer already addresses the basic definitions, so I won't repeat that. Instead I'll try to elaborate a bit on the other parts of the question.

Are there scenarios in RL where the problem cannot be distinctly categorised into the aforementioned problems and is a mixture of the problems?

I'm not sure about cases where the ""problem"" can't be distinctly categorised... but often, when we're actually interested in control as a problem, we still also actually deal with the prediction problem as a part of our training algorithm. Think of $Q$-learning, Sarsa, and all kinds of other algorithms related to the idea of ""Generalized Policy Iteration"". Many of them work (roughly) like this (a small code sketch follows the list):

  1. Initialise (somehow, possibly randomly) a value function
  2. Express a policy in terms of that value function (greedy, $\epsilon$-greedy, etc.)
  3. Generate experience using that policy
  4. Train the value function to be more accurate for that policy (prediction problem here)
  5. Go back to step 2 (control problem here)
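
A rough tabular sketch of that loop in Python (epsilon-greedy Q-learning style; env is a hypothetical gym-like environment with reset(), step() and an actions list, and the hyperparameters are arbitrary):

import random
from collections import defaultdict

Q = defaultdict(float)                      # 1. initialise the value function
alpha, gamma, epsilon = 0.1, 0.99, 0.1

def policy(state):                          # 2. policy expressed in terms of the value function
    if random.random() < epsilon:
        return random.choice(env.actions)
    return max(env.actions, key=lambda a: Q[(state, a)])

for episode in range(1000):
    state, done = env.reset(), False
    while not done:
        action = policy(state)                               # 3. generate experience
        next_state, reward, done, _ = env.step(action)
        target = reward + (0.0 if done else gamma * max(Q[(next_state, a)] for a in env.actions))
        Q[(state, action)] += alpha * (target - Q[(state, action)])   # 4. prediction step
        state = next_state                  # 5. the updated Q immediately changes the policy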

You could view these techniques in this way, as handling both of the problems at the same time, but there's also something to be said for the argument that they're really mostly just tackling the prediction problem. That's where all the ""interesting"" learning happens. The solution to the control problem is directly derived from the solution to the prediction problem in a single, small step. There are different algorithms, such as Policy Gradient methods, that directly aim to address the control problem instead.


An interesting (in my opinion :)) tangent is that in some problems, one of these problems may be significantly easier than the other, and this can be important to inform your selection of algorithm. For example, suppose you have a very long ""road"" where you can only move to the left or the right, you start on the left, and the goal is all the way to the right. In this problem, a solution to the control problem is trivial to express; just always go right. For the prediction problem, you need something much more powerful to be able to express all the predictions of values in all possible states.

In other problems, it may be much easier to quickly get an estimate of the value, but much more complicated to actually express how to obtain that value. For example, in StarCraft, if you have a much larger army, it is easy to predict that you will win. But you will still need to execute some very specific, long sequences of actions to achieve that goal.

",1641,,,,,5/14/2020 19:42,,,,0,,,,CC BY-SA 4.0 21169,1,,,5/14/2020 19:54,,2,561,"

Q-learning is a temporal-difference method and Monte Carlo tree search is a Monte Carlo method. In what category is MiniMax?

",27629,,2444,,5/14/2020 23:00,5/14/2020 23:00,In what RL algorithm category is MiniMax?,,1,0,,,,CC BY-SA 4.0 21170,1,21376,,5/14/2020 20:56,,4,131,"

I have a dataset with missing values, I would like to use machine learning methods to fill. In more detail, there are $n$ individuals, for which up to 10 properties are provided, all numerical. The fact is, there are no individuals for which all properties are given. The first rows (each row contains data for a given individual) do look as the following

\begin{bmatrix} 1 & NA & 3.6 & 12.1 & NA \\ 1.2 & NA & NA & 4 & NA \\ NA & 4 & 5 & NA & 7 \end{bmatrix}

What methods could be applicable in general?

I have some basic experience in classifiers and Random Forests. Modulo the obvious difference that this is not a classifying problem, what I struggle most with is that the same variable (described in the e.g $n$-th column) is both an input and an output. Say I want to predict the value $A_{2,3}$ in the dataset above. In this case, all the values in the third column could be used as input, excluded of course $A_{2,3}$ itself, which would be an output.

This seems to be different than the more conventional set-up of predicting a property, given a set of other properties (e.g, predict income given education, work sector, seniority, etc.). In this case, sometimes the income is to be predicted, sometimes used for predicting another variable. I am aware of methods which, given a vector $X_i$, could approximate a function $F$ and predict responses $Y_i$ with

$$ Y_i = F(X_i)$$

In the scenario I described though, it looks like some implicit function $\Phi$ is to be found, a function of all the variables $Z_i$ (columns in the dataset above)

$$ \Phi (Z_i) = 0$$

What methods could handle this aspect? I understand the question is probably too general, but I could not find much and could do with a starting point. I would be already content with some hints for my further reading, but anything more would be gratefully welcomed, thanks.

",37059,,2444,,5/15/2020 10:23,5/21/2020 11:44,How to fill missing values in a dataset where some properties can be inputs and outputs?,,1,4,,,,CC BY-SA 4.0 21171,2,,21169,5/14/2020 21:19,,3,,"

I think you are looking at it from the wrong direction: minimax is just a planning algorithm (a decision strategy); in the sense in which you are describing the other algorithms/methods, it does not have a category of its own. For example, the negamax algorithm is, in a sense, to minimax what Monte Carlo Tree Search is to Monte Carlo methods. Minimax's category is really game theory.
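
To make the planning-algorithm point concrete, here is a bare-bones minimax sketch (the game object with is_terminal, utility, moves and result methods is hypothetical, standing in for whatever world model you have access to):

def minimax(game, state, maximizing):
    # `game` is a hypothetical interface exposing the world model the search needs
    if game.is_terminal(state):
        return game.utility(state)
    values = [minimax(game, game.result(state, m), not maximizing) for m in game.moves(state)]
    return max(values) if maximizing else min(values)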

Now, you should be thinking about RL algorithms in another way, and this is the taxonomy:

So if you think about methods, you mentioned, let's put them in the right place:

  • TD methods in general - model free
  • Monte Carlo methods - model free
  • MinMax - model-based (that could be discussed but it definitely needs access to a world model)
",18676,,,,,5/14/2020 21:19,,,,0,,,,CC BY-SA 4.0 21172,1,,,5/14/2020 21:47,,2,210,"

Here is the code written by Maxim Lapan. I am reading his book (Deep Reinforcement Learning Hands-on). I have seen a line in his code that seems really weird. In the accumulation of the policy gradient $$\partial \theta_{\pi} \gets \partial \theta_{\pi} + \nabla_{\theta}\log\pi_{\theta} (a_i | s_i) (R - V_{\theta}(s_i))$$ we have to compute the advantage $R - V_{\theta}(s_i)$. In line 138, Maxim uses adv_v = vals_ref_v - value_v.detach(). Visually, it looks fine, but look at the shape of each term.

ipdb> adv_v.shape                                                                                                                            
torch.Size([128, 128])

ipdb> vals_ref_v.shape                                                                                                                       
torch.Size([128])

ipdb> values_v.detach().shape                                                                                                                
torch.Size([128, 1]) 

In a much simpler code, it is equivalent to

In [1]: import torch                                                            

In [2]: t1 = torch.tensor([1, 2, 3])                                            

In [3]: t2 = torch.tensor([[4], [5], [6]])                                      

In [4]: t1 - t2                                                                 
Out[4]: 
tensor([[-3, -2, -1],
        [-4, -3, -2],
        [-5, -4, -3]])

In [5]: t1 - t2.detach()                                                        
Out[5]: 
tensor([[-3, -2, -1],
        [-4, -3, -2],
        [-5, -4, -3]])

I have trained the agent with his code and it works perfectly fine. I am very confused about why this is good practice and what it is actually doing. Could someone enlighten me on the line adv_v = vals_ref_v - value_v.detach()? To me, the right thing to do would be adv_v = vals_ref_v - value_v.squeeze(-1).
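For reference, here is a minimal check of the shapes I have in mind (same toy tensors as above; this is just my own sanity check, not code from the book):

import torch

t1 = torch.tensor([1., 2., 3.])        # shape [3], like vals_ref_v with shape [128]
t2 = torch.tensor([[4.], [5.], [6.]])  # shape [3, 1], like value_v with shape [128, 1]

print((t1 - t2).shape)                 # torch.Size([3, 3]) -> broadcasting, like [128, 128]
print((t1 - t2.squeeze(-1)).shape)     # torch.Size([3])    -> element-wise, like [128]
print(t1 - t2.squeeze(-1))             # tensor([-3., -3., -3.])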

Here is the full algorithm used in his book :

UPDATE

As you can see from the image, it is converging even though adv_v = vals_ref_v - value_v.detach() looks wrongly implemented. Training is not done yet, but I will update the question later.

",35626,,35626,,5/15/2020 18:47,5/16/2020 0:50,Advantage computed the wrong way?,,2,4,,,,CC BY-SA 4.0 21173,1,,,5/14/2020 22:39,,1,43,"

I've been reading on Tacotron-2, a text-to-speech system, that generates speech just-like humans (indistinguishable from humans) using the GitHub https://github.com/Rayhane-mamah/Tacotron-2.

I'm very confused about a simple aspect of text-to-speech even after reading the paper several times. Tacotron-2 generates spectrogram frames for a given input text. During training, the dataset is a text sentence and its generated spectrogram (it seems at a rate of 12.5 ms per spectrogram frame).

  • If the input is provided as a character string, then how many spectrogram frames does it predict for each character?

  • During training, how does the model know which frames form the expected output for a given part of the input? Since the training dataset is simply thousands of frames for a sentence, how does it know which frames are the ideal output for a given character?

This basic aspect just doesn't seem to be mentioned clearly anywhere, and I'm having a hard time figuring it out.

",33580,,2444,,5/15/2020 15:04,5/15/2020 15:04,How many spectrogram frames per input character does text-to-speech (TTS) system Tacotron-2 generate?,,0,0,,,,CC BY-SA 4.0 21174,1,,,5/14/2020 22:49,,0,95,"

In the discussion of my question on Math SE, I explained to a user how I think AI works: I wrote that, with the sigmoid (logistic) function, features of a data set are identified, and many such iterations provide learning.

Is my understanding of how this works correct?

",31307,,2444,,5/14/2020 22:59,5/14/2020 23:05,Is my understanding of how AI works correct?,,1,3,,,,CC BY-SA 4.0 21175,2,,21174,5/14/2020 23:05,,3,,"

There's some useful information in your description, but that's just a very vague description of how neural networks with sigmoid activation functions are trained.

Moreover, there are many other AI systems apart from neural networks (such as support vector machines, expert systems, etc.), which, of course, I cannot exhaustively list here.

Is my understanding of how AI works correct?

I would say it's not completely incorrect, but, as I said, it's a very vague description and it only refers to a subset of techniques in the AI field. With that description, no newbie would probably understand how neural networks are really trained, apart from knowing that you will train them iteratively and there are sigmoids involved.

",2444,,,,,5/14/2020 23:05,,,,5,,,,CC BY-SA 4.0 21176,1,,,5/15/2020 2:45,,3,169,"

I’ve coded a simple ELIZA chatbot for a high school coding competition. The chatbot is part of an app that’s designed to help its user cope with depression, anxiety, and similar mental health disorders. It uses sentiment analysis to identify signs of mental illness, and to track its user's progress toward ""happiness"" over time.

My question is, what steps can I take to make it more realistic (without using some pre-existing software, library, etc, which isn't allowed)? Also, are there any existing tables of questions/responses I can add to my ELIZA bot's repertoire so that it can handle more conversations?

",37062,,2444,,5/15/2020 10:11,5/15/2020 10:19,How can I make ELIZA more realistic?,,1,0,,,,CC BY-SA 4.0 21178,1,,,5/15/2020 5:46,,4,71,"

I'm implementing strided 2D convolution. My formula looks like this: $$y_{i, j} = \sum_{m=0}^{F_h - 1}\sum_{n=0}^{F_w - 1} x_{s\cdot i + m, s\cdot j + n}\,f_{m, n}, \tag{1}$$ where $s$ is the stride (some sources might refer to this as 'cross-correlation' but 'convolution' is consistent with PyTorch's definition)

I have calculated the gradient with respect to the filter as:

$$\frac{\partial E}{\partial f_{m', n'}} = \sum_{i=0}^{(x_h - F_h) / s}\sum_{j=0}^{(x_w - F_w) / s} x_{s\cdot i + m', s\cdot j + n'} \frac{\partial E}{\partial y_{i, j}} \tag{2}$$

and some simple dummy index relabeling leads to: $$\frac{\partial E}{\partial f_{i, j}} = \sum_{m=0}^{(x_h - F_h) / s}\sum_{n=0}^{(x_w - F_w) / s} x_{s\cdot m + i, s\cdot n + j} \frac{\partial E}{\partial y_{m, n}} \tag{3}$$

Equation $(3)$ looks similar to the first, but not exactly (the $s$ is on the wrong term!). My objective is to convert the second equation into 'convolutional form' so that I can calculate it using my existing, efficient convolution algorithm.

Could someone please help me work this out, or point out any errors that I have made?
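In case it helps anyone point at the issue, here is a small NumPy check I wrote (just a numerical sanity test of equation $(2)$ against finite differences; all variable names and sizes are my own):

import numpy as np

rng = np.random.default_rng(0)
s = 2                        # stride
x = rng.normal(size=(7, 7))
f = rng.normal(size=(3, 3))
out_h = (x.shape[0] - f.shape[0]) // s + 1
out_w = (x.shape[1] - f.shape[1]) // s + 1

def forward(x, f):
    y = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            y[i, j] = np.sum(x[s*i:s*i+f.shape[0], s*j:s*j+f.shape[1]] * f)
    return y

g = rng.normal(size=(out_h, out_w))         # stand-in for dE/dy
E = lambda f_: np.sum(forward(x, f_) * g)   # scalar loss chosen so that dE/dy = g

# analytic gradient from equation (2): dE/df[m, n] = sum_{i,j} x[s*i+m, s*j+n] * g[i, j]
dF = np.zeros_like(f)
for m in range(f.shape[0]):
    for n in range(f.shape[1]):
        dF[m, n] = sum(x[s*i+m, s*j+n] * g[i, j] for i in range(out_h) for j in range(out_w))

# finite-difference check
eps = 1e-6
dF_num = np.zeros_like(f)
for m in range(f.shape[0]):
    for n in range(f.shape[1]):
        fp = f.copy(); fp[m, n] += eps
        fm = f.copy(); fm[m, n] -= eps
        dF_num[m, n] = (E(fp) - E(fm)) / (2 * eps)

print(np.max(np.abs(dF - dF_num)))   # ~1e-9, so equation (2) checks out numerically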

",36938,,,,,5/15/2020 5:46,Conversion of strided filter gradient to convolutional form,,0,1,,,,CC BY-SA 4.0 21181,2,,21176,5/15/2020 10:19,,3,,"

One 'easy' way would be to have some sort of conversational memory, where you track what the user has said already. I don't know how complex your patterns are, but if you could recognise names and track references, you could try and build up a mental model of the user's relationships with other people, and perhaps refer to that in your bot's responses.

The latter will be quite advanced, but keeping track of things said earlier and referring back to them on occasion might make it appear a lot more capable.

As an added bonus, track changes in the user's sentiment scores, and see if you spot a pattern in the conversation (maybe over the course of multiple conversations) to see which bot utterances have the biggest (positive or negative) effect on the user's mood.
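A minimal sketch of what that could look like (plain Python; the pattern table and the sentiment_of function are placeholders you would replace with your own code):

import re

# hypothetical pattern -> response table; extend with your own pairs
PATTERNS = [
    (r'\bmy (\w+) (hates|loves) me\b', 'Tell me more about your {0}.'),
    (r'\bi feel (\w+)\b', 'Why do you feel {0}?'),
]

def sentiment_of(text):
    # placeholder: plug in your own sentiment analysis here
    return 0.0

memory = {'topics': [], 'sentiment_history': []}

def respond(user_text):
    memory['sentiment_history'].append(sentiment_of(user_text))
    for pattern, template in PATTERNS:
        match = re.search(pattern, user_text.lower())
        if match:
            memory['topics'].append(match.group(1))   # remember what was talked about
            return template.format(*match.groups())
    if memory['topics']:
        # refer back to something said earlier to appear more capable
        return 'Earlier you mentioned {0}. How do you feel about that now?'.format(memory['topics'][0])
    return 'Please, go on.'

print(respond('my mother hates me'))   # Tell me more about your mother.
print(respond('i feel sad'))           # Why do you feel sad?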

",2193,,,,,5/15/2020 10:19,,,,1,,,,CC BY-SA 4.0 21182,1,21185,,5/15/2020 11:48,,2,702,"

Good day, it's a pleasure having joined this Stack.

In my master's thesis, I have to expand a Deep Reinforcement Learning network, to be precise a Deep Q-Network, which is used to control machines in an electrical grid for power quality management.

What would be the best way to evaluate if a network is doing a good job during training or not? Right now I have access to the reward function as well as the q_value function.

The rewards consist of 4 arrays, one for each learning criterion of the network. The first array is a hard criterion (adherence is mandatory) while the latter 3 are soft criteria:

Episode: 1/3000 Step: 1/11 Reward: [[1.0, 1.0, -1.0], [0.0, 0.68, 1.0], [0.55, 0.55, 0.55], [1.0, 0.62, 0.79]]
Episode: 1/3000 Step: 2/11 Reward: [[-1.0, 1.0, 1.0], [0.49, 0.46, 0.67], [0.58, 0.58, 0.58], [0.77, 0.84, 0.77]]
Episode: 1/3000 Step: 3/11 Reward: [[-1.0, 1.0, 1.0], [0.76, 0.46, 0.0], [0.67, 0.67, 0.67], [0.77, 0.84, 1.0]]

The q_values are arrays which I do not fully understand yet. Could one of you explain them to me? I read the official definition of q-values (positive False Discovery Rate). Can these values be used to evaluate neural network training? These are the Q-values for step 1:

Q-Values: [[ 0.6934726  -0.24258053 -0.10599071 -0.44178435  0.5393113  -0.60132784
  -0.07680141  0.97968364  0.7707691   0.57855517  0.16273917  0.44632837
   0.00799532 -0.53355324 -0.45182624  0.9229134  -1.0455914  -0.0765233
   0.37784138  0.14711905  0.10986999  0.08918551 -0.8189287   0.14438646
   0.8869624  -0.43251887  0.7742889  -0.7671829   0.07737591  0.2569678
   0.5102049   0.5132051  -0.31643414 -0.0042788  -0.66071266 -0.18251896
   0.7762838   0.15322062 -0.06284399  0.18447408 -0.9609979  -0.4508798
  -0.07925312  0.7503184   0.6858963  -1.0436649  -0.03167241  0.87660617
  -0.43605536 -0.28459656 -0.5564517   1.2478396  -1.1418368  -0.9335588
  -0.72871417  0.04163677  0.30343965 -0.30024529  0.08418611  0.19429305
   0.44063848 -0.5541725   0.5740701   0.76789933 -0.9621064   0.0272104
  -0.44953588  0.13415053 -0.07738207 -0.16188647  0.6667519   0.31965214
   0.3241703  -0.27273563 -0.07130697  0.49683014  0.32996863  0.485767
   0.39242893  0.40508035  0.3413986  -0.5895434  -0.05772913 -0.6172271
  -0.12423459  0.2693861   0.32966745 -0.16036317 -0.36371914 -0.04342368
   0.22878243 -0.09400887 -0.1134861   0.07647536  0.04724833  0.2907955
  -0.70616114  0.71054566  0.35959414 -1.0539075   0.19137645  1.1948669
  -0.21796732 -0.583844   -0.37989947  0.09840107  0.31991178  0.56294084]]

Are there other ways of evaluating deep Q-networks? I would also appreciate literature on this subject. Thank you very much for your time.

",37079,,,,,5/15/2020 17:21,How to evaluate a Deep Q-Network,,1,1,,,,CC BY-SA 4.0 21185,2,,21182,5/15/2020 17:21,,3,,"

Q-values represent the expected return after taking action $a$ in state $s$, so they do tell you how good it is to take an action in a specific state. Better actions will have larger Q-values. Q-values can be used to compare actions, but they are not very meaningful in representing the performance of the agent, since you have nothing to compare them with. You don't know the actual Q-values, so you can't conclude whether your agent is approximating those Q-values well or not.

A better performance metric would be the average reward per episode/epoch, or the average reward over the last $N$ timesteps for continuing tasks. If your agent is improving its performance, then its average reward should be increasing. You said that you have rewards per state and that some of them represent more important criteria than others. You could plot the average reward per episode by doing some kind of weighted linear combination of the criteria rewards \begin{equation} \bar R = \bar R_1 w_1 + \bar R_2 w_2 + \bar R_3 w_3 + \bar R_4 w_4 \end{equation} where $\bar R_i$ is the average episode reward for criterion $i$. That way you can provide more importance to some specific criteria in your evaluation.
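For example, a minimal sketch of what that could look like for the reward arrays in your question (the weights are placeholders you would choose yourself):

import numpy as np

# one entry per criterion, each a list of per-phase rewards (as in your logs)
step_reward = [[1.0, 1.0, -1.0], [0.0, 0.68, 1.0], [0.55, 0.55, 0.55], [1.0, 0.62, 0.79]]

weights = [0.5, 0.2, 0.2, 0.1]   # hypothetical: the hard criterion weighted most

def weighted_step_reward(reward, weights):
    # average each criterion over its entries, then combine with the weights
    return sum(w * np.mean(r) for w, r in zip(weights, reward))

episode_rewards = []             # collect one value per step, then average per episode
episode_rewards.append(weighted_step_reward(step_reward, weights))
print(np.mean(episode_rewards))  # plot this per episode to track learning progress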

",20339,,,,,5/15/2020 17:21,,,,1,,,,CC BY-SA 4.0 21186,2,,21109,5/15/2020 19:32,,5,,"

Let's say your old policy is $\pi_b$ and your current one is $\pi_a$. If you collected trajectory by using policy $\pi_b$ you would get return $G$ whose expected value is \begin{align} E_{\pi_b}[G_t|S_t = s] &= E_{\pi_b}[R_{t+1} + G_{t+1}]\\ &= \sum_a \pi_b(a|s) \sum_{s', r} p(s', r|s, a) [r + E_{\pi_b}[G_{t+1}|S_{t+1} = s']]\\ \end{align} You can see if you write out this recursively that this expectation depends on $\pi_b(a|s), \pi_b(a'|s'), \ldots$

If you collected a trajectory with policy $\pi_a$, you would get an expected return that depends on $\pi_a(a|s), \pi_a(a'|s'), \ldots$ Since these are two different policies, $\pi_b(a|s) \neq \pi_a(a|s)$ for some $(s, a)$. That means the returns have different expected values and are sampled from different distributions. You cannot then use some return $G$ sampled by following policy $\pi_b$ to update policy $\pi_a$, because it's not sampled according to the proper distribution; if we did, we would be updating policy $\pi_a$ with a biased gradient update that does not reflect how policy $\pi_a$ performed.

",20339,,,,,5/15/2020 19:32,,,,6,,,,CC BY-SA 4.0 21187,1,21308,,5/15/2020 19:33,,2,987,"

I have two different PyTorch implementations of the A2C algorithm for the Atari Pong game. Both implementations are similar, but some portions are different.

  1. https://colab.research.google.com/drive/12YQO4r9v7aFSMqE47Vxl_4ku-c4We3B2?usp=sharing

The above code is from the following Github repository: https://github.com/PacktPublishing/Deep-Reinforcement-Learning-Hands-On/blob/master/Chapter10/02_pong_a2c.py It converged perfectly well!

You can find an explanation in Maxim Lapan's book Deep Reinforcement Learning Hands-on page 269

Here is the mean reward curve:

  1. https://colab.research.google.com/drive/1jkZtk_-kR1Mls9WMbX6l_p1bckph8x1c?usp=sharing

The above implementation was created by me, based on Maxim Lapan's book. However, the code is not converging. There's a small portion of my code that is wrong, but I can't figure out what it is. I've been working on it for nearly a week now.

Here is the mean reward curve:

Can someone tell me which portion of the code is the problem and how I can fix it?

UPDATE 1

I have decided to test my code with a simpler environment, i.e. CartPole-v0.

Here is the code: https://colab.research.google.com/drive/1zL2sy628-J4V1a_NSW2W6MpYinYJSyyZ?usp=sharing

Even that code doesn't seem to converge. I still can't see where my problem is.

UPDATE 2

I think the bug might be in the ExperienceSource class or in the Agent class.

UPDATE 3

The following question will help you understand the classes ExperienceSource and ExperienceSourceFirstLast.

",35626,,35626,,5/18/2020 13:02,5/20/2020 0:43,Why isn't my implementation of A2C for the the atari pong game converging?,,1,2,,,,CC BY-SA 4.0 21188,2,,21172,5/15/2020 20:27,,1,,"

I changed the line adv_v = vals_ref_v - value_v.detach() to adv_v = vals_ref_v - value_v.squeeze(-1).detach(). It seems the convergence is much faster. According to the A2C algorithm, it only makes sense to compute $Q(a, s) - V(s)$ with $Q(a, s)$ and $V(s)$ having the same shape.

The call to detach() is important here as we don't want to propagate the PG into our value approximation head.

",35626,,,,,5/15/2020 20:27,,,,0,,,,CC BY-SA 4.0 21189,1,,,5/15/2020 20:55,,2,79,"

I've been trying to train a snake to play the snake game with DQN, in which the snake can essentially just move up, down, left and right. I'm having a hard time getting the snake to stay alive longer. So my question is: what are some techniques that I can implement to get the snake to stay alive for longer?

Some of the things that I've attempted, but that don't seem to have done much after about 1000 episodes, are:

  1. Implementing the L2 regularization
  2. Reduce the exploration decay rate so it gives the snake more chances to explore
  3. Randomize the starting point for the snake for each episode to try to reduce ""local exploration""
  4. I've tweaked some hyperparameters, such as the learning rate and the policy/target network update rate

The input neurons are fed with the state of the board. For example, if my board size is 12*12, then there are 144 input neurons, each representing one cell of the environment. I've checked that the loss decreases fairly quickly, but there is no improvement in the snake lasting longer in the game.

As a side note, my reward function is simply +1 for every time step that the snake survives.
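For reference, this is roughly how I build the input vector and the reward at each step (the cell codes 0/1/2/3 are my own convention, not anything standard):

import numpy as np

BOARD_SIZE = 12  # 12 * 12 board -> 144 input neurons

def encode_state(snake_cells, head, food):
    # 0 = empty, 1 = snake body, 2 = snake head, 3 = food (my own encoding)
    board = np.zeros((BOARD_SIZE, BOARD_SIZE), dtype=np.float32)
    for (r, c) in snake_cells:
        board[r, c] = 1.0
    board[head] = 2.0
    board[food] = 3.0
    return board.flatten()   # shape (144,), fed to the 144 input neurons

def step_reward(alive):
    # +1 for every time step the snake survives
    return 1.0 if alive else 0.0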

I'm out of ideas about what I can do to get the snake to learn. Maybe 1000 episodes is simply not enough? Or maybe my input is not providing good enough information to train the snake?

",33579,,,,,5/15/2020 20:55,DQN not showing the agent is learning in a snake grid environment game,,0,0,,,,CC BY-SA 4.0 21191,1,,,5/16/2020 0:33,,2,112,"

I’ve created an agent using MCTS to play Connect Four. It wins against humans pretty well, but I’d like to improve upon it. I decided to add domain knowledge to the MCTS rollout stage. My evaluation function checks how “good” an action is and returns the best/highest value action to the rollout policy as the action to use. I created a “gym” application for one agent, who’s not using the evaluation function, to play against an agent who is using the evaluation function. I would have expected the agent using the heuristics to perform better than the agent who isn’t, but the inclusion of the heuristics doesn’t seem to make any difference! Any ideas why this might be the case?

",27629,,,,,2/12/2021 0:01,Why aren’t heuristics for Connect Four Monte Carlo tree search improving the agent?,,1,0,,,,CC BY-SA 4.0 21193,2,,21172,5/16/2020 0:50,,2,,"

Yeah, it seems like it's a wrong implementation. vals_ref_v is a matrix of 1 row, and 128 columns. value_v.detach() is a matrix of 128 row

",37099,,,,,5/16/2020 0:50,,,,1,,,,CC BY-SA 4.0 21194,1,,,5/16/2020 2:55,,1,108,"

When does it happen that a layer (either the first or a hidden one) outputs negative values, which would justify the use of ReLU?

As far as I know, features are never negative or converted to negative in any other type of layer.

Is it that we can use the ReLU with a different ""inflection"" point than zero, so we can make the neuron start describing a linear response just after this ""new zero""?

",36440,,2444,,5/16/2020 11:16,5/16/2020 11:16,"If features are always positives, why do we use RELU activation functions?",,1,0,,,,CC BY-SA 4.0 21196,2,,21194,5/16/2020 5:28,,3,,"

The fact that the features are always positive values doesn't guarantee that the outputs of the hidden layers are positive too.

Due to the multiplication by the weights, the output of a hidden layer could contain negative values, i.e., a hidden layer can contain weights that have opposite signs to its inputs. Remember that only layer outputs, not their weights, are passed through ReLU, so the weights of a model can contain negative values.
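A minimal NumPy example of what is meant here (my own toy numbers): positive inputs combined with a negative weight give a negative pre-activation, which ReLU then clips to zero.

import numpy as np

x = np.array([1.0, 2.0, 3.0])    # strictly positive features
w = np.array([0.5, -1.0, 0.2])   # learned weights can be negative
b = 0.0

pre_activation = x @ w + b                 # 1*0.5 + 2*(-1.0) + 3*0.2 = -0.9 (negative!)
output = np.maximum(0.0, pre_activation)   # ReLU -> 0.0

print(pre_activation, output)              # -0.9 0.0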

",32621,,32621,,5/16/2020 7:24,5/16/2020 7:24,,,,0,,,,CC BY-SA 4.0 21197,1,,,5/16/2020 7:02,,1,60,"

I just made an interesting observation playing around with the Stable Baselines implementation of PPO and the BipedalWalker environment from OpenAI's Gym. But I believe this should be a general property of deep learning.

Using a small batch size of 512 samples, the walker achieves near-optimal behavior after just 0.5 million steps. The optimized hyperparameters in the RL Zoo suggest using a batch size of 32k steps. This definitely leads to better performance after 5 million steps, but it takes 2 million steps until it reaches near-optimal behavior.

Therefore the question: Shouldn't we schedule the batch-size to improve sample efficiency?

I believe it makes sense because, after initialization, the policy is far away from the optimal one and therefore should update quickly to get better. Even though the gradient estimates using small batches are very noisy, they still seem to bring the policy quickly into a quite good state. Thereafter, we can increase the batch size and make fewer but more precise gradient steps. Or am I missing an important point here?

",35821,,,,,5/16/2020 7:02,Should we start with a small batch-size and increase during training to improve sample efficiency?,,0,0,,,,CC BY-SA 4.0 21198,1,21355,,5/16/2020 9:03,,2,58,"

The paper Hierarchical Graph Pooling with Structure Learning (2019) introduces a distance measure between:

  1. a graph's node-representation matrix $\text{H}$, and
  2. an approximation of this constructed from each node's neighbours' information $\text{D}^{-1}\text{A}\text{H}$:

Here, we formally define the node information score as the Manhattan distance between the node representation itself and the one constructed from its neighbors:

$$\mathbb{p} = \gamma(\mathcal{G}_i) = ||(\text{I}^{k}_{i} - (\text{D}^{k}_{i})^{-1}\text{A}^{k}_{i})\text{H}^{k}_{i}|| $$

(where $\text{A}$ and $\text{D}$ are the adjacency and diagonal degree matrices of the graph, respectively)

Expanding the product on the RHS we get (ignoring index notation for simplicity):

$$||\text{H} - (\text{D}^{-1}\text{A}\text{H})||$$

Problem: I don't see how $\text{D}^{-1}\text{A}\text{H}$ is a "node representation... constructed from its neighbors".

$\text{I} - \text{D}^{-1}\text{A}$ is clearly equivalent to the Random Walk Laplacian, but it's not immediately obvious to me how multiplying this by $\text{H}$ provides per-node information on how well one can reconstruct a node from its neighbours.
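To make the quantities concrete, here is a small NumPy example I put together (my own toy graph, a triangle with 2-dimensional node features); it just computes $\text{D}^{-1}\text{A}\text{H}$ and the per-node Manhattan distances, without yet explaining the reconstruction interpretation:

import numpy as np

# triangle graph: every node is connected to the other two
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))       # diagonal degree matrix
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0, 2.0]])       # node representation matrix (3 nodes, 2 features)

neighbour_avg = np.linalg.inv(D) @ A @ H    # row i = average of node i's neighbours' rows
p = np.abs(H - neighbour_avg).sum(axis=1)   # per-node Manhattan distance

print(neighbour_avg)
print(p)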

",23503,,-1,,6/17/2020 9:57,5/21/2020 7:52,"Understanding the node information score in the paper ""Hierarchical Graph Pooling with Structure Learning""",,1,4,,,,CC BY-SA 4.0 21200,1,,,5/16/2020 12:48,,2,501,"

I am trying to implement a CNN using Python and NumPy. I searched a lot, but all I found was the convolution for one filter with one channel.

Suppose $x$ is an image with the shape: (N_Height, N_Width, N_Channel) = (5,5,3).

Let's say I have 16 filters with this shape: (F_Height, F_Width, N_Channel) = (3,3,3) , stride=1 and padding=0

Forward:

The output shape after convolution 2d will be

import math  # N_Height, N_Width, F_Height, F_Width, padding, stride, filter_count as above

output_shape = (
    math.floor((N_Height - F_Height + 2 * padding) / stride + 1),
    math.floor((N_Width - F_Width + 2 * padding) / stride + 1),
    filter_count,
)  # with the values above: (3, 3, 16)

So, the output of this layer will be an array with this shape: (Height, Width, Channel) = (3, 3, 16)

BackPropagation:

Suppose $dL/dh$ is the input for my layer in back-propagation with this shape: (3, 3, 16)

Now, I must find $dL/dw$ and $dL/dx$: $dL/dw$ to update my filters' parameters, and $dL/dx$ to pass to the previous layer as the gradient of the loss with respect to the input $x$.

From this answer Error respect to filters weights I found how to calculate $dL/dw$.

The problem I have in the back-propagation is that I don't know how to calculate $dL/dx$, which has the shape (5, 5, 3), and pass it to the previous layer.

I read lots of articles on Medium and other sites, but I still don't get how to calculate it.

",36905,,2444,,5/13/2022 16:05,5/13/2022 16:05,How do I calculate the partial derivative with respect to $x$?,,1,0,,,,CC BY-SA 4.0 21203,1,,,5/16/2020 16:04,,4,174,"

I am pretty new to the machine learning field. I want to use an $n \times m$ matrix as the input of a model, in order to predict a $1 \times m$ vector, both of real numbers. The input data are quite clean, with about 10000 items available.

Do you know a method that can handle that?

",37117,,2444,,5/16/2020 16:57,4/10/2022 12:04,Which machine learning method can take a matrix as input?,,1,1,,,,CC BY-SA 4.0 21205,2,,12133,5/16/2020 17:32,,1,,"

The usage of the word ""kernel"" in the context of support vector machines probably comes from its usage in the context of integral transforms.

See the article Kernel of an integral operator, and the questions What is the difference between a kernel and a function? and Why is the kernel of an integral transform called kernel?.

The word ""kernel"" has been used in many other contexts, such as in computer vision, to refer to a certain function with a special purpose. See e.g. the paper that introduced SIFT.

",2444,,2444,,5/16/2020 17:39,5/16/2020 17:39,,,,0,,,,CC BY-SA 4.0 21206,2,,17577,5/16/2020 18:22,,3,,"

In a classification problem it's better to get higher error and higher error slope when we predict the label wrong.

As you can see in the graph, by using cross-entropy you get a high error when the algorithm predicts the wrong label and a small error when the predicted label is close enough, so it helps us to separate the predicted classes better.
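A quick numerical illustration of that point (my own toy numbers), comparing the cross-entropy loss $-\log p$ for a confident wrong prediction versus an almost-correct one:

import numpy as np

# predicted probability assigned to the *true* class
p_confidently_wrong = 0.01   # model put almost no mass on the true class
p_almost_correct = 0.95

print(-np.log(p_confidently_wrong))   # ~4.61 -> large loss, steep gradient
print(-np.log(p_almost_correct))      # ~0.05 -> small loss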

",37123,,37123,,7/23/2020 12:56,7/23/2020 12:56,,,,1,,,,CC BY-SA 4.0 21207,2,,20980,5/16/2020 18:39,,1,,"

To put this excerpt into context, we should take at least this much text from the paper:

One line of research focuses on making recommendations using knowledge graph embedding models, such as TransE [2] and node2vec [5]. These approaches align the knowledge graph in a regularized vector space and uncover the similarity between entities by calculating their representation distance [30]. However,pure KG embedding methods lack the ability to discover multi-hop relational paths.

In my understanding, the pure KG embedding here refers to the TransE and node2vec solutions. To learn more about those, we should read links [1] and [2]. From [1]:

Usually, we use a triple (head, relation, tail) to represent a knowledge. Here, head and tail are entities. For example, (sky tree, location, Tokyo). We can use the one-hot vector to represent this knowledge.

Later in the same source, there is an explanation at the end of the definition section of the TransE solution:

But this model only can take care with one-to-one relation, not suitable for one-to-many/many-to-one relation, for example, there is two knowledge, (skytree, location, tokyo) and (gundam, location, tokyo). After training, the 'sky tree' entity vector will be very close with 'gundam' entity vector. But they do not have such similarity in real.

On the other hand, [3] says:

Knowledge graphs as described above represents a static snapshot of our knowledge. It does not reflect the process of it’s how the knowledge built up. In the real world, we learn by observing temporal patterns. While it’s possible to learn the similarity between nodes A and node B, it will be hard to see the similarity between node A and node C as it was 3 years ago.

The solution in the paper states ""--,each recommended item is associated with around 1.6 reasoning paths."", which is supposedly impossible for the TransE solution.

So, a pure knowledge graph embedding is a static snapshot that can identify one-to-one findings with great accuracy. Actually, according to what [2] says about how node2vec works, these methods can also describe and combine more information (node2vec combines different types of similarities at the same time), but anyway I think the main point is actually in one word in the citation: discover!

The model suggested in the paper adds Reinforcement Learning principles to the KG modelling. So to say, a pure KG embedding always gives a single distance-based statistical solution, but an RL-based solution may learn more aspects behind the scenes, as it learns by trial and error the more complex paths underlying the behaviour.

On the other hand, the paper says, when relating their solution to pure TransE:

It can be regarded as a single-hop latent matching method, but the post-hoc explanations do not necessarily reflect the true reason of generating a recommendation. In contrast, our methods generate recommendations through an explicit path reasoning process over knowledge graphs, so that the explanations directly reflect how the decisions are generated, which makes the system transparent.

So, even if TransE and similar methods could actually recommend things in the given environment, the reasoning paths behind the recommendations may stay obscure.

Sources:

[1] https://towardsdatascience.com/summary-of-translate-model-for-knowledge-graph-embedding-29042be64273

[2] https://towardsdatascience.com/node2vec-embeddings-for-graph-data-32a866340fef

[3] https://towardsdatascience.com/extracting-knowledge-from-knowledge-graphs-e5521e4861a0

",11810,,,,,5/16/2020 18:39,,,,0,,,,CC BY-SA 4.0 21209,1,21213,,5/16/2020 20:12,,3,307,"

Here's the approximated value using weighted importance sampling

$$ V_{n} \doteq \frac{\sum_{k=1}^{n-1} W_{k} G_{k}}{\sum_{k=1}^{n-1} W_{k}}, \quad n \geq 2 $$

Here's the incremental update rule for the approximated value

$$V_{n+1} \doteq V_{n}+\frac{W_{n}}{C_{n}}\left[G_{n}-V_{n}\right], \quad n \geq 1$$

How is the second equation derived from the first?

These are used for the weighted importance sampling method of off-policy Monte Carlo control.

",37128,,2444,,5/16/2020 20:36,5/16/2020 22:09,How is the incremental update rule derived from the weighted importance sampling in off-policy Monte Carlo control?,,1,0,,,,CC BY-SA 4.0 21211,1,,,5/16/2020 20:34,,2,34,"

I am doing some research on the visual attention mechanism in the remote sensing domain (where the features learnt from one layer are highlighted using the attention mask derived from another layer). From what I have observed, the attention mask is learnt in a similar fashion to any other branch of a CNN. So, what is so special about the visual attention mask that makes it different from a regular two-branch CNN? The reference papers are provided below:

Visual Attention-Driven Hyperspectral Image Classification (IEEE, 2019)

A Two-Branch CNN Architecture for Land Cover Classification of PAN and MS Imagery (MDPI, 2019)

",15238,,1671,,5/18/2020 22:00,5/18/2020 22:00,How is visual attention mechanism different from a two branch convolutional neural network?,,0,1,,,,CC BY-SA 4.0 21212,2,,19889,5/16/2020 21:04,,1,,"

Assumption

In this answer it is assumed that with ""neurochips"" you mean chips made (using neuromorphic engineering) for neuromorphic computing.

Related example

From what I currently understand from this article, neuromorphic chips, in particular the TrueNorth chip, are being used (or emulated) for signal processing related to embedded systems.

Doubt

The signal processing performed by these (emulated) neuromorphic chips might be part of real-life applications involving control systems.

",4903,,,,,5/16/2020 21:04,,,,0,,,,CC BY-SA 4.0 21213,2,,21209,5/16/2020 22:09,,2,,"

By definition of $V_{n+1}$, we have:

$V_{n+1} = \frac{\sum_{k=1}^{n} W_{k} G_{k}}{\sum_{k=1}^{n} W_{k}} \; \tag{1}$

Then, taking the $n^{th}$ term out of the sum in the numerator, we have:

$V_{n+1} = \frac{W_{n}G_{n} \; + \; \sum_{k=1}^{n-1} W_{k} G_{k}}{\sum_{k=1}^{n} W_{k}} \; \tag{2}$

Then, from the definition of $V_n$, $V_{n} = \frac{\sum_{k=1}^{n-1} W_{k} G_{k}}{\sum_{k=1}^{n-1} W_{k}}$, we have:

$\sum_{k=1}^{n-1} W_{k} G_{k} = V_{n}*\sum_{k=1}^{n-1} W_{k} \; \tag{3}$

Then, substituting $(3)$ in the numerator of $(2)$, we get:

$V_{n+1} = \frac{W_{n}G_{n} \; + \; V_{n}*\sum_{k=1}^{n-1} W_{k}}{\sum_{k=1}^{n} W_{k}} \; \tag{4}$

Then, adding and subtracting $V_{n}W_{n}$ in the numerator of $(4)$, we obtain:

$V_{n+1} = \frac{W_{n}G_{n} \; + \; V_{n}*\sum_{k=1}^{n-1} W_{k} \; + \; V_n W_n \; - \; V_n W_n}{\sum_{k=1}^{n} W_{k}} \; \tag{5}$

We factor $V_n$ in the numerator of $(5)$:

$V_{n+1} = \frac{W_{n}G_{n} \; + \; V_{n}(W_n \; + \; \sum_{k=1}^{n-1} W_{k}) \; - \; V_n W_n}{\sum_{k=1}^{n} W_{k}} \; \tag{6}$

We simplify, taking into account that the denominator $\sum_{k=1}^{n} W_{k} = W_n + \sum_{k=1}^{n-1} W_{k}$, we get:

$V_{n+1} = V_n + \frac{W_n G_n - W_n V_n}{\sum_{k=1}^{n} W_{k}} \; \tag{7} $

Further rearrangements of the terms give us: $V_{n+1} = V_n + \frac{W_n}{\sum_{k=1}^{n} W_{k}}[G_n - V_n] \; \tag{8}$

Finally, by definition of $C_n$ as the cumulative sum of the weights up to time $n$, we get the desired incremental update equation: $V_{n+1} = V_n + \frac{W_n}{C_n}[G_n - V_n] \; \tag{9}$
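As a quick sanity check of the derivation, here is a small NumPy script (random toy weights and returns of my own) verifying that the incremental rule $(9)$ reproduces the batch formula:

import numpy as np

rng = np.random.default_rng(0)
n = 10
W = rng.uniform(0.1, 2.0, size=n)   # importance-sampling weights W_1..W_n
G = rng.normal(size=n)              # returns G_1..G_n

# batch formula: V_{n+1} = sum_{k=1}^{n} W_k G_k / sum_{k=1}^{n} W_k
V_batch = np.sum(W * G) / np.sum(W)

# incremental rule: V_{n+1} = V_n + (W_n / C_n) * (G_n - V_n), with C_n the cumulative weight
V, C = 0.0, 0.0
for k in range(n):
    C += W[k]
    V += (W[k] / C) * (G[k] - V)

print(V_batch, V)   # the two values agree up to floating-point error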

",34010,,,,,5/16/2020 22:09,,,,0,,,,CC BY-SA 4.0 21215,2,,3101,5/17/2020 4:11,,-1,,"

Not really. I mean, at its core, machine learning from an application perspective often seeks to produce human-level results, but there isn't any theorem describing human understanding of reality.

For example, proving that computer vision works well is essentially like proving you have a correct understanding of human perception.

It becomes somewhat circular, and while proofs exist for data with certain assumed properties, those assumptions don't really hold. I mean, think about trying to describe reality: it may exist on a lower-dimensional manifold, but analytically describing it? I don't think so.

Even proving robustness ends up being somewhat futile, since even if you correctly eliminate adversarial examples, this doesn't mean your CV application will produce correct results in general, only that the classification is robust (robust and correct are two different things).

",32390,,,,,5/17/2020 4:11,,,,0,,,,CC BY-SA 4.0 21216,1,,,5/17/2020 4:48,,4,796,"

Besides computer vision and image classification, what other use cases/applications are there for few-shot learning?

",37137,,2444,,1/7/2021 17:24,1/7/2021 17:24,What are some use cases of few-shot learning?,,2,1,,,,CC BY-SA 4.0 21218,1,,,5/17/2020 6:55,,1,111,"

As far as I know, neural networks have hidden computational units and HMMs have hidden states.

Hidden Markov Models can be used to generate a language, that is, to list elements from a family of strings. For example, if you have an HMM that models a set of sequences, you would be able to generate members of this family by listing sequences that would fall into the group of sequences we are modeling.

In this, this and this paper, HMMs are combined with ANNs. But how exactly? What is a Hidden Markov Model - Artificial Neural Network (HMM-ANN)? Is HMM-ANN a hybrid algorithm? In simple words, how is this model or algorithm used?

",30725,,2444,,5/17/2020 12:47,5/17/2020 12:47,What is a Hidden Markov Model - Artificial Neural Network (HMM-ANN)?,,0,6,,,,CC BY-SA 4.0 21219,1,21227,,5/17/2020 7:24,,2,6199,"

What is the difference between deep learning and shallow learning?

What I am interested in knowing is not the definition of deep learning and shallow learning, but understanding the actual difference.

Links to other resources are also appreciated.

",30725,,2444,,12/22/2021 10:25,12/22/2021 10:26,What is the difference between deep learning and shallow learning?,,1,3,,12/22/2021 10:26,,CC BY-SA 4.0 21220,1,,,5/17/2020 7:28,,2,369,"

A concept class $C$ is PAC-learnable if there exists an algorithm that, with probability at least $(1-\delta)$ (the ""probably"" part), outputs a hypothesis with an error less than $\epsilon$ (the ""approximately"" part), in time that is polynomial in $1/\epsilon$, $1/\delta$, $n$ and $|C|$.

Tom Mitchell defines an upper bound for the sample complexity, $m \geq \frac{1}{\epsilon}\left(\ln|H| + \ln(1/\delta)\right)$, for finite hypothesis spaces. Based on this bound, he classifies whether target concepts are PAC-learnable or not, for example, the concept class of conjunctions of $n$ Boolean literals.

It seems to me that PAC-learnability seeks to act more like a classification of certain concept classes.

Are there any practical purposes for knowing whether a concept class is PAC-learnable?

",32780,,2444,,5/17/2020 14:01,5/18/2020 13:54,Is there any practical application of knowing whether a concept class is PAC-learnable?,,1,2,,,,CC BY-SA 4.0 21226,2,,21216,5/17/2020 11:23,,3,,"

Few-shot learning (FSL) can be useful for many (if not all) machine learning problems, including supervised learning (regression and classification) and reinforcement learning.

The paper Generalizing from a Few Examples: A Survey on Few-Shot Learning (2020) provides an overview (including examples of applications and use cases) of FSL. Their definition of FSL provided is based on Tom Mitchell's famous definition of machine learning.

Definition 2.1 (Machine Learning [92, 94]). A computer program is said to learn from experience $E$ with respect to some classes of task $T$ and performance measure $P$ if its performance can improve with $E$ on $T$ measured by $P$.

Here's the definition of FSL.

Definition 2.2. Few-Shot Learning (FSL) is a type of machine learning problems, specified by $E$, $T$ and $P$, where $E$ contains only a limited number of examples with supervised information for the task $T$.

Specific examples of applications of FSL are

  • character generation
  • drug toxicity discovery
  • sentiment classification from short text
  • object recognition
",2444,,2444,,5/17/2020 13:34,5/17/2020 13:34,,,,0,,,,CC BY-SA 4.0 21227,2,,21219,5/17/2020 12:57,,0,,"

That article only mentions ""shallow learning"" in the title and it mentions ""shallow"" to refer to the fact that deep learning models are not really learning any ""deep"" concepts, where ""deep"" here means ""philosophically deep"". So, I think the title is just (fairly?) provocative.

Currently, in machine learning, the expression ""shallow learning"" isn't really standardized, as opposed to deep learning, which refers to learning, with gradient descent and back-propagation, from (typically) huge amounts of data with neural networks. Nevertheless, ""shallow learning"" may occasionally refer to everything that isn't deep learning (e.g. traditional machine learning models, such as support vector machines), but most likely it refers to learning in neural networks with only a small number (0-2) of hidden layers (i.e. non-deep neural networks).

Note that the difference between deep and shallow neural networks isn't really clear. Some people may consider neural networks with only 1-2 hidden layers already deep, while others may consider only neural networks with e.g. 5-10 hidden layers deep. This also shows that deep learning isn't actually well-defined too.

The other linked article actually says

CNN performance was compared to that of conventional (shallow) machine learning methods, including ridge regression (RR) on the images’ principal components and support vector regression.

So, in this article, they use ""shallow learning"" to refer to traditional (or conventional) machine learning models, which confirms what I said above.

",2444,,2444,,5/17/2020 13:18,5/17/2020 13:18,,,,0,,,,CC BY-SA 4.0 21228,1,,,5/17/2020 16:15,,1,46,"

Is there an AI system (preferably, one that interacts with the human, such as a chatbot like this one) that, given some input (e.g. entered into the system by writing text), such as a person's physical history and symptoms (of a certain disease), produces a diagnosis of the disease and/or suggests medications or a treatment to improve the condition of the patient?

",36957,,2444,,5/17/2020 18:53,5/17/2020 18:53,"Is there an AI system that, given a patient's symptoms, produces a diagnosis and suggests a treatment?",,0,1,,,,CC BY-SA 4.0 21229,2,,21216,5/17/2020 17:17,,0,,"

An interesting use case is IQ tests, or program synthesis from examples in general.

IQ tests often require you to derive a program from a few examples that can produce a certain output. See for instance https://github.com/fchollet/ARC

",37145,,,,,5/17/2020 17:17,,,,0,,,,CC BY-SA 4.0 21230,1,,,5/17/2020 17:44,,3,68,"

From the brief research I've done on the topic, it appears that the way DeepMind's AlphaZero or MuZero makes decisions is through Monte Carlo tree search, wherein randomized simulations allow for a more rapid way to make calculations than traditional alpha-beta pruning. As the simulation space increases, this search approaches that of a classical tree search.

Where exactly did DeepMind use neural networks? Was it in the evaluation portion? And if so, how did they determine what makes a ""good"" or ""bad"" game state? If they deferred to the evaluations of another chess engine like Stockfish, how do we see AlphaZero absolutely demolish Stockfish in head-to-head matches?

",37147,,,,,5/17/2020 17:44,Where does reinforcement learning actually show up in Deepmind's game engines?,,0,0,,,,CC BY-SA 4.0 21231,1,,,5/17/2020 18:01,,1,40,"

I want to generate a confidence interval around my prediction (vector) $\hat{y}$. I have implemented the following procedure. However, I am not sure whether this makes sense in a statistical way:

  1. I have a data set. First, I split it into an 80% training set (2000 measurements), a 10% validation set and a 10% testing set (250 measurements).
  2. I resample B ($\sim 100$) training sets from the original training set with replacement.

    • For each of the $B$ training datasets, I train a model $b$ and validate it (every time I use the same validation set).

    • I use the test set from point 1 and make a prediction $\hat{y}_i^b$ (so every time I use the same test set, since I need the predictions for the same input values).

  3. I calculate the average of the $B$ predictions. $$\bar{\hat{y}}_i=\frac{1}{B}\sum_{b=1}^B\hat{y}_i^b$$.
  4. I calculate the variance ($i\in [1,250]$) $$\sigma_{\hat{y}_i}^2=\frac{1}{B-1}\sum_{b=1}^B(\hat{y}_i^b -\bar{\hat{y}}_i)^2$$

  5. I guess the $95\%$ confidence interval for the prediction $\hat{y}_i$ is $$\hat{y}_i\in \bar{\hat{y}}_i\pm z_{0.025}\frac{\sigma_{\hat{y}_i}}{\sqrt{B}}$$ with $z_{0.025}=1.96$

  6. If I sort the $\hat{y}_i$ values and plot them together with the upper and lower bounds, I will get the prediction curve with a CI.

My biggest uncertainty relates to step 5). I read in the book Supervised Classification: Quite a Brief Overview by Marco Loog:

When the population standard deviation $\sigma$ is known and the parent population is normally distributed or $N>30$ the $100(1-\alpha)$ CI for the population mean is given by the symmetrical distribution for the standardized normal distribution $z$ $$\mu\in \bar{x}\pm z_{a/2}\frac{\sigma}{\sqrt{N}}$$

Is it correct to say here that $N=B$ (the number of bootstrap models, or the number of resampled training sets, or the number of estimators $\hat{y}_i^b$)? Does the procedure make sense?
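For concreteness, here is a minimal sketch of the procedure I described in steps 2-5 (the model here is a placeholder scikit-learn regressor and the data is random, not my actual network and dataset):

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(2000, 5)), rng.normal(size=2000)   # stand-ins for my data
X_test = rng.normal(size=(250, 5))

B = 100
preds = np.empty((B, len(X_test)))
for b in range(B):
    idx = rng.integers(0, len(X_train), size=len(X_train))   # resample with replacement
    model = Ridge().fit(X_train[idx], y_train[idx])
    preds[b] = model.predict(X_test)                          # prediction for the fixed test set

y_bar = preds.mean(axis=0)                  # step 3: average prediction per test point
sigma = preds.std(axis=0, ddof=1)           # step 4: std over the B bootstrap predictions
lower = y_bar - 1.96 * sigma / np.sqrt(B)   # step 5: the interval I am unsure about
upper = y_bar + 1.96 * sigma / np.sqrt(B)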

",36989,,36989,,5/21/2020 8:45,5/21/2020 8:45,Confidence Interval around prediction with bootstrapping,,0,0,,,,CC BY-SA 4.0 21232,2,,21191,5/17/2020 21:15,,1,,"

It might be the case that, if you perform a large number of random rollouts, the ""best action"" chosen by the agent without the domain knowledge is the same as the one chosen by the agent with the domain knowledge. I guess what you can do is try to reduce the number of rollouts and see if the performance changes.

",36074,,,,,5/17/2020 21:15,,,,0,,,,CC BY-SA 4.0 21233,1,,,5/17/2020 21:25,,2,78,"

I am currently implementing the very basic version (REINFORCE) of the Monte Carlo policy gradient algorithm. I was wondering if this is the correct gradient for the log of softmax.

\begin{align} \nabla_{\theta} \log \pi_{\theta}(s, a) &= \varphi(s, a)-\mathbb{E}\left[\varphi(s, a)_{\forall a \in A}\right] \\ &= \left(\varphi(s)^T \cdot \theta_{a}\right)-\sum_{\forall a \in A}\left(\varphi(s)^T \cdot \theta_{a}\right) \end{align}

where $\varphi(s)$ is the feature vector at state $s$.

I am not sure if my interpretation of the equation is correct. I ask because, in my implementation, my weights ($\theta$) blow up after a few iterations, and I have a feeling the problem is in this line.

",36404,,36404,,5/18/2020 18:05,5/18/2020 18:05,Is this the correct gradient for log of softmax?,,0,0,,5/21/2020 22:49,,CC BY-SA 4.0 21234,1,,,5/17/2020 22:31,,2,445,"

I am working with the K-means clustering algorithm for unsupervised learning.

Is the following dataset suitable for the k-means clustering task or not? Why or why not? The dataset has only two features.

",33670,,2444,,5/18/2020 9:43,5/18/2020 9:43,Is this dataset with only two features suitable for clustering with k-means?,,1,0,,,,CC BY-SA 4.0 21235,1,,,5/17/2020 23:00,,2,30,"

I have the following time-series aggregated input for an LSTM-based model:

x(0): {y(0,0): {a(0,0), b(0,0)}, y(0,1): {a(0,1), b(0,1)}, ..., y(0,n): {a(0,n), b(0,n)}}
x(1): {y(1,0): {a(1,0), b(1,0)}, y(1,1): {a(1,1), b(1,1)}, ..., y(1,n): {a(1,n), b(1,n)}}
...
x(m): {y(m,0): {a(m,0), b(m,0)}, y(m,1): {a(m,1), b(m,1)}, ..., y(m,n): {a(m,n), b(m,n)}}

where x(m) is a timestep, a(m,n) and b(m,n) are features aggregated by the non-temporal sequential key y(m,n) which might be 0...1,000.

Example:

0: {90: {4, 4.2}, 91: {6, 0.2}, 92: {1, 0.4}, 93: {12, 11.2}}
1: {103: {1, 0.2}}
2: {100: {3, 0.1}, 101: {0.4, 4}}

Where 90-93, 103, and 100-101 are aggregation keys.

How can I feed this kind of input to LSTM?

Another approach would be to use non-aggregated data. In that case, I'd get the proper input for LSTM. Example:

Aggregated input:

0: {100: {3, 0.1}, 101: {0.4, 4}}

Original input:

0: 100, 1, 0.05
1: 101, 0.2, 2
2: 100, 1, 0
3: 100, 1, 0.05
4: 101, 0.2, 2

But in that case, the aggregation would be lost, and the whole purpose of the aggregation is to minimize the number of steps, so that I get 500 timesteps instead of e.g. 40,000, which is impossible to feed to an LSTM. If you have any ideas, I'd appreciate it.

",36999,,36999,,5/19/2020 16:58,5/19/2020 16:58,How to feed key-value features (aggregated data) to LSTM?,,0,0,,,,CC BY-SA 4.0 21236,1,,,5/18/2020 0:32,,1,160,"

Today, if you scan an object and want its CAD file (Solidworks/Autocad), you need to use reverse engineering software (Geomagic). This takes time, and you need experience with the software tools.

Is there an AI tool/app that does the job automatically? If not, would it be a reasonable idea to develop an AI application capable of doing it? What would be the biggest challenges?

",37156,,1671,,5/18/2020 21:56,11/5/2022 21:05,Is there an AI tool to reverse engineer scanned data to obtain its CAD file?,,1,0,,,,CC BY-SA 4.0 21237,1,25057,,5/18/2020 1:28,,5,2047,"

In scaled dot product attention, we scale our outputs by dividing the dot product by the square root of the dimensionality of the matrix:

The stated reason is that this constrains the distribution of the weights of the output to have a standard deviation of 1.

Quoted from Transformer model for language understanding | TensorFlow:

For example, consider that $Q$ and $K$ have a mean of 0 and variance of 1. Their matrix multiplication will have a mean of 0 and variance of $d_k$. Hence, square root of $d_k$ is used for scaling (and not any other number) because the matmul of $Q$ and $K$ should have a mean of 0 and variance of 1, and you get a gentler softmax.

Why does this multiplication have a variance of $d_k$?

If I understand this, I will then understand why dividing by $\sqrt{d_k}$ would normalize to 1.

Trying this experiment on 2x2 arrays, I get an output variance of 1.6:
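For reference, this is the kind of experiment I mean, just with larger matrices and many samples so that the estimate is less noisy (NumPy sketch, the sizes are my own choice):

import numpy as np

rng = np.random.default_rng(0)
d_k = 64
Q = rng.normal(0, 1, size=(1000, d_k))   # rows ~ query vectors, mean 0, variance 1
K = rng.normal(0, 1, size=(1000, d_k))   # rows ~ key vectors, mean 0, variance 1

scores = Q @ K.T
print(scores.var())                      # ~ d_k = 64
print((scores / np.sqrt(d_k)).var())     # ~ 1 after scaling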

",36486,,32410,,5/9/2021 23:50,5/9/2021 23:50,"Why does this multiplication of $Q$ and $K$ have a variance of $d_k$, in scaled dot product attention?",,2,1,,,,CC BY-SA 4.0 21243,2,,13959,5/18/2020 6:57,,1,,"

From "Deep learning-based video summarization — A detailed exploration" by Surya Remanan:

Video summarization can be considered as the process of distilling a raw video into a more compact form without losing much information. In a general video summarization system, image features of video frames are extracted, and then the most representative frames are selected through analyzing the visual variations among visual features. This is done either by taking a holistic view of the entire video or by identifying the local differentiation among the adjacent frames. Most of those attempts rely on global features such as colour, texture, motion information, etc. Clustering techniques are also used for summarization. Video summarization can be categorized into two forms:

  • Static video summarization (keyframing) and
  • Dynamic video summarization (video skimming)

Static video summaries are composed of a set of keyframes extracted from the original video, while dynamic video summaries are composed of a set of shots and are produced taking into account the similarity or domain-specific relationships among all video shots.

Following is an attention-based video summarization model - PyTorch implementation of the ACCV 2018-AIU2018 paper Video Summarization with Attention

There are also video-summarisation-focused models using reinforcement learning - Unsupervised video summarization with deep reinforcement learning (Theano)

There is an LSTM - GAN based approach to video summarization - video summarization lstm-gan pytorch implementation

Microsoft Bing Search has come up with a video summarization technique using thumbnails. Intelligent Search: Video summarization using machine learning

",12861,,12630,,10/27/2020 1:31,10/27/2020 1:31,,,,2,,,,CC BY-SA 4.0 21244,2,,21003,5/18/2020 7:29,,0,,"

I went into the PyTorch code for the Spinning Up implementation of vanilla policy gradient and, from what I could understand, found that they use a learning rate of 1e-3 for training the baseline and perform 80 gradient descent steps on the same dataset by default, with no termination criterion.
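As a sketch of what that looks like (this is my own minimal PyTorch rendition of the idea, not the Spinning Up code itself):

import torch

value_net = torch.nn.Sequential(torch.nn.Linear(4, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
optimizer = torch.optim.Adam(value_net.parameters(), lr=1e-3)   # lr = 1e-3 for the baseline

states = torch.randn(256, 4)    # stand-ins for the collected states
returns = torch.randn(256)      # stand-ins for the observed returns-to-go

for _ in range(80):             # fixed 80 gradient steps, no termination criterion
    value_loss = ((value_net(states).squeeze(-1) - returns) ** 2).mean()
    optimizer.zero_grad()
    value_loss.backward()
    optimizer.step()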

Also, it is usually impossible to fit the value function completely, as we are using a function approximator. The main point is not to get too carried away trying to reach the lowest loss, as the main improvement in the agent's performance will come from the policy gradient steps rather than from trying to aggressively fit an accurate value function for the baseline.

Link for the implementation: spinning up implementation of vpg

",36861,,,,,5/18/2020 7:29,,,,0,,,,CC BY-SA 4.0 21245,2,,7838,5/18/2020 8:50,,1,,"

According to the book Artificial Intelligence: A Modern Approach (section 1.1), artificial intelligence (AI) has been defined in multiple ways, which can be organized into 4 categories.

  1. Thinking Humanly
  2. Thinking Rationally
  3. Acting Humanly
  4. Acting Rationally

The following picture (from the same book) provides 8 definitions of AI, where each box contains 2 definitions that fall into the same category. For example, the definitions in the top-left corner fall into the category thinking humanly.

There is also the AI effect, which Pamela McCorduck describes (in her book Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence, p. 204) as follows

it's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something — play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, but that's not thinking

",3005,,2444,,1/15/2021 23:55,1/15/2021 23:55,,,,1,,,1/15/2021 23:55,CC BY-SA 4.0 21247,2,,21234,5/18/2020 9:37,,2,,"

One problem with clustering algorithms is that they will typically find you a solution, i.e. they will split your data set into clusters, but they will find you a structure even if there isn't one. Your data looks like it could consist of about 5 to 7 clusters, but it could equally well be just 2 or only 1.

What you need to do after the clustering is to assess the quality of the result. I recommend having a look at Finding Groups in Data by Kaufman & Rousseeuw. They discuss various clustering algorithms and also a procedure that works out how cohesive your clusters are. Though it is 30 years old, it is an excellent book on the topic.

You also have the issue of choosing a value for k in your clustering: I usually start with two, and increase it from there; at each step I compute the cohesion of the result using their method, until I get the best score. This is an objective way of finding a good value for k and usually a reasonable clustering result.
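For illustration, here is a minimal version of that loop in Python (scikit-learn, using the silhouette score as one possible cohesion measure; the data here is a random placeholder for your own two-feature dataset):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X = np.random.RandomState(0).normal(size=(300, 2))   # placeholder for your two features

scores = {}
for k in range(2, 10):                                # start at 2 and increase k
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)           # higher = more cohesive, well-separated

best_k = max(scores, key=scores.get)
print(scores, best_k)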

The ultimate test, of course, is then if looking at the result makes sense to you. No cluster algorithm can do that for you.

",2193,,,,,5/18/2020 9:37,,,,3,,,,CC BY-SA 4.0 21248,2,,21237,5/18/2020 10:03,,0,,"

It might help to take two small matrices that match the assumptions (mean of zero and variance of one) and just do the matrix multiplication. Each entry of the product is a sum of $d_k$ products of independent entries of $Q$ and $K$, so the variance of the result scales with the dimensionality $d_k$.

",30426,,,,,5/18/2020 10:03,,,,1,,,,CC BY-SA 4.0 21249,1,,,5/18/2020 10:54,,1,160,"

How to best make use of learning rate scheduling in reinforcement learning?

To me, a low learning rate towards the end, to fine-tune what you've learned with subtle updates, makes sense. But I don't see why this should be brought down linearly over training time. Wouldn't this increase overfitting too, as it promotes an early adopted policy getting further and further fine-tuned for the rest of the training? Wouldn't it be better to keep it constant over the entire training, so that when the agent finds novel experiences later, it still has a high enough learning rate to update its model?

I also don't really know how the modern deep RL papers do it. The StarCraft II paper by DeepMind and the OpenAI hide-and-seek paper don't mention learning rate schedules, for instance.

Or are there certain RL environments where it's actually best to use something like a linear learning rate schedule?

",31180,,2444,,5/18/2020 11:05,10/20/2022 0:07,How to best make use of learning rate scheduling in reinforcement learning?,,1,0,,,,CC BY-SA 4.0 21250,1,,,5/18/2020 11:25,,1,91,"

Can anyone list the differences between a deep belief network (DBN), a restricted Boltzmann machine (RBM) and a deep Boltzmann machine (DBM), using simple examples?

Links to other resources are also appreciated.

",30725,,2444,,5/18/2020 12:22,5/18/2020 12:22,"What are the differences between a deep belief network, a restricted Boltzmann machine and a deep Boltzmann machine?",,0,1,,,,CC BY-SA 4.0 21261,1,,,5/18/2020 12:54,,3,185,"

In deep learning, the concept of validation loss is to ensure that the model being trained is not currently overfitting the data. Is there a similar concept of overfitting in deep Q-learning?

Given that I have a fixed number of experiences already in a replay buffer and I train a q network by sampling from this buffer, would computing the validation loss (separate from the experiences in the replay buffer) help me to decide whether I should stop training the network?

For example, if my validation loss increases even though my training loss continues to decrease, I should stop the training. Does the deep learning notion of validation loss also apply in the deep Q-network case?

Just to clarify again, no experiences are collected during the training of the DQN.

",32780,,2444,,5/18/2020 13:27,5/18/2020 13:27,Does the concept of validation loss apply to training deep Q networks?,,0,1,,,,CC BY-SA 4.0 21262,2,,21220,5/18/2020 13:44,,1,,"

Is there any practical application of knowing whether a concept class is PAC-learnable?

If you know that a concept class is PAC-learnable (i.e. its VC dimension is finite), then there's a possibility that you can design an algorithm that can find a function (or concept) that is arbitrarily close to your target (or desired) function.

This is not really an application, but a consequence, which can lead to applications.

However, note that asking if PAC learning is useful in practice is like asking if special relativity is useful in practice. Yes, they are useful, but in the sense that they can be used to predict the outcomes of an experiment or explain the rules in their specific context. In the case of machine learning, PAC learning can be used to explain e.g. the probably required number of data points needed to learn a concept (a target function) approximately.

See also Are PAC learning and VC dimension relevant to machine learning in practice? for more concrete ""applications"" of PAC-learning and the VC dimension.

",2444,,2444,,5/18/2020 13:54,5/18/2020 13:54,,,,0,,,,CC BY-SA 4.0 21264,1,21290,,5/18/2020 15:11,,3,217,"

I'm trying to understand RL applied to time series (so with an infinite horizon) which have a continuous state space and a discrete action space.

First, some preliminary questions: in this case, what is the optimal policy? Given the infinite horizon, there is no terminal state but only an objective to maximise the rewards, so I can't run more than one episode. Is that correct?

Consequently, what is the difference between on-policy and off-policy learning given this framework?

",37169,,1847,,5/19/2020 8:31,5/19/2020 19:46,What is the difference between on-policy and off-policy for continuous environments?,,1,1,,,,CC BY-SA 4.0 21266,1,,,5/18/2020 15:37,,2,969,"

This old question has no definitive answer yet, that's why I am asking it here again. I also asked this same question here.

If I'm doing policy gradient in Keras, using a loss of the form:

rewards*cross_entropy(action_pdf, selected_action_one_hot)

How do I manage negative rewards?

I've had success with this form in cases where the reward is always positive, but it does not train with negative rewards. The failure mode is for it to drive itself to very confident predictions all the time, which results in very large negative losses due to the induced deviation for exploration. I can get it to train by clipping rewards at zero, but this throws away a lot of valuable information (only carrots, no sticks).

",27616,,2444,,11/1/2020 22:48,11/1/2020 22:48,How do you manage negative rewards in policy gradients?,,1,1,,,,CC BY-SA 4.0 21267,1,21298,,5/18/2020 16:18,,6,494,"

I am trying to understand the difference between a Bayesian Network and a Markov Chain.

When I search for this on the web, the unanimous answer seems to be that a Bayesian Network is directional (i.e. it's a DAG) and a Markov Chain is not directional.

However, a Markov Chain example is often over time, where the weather today impacts the weather tomorrow, but the weather tomorrow (obviously) does not impact the weather today. So I am quite confused: how is a Markov Chain not directional?

I seem to be missing something here. Can someone please help me understand?

",36997,,2444,,5/18/2020 19:55,5/19/2020 13:37,What is the difference between a Bayesian Network and a Markov Chain?,,2,0,,,,CC BY-SA 4.0 21268,2,,21266,5/18/2020 17:22,,1,,"

You don't need to manage negative rewards separately; if you implemented the algorithm correctly, it will work regardless of whether the rewards are negative or not. You seem to be using the rewards for the loss, but you should be using the return, which is the sum of the rewards for some state-action pair from that point until the end of the trajectory.

You also seem to be missing a $-$ sign from the loss. The objective function for the vanilla policy gradient algorithm (REINFORCE) which we want to maximize is \begin{equation} J = \sum_a \pi(a|s) q_{\pi}(s, a) \end{equation} It can be shown that the gradient sample for this policy gradient method is \begin{equation} \nabla J = G_t \nabla \log (\pi(A_t|S_t)) \end{equation} so in TensorFlow you should define your loss as \begin{equation} J = - G_t \log(\pi(A_t|S_t)) \end{equation} We need the $-$ because in TensorFlow you use minimizers, but we want to maximize this function, so minimizing this loss is the same as maximizing the objective function. In conclusion, code similar to what you wrote should be
-return * cross_entropy(action_pdf, selected_action_one_hot)

EDIT

As pointed out in the comments, we don't actually need the $-$ because it is already included in the cross_entropy function.
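To make this concrete, here is a minimal sketch (not the exact code from the question) of such a return-weighted loss in TensorFlow/Keras; the tensors action_probs, actions_one_hot and returns are assumed to be computed elsewhere:

import tensorflow as tf

# Minimal sketch of a REINFORCE-style loss, assuming these tensors exist:
#   action_probs:    (batch, n_actions) softmax output of the policy network
#   actions_one_hot: (batch, n_actions) one-hot of the actions actually taken
#   returns:         (batch,) the return G_t from each state-action pair onwards
def reinforce_loss(action_probs, actions_one_hot, returns):
    # categorical cross-entropy already contains the minus sign: -log pi(A_t|S_t)
    neg_log_prob = tf.keras.losses.categorical_crossentropy(actions_one_hot, action_probs)
    # weight by the (possibly negative) return; no clipping of rewards is needed
    return tf.reduce_mean(returns * neg_log_prob)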

",20339,,20339,,5/18/2020 19:11,5/18/2020 19:11,,,,5,,,,CC BY-SA 4.0 21270,2,,21267,5/18/2020 17:57,,1,,"

The main difference between a Bayesian network and a Markov chain is not that a Markov chain is not directional; it is that the graph of a Bayesian network is non-trivial, whereas the graph of a Markov chain is somewhat trivial: all of the previous $k$ nodes simply point to the current node. To illustrate further why this is trivial, let each node represent a random variable $X_i$. Then the nodes representing $X_i$ for $ t-k \leq i < t$ are connected by a directed edge to $X_t$. That is, the edges $(X_i, X_t) \in E$ for $ t-k \leq i < t$, where $E$ is the set of edges of the graph.

To illustrate this please see the examples below.

Assume that we have a $k$th order Markov chain, then by definition we have $\forall t > k$ $\mathbb{P}(X_t = x | X_0,...,X_{t-1}) = \mathbb{P}(X_t = x | X_{t-k},...,X_{t-1})$.

The main difference between the above definition and the definition of a Bayesian Network is that due to the direction of the graph we can have different dependencies for each $X_t$. Consider the Bayesian Network in the Figure below

We would get that $\mathbb{P}(X_4 = x| X_1, X_2, X_3) = \mathbb{P}(X_4 = x | X_2, X_3)$ and $\mathbb{P}(X_5 = x| X_1, X_2, X_3, X_4) = \mathbb{P}(X_5 = x | X_3)$.

So, the past events that the current random variable depends on don't have to have the same 'structure' in a Bayesian Network as in a Markov Chain.

",36821,,36821,,5/19/2020 13:37,5/19/2020 13:37,,,,0,,,,CC BY-SA 4.0 21271,1,,,5/18/2020 20:29,,2,21,"

I am trying to understand the following statement taken from the paper Graph Neural Networks: A Review of Methods and Applications (2019).

Standard neural networks like CNNs and RNNs cannot handle the graph input properly in that they stack the feature of nodes by a specific order.

This statement is confusing to me. I have not used CNNs/RNNs for non-Euclidean data before, so perhaps that's where my understanding falls off.

How do CNNs/RNNs stack the feature of nodes by a specific order?

",36486,,2444,,5/18/2020 21:25,5/18/2020 21:25,"How do CNNs or RNNs ""stack the feature of nodes by a specific order""?",,0,0,,,,CC BY-SA 4.0 21272,2,,21249,5/18/2020 20:36,,0,,"

I have not used learning rate schedules, but I do have experience with adjustable learning rates.

The Keras callback ReduceLROnPlateau is useful for adjusting the learning rate. If you use it to monitor the validation loss versus training loss, you will avoid the danger of overfitting. Also, you can use the ModelCheckpoint callback to save the model with the lowest validation loss and use that to make predictions. The documentation is here.
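As a minimal sketch (the model, data and file path below are just placeholders), wiring those two callbacks together looks roughly like this:

from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint

# Halve the learning rate when the validation loss stops improving,
# and keep a copy of the weights from the best epoch seen so far.
callbacks = [
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=3, min_lr=1e-6),
    ModelCheckpoint("best_model.h5", monitor="val_loss", save_best_only=True),
]

model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=100, callbacks=callbacks)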

I look at the validation loss as a deep valley in $N$ space, where $N$ is the number of trainable parameters. As you progress down the valley, it becomes increasingly narrower, so it is best to reduce the learning rate to get further down the valley (closer to the minimum). With an adjustable learning rate, you can start with a larger initial rate that converges faster, then reduce it as needed to achieve a minimum loss.

I wrote a custom callback that initially monitors the training loss and adjusts the learning rate based on that until the training accuracy achieves 95%, then it switches to adjusting the learning rate based on validation loss.

I am also experimenting with a slightly different approach to training. On a given epoch, suppose the quantity you are monitoring does NOT improve. That means that you have moved to a point in $N$ space (a set of weight values) that is NOT as "good" as the point you were at in the previous epoch. So, instead of training from the point you are at in the current epoch, I set the weights back to what they were for the previous (better) epoch, reduce the learning rate, and then continue training from there. This appears to work rather well.
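The callback described above is not published code; a rough sketch of the idea (the class and parameter names are my own) could look like this:

import tensorflow as tf

class RollbackOnPlateau(tf.keras.callbacks.Callback):
    # If the monitored quantity does not improve, restore the previous best
    # weights and shrink the learning rate before continuing to train.
    def __init__(self, monitor="val_loss", factor=0.5):
        super().__init__()
        self.monitor = monitor
        self.factor = factor
        self.best = float("inf")
        self.best_weights = None

    def on_epoch_end(self, epoch, logs=None):
        current = (logs or {}).get(self.monitor)
        if current is None:
            return
        if current < self.best:
            self.best = current
            self.best_weights = self.model.get_weights()
        elif self.best_weights is not None:
            # roll back to the better point in weight space, then lower the LR
            self.model.set_weights(self.best_weights)
            old_lr = float(tf.keras.backend.get_value(self.model.optimizer.lr))
            tf.keras.backend.set_value(self.model.optimizer.lr, old_lr * self.factor)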

",33976,,2444,,9/29/2020 22:10,9/29/2020 22:10,,,,0,,,,CC BY-SA 4.0 21273,1,,,5/18/2020 21:18,,2,27,"

I am solving a problem of image classification of the image dataset for 3 classes. Dataset is highly imbalanced.

How will sampling (either over- or under-sampling) work in that case? Should I remove (or add) any random number of images, or should I follow some pattern?

In the case of CSV data, the general rule is to do PCA, and then remove the data points, but how to do it in the image dataset? Is there any other way to handle this problem?

",38341,,2444,,5/18/2020 21:35,5/18/2020 21:35,How does sampling works in case of imbalanced image datasets?,,0,0,,,,CC BY-SA 4.0 21274,1,21352,,5/18/2020 21:19,,4,69,"

Let's say you are training a neural network in an RL setting, where the state (i.e. features/input data) can be the same for multiple successive steps (~typically around 8 steps) of an episode.

For example, an initial state might consist of the following values:

[30, 0.2, 0.5, 1, 0]

And then again the same state could be fed into the neural network for e.g. 6-7 times more, resulting in ultimately the following input arrays:

[[30, 0.2, 0.5, 1, 0], 
 [30, 0.2, 0.5, 1, 0], 
 ..., 
 [30, 0.2, 0.5, 1, 0]]

I know that a value of 0 in the feature set means that the weight for this feature contributes nothing to the output.

But what about the repetition of values? How does that affect learning, if it does at all? Any ideas?

Edit: I am going to provide more information as requested in the comments.

The reason I did not provide this information in the first place, is because I thought there would be similarities in such cases across problems/domains of application. But it is also fine to make it more specific.

  1. The output of the network is a probability among two paths. Our network has to select an optimal path based on some gathered network statistics.

  2. I will be using A3C, as similar work in the bibliography has made progress.

  3. The reason the agent is staying in the same state is the fact that the protocol can also make path selection decisions at the same time, without an actual update of network statistics. So in that case, you would have the same RTT for instance.

    i. This is a product of concurrency in the protocol

    ii. It is expected behavior

",35978,,35978,,5/20/2020 19:53,5/20/2020 20:01,How does the repetition of features across states at different time steps affect learning?,,1,4,,,,CC BY-SA 4.0 21276,1,,,5/18/2020 23:59,,2,27,"
  • Is there any published research on the information-carrying capacity of the human face?

Here I mean ""how much information can be conveyed via facial expressions & micro-expressions"".

This is a subject of interest because the human face is arguably ""the most interesting"" single thing for humans, since it's likely the first real pattern we recognize as infants, and conveys so much non-verbal communication that can relate to achievement of a goal or identification of a mortal threat. (Dogs similarly are said to have co-evolved to read human faces. Film acting as ""the art of the closeup"" also validates this viewpoint.)

Essentially, I'm trying to get a sense of how complex the set of human facial expressions is, what the computational complexity of the problems related to identifying the range of possible expressions is, and what it takes to emulate such expressions to imitate a human agent (i.e. these techniques can be used to ""read"" a human subject or to manipulate one).

Well researched articles & blogs would also be welcome.

",1671,,2444,,5/19/2020 10:14,5/19/2020 10:14,Is there any published research on the information-carrying capacity of the human face?,,0,0,,,,CC BY-SA 4.0 21277,1,,,5/19/2020 1:32,,2,81,"

I've researched online and seen many papers on the use of RNNs (like LSTMs or GRUs) to autocomplete for, say, a search engine, character by character. This makes sense since they inherently predict character-by-character in a sequential manner.

Would it be possible to use the transformer architecture instead to do search autocomplete? If so, how might such a model be adapted?

",35867,,2444,,5/19/2020 9:45,5/19/2020 9:45,Can you use transformer models to do autocomplete tasks?,,0,0,,,,CC BY-SA 4.0 21280,1,21289,,5/19/2020 3:39,,6,470,"

I'm building a really simple experiment, where I let an agent move from the bottom-left corner to the upper-right corner of a $3 \times 3$ grid world.

I plan to use DQN to do this. I'm having trouble handling the starting point: what if the Q network's prediction is telling the agent to move downward (or leftward) at the beginning?

Shall I program the environment to immediately give a $-\infty$ reward and end this episode? Will this penalty make the agent "fear" moving left again in the future, even if moving left is a possible choice?

Any suggestions?

",37178,,2444,,11/14/2020 17:55,11/14/2020 18:07,How should I handle invalid actions in a grid world?,,1,1,,,,CC BY-SA 4.0 21281,2,,10003,5/19/2020 4:05,,0,,"

There are visualizations of low-level activation maps, and some gradient-based methods where you take the derivative of the output with respect to the input and generate a heatmap.

I kind of have my doubts about how useful this is in general; in my opinion it kind of creates a fallacious illusion of understanding.

There's some additional research using blurring to figure out the relevant features, but again I have my doubts.

Probably the most useful is generating images by optimizing your class score. You can learn how badly your CNN actually labels things (doing this makes you realize quickly that CNNs are garbage at actually understanding and incredibly easy to trick).

",32390,,,,,5/19/2020 4:05,,,,0,,,,CC BY-SA 4.0 21282,1,21387,,5/19/2020 5:02,,2,147,"

The gradient of the softmax eligibility trace is given by the following:

\begin{align} \nabla_{\theta} \log(\pi_{\theta}(a|s)) &= \phi(s,a) - \mathbb E[\phi (s, \cdot)]\\ &= \phi(s,a) - \sum_{a'} \pi(a'|s) \phi(s,a') \end{align}

How is this equation derived?

The following relation is true:

\begin{align} \nabla_{\theta} \log(\pi_{\theta}(a|s)) &= \frac{\nabla_{\theta} \pi_{\theta}(a|s)}{\pi_{\theta}(a|s)} \tag{1}\label{1} \end{align}

Thus, the following relation must also be true: \begin{align} \frac{\nabla_{\theta} \pi_{\theta}(a|s)}{\pi_{\theta}(a|s)} &=\phi(s,a) - \sum_{a'} \pi(a'|s) \phi(s,a') \end{align}

Mathematically, why would this be the case? Probably, you just need to answer my question above because \ref{1} is true and it's just the rule to differentiate a logarithm.

",36404,,2444,,5/21/2020 23:03,5/21/2020 23:03,How do I derive the gradient with respect to the parameters of the softmax policy?,,1,0,,,,CC BY-SA 4.0 21283,1,,,5/19/2020 5:17,,1,169,"

Since the hidden layers of a CNN work as a trainable feature extractor, more detailed content based on a larger number of pixels shall require bigger filter sizes. But for cases where localized differences are to receive greater attention, smaller filter sizes are required.

I know there are a lot of topics on the internet regarding CNNs, and most of them give a simple explanation of the convolution layer and what it is designed for, but they don’t explain:

How many convolution layers are required?

What filters should I use in those convolution layers?

",30725,,30725,,5/19/2020 10:05,5/19/2020 10:05,How do we choose the filters for the convolutional layer of a convolution neural network?,,0,1,,,,CC BY-SA 4.0 21284,1,,,5/19/2020 5:29,,2,64,"

Let's say we have several vector points. My goal is to distinguish the vectors, so I want to make them far from each other. Some of them are already far from each other, but some of them can be positioned very closely.

I want to get a certain mapping function that can separate such points that are close to each other, while still preserving the points that are already far away from each other.

I do not care what is the form of the mapping. Since the mapping will be employed as pre-processing, it does not have to be differentiable or even continuous.

I think this problem is somewhat similar to 'minimizing the maximum distance ratio between the points'. Maybe this problem can be understood as stretching the crushed graph to a sphere-like isotropic graph.

I googled it for an hour, but it seems that people are usually interested in selecting points that have such nice characteristics from a bunch of data, rather than mapping existing vector points to better ones.

So, in conclusion, I could not find anything useful.

You may think 'the neural network will naturally learn it while solving the classification problem'. But that failed, because the network is already struggling with too many burdens. So, this is why I want to help my network with pre-processing.

",18139,,2444,,5/19/2020 10:12,10/12/2021 17:02,Can I find a mapping that minimizes the maximum distance ratio of certain vectors?,,1,6,,,,CC BY-SA 4.0 21285,1,,,5/19/2020 5:43,,0,215,"

In this document, the terms ""Redes Neuronales estáticas monovariables"" and ""Redes Neuronales estáticas multivariables"" are mentioned.

What are mono-variable and multi-variable neural networks? Is it the same as a multi-layer or uni-layer NN?

I have searched about multivariable/mono-variable static/dynamic neural networks in some books, but at least in those books there's no information about these topics.

I have the idea it refers to the inputs/outputs, but I'm not sure.

",37183,,2444,,5/19/2020 17:12,1/5/2023 0:07,What are mono-variable and multi-variable neural networks?,,1,0,,,,CC BY-SA 4.0 21286,1,21288,,5/19/2020 6:08,,4,420,"

In the textbook ""Reinforcement Learning: An Introduction"", by Richard Sutton and Andrew Barto, the pseudo code for Policy Evaluation is given as follows:

The update equation for $V(s)$ comes from the Bellman equation for $v_{\pi}(s)$ which is mentioned below (the update equation) for your convenience: $$v_{k+1}(s) = \sum_{a} \pi(a|s)\sum_{s',r}p(s',r|s,a)[r+\gamma v_{k}(s')]$$

Now, in Policy Iteration, the Policy Evaluation comes in stage 2, as mentioned in the following pseudo code:

Here, in the policy Evaluation stage, $V(s)$ is updated using a different equation: $$\begin{align} v_{k+1}(s) = \sum_{s',r}p(s',r|s,\pi (s))[r + \gamma v_{k}(s')] \end{align}$$ where $a = \pi(s)$ is used.

Can someone please help me in understanding why this change is made in Policy Iteration? Are the two equations the same?

",37181,,37181,,5/27/2020 12:04,5/27/2020 12:04,Why is update rule of the value function different in policy evaluation and policy iteration?,,1,0,,,,CC BY-SA 4.0 21288,2,,21286,5/19/2020 7:38,,4,,"

Yes, the two update equations are equivalent. As an aside, technically the equation you give is not the Bellman equation, but the update step re-written as an equation - in the Bellman equation instead of $v_{k+1}(s)$ or $v_{k}(s)$ (showing iterations of approximate value functions), you would have $v_{\pi}(s)$ (representing the true value of a state under policy $\pi$).

The difference between the equations is that

  • In the first case of Policy Evaluation, in order to be general, a stochastic policy $\pi(a|s): \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R} = Pr\{A_t = a|S_t =s\}$ is used. That means to get the expected value, you must sum over all possible actions $a$ and weight them by the policy function output.

  • In the case of Policy Iteration, a deterministic policy $\pi(s): \mathcal{S} \rightarrow \mathcal{A}$ is used. For that, you don't need to know all possible values of $a$ for probabilities, but use the output of the policy function directly as the action that is taken by the agent. That action therefore has a probability of $1$ of being chosen by the policy in the given state.

The equation used in Policy Iteration is simplified for a deterministic policy. If you want you could represent the policy using $\pi(a|s)$ and use the same equation as for Policy Evaluation. If you do that, you would also need to alter the Policy Improvement policy update step to something like:

$a_{max} \leftarrow \text{argmax}_a\sum_{r,s'}p(r,s'|s,a)[r + \gamma V(s')]$

$\text{ for each } a \in \mathcal{A(s)}$:

$\qquad \pi(a|s) \leftarrow 1 \text{ if } a = a_{max}, 0 \text{ otherwise }$

Doing this will result in exactly the same value function and policy as before. The only reason to do this would be to show the equivalence between the two sets of update equations when dealing with a deterministic policy.
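For concreteness, a minimal sketch of a policy-evaluation sweep for a deterministic policy (the simplified update used in Policy Iteration) is below; the data structures p (with p[s][a] a list of (probability, next state, reward) tuples) and pi (mapping each state to an action) are assumptions about how the MDP is stored:

import numpy as np

def evaluate_deterministic_policy(p, pi, n_states, gamma=0.9, theta=1e-6):
    # iterative policy evaluation: sweep until the value function stops changing
    V = np.zeros(n_states)
    while True:
        delta = 0.0
        for s in range(n_states):
            v_old = V[s]
            # deterministic policy: only the action a = pi[s] has probability 1
            V[s] = sum(prob * (r + gamma * V[s_next])
                       for prob, s_next, r in p[s][pi[s]])
            delta = max(delta, abs(v_old - V[s]))
        if delta < theta:
            return V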

",1847,,-1,,6/17/2020 9:57,5/19/2020 13:00,,,,0,,,,CC BY-SA 4.0 21289,2,,21280,5/19/2020 8:20,,3,,"

In a toy environment, this is a choice you can make relatively freely, depending on what you want to achieve with the learning challenge.

It may help if you think through what the actual consequences for making the "wrong" move are in your environment. There are a few self-consistent options:

  • The move simply cannot be made and count as playing the game as intended. In which case, do not allow the agent to make that choice. You can achieve that by filtering the list of choices that the agent is allowed to make. In DQN that will mean supplying an action mask to the agent based on the state, so it does not include the action at the stage it makes a choice (see the sketch after this list). This "available actions" function is usually coded as part of the environment.

  • The move can be attempted, but results in no change to state (e.g. the agent bumps into a wall). If the goal is to reach a certain state in shortest possible time, then you will typically have 0 reward and a discount factor, or negative reward for each attempted action. Either way, the agent should learn that the move was wasted and avoid it after a few iterations.

  • The move can be attempted, but results in disaster (e.g. the agent falls off a cliff). This is the case where a large negative reward plus ending the episode is appropriate. However, don't use infinite rewards, positive or negative, because that will cause significant problems with numeric stability. Simply large enough to offset any interim positive rewards associated with that direction should be adequate. For a simple goal-seeking environment with no other positive rewards than reaching the goal, ending the episode early is already enough.
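Here is a minimal sketch of such action masking for a DQN-style agent; valid_actions and q_network are assumptions about how your environment and network are exposed:

import numpy as np

# Minimal sketch, assuming the environment exposes a hypothetical
# env.valid_actions(state) -> boolean mask and q_network(state) -> Q-values.
def greedy_valid_action(q_network, env, state):
    q_values = q_network(state)                   # shape: (n_actions,)
    mask = env.valid_actions(state)               # e.g. [True, False, True, True]
    masked_q = np.where(mask, q_values, -np.inf)  # invalid actions can never win the argmax
    return int(np.argmax(masked_q))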

When you don't have a toy environment where you get to decide, then the three basic scenarios above can still help. For instance, in most board games we are not interested in having the agent learn the rules for valid moves when they are already supplied by the environment, so the first scenario applies - only select actions from the valid ones as provided by the environment.

",1847,,1847,,11/14/2020 18:07,11/14/2020 18:07,,,,3,,,,CC BY-SA 4.0 21290,2,,21264,5/19/2020 9:19,,2,,"

First, some preliminary questions: in this case, what is the optimal policy?

It is the policy that maximises the expected return $G_t$ from any given time step. You need to be careful with your definition of return with continuing environments. The simple expected sum of future rewards is likely to be positive or negative infinity.

There are three basic approaches:

  • Set an arbitrary finite horizon $h$, so $G_t = \sum_{k=1}^{h} R_{t+k}$

  • Use discounting, with discount factor $\gamma < 1$, so $G_t = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}$

  • Use average reward per time step, $\bar{R} = \lim\limits_{h \to \infty}\frac{1}{h} \sum_{k=1}^{h} R_{t+k}$, which leads to thinking in terms of differential return $G_t = \sum_{k=1}^{\infty} (R_{t+k} - \bar{R})$

With large enough horizon (so that state is ergodic) or large enough $\gamma$ (close to $1$), these approaches are similar and should result in approximately the same policy. The difference is in how you construct an agent in detail to solve the problem of maximising the value.
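To make the three definitions concrete, here is a minimal sketch that computes them on a short, hypothetical reward sample (in a truly continuing task these are limits; here we simply truncate at a finite horizon):

# hypothetical rewards R_{t+1}, R_{t+2}, ... observed after time step t
rewards = [1.0, 0.0, 2.0, 1.0, 0.5]
gamma = 0.9

finite_horizon_return = sum(rewards)                          # horizon h = len(rewards)
discounted_return = sum(gamma**k * r for k, r in enumerate(rewards))
average_reward = sum(rewards) / len(rewards)
differential_return = sum(r - average_reward for r in rewards)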

Given the infinite horizon there is no terminal state but only an objective to maximise the rewards, so I can't run more than one episode, is it correct?

The term episode becomes meaningless. It may be simpler to think of this differently though - you are trying to solve a non-episodic problem here, in that there is no natural separation of the process into separate meaningful episodes. No physical process is actually infinite, that's just a theoretical nicety.

In practice, if you can run your environment in simulation, or multiple versions of it for training purposes, then you do start and stop pseudo-episodes. You don't treat them as episodes mathematically - i.e. there is no terminal state, and you can never obtain a simple episodic return value. However, you can decide to stop an environment and start a new one from a different state.

Even if there is only one real environment, you can sample sections of it for training, or attempt to use different agents over time, each of which is necessarily finite in nature.

Consequently, what is the difference between on-policy and off-policy learning given this framework?

The notions of on-policy and off-policy are entirely separate from episodic vs continuing environments.

  • On-policy agents use a single policy both to select actions (the behaviour policy), and as the learning target. When the learnt policy is updated with new information that immediately affects the behaviour of the agent.

  • Off-policy agents use two or more policies. That is one or more behaviour policies that select actions (the behaviour policy), and a target policy that is learned, typically the best guess at the optimal policy given data so far.

These things do not change between episodic and continuing tasks, and many algorithms remain identical when solving episodic vs continuing problems. For example, DQN requires no special changes to support continuing tasks, you can just set a high enough discount factor and use it as-is.

You cannot wait until the end of an episode, so certain update methods won't work. However, value bootstrapping used in temporal difference (TD) learning still works.

In some cases, you will want to address the differences in the definition of return. Using an average reward setting typically means looking at a differential return for calculating TD targets for example.

",1847,,1847,,5/19/2020 19:46,5/19/2020 19:46,,,,2,,,,CC BY-SA 4.0 21291,1,,,5/19/2020 10:07,,1,186,"

Given e.g. 1M vectors of $1000$ floating points each, where every point in vectors is sampled from a uniform distribution between $-1$ to $1$:

Is it possible to have the bottleneck of the AE network with size 1? In other words, without caring about generalization, is it possible to train a network, where, given only 1 encoded value, it can recreate any of the 1M examples?

",37189,,37189,,5/19/2020 10:34,5/19/2020 12:53,Is it possible to have the latent vector of an auto-encoder with size 1?,,1,0,,,,CC BY-SA 4.0 21295,2,,21291,5/19/2020 11:01,,1,,"

Based on various experiments with autoencoders, it is very possible to have a latent vector of size 1. A series of layers can downsize the original input to a bottleneck of size 1. However, an issue may arise during decoding: if you expect that one, two, or even five decoder layers can achieve an accurate reconstruction, that is highly unlikely, and the result will tend to be blurry. A much larger network with many parameters may help the reconstruction, given that you are not concerned with generalization, as you stated.

",37180,,37180,,5/19/2020 12:53,5/19/2020 12:53,,,,2,,,,CC BY-SA 4.0 21296,1,,,5/19/2020 11:04,,1,59,"

Given e.g. $1$M vectors of $1000$ floating points each, where every point in the vectors is sampled from a uniform distribution between $-1$ and $1$, how do I estimate the minimum network size required between input ($1000$ units), bottleneck (preferably $1$ unit), and output ($1000$ units) which is capable of overfitting the training data perfectly?

",37189,,2444,,5/19/2020 20:26,5/19/2020 20:26,How estimate the minimum size of an autoencoder to overfit the training data?,,0,0,,,,CC BY-SA 4.0 21298,2,,21267,5/19/2020 12:06,,1,,"

I am not an expert on this, but I'll try to explain my understanding of it.

A Bayesian Network is a Directed Graphical Model (DGM) with the ordered Markov property, i.e. a node (random variable) depends only on its immediate parents and not on its other predecessors (a generalization of the first-order Markov property).

A Markov chain, on the other hand, can be of order $\geq 1$. Thus it may depend on not-so-immediate predecessors (although there can be no gap between the predecessors).

(Definitions according to Kevin Murphy's book, Machine Learning: A Probabilistic Perspective).

Now, the reason why such a confusion/ambiguity exists is, in my opinion, that Bayesian Nets have generally been used to model causal relationships (and hence directed cause $\rightarrow$ effect) between random variables of different types, i.e. each random variable has a completely different state space (e.g. the sky is cloudy or sunny vs the ground is wet or dry).

In contrast, we generally use Markov Chains to represent a stochastic process (a collection of r.v.'s indexed by time: $X_0, X_1, \dots$) having the Markov property, and thus for Markov Chains we have a state transition matrix. That is, the collection of r.v.'s indexed by time shares the same state space, along with a transition matrix specifying the transition probabilities. The directed graph (DGM) exists in Markov Chains to show that you are moving forward in time, but the state space for each $X_k$ remains the same, and hence no 'real parent' exists.

",,user9947,,user9947,5/19/2020 12:39,5/19/2020 12:39,,,,5,,,,CC BY-SA 4.0 21299,1,21314,,5/19/2020 12:36,,3,956,"

Let's say I implemented a new deep learning model that pushed some SOTA a little bit further, and I wrote a new paper about for publication.

How does it work now? I pictured three options:

  1. Submit it to a conference. Ok, that's the easy one, I submit it to something like NeurIPS or ICML and hope to get accepted. At that point, how do you make your paper accessible? Are there problems in uploading it to arXiv later, in order to get read by more people?

  2. Upload it on arXiv directly. If I do that it would not be peer-reviewed, and technically speaking it would be devoid of ""academic value"". Right? It could easily be read by anyone, but there would be no formal ""proof"" of its ""scientific quality"". Correct me if I'm wrong.

  3. Submit it to a peer-reviewed journal. Avoid desk rejection, avoid reviewers' rejection, after a long painful process it ends up on some international scientific journal. At that point, since the article is formally the editor's property, can you still upload it on arXiv, or on your blog, so that it can be accessible by many people?

  4. How do the big stars of deep learning research proceed when they have some hot new paper ready for publication? And what publications are the most valued in the professional and the academic world?

",26580,,2444,,5/20/2020 11:41,5/21/2020 11:03,"How does publishing in the deep learning world work, with respect to journals and arXiv?",,1,0,,,,CC BY-SA 4.0 21300,2,,20412,5/19/2020 12:51,,1,,"

An interesting model I encountered in a course is Facebook Prophet. Prophet takes into account trends, seasonality, and holidays for its predictions. As you can probably guess, this is a model that fits Facebook's needs very well. I'll give a brief introduction then provide a link where you can read more. Prophet fits a couple of functions of time represented by a few terms. The general form of the timeseries predictions are,

$$y(t)=g(t)+s(t)+h(t)+\epsilon_t$$

$g(t)$ deals with the trends I mentioned above. This is exactly what you would think it is, and accounts for non-periodic features of the data. It takes the form of a piecewise linear or logistic function.

$s(t)$ accounts for seasonality - in other words these are periodic changes in our timeseries (maybe an increase of sunscreen purchases in the summer). As this is periodic, the natural way to model this term is with Fourier decomposition to identify important frequencies in the signal.

$h(t)$ deals with predictable changes in the timeseries but is for events like holidays (this can happen at different times year to year so this is not necessarily periodic, think Easter). The user provides a list of events and how they want to account for it.

$\epsilon_t$ is just an error term to deal with anything that cannot be addressed with the rest of the model.
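If you want to try it out, a minimal usage sketch looks roughly like this (the CSV file name is a placeholder, and the dataframe is assumed to have Prophet's expected ds/y columns):

import pandas as pd
from fbprophet import Prophet   # the package was distributed as fbprophet at the time of writing

# df is assumed to be a DataFrame with columns 'ds' (dates) and 'y' (observed values)
df = pd.read_csv("my_timeseries.csv")

m = Prophet()                                   # trend g(t) and seasonality s(t) by default
m.fit(df)                                       # fit the components to the history
future = m.make_future_dataframe(periods=365)   # extend the timeline one year ahead
forecast = m.predict(future)                    # includes 'yhat' plus uncertainty bounds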

This page has a great explanation if you want more detail. I highly recommend you check it out because it is very cool!

",22373,,,,,5/19/2020 12:51,,,,0,,,,CC BY-SA 4.0 21307,1,,,5/19/2020 15:40,,1,110,"

Assume there exists a new and very efficient algorithm for calculating the polar decomposition of a matrix $A=UP$, where $U$ is a unitary matrix and $P$ is a positive-semidefinite Hermitian matrix. Would there be any interesting applications in Machine Learning? Maybe topic modeling? Or page ranking? I am interested in references to articles and books.

",15404,,,,,5/19/2020 15:40,Applications of polar decomposition in Machine Learning,,0,3,,,,CC BY-SA 4.0 21308,2,,21187,5/19/2020 15:41,,2,,"

Here is the commit

I fixed a few minor errors, but the major one was when I saw what the line histories = [deque(maxlen=self.reward_steps)] * len(self.env.envs) was doing: it was just creating a list of references to the same deque.

In [2]: histories = [deque(maxlen=5)] * 4                                       

In [3]: histories                                                               
Out[3]: [deque([]), deque([]), deque([]), deque([])]

In [4]: histories[0].append(1)                                                  

In [5]: histories                                                               
Out[5]: [deque([1]), deque([1]), deque([1]), deque([1])]

So I just replaced it with histories = [deque(maxlen=self.reward_steps) for i in range(len(self.env.envs))]. That fixed my problem.

In [7]: histories = [deque(maxlen=5) for i in range(4)]                         

In [8]: histories                                                               
Out[8]: [deque([]), deque([]), deque([]), deque([])]

In [9]: histories[0].append(1)                                                  

In [10]: histories                                                              
Out[10]: [deque([1]), deque([]), deque([]), deque([])]

The curve representing the mean reward looks like

",35626,,35626,,5/20/2020 0:43,5/20/2020 0:43,,,,0,,,,CC BY-SA 4.0 21309,2,,21285,5/19/2020 16:46,,0,,"

What are mono-variable and multi-variable neural networks?

I am not sure about this, because most (if not all useful) neural networks are multivariable neural networks (i.e. they contain multiple parameters). Even the perceptron usually contains more than one parameter, so that terminology isn't clear even to me. Maybe they are referring to the number of inputs (sometimes called variables), but I don't see why this distinction in this context would make sense.

What are static and dynamic neural networks?

To answer this question, I will first quote an excerpt from this document (written in Spanish; translated into English below) to provide some context (I am not a Spanish speaker, but I understand 99% of it).

A first attempt at classification can separate static and dynamic (or recurrent) models (fig. 2.1).

Static models perform a mapping between input and output. Neglecting the internal processing time, the output is obtained immediately as a function of the input; there is no memory or state dynamics in the neural system.

In contrast, recurrent systems do have them: they are feedback systems that, given an input stimulus, evolve until they converge to a stable output.

Typical cases of both systems are the Perceptron (Rosenblatt, 1960a) (single- or multi-layer) and Hopfield's associative memory, respectively (Tank, 1987).

So, in this document, the words "dynamic" and "recurrent" are being used interchangeably. An example of a static (i.e. non-recurrent) neural network is the perceptron. An example of a recurrent (or dynamic) neural network is the Hopfield network.

Anyway, I recommend you contact the author of that article to ask for clarification (especially, about the mono-variable NNs)!

",2444,,-1,,6/17/2020 9:57,5/19/2020 19:30,,,,2,,,,CC BY-SA 4.0 21311,2,,11617,5/19/2020 18:47,,0,,"

AlphaGo uses MCTS. AlphaZero does not.

Source: Mastering the Game of Go without Human Knowledge

",37204,,1671,,5/22/2020 22:55,5/22/2020 22:55,,,,2,,,,CC BY-SA 4.0 21312,1,21647,,5/19/2020 20:29,,3,289,"

I'm working on creating an environment for a card game, in which the agent chooses to discard certain cards in the first phase of the game, and uses the remaining cards to play with. (The game is Crib, if you are familiar with it.)

How can I make an action space for these actions? For instance, in this game, we could discard 2 of 6 cards, then choose 1 of 4 remaining cards to play, then 1 of 3 remaining cards, then 1 of 2 remaining cards. How do I model this?

I've read this post on using MultiDiscrete spaces, but I'm not sure how to define this space based on the previous chosen action. Is this even the right approach to be taking?

",37205,,2444,,5/20/2020 10:53,6/4/2020 21:51,What should the action space for the card game Crib be?,,1,1,,,,CC BY-SA 4.0 21313,1,,,5/19/2020 21:03,,1,66,"

From : https://debuggercafe.com/implementing-deep-autoencoder-in-pytorch/ the following autoencoder is defined

import torch.nn as nn
import torch.nn.functional as F

class Autoencoder(nn.Module):
    def __init__(self):
        super(Autoencoder, self).__init__()

        # encoder
        self.enc1 = nn.Linear(in_features=784, out_features=256)
        self.enc2 = nn.Linear(in_features=256, out_features=128)
        self.enc3 = nn.Linear(in_features=128, out_features=64)
        self.enc4 = nn.Linear(in_features=64, out_features=32)
        self.enc5 = nn.Linear(in_features=32, out_features=16)

        # decoder 
        self.dec1 = nn.Linear(in_features=16, out_features=32)
        self.dec2 = nn.Linear(in_features=32, out_features=64)
        self.dec3 = nn.Linear(in_features=64, out_features=128)
        self.dec4 = nn.Linear(in_features=128, out_features=256)
        self.dec5 = nn.Linear(in_features=256, out_features=784)

    def forward(self, x):
        x = F.relu(self.enc1(x))
        x = F.relu(self.enc2(x))
        x = F.relu(self.enc3(x))
        x = F.relu(self.enc4(x))
        x = F.relu(self.enc5(x))

        x = F.relu(self.dec1(x))
        x = F.relu(self.dec2(x))
        x = F.relu(self.dec3(x))
        x = F.relu(self.dec4(x))
        x = F.relu(self.dec5(x))
        return x

net = Autoencoder()

From the Autoencoder class, we can see that 784 features are passed through a series of transformations and are converted to 16 features.

The transformations (in_features to out_features) for each layer are:

784 to 256
256 to 128
128 to 64
64 to 32
32 to 16

Why do we perform this sequence of operations? For example, why don't we perform the following sequence of operations instead?

784 to 256
256 to 128

Or maybe

784 to 512
512 to 256
256 to 128

Or maybe just encode in two layers:

784 to 16

Does the reduction of the dimensions over multiple layers (instead of a single layer) allow more details to be stored within the final representation? For example, if we used only the transformation $784 \rightarrow 16$, may this cause some detail not to be encoded? If so, why is this the case?

",12964,,2444,,5/20/2020 10:48,5/20/2020 10:48,Does the reduction of the dimensions over multiple layers allow more details to be stored within the final representation?,,0,0,,,,CC BY-SA 4.0 21314,2,,21299,5/19/2020 21:37,,4,,"

Let me answer your questions one by one.

Submit it to a conference

Let's start with the optimistic case. Say your paper gets accepted! You can upload your preprint on arXiv with the ""arXiv.org perpetual, non-exclusive license to distribute this article (Minimal rights required by arXiv.org)"". It is a non-Creative Commons license that does not provide any exclusive rights to arXiv as per. This is the default permission of arXiv (which does not interfere with the licenses of the conferences). You can easily find papers from ICML, NeurIPS, and AAAI on their websites as well as arXiv. For example, NeurIPS 2017 began on 4th Dec 2017 and this paper has its latest submission on 6th Dec 2017.

If, say, your work was rejected, or you are about to submit it, you need to check for restrictions on uploading the manuscript online before submitting it to the conference. Some conferences allow papers to be submitted only if they have not been uploaded to sites like arXiv within, say, 30 days prior to the paper submission deadline.

AAAI does not have restrictions on arXiv as per their policy. ACM also allows non-profit organizations like arXiv as per their policy. Among all the info on the major publications I could find, I found a common phrase that the paper should not be uploaded to a for-profit digital library. ArXiv is certainly not among them. The reason why editors do not oppose arXiv papers could be that there are a lot of papers that are uploaded to arXiv in order to claim to be first, so banning them would be a big loss for the editors.

Say you comply with the restrictions of the conference and have uploaded your preprint; then it brings the risk that reviewers can know who the authors are. They might be biased once they know who the authors are. This works in favor of renowned/superstar scientists of the field. The reviewer might be a Ph.D. who admires this superstar and hence would try to get it accepted. Or the reviewer might be a known colleague of the author and would hence be biased into accepting it. On the other hand, if the reviewer has some personal bias against the author, then that may be reflected in their scores.

Upload it on arXiv directly

Only uploading to arXiv is of not much academic value because it is not peer-reviewed as you pointed out. Anyone can write an absurd paper with false results and get it published on arXiv. That is why peer-reviewed papers are important.

One reason to upload to arXiv first is to make your work safe from other people coming up with similar approaches. It is not rare at all for people to come up with similar ideas within a span of 3 months. Deep Learning is a fast-paced domain, as it is a hot research field. So getting your paper out first, in the form of work in progress, saves you from potential rejections from the conference pointing to similar work available online before yours.

Submit it to a peer-reviewed journal

Same thing as for a conference. You can upload your preprint after acceptance of your manuscript with the default license of arXiv. Submitting it to arXiv beforehand requires cross-checking the rules of the journal. For example, IEEE allows the work to be submitted to arXiv as per.

How do big stars publish?

Coming to your question of how big stars publish. They either publish directly in a conference and then upload to arXiv or they first upload on arXiv, get publicity and citations and then submit to a top conference. Having the preprint available online can also bias the reviewer into accepting their research (as mentioned before). This way they reduce their chance of rejection.

Fun fact

Interestingly, if the preprint becomes very famous, then the reviewer might mistakenly think that your work is a derivative of a popular work which, in reality, is the preprint of your submitted work! In such cases, you would need to point this out diplomatically. For example, Music Transformer by the Google Brain team (which consisted of authors of the famous Transformer paper ""Attention is all you need"") was available on arXiv in 2018 and was already being cited by others. When it was submitted to ICLR 2019, a reviewer mistakenly took it as a derivative work of the 2018 arXiv paper and suggested a rejection of the paper. However, after further inspection, he/she realized the confusion, which blew their mind! Immediately the paper was given a suitably high score. Source: https://openreview.net/forum?id=rJe4ShAcF7

Where to publish?

Unlike in core electronics and other branches, where a journal is much more important than conference papers, in machine learning conference papers are on par with or better than any journal out there. Mostly the quality depends on the Impact Factor of the journal or conference. For example, NeurIPS is a top venue to publish in, and it's a conference and not a journal.

One reason for conferences to be so important is that they are more popular among researchers. Conferences are fast with their reviews as compared to the lengthy journal review process. This allows researchers to meet and discuss the progress of the field with like-minded people much faster. This is crucial for a rapidly evolving field like artificial intelligence. This makes conferences popular, which attracts the best researchers to publish their cutting-edge work, which in turn makes the conferences more lucrative for other researchers wanting to publish in the same venue as the big stars.

Conferences also allow people from the industry to meet the researchers and hire them. The quick meeting opportunity provided by the conference is beneficial for both the academic researchers as well as the industry for attracting talent.

",37206,,37206,,5/21/2020 11:03,5/21/2020 11:03,,,,8,,,,CC BY-SA 4.0 21315,1,,,5/20/2020 5:34,,0,98,"

I was reading about the possibility of using Google's Coral for deep learning-based object detection and image classification. I heard it has a good speed in terms of frames/sec.

I also read that Google's Coral is only compatible with quantized models. What does this mean? How will this affect the performance of object detection or classification in terms of accuracy and speed?

What is the advantage of using Google's Coral over Nvidia's Xavier?

",20025,,2444,,5/20/2020 10:36,5/21/2020 3:26,What is the advantage of using Google's Coral over Nvidia's Xavier?,,1,0,,,,CC BY-SA 4.0 21321,1,,,5/20/2020 7:12,,1,85,"

I have the following assignment.

I can't understand the b part of this question in my assignment. I have completed the 1st part and understand the maths behind it, but the 2nd part has me stumped.

I looked up ridge functions and they basically map real vectors to a single real value, from what I understood. For that reason, I considered that the activation function has to be one that ranges over the real numbers, but that still doesn't clear my doubts.

I don't need a full answer; just an explanation of the question would be very helpful. Here's some text from the book I'm referring to (Russell and Norvig), though I couldn't really grasp how this would help me choose an activation function.

Before delving into learning rules, let us look at the ways in which networks generate complicated functions. First, remember that each unit in a sigmoid network represents a soft threshold in its input space, as shown in Figure 18.17(c) (page 726). With one hidden layer and one output layer, as in Figure 18.20(b), each output unit computes a soft-thresholded linear combination of several such functions. For example, by adding two opposite-facing soft threshold functions and thresholding the result, we can obtain a “ridge” function as shown in Figure 18.23(a). Combining two such ridges at right angles to each other (i.e., combining the outputs from four hidden units), we obtain a “bump” as shown in Figure 18.23(b).

",37216,,2444,,5/20/2020 10:22,5/20/2020 10:22,"If the output of a model is a ridge function, what should the activation functions at all the nodes be?",,0,0,,,,CC BY-SA 4.0 21339,1,21346,,5/20/2020 12:24,,1,586,"

I've read about the Knight's Tour problem, and I wanted to try to solve it with a reinforcement learning algorithm using OpenAI's gym.

So, I want to make a bot that moves on the chessboard like a knight. It is given a reward each time it moves without leaving the board or stepping on an already visited square. So, it gets a better total reward the longer it survives.

Or is there a better approach to this problem? Also, I would like to display the best knight in each generation.

I'm not very advanced at reinforcement learning (I'm still studying it), but this project really caught my attention. I know machine learning and deep learning well.

Do I need to implement a new OpenAI gym environment and start from scratch, or is there a better idea?

",37226,,2444,,5/20/2020 12:37,5/20/2020 15:12,How can I model and solve the Knight Tour problem with reinforcement learning?,,1,2,,,,CC BY-SA 4.0 21342,1,,,5/20/2020 13:01,,2,54,"

Cat swarm optimization (CSO) is a novel metaheuristic for evolutionary optimization algorithms based on swarm intelligence, which was proposed in 2006. See Feature Selection of Support Vector Machine Based on Harmonious Cat Swarm Optimization.

According to Modified Cat Swarm Optimization Algorithm for Feature Selection of Support Vector Machines

CSO imitates the behavior of cats through two sub-modes: seeking and tracing. Previous studies have indicated that CSO algorithms outperform other well-known meta-heuristics, such as genetic algorithms and particle swarm optimization. This study presents a modified version of cat swarm optimization (MCSO), capable of improving search efficiency within the problem space. The basic CSO algorithm was integrated with a local search procedure as well as the feature selection of support vector machines (SVMs).

Can someone explain how exactly Cat Swarm Algorithm (CSO) is used for feature selection?

",30725,,2444,,5/20/2020 13:15,5/20/2020 13:15,How can Cat Swarm Algorithm (CSO) used for feature selection?,,0,0,,,,CC BY-SA 4.0 21345,2,,21284,5/20/2020 14:06,,1,,"

An interesting question. I would start by finding the n nearest neighbors of each data point, then calculate their center of mass c and the point's distance d to its nth nearest neighbor. The smaller d is, the larger the density is around a given point. You could then iteratively step every point away from its c in inverse proportion to the distance d, with a suitable step size. This would spread out the clusters.
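A minimal sketch of that idea using NumPy and scikit-learn might look like this (the parameter values are arbitrary, and the update rule is just one possible interpretation):

import numpy as np
from sklearn.neighbors import NearestNeighbors

def spread_points(X, n_neighbors=10, step=0.1, n_iters=20):
    # X: (n_points, n_dims) float array of the original vectors
    X = X.copy()
    for _ in range(n_iters):
        nn = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(X)
        dists, idx = nn.kneighbors(X)            # column 0 is the point itself
        centers = X[idx[:, 1:]].mean(axis=1)     # center of mass c of the neighbors
        d = dists[:, -1:]                        # distance to the n-th neighbor
        # step away from the local center of mass; smaller d -> stronger push
        X += step * (X - centers) / (d + 1e-8)
    return X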

But this won't help you transform any new points outside the dataset; maybe you can learn this arbitrary mapping R^n -> R^n with another neural network and apply it to new samples?

This is the first ad-hoc idea which came to my mind. It would be interesting to see a 2D animation of this.

A more rigorous approach might be a variational autoencoder: you can embed the data in a lower-dimensional space with an approximately normal distribution. But it doesn't guarantee that clusters will be as spread out as you'd like. An alternative loss term could help with that, for example, every point's distance to its original nth closest neighbor should be as close to one as possible.

",32722,,,,,5/20/2020 14:06,,,,3,,,,CC BY-SA 4.0 21346,2,,21339,5/20/2020 14:18,,1,,"

Model your problem as an MDP

To solve a problem with reinforcement learning, you need to model your problem as a Markov decision process (MDP), so you need to define

  • the state space,
  • the action space, and
  • the reward function

of the MDP.

Understand your problem and the goal

To do define these, you need to understand your problem and define it as a goal-oriented problem.

In the knight tour problem, there's a knight that needs to visit each square of a chessboard exactly once. The knight can perform only $L$-shaped moves (as for the rules of chess). See the animation below (taken from the related Wikipedia article).

The goal is then, by making $L$ moves, to find a path through the squares such that each square is visited exactly once.

What is the state space?

You could think that the state space $S$ could be the set of all squares of the chessboard. So, if you have an $n \times n$ chessboard, then $|S| = n^2$, i.e. you will have $n^2$ states.

However, this can be problematic because a square alone doesn't give you all the information that you need in order to take the optimal action. So, you need to define the states such that all relevant information is available to the agent, i.e. you need to define a state as the position of the current square together with the positions of the squares that have not yet been visited.

What is the action space?

The action space could be defined as the set of all actions that the knight can take across all states. Given that the knight can only take $L$ moves, whenever the knight is in state $s$, only $L$-shaped actions are available. Of course, it is possible that, for each state $s$, there's more than one valid $L$-shaped action. That's fine. However, the chosen $L$-shaped action will definitely affect your next actions, so we need a way of guiding the knight. That's the purpose of the reward function!

What is the reward function?

The reward function is typically the most crucial function that you need to define when modeling your problem as an MDP that needs to be solved with an RL algorithm.

In this case, you could give a reward of e.g. $1$ for each found path. More precisely, you will let your RL agent explore the environment. If it eventually finds a correct path (or solution), you will give it $1$. You could also penalise the knight if it ends in a situation where it cannot take an $L$-shaped action anymore. Given that you don't really want this to happen, you could give it a very small reward e.g. $-100$. Finally, you could give it a reward of $0$ for each action taken, which could imply that you don't really care about the actions that the knight takes, as long as it reaches the goal, i.e. find a path through the chessboard.

The design of the reward function will highly affect the behaviour and performance of your RL agent. The above-suggested reward function may actually not work well, so you may need to try different reward functions to get some satisfactory results.

Which RL algorithm to use?

Of course, you will also need to choose an RL algorithm to solve this problem numerically. The most common one is Q-learning. You can find its pseudocode here.

How to implement this with OpenAI's gym?

You probably need to create a custom environment and define the state and action spaces, as well as the reward function. I cannot tell you the details, but I think you can figure them out.
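As a rough sketch (not a complete, tested environment), the skeleton of such a custom gym environment, using the state and reward choices described above, could look like this:

import gym
import numpy as np
from gym import spaces

class KnightTourEnv(gym.Env):
    # Sketch of a custom environment for the knight's tour on an n x n board.
    # The 8 possible L-shaped moves of a knight:
    MOVES = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

    def __init__(self, n=8):
        super().__init__()
        self.n = n
        self.action_space = spaces.Discrete(8)               # index into MOVES
        # state: current position (one-hot) + visited mask, flattened
        self.observation_space = spaces.MultiBinary(2 * n * n)

    def reset(self):
        self.visited = np.zeros((self.n, self.n), dtype=np.int8)
        self.pos = (0, 0)
        self.visited[self.pos] = 1
        return self._obs()

    def step(self, action):
        dr, dc = self.MOVES[action]
        r, c = self.pos[0] + dr, self.pos[1] + dc
        off_board = not (0 <= r < self.n and 0 <= c < self.n)
        if off_board or self.visited[r, c]:
            return self._obs(), -100.0, True, {}              # stuck/invalid: end the episode
        self.pos = (r, c)
        self.visited[r, c] = 1
        done = bool(self.visited.sum() == self.n * self.n)    # full tour found
        return self._obs(), (1.0 if done else 0.0), done, {}

    def _obs(self):
        current = np.zeros((self.n, self.n), dtype=np.int8)
        current[self.pos] = 1
        return np.concatenate([current.ravel(), self.visited.ravel()])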

Is RL the right approach to solve this problem?

RL isn't probably the most efficient approach to solve this problem. There are probably more efficient solutions. For example, there's a divide-and-conquer approach, which I am not familiar with, but that you may also try to use and compare with the RL approach.

You could also read the paper Solution of the knight's Hamiltonian path problem on chessboards (1994), especially, if you are already familiar with the Hamiltonian path problem (HPP). Note that the knight tour problem is an instance of the HPP.

",2444,,2444,,5/20/2020 15:12,5/20/2020 15:12,,,,14,,,,CC BY-SA 4.0 21347,1,,,5/20/2020 15:23,,0,157,"

I want to use a pretrained model found in [BERT Embeddings] https://github.com/UKPLab/sentence-transformers and I want to add a layer that gets the sentence embeddings from the model and passes them on to the next layer. How do I approach this?

The inputs would be an array of documents and each document containing an array of sentences.

The input to the model itself is a list of sentences where it will return a list of embeddings.

This is what I've tried but couldn't solve the errors:

def get_embeddings(input_data):

    input_embed = []
    for doc in input_data:
      doc = tf.unstack(doc)
      doc_arr = asarray(doc)
      doc = [el.decode('UTF-8') for el in doc_arr]
      doc = list(doc)
      assert(type(doc)== list)

      new_doc = []
      for sent in doc:
        sent = tf.unstack(sent)
        new_doc.append(str(sent))
        assert(type(sent)== str)

      embedding= model.encode(new_doc)  # Accepts lists of strings to return BERT sentence embeddings
      input_embed.append(np.array(embedding))

    return tf.convert_to_tensor(input_embed, dtype=float)


sentences = tf.keras.layers.Input(shape=(3,5)) #test shape
sent_embed = tf.keras.layers.Lambda(get_embeddings)


x = sent_embed(sentences)

",37006,,,,,5/21/2020 10:57,How to add a pretrained model to my layers to get embeddings?,,1,0,,,,CC BY-SA 4.0 21348,1,21349,,5/20/2020 15:55,,0,84,"

Sometimes the agent learns a bit slowly and you want to have multiple agents in one generation. And at each episode you'll draw on the screen only the best of them, or all of them. How is that possible?

For clarification purposes, please watch this video on youtube at time 4:10.

I need just a theoretical approach, I'll try the coding myself :).

Thanks for any answer! I really do appreciate it! :)

",37226,,,,,5/20/2020 16:40,How to add more than 1 agent in one generation with Q Learning,,1,0,,,,CC BY-SA 4.0 21349,2,,21348,5/20/2020 16:25,,1,,"

The video you linked is not using reinforcement learning (RL). It is using genetic algorithms (GA).

GA is designed around using multiple agents and picking the best-performing ones to move forward to the next generation. With this approach, it is common to want to only view the best-performing agents, as the learning mechanism uses the same selection process - the best agent is the output of the algorithm*.

Whilst you can run multiple agents to collect more data easily enough in RL, they typically won't perform better or worse other than by random chance. The best performance is not indicative of the agent's overall performance in the way that it is in GA, because there is only one agent. It would not be as meaningful to pick out the best ones for display. Instead, after a certain number of training episodes, you should take the agent so far, stop it running exploratory moves (set $\epsilon = 0$ if you are using $\epsilon$-greedy exploration). Then render the behaviour of that agent.
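For instance, an evaluation/rendering loop for the single trained agent could be sketched like this (agent.act and the epsilon attribute are assumptions about your implementation; the env.step signature follows the classic gym API):

# Minimal sketch: render the single RL agent greedily after training
agent.epsilon = 0.0                         # hypothetical attribute: disable exploration
state = env.reset()
done = False
while not done:
    env.render()
    action = agent.act(state)               # greedy action under the learned policy
    state, reward, done, info = env.step(action)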

If you want to compare RL versus GA for learning efficiency, then one arguably fair comparison would be to render the agent each time it has trained on a number of episodes in RL equal to the population size used in the GA version. If you are only showing results after every 100 generations of GA, then multiply number of training episodes by 100 to compare using the same amount of data input.

You can also, separately to this concern, run RL with multiple exploring agents at once. If you want to have parallel training with multiple environments running in RL, then you will need a distributed environment. Each instance of the environment would run one agent, and collect training data. You have a rough choice between:

  • Feeding everything observed into one central experience replay memory and a single training loop routine samples from the whole memory and updates the agent. This should work OK for Q-learning.

  • Calculate update gradients on each distributed environment, collate them centrally and update the agent on a mean gradient update step before sending out the updated agent to all the distributed systems. This is the approach typically used by A3C and A2C which benefit from having multiple agents running at once.

In both cases, the latest parameters of the agent (the neural network weights) need to be regularly copied out to each environment so that each instance can work as much as possible with a current policy.

Setting up a distributed learning environment for RL is more work than for GA, because you need to move a lot more data between agents to complete learning, whilst for GA you only need to measure fitness. However, you should find that Q learning can be a lot more efficient (in terms of number of simulations required) than a GA-based approach for many control problems.


* As an aside, this can be a weakness of GA if you are running in an environment with random choices or events - the GA can select as its ""best"" result at any stage something that is less optimal but that was lucky. On average, over many generations, this effect should be removed, because the same agent won't be lucky every generation, but it does mean in some environments that you will get an over-estimate of performance unless your fitness assessment is very thorough (or perhaps run separately every so many generations, to double-check). Similar concerns occur in RL too, but they do not affect which agent you select for assessment or display, since there is only ever one set of agent parameters at any time.

",1847,,1847,,5/20/2020 16:40,5/20/2020 16:40,,,,6,,,,CC BY-SA 4.0 21351,1,,,5/20/2020 19:01,,1,62,"

I am working on a project to implement a collision avoidance algorithm on a real unmanned aerial vehicle (UAV).

I'm interested in understanding the process to set up a negative reward to account for scenarios wherein there is a UAV crash. This can be done very easily during the simulation (if the UAV touches any object, the episode stops giving a negative reward). In the real world, a UAV crash would usually entail it hitting a wall or an obstacle, which is difficult to model.

My initial plan is to stop the RL episode and manually input a negative reward (to the algorithm) each time a crash occurs. Any improvements to this plan would be highly appreciated!

",31755,,2444,,5/26/2020 11:10,5/26/2020 11:10,How do I set up rewards to account for unmanned aerial vehicle crashes?,,0,1,,,,CC BY-SA 4.0 21352,2,,21274,5/20/2020 20:01,,1,,"

In RL, neural networks may intuitively be thought of as using the input features as a representation that ""identifies"" the input state (or input state + action pair). Think back to the ""tabular"" RL setting that most people first study when they learn about RL. In tabular RL, you have a table of values (state values $V(s)$, or state-action values $Q(s, a)$), with unique entries in the table for every state. Such a table can perfectly identify states or, in other words, perfectly disambiguate different states.

In a non-tabular, function approximation setting, with function approximators such as Neural Networks, you can generally no longer uniquely identify every single state. Instead, you use approximate representations of these states, and the approximation implies that it's possible that you have multiple different states that look identical; they have identical input features. This is the case you're dealing with. Now, you specified explicitly that these multiple states with identical representations / input features follow each other up immediately in a single episode, but I don't think this detail is particularly important. You'd have exactly the same problems if these different states with identical representation showed up at different times within an episode. The only problem that you really have is a disambiguation problem: you don't know how to disambiguate these states, since they look identical to the network.

How significant that problem is depends on your domain. Based on your domain knowledge, do you expect the optimal action, or the optimal values, to be kind of similar in all these states that have identical features? If so, no problem! Your network already thinks they're the same anyway, so it will learn that the same actions / same values are the best in those states. But do you expect the optimal actions / true value functions to be wildly different in these states despite the fact that a network can't disambiguate them? In this case, the problem will be more severe because you can't realistically expect your network to learn the optimal actions / value functions for all these different states. At best, it can learn a weighted average among them (weighted by how commonly they occur in your training episodes).

",1641,,,,,5/20/2020 20:01,,,,3,,,,CC BY-SA 4.0 21353,1,,,5/20/2020 20:23,,2,153,"

What are the most common feedforward neural networks? What kind of inputs do they receive? For example, do they receive binary numbers, real numbers, vectors, or matrices? Is there such a taxonomy?

",37183,,2444,,5/20/2020 21:11,7/20/2020 12:12,What are the most common feedforward neural networks?,,1,0,,,,CC BY-SA 4.0 21354,2,,16805,5/20/2020 20:54,,3,,"

You can use the PyTorch Geometric library for your projects. It supports weighted GCNs. It is a rapidly evolving open-source library with an easy-to-use syntax, it is mentioned on the landing page of PyTorch, and it is the most starred PyTorch GitHub repo for geometric deep learning. Creating a GCN model that can process graphs with weights is as simple as:

import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = GCNConv(dataset.num_node_features, 16)
        self.conv2 = GCNConv(16, dataset.num_classes)

    def forward(self, data):

        # data has the following 3 attributes
        x, edge_index, edge_weight = data.x, data.edge_index, data.edge_weight

        x = self.conv1(x, edge_index, edge_weight)
        x = F.relu(x)
        x = F.dropout(x, training=self.training)
        x = self.conv2(x, edge_index, edge_weight)

        return F.log_softmax(x, dim=1)

See this for getting started. Check out its documentation on different variants of GCNs for further details. One of the best things is that, like PyTorch, its documentation is self-sufficient.
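If it helps, here is a hedged sketch of how such a model might be fed a small weighted graph; the exact field names shown (x, edge_index, edge_weight) are assumptions that simply mirror the forward method above:

import torch
from torch_geometric.data import Data

# a tiny weighted graph: 3 nodes, 2 undirected edges stored as 4 directed edges
x = torch.randn(3, 8)                              # 3 nodes, 8 features each
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]], dtype=torch.long)
edge_weight = torch.tensor([0.5, 0.5, 2.0, 2.0])   # one weight per directed edge

data = Data(x=x, edge_index=edge_index, edge_weight=edge_weight)
# out = Net()(data)   # assuming dataset.num_node_features == 8; out: [3, num_classes]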

",37206,,37206,,5/21/2020 9:33,5/21/2020 9:33,,,,1,,,,CC BY-SA 4.0 21355,2,,21198,5/20/2020 21:15,,2,,"

Here, $H$ is an $n \times d$ matrix, where $n$ is the total number of nodes in the graph and $d$ is the dimension of the embedding of each node.

Using the notation in the question, the basic GNN formulation without self-loops is $\text{D}^{-1}\text{A}\text{H}$. If you study this equation more closely, you will find that the $i^{th}$ row of $\text{A}\text{H}$ generates the $i^{th}$ node's representation by summing the representations of its neighboring nodes. Multiplying it by $\text{D}^{-1}$ normalizes the aggregated representation with respect to the degree of the node (its number of neighbors).

By defining a metric called the information score, $$||\text{H} - (\text{D}^{-1}\text{A}\text{H})||,$$ we get low values for nodes that are well represented by their local neighborhood and high values for nodes that are hard to represent/summarize by their neighboring nodes. To approximate the graph information, the authors choose to preserve the nodes that cannot be well represented by their neighbors, i.e., the nodes with a relatively larger node information score will be preserved in the construction of the pooled graph, because the authors believe they provide more information.
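As a concrete illustration, here is a small numpy sketch of this score on a toy graph (the adjacency matrix and embeddings are made up, and the norm is taken row-wise to get a per-node score):

import numpy as np

# toy graph: 3 nodes, adjacency A, inverse degree matrix D^-1, 2-dimensional embeddings H
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
D_inv = np.diag(1.0 / A.sum(axis=1))
H = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [0.0, 1.0]])

neighbour_repr = D_inv @ A @ H                       # each row: mean of neighbours' embeddings
info_score = np.linalg.norm(H - neighbour_repr, axis=1)
print(info_score)   # higher = node is poorly summarised by its neighbourhood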

",37206,,37206,,5/20/2020 21:33,5/20/2020 21:33,,,,0,,,,CC BY-SA 4.0 21356,1,21358,,5/20/2020 21:48,,5,437,"

Here is my understanding of importance sampling. If we have two distributions $p(x)$ and $q(x)$, where we have a way of sampling from $p(x)$ but not from $q(x)$, but we want to compute the expectation wrt $q(x)$, then we use importance sampling.

The formula goes as follows:

$$ E_q[x] = E_p\Big[x\frac{q(x)}{p(x)}\Big] $$

The only limitation is that we need a way to compute the ratio. Now, here is what I don't understand. Without knowing the density function $q(x)$, how can we compute the ratio $\frac{q(x)}{p(x)}$?

Because if we know $q(x)$, then we can compute the expectation directly.

I am sure I am missing something here, but I am not sure what. Can someone help me understand this?

",36074,,2444,,5/20/2020 23:19,5/21/2020 9:33,How can we compute the ratio between the distributions if we don't know one of the distributions?,,2,0,,,,CC BY-SA 4.0 21357,2,,18682,5/20/2020 22:03,,1,,"

The advantage is basically a function of the actual return received and a baseline. The function of the baseline is to make sure that only the actions that are better than average receive a positive nudge.

One way to estimate the baseline is to have a value function approximator. At every step, you train a NN, using the trajectories collected via the current policy, to predict the value function for states.

I hope that answers your query.

",36074,,36074,,5/21/2020 17:22,5/21/2020 17:22,,,,0,,,,CC BY-SA 4.0 21358,2,,21356,5/20/2020 22:18,,3,,"

The rationale behind importance sampling is that $q(x)$ is difficult to sample from but easy to evaluate. Or at least you can easily evaluate some $\tilde{q}$ such that $$ \tilde{q}(z) = Zq(z), $$ where the scalar $Z$ might be unknown. A geometrical picture of why sampling can be hard: sampling uniformly from the area under the curve $q(x)$ is, in general, not easy.

Because if we know $q(x)$, then we can compute the expectation directly.

That's the task we're trying to solve to begin with. And calculating expectation might be hard if we can't sample efficiently from $q$.

Say you want to compute an expectation of $x$, $E[x]$. For this you need to calculate the following integral: $$ E[x] = \int{xq(x)dx} $$ where $q$ is a probability distribution of $x$ for which you have an expression - so you can evaluate $q(x)$ (up to the constant of proportionality). This integral might be hard to evaluate analytically so we need to use other methods such as Monte Carlo. Let's say it is hard to generate samples from $q$ (as per example above, e.g. generating samples from the area under the curve $q(x)$ uniformly).

What you can do is to calculate an expectation under a simple distribution $p$ (proposal distribution), which is a distribution of your choice that allows you to easily sample from it (say a Gaussian). Then you can rewrite your integral as: $$ E_q[x] = \int{xq(x)dx} = \int{xq(x) \color{blue}{\frac{p(x)}{p(x)}} dx} = \int{x \frac{q(x)}{\color{blue}{p(x)}} \color{blue}{p(x)} dx} = E_p \Big[{x\frac{q(x)}{p(x)}}\Big] $$ (the indices $p$ and $q$ on the expectations denote the sampling distribution). Now you can approximate the last expectation by Monte Carlo: $$ E_p \Big[{x\frac{q(x)}{p(x)}}\Big] \approx \frac{1}{S} \sum_{s}{x^{(s)} \frac{q(x^{(s)})}{p(x^{(s)})} }, \ x^{(s)} \sim p(x) $$
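A tiny numpy sketch of that last approximation, with a made-up target $q = \mathcal{N}(3, 1)$ and proposal $p = \mathcal{N}(0, 3^2)$ (both densities are easy to evaluate here, which keeps the example simple):

import numpy as np
from scipy.stats import norm

np.random.seed(0)
S = 100_000
x = np.random.normal(0.0, 3.0, size=S)       # samples from the proposal p = N(0, 3^2)
w = norm.pdf(x, loc=3.0, scale=1.0) / norm.pdf(x, loc=0.0, scale=3.0)   # q(x) / p(x)
print(np.mean(x * w))                         # estimate of E_q[x], close to 3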

",22835,,22835,,5/21/2020 8:26,5/21/2020 8:26,,,,1,,,,CC BY-SA 4.0 21359,2,,21353,5/20/2020 22:30,,1,,"

What is a neural network?

Many neural networks can be defined as a function $f: \mathbb{R}^n \rightarrow \mathbb{R}^m$, where $n, m \geq 1$.

Equivalently, many neural networks can also be defined as a set of interconnected units (aka neurons or nodes) $f^i$ that receive some input and produce output, i.e. $f^i(\mathbf{x}^i) = y^i$, where $\mathbf{x}^i \in \mathbb{R}^k$. The actual function $f^i$ is variable and depends on the application or problem you want to solve. For example, $f^i$ can just be a linear combination of the inputs, i.e. $f^i(\mathbf{x}^i) = \sum_{j} \mathbf{w}_j^i \mathbf{x}_{j}^i$, where $\mathbf{w}^i \in \mathbb{R}^k$ is a vector of weights (aka parameters or coefficients). The linear combination can also be followed by a non-linear function, such as a sigmoid function.

If the neural network (more precisely, its units) doesn't contain recurrent (aka cyclic) connections, then it can be called a feedforward neural network (FFNN).

What are the most common feedforward neural networks?

Perceptron

The simplest (non-binary) FFNN is the perceptron, where the inputs are directly connected to the outputs. The perceptron performs a linear combination of the inputs followed by a thresholding operation, so it can only represent straight-line (linear) decision boundaries; consequently, it can only be used for classification or regression problems where your data is linearly separable. In fact, the perceptron cannot solve the XOR problem.

Before the perceptron, McCulloch and Pitts had introduced simplified models of biological neurons, where all signals are binary, in an attempt to closely mimic their biological counterpart. The perceptron can actually be seen as an extension of this work. In fact, a perceptron can be viewed as a single artificial neuron.

Multi-layer perceptron

An FFNN with more layers (of units) between the input and the output is often called a multi-layer perceptron (MLP). The layers in the middle are often denoted as hidden layers. The MLP can represent not only linear functions (i.e. straight lines), but also more complicated functions by making use of non-linear functions, such as the sigmoid.
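As an illustration, here is a minimal numpy sketch of a one-hidden-layer MLP forward pass with sigmoid activations (all sizes and weights are made up):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=4)                           # an input vector with 4 features
W1, b1 = rng.normal(size=(5, 4)), np.zeros(5)    # hidden layer: 5 units
W2, b2 = rng.normal(size=(2, 5)), np.zeros(2)    # output layer: 2 units

h = sigmoid(W1 @ x + b1)                         # hidden activations
y = sigmoid(W2 @ h + b2)                         # network output
print(y)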

Convolutional neural network

You can have other forms of FFNNs that perform other operations.

For example, a convolutional neural network (CNN), provided it doesn't contain recurrent connections, is an FFNN that performs the convolution operation (and often also a sub-sampling operation). For this reason, CNNs are particularly suited to deal with images (and videos). (This shouldn't be surprising if you are familiar with the basics of image processing and computer vision.)

However, note that CNNs can also have recurrent connections, but this is not usually the case.

Residual neural network

There are also residual neural networks, i.e. neural networks where a node in a certain layer $l$ can be connected to other nodes in layers $l+j$, for $j \geq 1$, as opposed to being connected only to the nodes in layer $l+1$, which is the typical case.

Auto-encoders

Auto-encoders are neural networks that compress the input and then decompress it. The answers to this question may help you to understand why AEs would be useful.

What kind of inputs do they receive?

What kind of inputs do they receive? For example, do they receive binary numbers, real numbers, vectors, or matrics?

In principle, each of these FFNNs can receive either binary or real numbers or vectors (either of real or binary numbers). However, certain NNs are more appropriate to deal with certain inputs. For example, CNNs are more appropriate for images (which are typically represented as matrices or tensors).

How can you further classify the NNs?

Based on chapter 2 of the book Neural Networks - A Systematic Introduction (1996) by Raul Rojas, you can also divide neural networks into other categories

  • Unweighted (i.e. binary, such as the McCulloch and Pitts' model) vs weighted (e.g. the perceptron)
  • Synchronous vs asynchronous (e.g. Hopfield networks, which are recurrent neural networks, though)
  • Neural networks that store states vs NNs that don't store states

You could also distinguish between FFNNs based on the learning algorithm. Nowadays, the widely used NNs are trained with gradient descent (and back-propagation to compute the gradients), but there are other approaches to train NNs, such as evolutionary algorithms or Hebbian learning. Moreover, you could also distinguish between neural networks that compute a deterministic function and neural networks that have some randomness or stochasticity inside them (e.g. Bayesian neural networks). There are probably many more possible subdivisions.

",2444,,2444,,7/20/2020 12:12,7/20/2020 12:12,,,,0,,,,CC BY-SA 4.0 21360,2,,21356,5/20/2020 22:45,,3,,"

It is common in Bayesian statistics to only know the posterior up to a constant of proportionality. This means that we can't directly sample from the posterior. However, using importance sampling, we can still compute expectations under it.

Consider our posterior density $\pi$ is only known up to some constant, i.e. $\pi(x) = K \tilde{\pi}(x)$, where $K$ is some constant and we only have $\tilde{\pi}$. Then by importance sampling we can evaluate the expectation of $X$ (or any function thereof) as follows by using a proposal density $q$:

\begin{align} \mathbb{E}_\pi[X] & = \int_\mathbb{R} x \frac{\pi(x)}{q(x)}q(x)dx \; ; \\ & = \frac{\int_\mathbb{R} x \frac{\pi(x)}{q(x)}q(x)dx}{\int_\mathbb{R}\frac{\pi(x)q(x)}{q(x)}dx} \; ;\\ & = \frac{\int_\mathbb{R} x \frac{\tilde{\pi}(x)}{q(x)}q(x)dx}{\int_\mathbb{R}\frac{\tilde{\pi}(x)q(x)}{q(x)}dx} \; ; \\ & = \frac{\mathbb{E}_q[xw(x)]}{\mathbb{E}_q[w(x)]} \; ; \end{align} where $w(x) = \frac{\tilde{\pi}(x)}{q(x)}$. Note that on line two we have not done anything crazy - as $\pi$ is a density we know that it integrates to one, so we are just dividing by $\int_\mathbb{R}\pi(x)dx = 1$, written with a factor of $1 = \frac{q(x)}{q(x)}$ inside it. The thing to notice is that if we were to write $\pi(x) = K \tilde{\pi}(x)$ then the constants $K$ in the numerator and denominator would cancel, and so we have our result.

To summarise - we can compute expectations with respect to a distribution that is difficult/impossible to sample from (e.g. because we only know the density up to a constant of proportionality) by using importance sampling, as this allows us to calculate the importance ratio and use samples that are generated from a distribution of our choosing that is easier to sample from.

Note that importance sampling isn't just used in Bayesian statistics - for instance it could be used in Reinforcement Learning as an off policy way of sampling from the environment whilst still evaluating the value of the policy you're interested in.

edit: as requested I have added a concrete example

As an example to make things concrete - suppose we have $Y_i | \theta \sim \text{Poisson}(\theta)$ and we are interested in $\theta \in (0, \infty)$. The likelihood function for the Poisson distribution is $$ f(\textbf{y} | \theta) = \prod\limits_{i=1}^n \frac{\theta^{y_i}\exp(-\theta)}{y_i!}\;.$$

We can then assign a gamma prior to $\theta$, that is we say that $\theta \sim \text{Gamma}(a,b)$ with density $$\pi(\theta) \propto \theta^{a-1} \exp(-b\theta)\;.$$

By applying Bayes rule our posterior is then \begin{align} \pi(\theta|\textbf{y}) & \propto f(\textbf{y} | \theta) \pi(\theta) \\ & = \prod\limits_{i=1}^n \frac{\theta^{y_i}\exp(-\theta)}{y_i!} \times \theta^{a-1} \exp(-b\theta) \\ & = \theta^{\sum\limits_{i=1}^n y_i + a - 1} \exp(-[n+b]\theta)\;. \end{align} Now we know that this is the kernel of a Gamma($\sum\limits_{i=1}^n y_i + a$, $n+b$) distribution, but assume that we didn't know this and didn't want to calculate the normalising integral. This would mean that we are not able to calculate the mean of our posterior density, or even sample from it. This is where we can use importance sampling, for instance we could choose an Exponential(1) proposal distribution.

We would sample, say, 5000 times from the exponential distribution and then calculate the two expectations using MC integration to obtain an estimate of the mean of the posterior. NB: the $X$ from earlier corresponds to $\theta$ in this example.

Below is some Python code to further demonstrate this.

import numpy as np

np.random.seed(1)

# sample our data
y = np.random.poisson(lam=0.5,size = 100)

# sample from proposal
samples_from_proposal = np.random.exponential(scale=1,size=5000)

# set parameters for the prior
a = 5; b = 3

def w(x, y, a, b):
    # calculates the ratio between our posterior kernel and proposal density
    pi = x ** (np.sum(y) + a - 1) * np.exp(-(len(y) + b) * x)
    q = np.exp(-x)
    return pi/q

# calculate the top expectation
top = np.mean(samples_from_proposal * w(samples_from_proposal,y,a,b))

# calculate the bottom expectation
bottom = np.mean(w(samples_from_proposal,y,a,b))

print(top/bottom)

# calculate the true mean since we knew the posterior was actually a gamma density
true_mean = (np.sum(y) + a)/(len(y) + b)
print(true_mean)

Running this you should see that the Expectation from importance sampling is 0.5434 whereas the true mean is 0.5436 (both of which are close to the true value of $\theta$ that I used to simulate the data from) so importance sampling approximates the expectation well.

",36821,,36821,,5/21/2020 9:33,5/21/2020 9:33,,,,0,,,,CC BY-SA 4.0 21361,1,21367,,5/21/2020 2:02,,6,981,"

I have been reading about LSTMs and GRUs, which are recurrent neural networks (RNNs). The difference between the two is the number and specific type of gates that they have. The GRU has an update gate, which has a similar role to the role of the input and forget gates in the LSTM.

Here's a diagram that illustrates both units (or RNNs).

With respect to the vanilla RNN, the LSTM has more "knobs" or parameters. So, why do we make use of the GRU, when we clearly have more control over the neural network through the LSTM model?

Here are two more specific questions.

  1. When would one use Long Short-Term Memory (LSTM) over Gated Recurrent Units (GRU)?

  2. What are the advantages/disadvantages of using LSTM over GRU?

",30725,,2444,,1/18/2021 21:31,1/18/2021 21:31,What's the difference between LSTM and GRU?,,1,1,,,,CC BY-SA 4.0 21362,2,,2349,5/21/2020 2:54,,0,,"

I tried to use 2 hidden ReLU-based units and 1 output unit to solve the XOR problem, and found that the gradient always becomes really small after about 1000 training iterations.

The loss vs. the number of training iterations:

And the gradient looks like:

I think that means the units are all dead. A robust way to solve this problem is to increase the number of units.

With 4 units, I sometimes succeed, but sometimes not.

With 5 units, I still sometimes fail, but the failure rate decreases.

And so on. That is all.

I will try to use sigmoid + cross entropy instead of ReLU; I imagine a linear function will work better in this case.

",37222,,,,,5/21/2020 2:54,,,,0,,,,CC BY-SA 4.0 21363,2,,21315,5/21/2020 3:26,,1,,"

Quantization is a technique used to make deep learning models smaller and faster to run.

Deep learning models are essentially collections of real-valued numbers. Because there are infinitely many real numbers, computers represent them using a format called 'floating point' numbers, which are not completely accurate. For example, a 32-bit floating point number can only represent at most $2^{32}$ distinct values. In contrast, a 64-bit floating point number can represent $2^{64}$ distinct values.

Most CPUs and GPUs cannot operate directly on large floating point numbers. This means that to do something like multiply two floating point numbers, the CPU might have to work on half of each number at a time, and do some tricky work to combine the results. Some CPUs and GPUs can operate directly on large floating point numbers, but only by using more than one core to work on a single number.

To get around this, you might choose to take a model you have that was trained with high-precision floating point weights, and reduce it to lower precision. The weights won't be exactly the same, but they'll be very close. Doing this will make the models run much faster, but you might lose some accuracy.
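As a rough illustration of the idea (not how any particular framework implements it), here is a sketch of affine quantization of float32 weights to 8-bit integers and back:

import numpy as np

weights = np.random.randn(5).astype(np.float32)     # made-up float32 weights

# map the observed range of the weights onto the 256 levels of an unsigned 8-bit integer
w_min, w_max = weights.min(), weights.max()
scale = (w_max - w_min) / 255.0
q = np.round((weights - w_min) / scale).astype(np.uint8)    # quantized weights
dequantized = q.astype(np.float32) * scale + w_min           # approximate originals

print(weights)
print(dequantized)      # close to the originals, but not exact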

So the upshot of using a tool that only supports quantized models is that the models will run faster, but might be slightly less accurate.

",16909,,,,,5/21/2020 3:26,,,,0,,,,CC BY-SA 4.0 21365,1,,,5/21/2020 6:36,,1,37,"

I am trying to use an LSTM to do text classification and monitor the training process with TensorBoard. But it seems that this model doesn't learn anything in the early epochs. Is this normal for LSTM networks?

Here is the definition of model:

class RNN(nn.Module):
    """
    RNN model for text classification
    """
    def __init__(self, vocab_size, num_class, emb_dim, emb_droprate, rnn_cell_hidden, rnn_cell_type, birnn, num_layers, rnn_droprate, sequence_len):
        super().__init__()
        self.vocab_size = vocab_size                # vocab size
        self.emb_dim = emb_dim                      # embedding dimension
        self.emb_droprate = emb_droprate            # embedding droprate
        self.num_class = num_class                  # classes
        self.rnn_cell_hidden = rnn_cell_hidden      # hidden layer size
        self.rnn_cell_type = rnn_cell_type          # rnn cell type
        self.birnn = birnn                          # whether to use a bidirectional rnn
        self.num_layers = num_layers                # number of rnn layers
        self.rnn_droprate = rnn_droprate            # rnn dropout rate before fc
        self.sequence_len = sequence_len            # fix sequence length, so we dont need loop
        pass

    def build(self):
        self.embedding = nn.Embedding(self.vocab_size, self.emb_dim)
        self.emb_dropout = nn.Dropout(self.emb_droprate)
        if self.rnn_cell_type == "LSTM":
            self.rnn = nn.LSTM(input_size=self.emb_dim, hidden_size=self.rnn_cell_hidden, num_layers=self.num_layers, bidirectional=self.birnn, batch_first=True)
        elif self.rnn_cell_type == "GRU":
            self.rnn = nn.GRU(input_size=self.emb_dim, hidden_size=self.rnn_cell_hidden, num_layers=self.num_layers, bidirectional=self.birnn, batch_first=True)
        else:
            self.rnn = None
            print("unsupported rnn cell type, valid is [LSTM, GRU]")
        if self.birnn:
            self.fc = nn.Linear(2 * self.rnn_cell_hidden, self.num_class)
        else:
            self.fc = nn.Linear(self.rnn_cell_hidden, self.num_class)

        self.rnn_dropout = nn.Dropout(self.rnn_droprate)

    def forward(self, input_):
        batch_size = input_.shape[0]

        x = self.embedding(input_)
        x = self.emb_dropout(x)

        if self.rnn_cell_type == "LSTM":
            if self.birnn:
                h_0 = torch.zeros(self.num_layers * 2, batch_size, self.rnn_cell_hidden, requires_grad=True).to(device)
                c_0 = torch.zeros(self.num_layers * 2, batch_size, self.rnn_cell_hidden, requires_grad=True).to(device)
            else:
                h_0 = torch.zeros(self.num_layers, batch_size, self.rnn_cell_hidden, requires_grad=True).to(device)
                c_0 = torch.zeros(self.num_layers, batch_size, self.rnn_cell_hidden, requires_grad=True).to(device)
            output, (h_n, c_n) = self.rnn(x, (h_0, c_0))
        elif self.rnn_cell_type == "GRU":
            if self.birnn:
                h_0 = torch.zeros(self.num_layers * 2, batch_size, self.rnn_cell_hidden, requires_grad=True).to(device)
            else:
                h_0 = torch.zeros(self.num_layers, batch_size, self.rnn_cell_hidden, requires_grad=True).to(device)
            output, h_n = self.rnn(x, h_0)

        if self.birnn:
            x = h_n.view(self.num_layers, 2, batch_size, self.rnn_cell_hidden)
            x = torch.cat((x[-1, 0, : , : ], x[-1, 1, : , : ]), dim = 1)
        else:
            x = h_n.view(self.num_layers, 1, batch_size, self.rnn_cell_hidden)
            x = x[-1, 0, : , : ]

        x = x.view(batch_size, 1, -1)           # shape: [batch_size, 1, 2 or 1 * rnn_cell_hidden]
        x = self.rnn_dropout(x)

        x = self.fc(x)
        x = x.view(-1, self.num_class)          # shape: [batch_size, num_class]

        return x

Parameters of this model:

  • vocab size: 4805
  • number of classes: 27
  • embedding dimension: 300
  • embedding dropoutrate: 0.5
  • rnn cell type: LSTM
  • rnn cell hidden size: 1000
  • bidirectional rnn: False
  • number of lstm layers: 1
  • dropout rate at last lstm layer hidden: 0.5
  • padded sequence length: 64

The Optim:

criterion = nn.CrossEntropyLoss().to(device)
optimizer = optim.Adam(model.parameters(), lr=args.lr, weight_decay=1e-6)

The learning rate here is 0.001 and the batch size is 32.

The tensorboard graph:

It seems that this model only starts learning after epoch 15. Is that normal?

",37246,,,,,5/21/2020 6:36,My LSTM text classification model seems not learn anything in early epochs,,0,0,,,,CC BY-SA 4.0 21367,2,,21361,5/21/2020 8:17,,1,,"

On the same problems, sometimes GRU is better, sometimes LSTM.

In short, having more parameters (more "knobs") is not always a good thing. The training process needs to learn those parameters. There is a higher chance of over-fitting, amongst other problems.

The parameters are assigned specific roles inside either GRU or LSTM, so if that role is less important for a specific learning challenge, then it can be wasteful or even counter-productive to have the system attempt to learn values for them.

The only way to find out if LSTM is better than GRU on a problem is a hyperparameter search. Unfortunately, you cannot simply swap one for the other, and test that, because the number of cells that optimises a LSTM solution will be different to the number that optimises a GRU.
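To make the difference in parameter counts concrete, here is a small PyTorch sketch comparing an LSTM and a GRU with the same input and hidden sizes:

import torch.nn as nn

lstm = nn.LSTM(input_size=32, hidden_size=64)
gru = nn.GRU(input_size=32, hidden_size=64)

def count(m):
    return sum(p.numel() for p in m.parameters())

print(count(lstm), count(gru))   # the LSTM has roughly 4/3 the parameters of the GRU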

When would one use Long Short-Term Memory (LSTM) over Gated Recurrent Units (GRU)?

When it proves better, experimentally. In some problem domains, this may be established and you can check. However, in other problem domains, if either GRU or LSTM works well enough to solve a problem (and the superiority of either LSTM or GRU is not the main point of the work), then it may not be so clear.

",1847,,,,,5/21/2020 8:17,,,,1,,,,CC BY-SA 4.0 21368,1,,,5/21/2020 8:25,,1,49,"

Let's assume I use convolutional networks for time-series prediction. The data I feed to the network has a channel depth of 1, the height is the number of periods, and the width is the number of features, so the frame size is [1, periods, features]. The batch size is not relevant here.

Is there a difference between using 1D convolutions along the time (height) dimension and 2D convolutions with a kernel size of, for example, (3, 1) or (5, 1), so that the larger number convolves along the time dimension and there is no convolution along the features dimension?

",22659,,,,,5/21/2020 8:25,Is there a difference between using 1d conv layers and 2d conv layers with kernel with size of 1 along other than time dimension?,,0,0,,,,CC BY-SA 4.0 21369,1,21373,,5/21/2020 8:42,,1,273,"

I ran a deep Q-learning algorithm (DQN) for $x$ number of epochs and got policy $\pi_1$. I reran the same script for the same $x$ number of epochs and got policy $\pi_2$. I expected $\pi_1$ and $\pi_2$ to be similar because I ran the same script. However, when computing the actions on the same test set, I realised the actions were very different.

Is this supposed to be normal when training deep Q networks, or is there something that I am missing?

I am using prioritised experience replay when training the model.

",32780,,,,,5/21/2020 13:16,Can deep reinforcement learning algorithms be deterministic in their reproducibility in results?,,1,7,,,,CC BY-SA 4.0 21370,1,,,5/21/2020 9:14,,1,126,"

How can I increase the exploration in the Proximal Policy Optimization reinforcement learning algorithm? Is there a variable assigned for this purpose? I'm using the stable-baselines implementation: https://stable-baselines.readthedocs.io/en/master/modules/ppo2.html

",34341,,2444,,5/21/2020 10:06,5/21/2020 10:06,How can I increase the exploration in the Proximal Policy Optimation algorithm?,,0,0,,,,CC BY-SA 4.0 21373,2,,21369,5/21/2020 10:37,,3,,"

Can deep reinforcement learning algorithms be deterministic in their reproducibility in results?

Yes, but only if you control all places in the code where stochastic methods are used (typically by seeding the affected RNGs):

  • Neural network weight initialisation
  • Action choice for $\epsilon$-greedy or other behaviour policy (does not apply in your case, because you work exclusively from experience replay)
  • Minibatch sampling from experience replay
  • Stochastic choices in the environment (does not apply in your case)
  • Other stochastic parts of training that may be in use, such as dropout regularisation

Controlling all these should make your training process deterministic and repeatable. It won't necessarily make it correct.
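For example, a minimal sketch of the kind of seeding involved for a PyTorch-based DQN (the exact calls depend on your libraries and versions; Gym environments, for instance, have their own seeding mechanism):

import random
import numpy as np
import torch

SEED = 42
random.seed(SEED)            # Python RNG: e.g. minibatch sampling from replay memory
np.random.seed(SEED)         # numpy RNG: e.g. epsilon-greedy draws
torch.manual_seed(SEED)      # PyTorch RNG: weight initialisation, dropout, etc.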

I reran the same script for the same $x$ number of epochs and got policy $\pi_2$. I expected $\pi_1 $ and $\pi_2$ to be similar because i ran the same script.

This is subtly different. It seems you hoped that convergence of the algorithm would mean you got to the same approximately optimal policy. In principle this is possible, because Q-learning should find a deterministic policy. However, there are some details to bear in mind:

  • Many environments support multiple equivalent optimal policies. A simple grid world can have multiple equivalent paths from start to goal states. A Q-learning agent with an approximation function will slightly prefer one or the other path, resulting in very different, but still optimal, policies.

  • Q-learning with approximation can go wrong and learn incorrectly. The usual checks and balances against this are running large numbers of simulations and testing.

You don't have great options here; from your comments, you are training purely offline from historical data. Your one sanity check - do I get the same policy if I re-try? - has shown inconsistency. However, it doesn't necessarily mean you have a problem; perhaps the two policies are equivalent.

Here are a couple of additional tests that may help:

  • Instead of looking at the maximising action choice in the test data, look at how each Q function scores the behaviour policy action choice. If the scores are close (by some measure such as MSE), then the two Q-learners are basically agreeing and are more likely to have equivalent but different policies, as opposed to radically different end results.

  • Have each Q network score the other's Q function action choice over an arbitrary (but realistic) set of states. If the values are similar to each other, then again this points to successful convergence given the training data, but with different outcomes due to small details.

If either of these checks shows the networks are radically different, then you have a problem. Which run, if any, has found a viable policy, and which has failed?

Even if the checks agree, it is circumstantial evidence that the Q learning process is stable, not proof that you have an agent that is better than the prevailing behaviour policy in your real world system.

You won't know if the agent is truly better, unless you can find a more independent way to assess the agent.

",1847,,1847,,5/21/2020 13:16,5/21/2020 13:16,,,,7,,,,CC BY-SA 4.0 21374,1,21379,,5/21/2020 10:51,,2,343,"

What are the main differences and similarities between sparse autoencoders and convolution autoencoders?

When should one be preferred over the other? What are their applications?

(References are welcome. Somehow I was not able to find any comparisons of these autoencoders although I looked in a few textbooks and searched for material online. I was able to find the descriptions of each autoencoder separately, but what I am interested in is the comparison.)

",30725,,2444,,5/21/2020 11:08,5/21/2020 13:15,What are the main differences between sparse autoencoders and convolution autoencoders?,,1,0,,,,CC BY-SA 4.0 21375,2,,21347,5/21/2020 10:57,,1,,"

I think you should use the Keras embedding layer. It will be much easier than what you are doing.

Steps

  • Create the embedding matrix.
  • Add the matrix to the embedding layer while building the model (see the sketch below).
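Here is a hedged sketch of those two steps; the sizes and the word_index/pretrained_vectors names are placeholders, and depending on your Keras version you may need to pass the matrix via an initializer instead of the weights argument:

import numpy as np
from tensorflow.keras.layers import Embedding

vocab_size, emb_dim = 10000, 300   # placeholder sizes

# step 1: build the embedding matrix, one row per word index
embedding_matrix = np.zeros((vocab_size, emb_dim))
# for word, i in word_index.items():                       # word_index is hypothetical
#     embedding_matrix[i] = pretrained_vectors[word]       # e.g. GloVe / word2vec lookups

# step 2: hand the matrix to the Embedding layer when building the model
embedding_layer = Embedding(vocab_size, emb_dim,
                            weights=[embedding_matrix],
                            trainable=False)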

You will find a detailed article here:

https://www.cs.uaf.edu/2011/spring/cs641/lecture/04_05_modeling.html

",21393,,,,,5/21/2020 10:57,,,,0,,,,CC BY-SA 4.0 21376,2,,21170,5/21/2020 11:38,,1,,"

As you mentioned in the comments, naively using mean- or median-type imputations could lead to wrong predictions. In such cases, you first need to check whether you have enough data.

If you have enough data

You can try using the MICE (Multivariate Imputation by Chained Equations) algorithm on your missing data. The method is based on Fully Conditional Specification, where each incomplete variable is imputed by a separate model. The MICE algorithm can impute mixes of continuous, binary, unordered categorical and ordered categorical data. One note of caution: it is a computationally expensive method, so use it only if you are not short on time.
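If you are working in Python, scikit-learn's IterativeImputer implements a MICE-like chained-equations approach; a minimal sketch with made-up data:

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

X = np.array([[1.0, 2.0, np.nan],
              [3.0, np.nan, 6.0],
              [5.0, 6.0, 9.0],
              [np.nan, 8.0, 12.0]])

imputer = IterativeImputer(max_iter=10, random_state=0)
X_filled = imputer.fit_transform(X)   # each column imputed from the others, iteratively
print(X_filled)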

The important thing to keep in mind is that, in order to tackle such problems, you need multiple iterations as part of your algorithm. The conventional setup you are describing does not seem to be iterative in nature, and hence you are running into the problem of features being inputs and outputs at the same time.

If by any chance you insist on finding the missing values just for solving a downstream task like classification or regression, you can try the XgBoost algorithm. It can be used as a classifier or as a regressor. This algorithm can handle missing values inherently. Source: this answer

If you don't have enough data

In such a case, you would need to introduce bias in your model using your insights or domain knowledge about the problem. For example, in the possible problem of estimating weights from heights, you had the insight that your data comprises mostly short people. So, instead of naively using the median weight of the whole dataset, you can bin the data according to height, say 'S', 'M', 'L', 'XL', and estimate the weight of each bin separately using the median value of that bin. The thing to keep in mind is that, when data is scarce, you need to provide knowledge to the model by enforcing some bias using your insights and domain knowledge about the problem.

",37206,,37206,,5/21/2020 11:44,5/21/2020 11:44,,,,0,,,,CC BY-SA 4.0 21379,2,,21374,5/21/2020 13:15,,0,,"

Sparse auto-encoders (SAEs) are auto-encoders that impose constraints on the parameters so that they are sparse (i.e. zero or close to zero). This can be achieved in different ways. For example, you can train an auto-encoder with a loss function that includes a penalty term (to constrain the parameters to be zero or close to zero), or you can, e.g., set the smallest activations to zero.

Convolution auto-encoders (CAEs) are auto-encoders that use the convolution operation. So, they can be viewed as the auto-encoder version of convolutional neural networks. For this reason, they are particularly suited to compress and reconstruct images. The authors of the original paper, Stacked Convolutional Auto-Encoders for Hierarchical Feature Extraction, train them with gradient descent and back-propagation by minimizing the mean squared error, so there's e.g. no penalty term, but you can probably combine SAEs with CAEs.

Of course, you could say that CAEs are sparse with respect to the traditional auto-encoder (in the same way that you can say that CNNs are sparse with respect to fully connected neural networks), so, in this sense, CAEs are also sparse auto-encoders.

",2444,,,,,5/21/2020 13:15,,,,0,,,,CC BY-SA 4.0 21380,1,,,5/21/2020 13:38,,1,27,"

I am pruning a neural network (CNN and Dense) and for different sparsity levels, I have different sub-networks. Say for sparsity levels of 20%, 40%, 60% and 80%, I have 4 different sub-networks.

Now, I want to find the non-zero connections that they have in common. Any idea how to visualize this or compute this?

I am using Python 3.7 and TensorFlow 2.0.

After the convergence of a neural network following random weight initialization, some weights/connections increase in magnitude, while other weights decrease. You can then prune the smallest-magnitude weights. I want to compare the remaining weights for, say, two networks having the same level of sparsity, say 50%. The goal is to see which weights were pruned away and which weights/connections remain.

",31215,,31215,,5/22/2020 1:43,5/22/2020 1:43,How can I find the similar non-zero connections between different levels of sparsity of the same network?,,0,3,,,,CC BY-SA 4.0 21381,1,,,5/21/2020 14:17,,1,217,"

I understand that a tree-based variant will have nodes repeatedly added to the frontier. How do I craft an example where a particular goal node is never found? Is this example valid?

On the other hand, how do I explain that the graph-based version of the greedy best-first search is complete?

",37258,,2444,,5/21/2020 14:19,5/21/2020 14:19,What is a example showing that the tree-based variant for the greedy best-first search is incomplete?,,0,1,,,,CC BY-SA 4.0 21382,1,,,5/21/2020 15:40,,7,234,"

I am currently learning about deep learning and artificial intelligence and exploring its possibilities, and, as a mathematician at heart, I am inquisitive about how it can be used to solve problems in mathematics.

Seeing how well recurrent neural networks can understand human language, I suppose that they could also be used to follow some simple mathematical statements and maybe even come up with some proofs. I know that computer-assisted proofs are more and more frequent and that some software can now understand simple mathematical language and verify proofs (e.g. Coq). Still, I've never heard of deep learning applied to mathematical research.

Can deep learning be used to help mathematical research? So, I am curious about whether systems like Coq could be combined with deep learning systems to help mathematical research. Are there some exciting results?

",37262,,4446,,5/21/2020 20:21,6/10/2020 13:14,Can deep learning be used to help mathematical research?,,1,7,,,,CC BY-SA 4.0 21383,1,,,5/21/2020 16:13,,1,55,"

I'm confused about this aspect of RNNs while trying to learn how seq2seq encoder-decoder works at https://machinelearningmastery.com/configure-encoder-decoder-model-neural-machine-translation/.

It seems to me that the number of LSTMs in the encoder would have to be the same as the number of words in the text (if word embeddings are being used) or the number of characters in the text (if char embeddings are being used). For char embeddings, each embedding would correspond to 1 LSTM in 1 direction and 1 encoder hidden state.

  1. Is this understanding correct? E.g. if we have another model that uses an encoder-decoder for a different application (say, the text-to-speech synthesis described here https://ai.googleblog.com/2017/12/tacotron-2-generating-human-like-speech.html) that uses 256 LSTMs in each direction of the bidirectional encoder, does that mean the input to this encoder is limited to 256 characters of text?

  2. Does the decoder output have to be the same length as the encoder input, or can it be different? If different, what factor determines what the decoder output length should be?

",33580,,,,,5/21/2020 16:13,Is the number of bidirectional LSTMs in seq2seq model equal to the maximum length of input text/characters?,,0,0,,,,CC BY-SA 4.0 21384,2,,20959,5/21/2020 18:54,,1,,"

If your intention is to make the agent learn which state has the minimum arbitrary value, then you would need to modify your rewards a bit.

The current reward structure provides the incentive to just move to a state where it gets a reward.

For example, if it is at state 0, it gets the same reward to go to either state 2 or state 3, since both of them have a higher inverse value.

To make the agent learn to move to state 2, you would have to provide it with more incentives to go to state 2.

def reward(s,a,s_dash):
    if s_dash == 2:
        return 5
    elif inverse_values_for_states[s]<inverse_values_for_states[s_dash]: 
        return 1
    elif inverse_values_for_states[s]>inverse_values_for_states[s_dash]:
        return -1
    else:
        return 0

I tried using this and it converges to 2. This is a hard-coded version, but I guess you get the idea.

",36074,,,,,5/21/2020 18:54,,,,1,,,,CC BY-SA 4.0 21385,1,,,5/21/2020 19:25,,1,190,"

I have a string of characters encoding a molecule. I want to regress some properties of those molecules. I tried using an LSTM that encodes all one-hot encoded characters, and then I feed the last hidden state into a linear layer to regress the property. This works fine, but I wanted to see if transformers can do better, since they are so good in NLP.

However, I am not quite sure about two things:

  1. The PyTorch transformer encoder layer has two masking parameters: "src_mask" and "src_key_padding_mask". The model needs the whole string to do the regression, so I don't think I need "src_mask", but I do pad with 0 for parallel processing; is that what "src_key_padding_mask" is for?
  2. What output from the transformer do I feed into the linear regression layer? For the LSTM, I took the last hidden output. For the transformer, since everything is processed in parallel, I feel like I should rather use the sum of all outputs, but it doesn't work well. Instead, using only the last state works better, which seems arbitrary to me. Any ideas on how to properly do this, e.g. how do sentiment analysis models do it?
",31821,,,,,5/21/2020 19:25,Transformer encoding for regression,,0,0,,,,CC BY-SA 4.0 21387,2,,21282,5/21/2020 19:31,,3,,"

Softmax policy $\pi_\theta(s,a)$ is defined as $\frac{\exp{(\phi(s,a)^T \theta})}{\sum_{a'} \exp{(\phi(s,a') ^T \theta) }}$, where the summation is over the action space.
Taking the log, this becomes $$ \log \pi_\theta(s,a) = \log(e^{\phi(s,a) ^T \theta}) - \log\Big({\sum_{a'} e^{\phi(s,a') ^T \theta }}\Big) = \phi(s,a) ^T \theta - \log\Big({\sum_{a'} e^{\phi(s,a')^T \theta }}\Big) $$

Taking the derivative w.r.t. $\theta$, this becomes $$ \nabla_\theta \log \pi_\theta(s,a) = \phi(s,a) - \nabla_\theta \log\Big({\sum_{a'} e^{\phi(s,a') ^T \theta }}\Big) $$

We can rewrite $\nabla_\theta \log\big({\sum_{a'} e^{\phi(s,a')^T \theta }}\big)$ as follows. $$ \nabla_\theta \log\Big({\sum_{a'} e^{\phi(s,a')^T \theta }}\Big) = \frac{\nabla_\theta \sum_{a'} e^{\phi(s,a')^T \theta}}{\sum_{a'} e^{\phi(s,a') ^T \theta}} = \frac{\sum_{a'} \phi(s,a') e^{\phi(s,a') ^T \theta}}{\sum_{a'} e^{\phi(s,a') ^T \theta}} = \sum_{a'} \phi(s,a') \pi_\theta(s,a') $$

The final equation then becomes $$ \nabla_\theta \log \pi_\theta(s,a) = \phi(s,a) - \sum_{a'} \phi(s,a') \pi_\theta(s,a') $$
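A quick numpy check of the final expression against a numerical gradient (the features and parameters below are random):

import numpy as np

np.random.seed(0)
n_actions, d = 4, 3
phi = np.random.randn(n_actions, d)        # phi(s, a) for each action a (state fixed)
theta = np.random.randn(d)

def log_pi(theta, a):
    logits = phi @ theta
    return logits[a] - np.log(np.sum(np.exp(logits)))

a = 1
pi = np.exp(phi @ theta); pi /= pi.sum()
analytic = phi[a] - pi @ phi               # phi(s,a) - sum_a' pi(s,a') phi(s,a')

eps = 1e-6
numeric = np.array([(log_pi(theta + eps * e, a) - log_pi(theta - eps * e, a)) / (2 * eps)
                    for e in np.eye(d)])
print(np.allclose(analytic, numeric, atol=1e-5))   # True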

",36074,,36074,,5/21/2020 20:08,5/21/2020 20:08,,,,1,,,,CC BY-SA 4.0 21388,1,21408,,5/22/2020 2:23,,6,1342,"

In reinforcement learning, temporal-difference methods seem to update the value function with each new piece of experience absorbed from the environment.

What would be the conditions for temporal-difference learning to converge in the end? How is it guaranteed to converge?

Is there any intuitive understanding of the conditions that lead to convergence?

",37275,,2444,,5/22/2020 10:40,5/23/2020 18:41,What are the conditions of convergence of temporal-difference learning?,,1,0,,,,CC BY-SA 4.0 21389,1,21404,,5/22/2020 2:43,,9,2259,"

The attention idea is one of the most influential ideas in deep learning. The main idea behind the attention technique is that it allows the decoder to "look back" at the complete input and extract significant information that is useful in decoding.

I am really having trouble understanding the intuition behind the attention mechanism, i.e. how the mechanism works and how to configure it.

In simple words (and maybe with an example), what is the intuition behind the attention mechanism?

What are some applications, advantages & disadvantages of the attention mechanism?

",30725,,2444,,5/23/2020 13:49,5/23/2020 13:49,What is the intuition behind the attention mechanism?,,1,0,,,,CC BY-SA 4.0 21390,1,,,5/22/2020 3:31,,1,35,"

I have a bunch of text documents, split into source documents and transformed documents. These text documents have multiple lines and are edited at specific locations, in a specific way.

I make use of the difflib package available in Python to identify the associated transformation, for each source document and the resulting transformed document.

I wish to train and implement an ML technique that will help in identifying and automating this conversion activity.


Here is a sample result of how the transformation result looks like: (NOTE: This example contains only one line, but my actual use case contains several lines)

import difflib

Initial = 'This is my initial state'
Final = 'This is what I transform into'

diff = difflib.SequenceMatcher(None, Initial, Final)

for tag,i1,i2,j1,j2 in diff.get_opcodes():
    print('{:7} Initial[{:}:{:}] --> Final[{:}:{:}] {:} --> {:}'.format(tag,i1,i2,j1,j2,Initial[i1:i2],Final[j1:j2]))

#Result:
equal   Initial[0:8] --> Final[0:8] This is  --> This is 
insert  Initial[8:8] --> Final[8:23]  --> what I transfor
equal   Initial[8:9] --> Final[23:24] m --> m
delete  Initial[9:10] --> Final[24:24] y --> 
equal   Initial[10:13] --> Final[24:27]  in -->  in
delete  Initial[13:14] --> Final[27:27] i --> 
equal   Initial[14:15] --> Final[27:28] t --> t
replace Initial[15:24] --> Final[28:29] ial state --> o

This helps in outlining the transformation steps to transform Initial into Final. I wish to make use of ML to identify the common pattern in such transformations across a large collection of text documents and train a model that I can use in the future.


What will be the best method to approach this problem? I am not facing a problem in identifying and classifying text data, but in identifying the nature of editing and transformation of strings.

",37276,,,,,5/22/2020 3:31,Training a model for text document transformation?,,0,0,,,,CC BY-SA 4.0 21392,1,,,5/22/2020 5:20,,2,620,"

Typically it seems like reinforcement learning involves learning over either a discrete or a continuous action space. An example might be choosing from a set of pre-defined game actions in Gym Retro or learning the right engine force to apply in Continuous Mountain Car; some popular approaches for these problems are deep Q-learning for the former and actor-critic methods for the latter.

What about in the case where a single action involves picking both a discrete and a continuous parameter? For example, when choosing the type (discrete), pixel grid location (discrete), and angular orientation (continuous) of a shape from a given set to place on a grid and optimize for some reward. Is there a well-established approach for learning a policy to make both types of decisions at once?

",37277,,37277,,5/22/2020 5:30,5/25/2020 11:51,Learning policy where action involves discrete and continuous parameters,,1,2,,,,CC BY-SA 4.0 21394,1,21401,,5/22/2020 7:42,,3,981,"

Why do we need convolutional neural networks instead of feed-forward neural networks?

What is the significance of a CNN? Even a feed-forward neural network will be able to solve the image classification problem, so why is the CNN needed?

",9863,,2444,,12/11/2020 14:49,12/12/2020 12:30,Why do we need convolutional neural networks instead of feed-forward neural networks?,,1,1,,,,CC BY-SA 4.0 21395,1,21397,,5/22/2020 8:55,,1,52,"

I have seen two deep Q-learning formulas:

$$Q\left(S_{t}, A_{t}\right) \leftarrow Q\left(S_{t}, A_{t}\right)+\alpha\left[R_{t+1}+\gamma \max _{a} Q\left(S_{t+1}, a\right)-Q\left(S_{t}, A_{t}\right)\right]$$

and this one

$$Q(s, a)=r(s, a)+\gamma \max _{a} Q\left(s^{\prime}, a\right)$$

Which one is correct?

",36107,,2444,,5/22/2020 10:42,5/22/2020 10:42,Do we have two Q-learning update formulas?,,1,2,,,,CC BY-SA 4.0 21396,2,,18682,5/22/2020 9:22,,2,,"

First let us note the definition of the advantage function:

$$A(s,a) = Q(s,a) - V(s) \; ,$$

where $Q(s,a)$ is the action-value function and $V(s)$ is the state-value function. In theory you could represent these by two different function approximators, but this would be quite inefficient. However, note that $$Q(s,a) = \sum_{s',r} \mathbb{P}(s',r|s,a)(r + V(s')) = \mathbb{E}[r + V(s')|a,s]\;,$$ so we can actually use a single function approximation, for $V(s)$, to completely represent the advantage function. To optimise this function approximator you would use the returns at each step of the episode as in e.g. the REINFORCE algorithm like you mentioned.

",36821,,,,,5/22/2020 9:22,,,,0,,,,CC BY-SA 4.0 21397,2,,21395,5/22/2020 9:52,,2,,"

The first one is the update rule that we use in the $Q$-learning algorithm.

The second one is the "definition" of $Q(s, a)$ values, although I would personally write it as follows, with an expectation around the reward, to also support cases where rewards might be non-deterministic:

$$Q(s, a) \doteq \mathbb{E} \left[ r(s, a) \right] + \gamma \max_a Q(s', a)$$

Here, the $Q(s', a)$ itself would also be similarly defined, and again that one is also assumed to be a "ground-truth" value.

In practice, when learning, we do not actually know what any of these $Q$-values exactly are; that's why we're doing learning in the first place, we're trying to learn what these values are! More precisely, when we wish to assign a new value to $Q(s, a)$, the definition tells us that we should use $Q(s', a)$ for that, but we can't because we don't know exactly what the correct $Q(s', a)$ value is either.

We do generally have some $Q(S_{t+1}, a)$ values, but they're generally going to be only approximations, resulting from the previous steps of our own learning process (or maybe even randomly initialised values if we only just started training!). There may also be an additional approximation error in stochastic environments, where the true $Q(s', a)$ value may be a weighted average over multiple different possible successor states $s'$, but during the training we only observed a single concrete successor state $S_{t+1}$.

So, we can follow the definition and use $Q(S_{t+1}, a)$ as an approximation of $Q(s', a)$, but we know that it's just an approximation and not entirely reliable. Therefore, instead of fully applying the definition and completely replacing our $Q(s, a)$ value based on it, we interpolate between our current estimate $Q(s, a)$ and the new one that we should have according to the definition + our approximations. We only slightly shift towards it. The learning rate $\alpha$ determines how much we shift towards it. Normally we use a learning rate $0 < \alpha < 1$. Note that if you were to use $\alpha = 1$, you would actually recover the definition for $Q$-values again (except it would still use some observations to estimate unknown quantities).
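A tiny tabular sketch of one such interpolation step (the numbers are made up):

import numpy as np

Q = np.zeros((3, 2))            # 3 states, 2 actions
alpha, gamma = 0.1, 0.9

s, a, r, s_next = 0, 1, 1.0, 2  # one observed transition
td_target = r + gamma * np.max(Q[s_next])      # the "definition"-based estimate
Q[s, a] += alpha * (td_target - Q[s, a])       # shift only part of the way towards it
print(Q[s, a])   # 0.1 rather than the full 1.0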

",1641,,,,,5/22/2020 9:52,,,,5,,,,CC BY-SA 4.0 21398,1,,,5/22/2020 9:54,,3,532,"

The technique for off-policy value evaluation comes from importance sampling, which states that

$$E_{x \sim q}[f(x)] \approx \frac{1}{n}\sum_{i=1}^n f(x_i)\frac{q(x_i)}{p(x_i)},$$ where $x_i$ is sampled from $p$.

In the application of importance sampling to RL, is the expectation of the function $f$ equivalent to the value (the return) of the trajectories $x$?

Does the distribution $p$ represent the probability of sampling trajectories under the behavior policy, and the distribution $q$ the probability of sampling trajectories under the target policy?

How would the trajectories from the distribution $q$ be better than those from $p$? I know from the equation how it is better, but it is hard to understand intuitively why this could be so.

",32780,,2444,,5/23/2020 0:12,11/26/2020 5:33,What is the intuition behind importance sampling for off-policy value evaluation?,,3,2,,,,CC BY-SA 4.0 21399,2,,21398,5/22/2020 11:16,,2,,"

In the application of importance sampling to RL, is the expectation of the function $f$ equivalent to the value of the trajectories, which is represented by the trajectories $x$?

I believe what you are asking here is whether, when using importance sampling in the off-policy RL setting, we set $f(x)$ from the general importance sampling formula to be our returns - the answer to this is yes. As always, we are interested in calculating our expected returns.

How would the trajectories from the distribution $q$ be better than that of $p$? I know from the equation how it is better but it is hard to understand intuitively why this could be so.

I think here you got your $p$ and $q$ the wrong way around, as we are using samples from $p$ to approximate expectations under our policy $q$. We will typically use importance sampling to generate samples from a different policy to our target policy for a few reasons - one reason might be that our target policy is hard to sample from, whereas our behaviour policy $p$ might be relatively easy to sample from. Another reason is that we generally want to learn an optimal policy, but this could be difficult to learn if we don't explore enough. So we can follow some other policy that will explore sufficiently and still learn about our optimal target policy through the importance sampling ratio.

",36821,,2444,,5/23/2020 0:16,5/23/2020 0:16,,,,0,,,,CC BY-SA 4.0 21400,1,,,5/22/2020 11:41,,1,1246,"

I am new to reinforcement learning. I started reading PyTorch's documentation about the cart-pole control problem. Whenever the agent fails, they restart the environment.

When I run the code, the time in the game is the same as the time in real life. Can we train models quicker? Can we make the game run faster so that the model will train faster?

",36107,,2444,,5/22/2020 23:23,5/22/2020 23:53,Can we increase the speed of training a reinforcement learning algorithm?,,1,0,,,,CC BY-SA 4.0 21401,2,,21394,5/22/2020 11:51,,5,,"

Why are CNNs useful?

The main property of CNNs that make them more suitable than FFNNs to solve tasks where the inputs are images is that they perform convolutions (or cross-correlations).

Convolution

The convolution is an operation (more precisely, a linear operator) that takes two functions $f$ and $h$ and produces another function $g$. It's often denoted as $f \circledast h = g$, where $\circledast$ represents the convolution operation and $g$ is the function that results from the convolution of the functions $f$ and $h$.

In the case of CNNs,

  • $f$ is a multi-dimensional array (aka tensor) and it represents an image (or a processed version of an image, i.e. a feature map)
  • $h$ is a multi-dimensional array and it is called kernel (aka filter), which represents the learnable parameters of the CNN, and
  • $g$ is a processed version (with $h$) of $f$ and it is often called the feature map, so it's also a multi-dimensional array

Images as functions

To be consistent with the initial definition of the convolution, $f, h$, and $g$ can indeed be represented as functions.

Suppose that the input image is a greyscale (so it is initially represented as a matrix), then we can represent it as a function as follows $$f: [a, b] \times [c, d] \rightarrow [0, 1],$$ i.e. given two numbers $x \in [a, b]$ and $y \in [c, d]$, $f$ outputs a number in the range $[0, 1]$, i.e. $f(x, y) = z$, where $z$ is the grayscale intensity of the pixel at coordinates $x$ and $y$. Similarly, the kernel $h$ and $g$ can also be defined as a function $h: [a, b] \times [c, d] \rightarrow [0, 1]$ and $g: [a, b] \times [c, d] \rightarrow [0, 1]$, respectively.

To be more concrete, if the shape of the image $f$ is $28 \times 28$, then it is represented as the function $f: [0, 28] \times [0, 28] \rightarrow [0, 1]$.

Note that the domain of the images doesn't have to range from $0$ to $28$ and the codomain doesn't have to range from $[0, 1]$. For example, in the case of RGB images, the codomain can also equivalently range from $0$ to $255$.

RGB images can also be represented as functions, more precisely, vector-valued functions, i.e.

$$ f(x, y) = \begin{bmatrix} r(x, y) \\ g(x, y) \\ b(x, y) \end{bmatrix} $$ where

  • $r: [a, b] \times [c, d] \rightarrow [0, 1]$ represents the red channel,
  • $g: [a, b] \times [c, d] \rightarrow [0, 1]$ represents the green channel, and
  • $b: [a, b] \times [c, d] \rightarrow [0, 1]$ represents the blue channel

Or, equivalently, $f: [a, b] \times [c, d] \rightarrow [0, 1]^3$.

Why is the convolution useful?

The convolution of an image with different kernels (e.g. the Gaussian kernel) can be used to perform many operations.

For example, convolving a noisy image with a smoothing kernel (such as the mean or Gaussian kernel) can be used to remove noise from that image. (The median filter shown below is, strictly speaking, not a convolution but a closely related non-linear neighbourhood operation; it serves the same purpose of noise removal and is particularly effective against salt-and-pepper noise.)

This is a screenshot of an image from this article, which you should read if you want to understand more about noise removal. So, on the left, there's the noisy image, and, on the right, there's the result of applying the median filter to the noisy image, which removes (at least, partially) the initial noise (i.e. those dots, which are due to the so-called ""salt-and-pepper"" noise).

The convolution of any image with the Sobel filter can be used to compute the derivatives of that image (both in the $x$ and $y$ directions, from which you can compute the magnitude and orientation of the gradient at each pixel of the image). See this article for more info.
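
As a concrete illustration (a minimal sketch of my own, not taken from the linked article), this is how you could convolve a small greyscale image with the Sobel kernels using SciPy to approximate the image derivatives:

import numpy as np
from scipy.signal import convolve2d

# A toy 5x5 greyscale 'image' with a vertical edge in the middle
image = np.zeros((5, 5))
image[:, 3:] = 1.0

# Sobel kernels for the x and y derivatives
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

gx = convolve2d(image, sobel_x, mode='same', boundary='symm')  # derivative along x
gy = convolve2d(image, sobel_y, mode='same', boundary='symm')  # derivative along y

magnitude = np.sqrt(gx ** 2 + gy ** 2)    # gradient magnitude at each pixel
orientation = np.arctan2(gy, gx)          # gradient orientation at each pixel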

So, in general, the convolution of an image with a kernel processes the image and the results (i.e. another image, which, in the case of CNNs, is called a feature map) can be different depending on the kernel.

This is the same thing as in CNNs. The only difference is that, in CNNs, the kernels are the learnable (or trainable) parameters, i.e. they change during training so that the overall loss (that the CNN is making) reduces (in the case CNNs are trained with gradient descent and back-propagation). For this reason, people like to say that CNNs are feature extractors or are performing feature extraction (aka feature learning or representation learning).

(Moreover, note that the convolution and cross-correlation are the same operations when the kernels are symmetric (e.g. in the case of a Gaussian kernel). In the case of CNNs, the distinction between convolution and cross-correlation doesn't make much sense because the kernels are learnable. You can ignore this if you are a beginner, but you can find more details here.)

Other useful properties

There are other useful properties of CNNs, most of them are just a consequence of the use of the convolution

  • Translation invariance (or equivariance), i.e. they can potentially find the same features (if you think of them as feature extractors) in multiple places of the image independently of their position, orientation, etc. See this answer for more details.

  • The equivalent FFNN has a lot more parameters (so CNNs may be less prone to overfitting)

  • They often use a sub-sampling operation (known as pooling) to reduce the spatial dimensions of the feature maps (and hence the number of parameters in later layers, which can possibly help to avoid overfitting) and introduce non-linearity.

Notes

Note that the FFNN can also be used to process images. It's just that the CNN is more suited to deal with images for the reasons described above.

",2444,,2444,,12/12/2020 12:30,12/12/2020 12:30,,,,0,,,,CC BY-SA 4.0 21402,2,,21200,5/22/2020 12:20,,1,,"

While this may not be the answer you were looking for, I hope this explanation will help you understand how backpropagation applies to a CNN. Fundamentally, convolutional layers are no different from dense layers; however, there are restrictions. The key one is weight-sharing, which allows a CNN to be much more efficient than a regular dense layer (as well as sparse, due to locality). Imagine we are transforming a 4x4 image into a 2x2 image. Since we are inputting a 16-vector and outputting a 4-vector, we need a 4x16 weight matrix:

This has 64 parameters. In a convolutional layer, we can accomplish this by convolving a 3x3 kernel over the image:

$$ K= \begin{bmatrix} k_{1,1} & k_{1,2} & k_{1,3} \\ k_{2,1} & k_{2,2} & k_{2,3} \\ k_{3,1} & k_{3,2} & k_{3,3} \end{bmatrix} $$

This convolution is equivalent to multiplying by the weights matrix:

As you can see, this only requires 9 parameters and backpropagation can be applied to update these parameters.

Image Source: https://towardsdatascience.com/intuitively-understanding-convolutions-for-deep-learning-1f6f42faee1
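
To make the equivalence concrete, here is a minimal NumPy/SciPy sketch of my own (toy random values) verifying that a 3x3 'valid' convolution over a 4x4 image is the same linear map as multiplying the flattened image by the sparse 4x16 weight matrix:

import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)
image = rng.standard_normal((4, 4))   # 4x4 input image
K = rng.standard_normal((3, 3))       # 3x3 kernel: only 9 shared parameters

# What a convolutional layer computes (cross-correlation, 'valid' -> 2x2 output)
out_conv = correlate2d(image, K, mode='valid')

# The equivalent 4x16 weight matrix: one row per output pixel,
# with the kernel entries placed at that pixel's receptive field
W = np.zeros((4, 16))
for i in range(2):
    for j in range(2):
        receptive_field = np.zeros((4, 4))
        receptive_field[i:i + 3, j:j + 3] = K
        W[i * 2 + j] = receptive_field.ravel()

out_matmul = (W @ image.ravel()).reshape(2, 2)
print(np.allclose(out_conv, out_matmul))   # True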

",22373,,,,,5/22/2020 12:20,,,,1,,,,CC BY-SA 4.0 21403,2,,21400,5/22/2020 12:42,,1,,"

Can we make the game faster so that model will be training faster?

It depends on how much processing is required to run the simulation, how efficiently that is implemented in whichever library you have loaded, and whether there is anything unnecessary for training that you can disable. Some environments, for instance, deliberately run in ""real time"" so humans can appreciate the video output, and that is not necessary for training purposes (unless you want to experiment with real-time robotics).

For OpenAI Gym, there is one thing you can usually do: Switch off the rendering. The rendering for environments like CartPole slows down each time step considerably, and unless you are learning using computer vision processing on the pixels output, the agent does not need the pictures. You may even notice during training that moving the rendering window so it is not visible will speed up the training process considerably.

What I did for CartPole, LunarLander and a couple of similar environments is turn off rendering for 99 out of 100 episodes, and render just one of them in 100 to help me monitor progress. For Q learning, I also picked that to be a ""test"" episode where I stopped exploration.
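
For illustration, here is a minimal sketch of that idea (assuming the classic Gym API, with a random agent as a placeholder for the real policy):

import gym

env = gym.make('CartPole-v0')

for episode in range(500):
    render = (episode % 100 == 99)          # render only 1 episode in 100
    obs = env.reset()
    done = False
    while not done:
        if render:
            env.render()                    # skipping this call is what gives the speed-up
        action = env.action_space.sample()  # placeholder for the agent's action selection
        obs, reward, done, info = env.step(action)

env.close()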

Another option for speeding up training is to run a distributed system with multiple simulations at once. You will need mechanisms to share collected data too, so it is more work, but it is another approach to take if the simulation steps are the bottleneck for training speed.

",1847,,2444,,5/22/2020 23:53,5/22/2020 23:53,,,,0,,,,CC BY-SA 4.0 21404,2,,21389,5/22/2020 12:50,,3,,"

Simply put, the attention mechanism is loosely inspired by, well, attention. Suppose we are attempting machine translation on the following sentence: ""The dog is a Labrador."" If you were to ask someone to pick out the key words of the sentence, i.e. which ones encode the most meaning, they would likely say ""dog"" and ""Labrador."" Articles like ""the"" and ""a"" are not as relevant in translation as the previous words (though they aren't completely insignificant). Therefore, we focus our attention on the important words.

Attention seeks to mimic this by adding attention weights to a model as trainable parameters to augment important parts of our input. Consider an encoder-decoder architecture such as the one Google Translate uses. Our encoder recurrent neural network (RNN) encodes our input sentence as a context vector in some vector space, which is then passed along to the decoder RNN which translates it into the target language. The attention mechanism scores each word in the input (via dot product with attention weights), then passes these scores through the softmax function to create a distribution. This distribution is then multiplied with the context vector to produce an attention vector, which is then passed to the decoder. In the example in the first paragraph, our attention weights for ""dog"" and ""Labrador"" would hopefully become larger in comparison to those for the other words during training. Note that all parts of the input are still considered since a distribution must sum to 1, just some elements have more effect on the output than others.

Below is a diagram from Towards Data Science that illustrates this concept very nicely in terms of an encoder-decoder architecture.
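
To make the scoring step concrete, here is a small NumPy sketch of one attention step (toy numbers and a generic dot-product score of my own; this is an illustration, not the exact mechanism Google Translate uses):

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
encoder_states = rng.standard_normal((5, 8))   # one 8-dim state per word of 'The dog is a Labrador'
decoder_state = rng.standard_normal(8)         # current decoder state (the query)
W_a = rng.standard_normal((8, 8))              # trainable attention weights

scores = encoder_states @ W_a @ decoder_state  # one score per input word
alphas = softmax(scores)                       # attention distribution (sums to 1)
context = alphas @ encoder_states              # weighted sum: the attention/context vector
print(alphas.round(2))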

The advantage of attention is its ability to identify the information in an input most pertinent to accomplishing a task, increasing performance, especially in natural language processing: Google Translate is a bidirectional encoder-decoder RNN with attention mechanisms. The disadvantage is the increased computation. In humans, attention serves to reduce our workload by allowing us to ignore unimportant features; however, in a neural network, attention entails overhead, as we are now generating attention distributions and training our attention weights (we are not actually ignoring the unimportant features, just diminishing their importance).

",22373,,22373,,5/23/2020 9:26,5/23/2020 9:26,,,,1,,,,CC BY-SA 4.0 21405,2,,21398,5/22/2020 12:50,,4,,"

Recall that our goal is to be able to accurately estimate the true value of each state by computing a sample average over returns starting from that state: $$v_{q}(s) \doteq \mathbb{E}_{q}\left[G_{t} | S_{t}=s\right] \approx \frac{1}{n} \sum_{i=1}^{n} Return_i $$ where $Return_i$ is the return obtained from the $i^{th}$ trajectory.

The problem is that the $\approx $ does not hold, since in off-policy learning, we got those returns by following the behavior policy, $p$, and not the target policy, $q$.

To address that, we have to correct each return in the sample average by multiplying by the importance sampling ratio.

$$v_{q}(s) \doteq \mathbb{E}_{q}\left[G_{t} | S_{t}=s\right] \approx \frac{1}{n} \sum_{i=1}^{n} \rho_i Return_i$$

where the importance sampling ratio is : $\rho=\frac{\mathbb{P}(\text { trajectory under } q)}{\mathbb{P}(\text { trajectory under } p)}$

What this multiplication does is that it increases the importance of returns that were more likely to be seen under the target policy $q$ and it decreases those that were less likely. So, at the end, in expectation, it would be as if the returns were averaged following $q$.
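
As a small illustration (a toy sketch of my own), the corrected sample average can be computed like this, given one importance-sampling ratio per trajectory:

import numpy as np

def is_estimate(returns, rho):
    # Ordinary importance-sampling estimate of v_q(s) from returns collected under p
    returns = np.asarray(returns, dtype=float)
    rho = np.asarray(rho, dtype=float)        # P(trajectory under q) / P(trajectory under p)
    return np.mean(rho * returns)

# 4 toy returns gathered by following the behaviour policy p
print(is_estimate(returns=[1.0, 0.0, 2.0, 1.5], rho=[0.5, 1.2, 2.0, 0.8]))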

(A side note: To avoid the risks of mixing $p$ and $q$, it might be a good idea to denote/think of the behavior policy as $b$ and the target policy as $\pi$, following the convention in Sutton and Barto's RL book.)

",34010,,2444,,5/23/2020 0:18,5/23/2020 0:18,,,,0,,,,CC BY-SA 4.0 21407,1,21409,,5/22/2020 12:58,,1,222,"

Vanilla policy gradient algorithm (using a baseline to reduce variance), according to here (page 16):

Initialize policy parameter θ, baseline b
for iteration=1, 2, ... do
    Collect a set of trajectories by executing the current policy
    At each timestep in each trajectory, compute
        the return $R_{t}= \sum_{t'=t}^{T-1}\gamma^{t'-t}r_{t'}$, and
        the advantage estimate $\hat{A}_{t} = R_{t} - b(s_{t})$
    Re-fit the baseline, by minimizing $\lVert b(s_{t}) - R_{t} \rVert^{2}$, summed over all trajectories and timesteps
    Update the policy, using a policy gradient estimate $\hat{g}$, which is a sum of terms $\nabla_{\theta}\log\pi(a_{t}|s_{t},\theta)\hat{A_{t}}$
end for

  • At line 6, advantage estimate is computed by subtracting baseline from the returns
  • At line 7, baseline is re-fit minimizing mean squared error between state dependent baseline and return
  • At line 8, we update the policy using advantage estimate from line 6

So is the baseline expected to be used in the next iteration when our policy has changed?

To compute the advantage, we subtract the state value $V(s_{t})$ from the action value $Q(s_{t},a_{t})$ under the same policy, so why is the old baseline used here in the advantage estimation?

",36861,,,,,5/22/2020 14:23,In vanilla policy gradient is the baseline lagging behind the policy?,,1,0,,,,CC BY-SA 4.0 21408,2,,21388,5/22/2020 13:10,,6,,"

There are different TD algorithms, e.g. Q-learning and SARSA, whose convergence properties have been studied separately (in many cases).

In some convergence proofs, e.g. in the paper Convergence of Q-learning: A Simple Proof (by Francisco S. Melo), the required conditions for Q-learning to converge (in probability) are the Robbins-Monro conditions

  1. $\sum_{t} \alpha_t(s, a) = \infty$
  2. $\sum_{t} \alpha_t^2(s, a) < \infty,$

where $\alpha_t(s, a)$ is the learning rate at time step $t$ (which can depend on the state $s$ and the action $a$), and provided that each state-action pair is visited infinitely often.

(The Robbins-Monro conditions (1 and 2) are due to Herbert Robbins and Sutton Monro, who started the field of stochastic approximation in the 1950s, with the paper A Stochastic Approximation Method. The fields of RL and stochastic approximation are related. See this answer for more details.)
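
For example (a toy sketch of my own, not from the papers above), a learning rate of $\alpha_t(s, a) = 1/N(s, a)$, where $N(s, a)$ is the visit count of the pair, satisfies both conditions, since $\sum_t 1/t$ diverges while $\sum_t 1/t^2$ converges:

import numpy as np

n_states, n_actions, gamma = 5, 2, 0.9        # toy sizes
Q = np.zeros((n_states, n_actions))
N = np.zeros((n_states, n_actions))           # visit counts per state-action pair

def q_update(s, a, r, s_next):
    N[s, a] += 1
    alpha = 1.0 / N[s, a]                     # decaying, state-action-dependent step size
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

q_update(0, 1, 1.0, 3)                        # one sample transition (s=0, a=1, r=1, s'=3)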

However, note again that the specific required conditions for TD methods to converge may vary depending on the proof and the specific TD algorithm. For example, the Robbins-Monro conditions are not assumed in Learning to Predict by the Methods of Temporal Differences by Richard S. Sutton (because this is not a proof of convergence in probability but in expectation).

Moreover, note that the proofs mentioned above are only applicable to the tabular versions of Q-learning. If you use function approximation, Q-learning (and other TD algorithms) may not converge. Nevertheless, there are cases when Q-learning combined with function approximation converges. See An Analysis of Reinforcement Learning with Function Approximation by Francisco S. Melo et al. and SBEED: Convergent Reinforcement Learning with Nonlinear Function Approximation by Bo Dai et al.

",2444,,2444,,5/23/2020 18:41,5/23/2020 18:41,,,,2,,,,CC BY-SA 4.0 21409,2,,21407,5/22/2020 14:23,,0,,"

So is the baseline expected to be used in the next iteration when our policy has changed?

Yes.

To compute the advantage we subtract the state value $V(s_{t})$ from the action value $Q(s_{t},a_{t})$, under the same policy, then why is the old baseline used here in advantage estimation?

The precise value of the baseline is not that important. What is important is that the baseline does not depend on the action choice, $a$, so it does not impact the gradient estimations or update steps for the policy function you are trying to improve.

You could in theory use a fixed offset instead of $V(s)$, or any arbitrary function that does not depend on $a$. In some settings the average reward $\bar{R}$ seen so far is used.

Using a rough approximation to $V(s)$ - and thus an approximate advantage function overall - is useful, as it removes a large source of variance in gradient estimates (the inherent value of the current state under the current policy, which is irrelevant to the search for adjustments to that policy). The more accurate $V(s)$ is, the lower the variance, and thus the faster and more reliable the convergence, so you do want it to be a good estimate. But a little bit of lag behind policy updates is acceptable and does not break the algorithm.
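
For concreteness, here is a minimal PyTorch-style sketch of my own (toy numbers) of one policy-gradient step where the baseline values may come from a value function fitted on the previous batch:

import torch

def policy_loss(log_probs, returns, baseline_values):
    # Advantage estimate with a (possibly slightly stale) baseline; the baseline
    # does not depend on the action, so it is not differentiated through.
    advantages = returns - baseline_values.detach()
    return -(advantages * log_probs).mean()

log_probs = torch.tensor([-0.1, -0.7, -0.3], requires_grad=True)   # log pi(a_t|s_t)
returns = torch.tensor([1.0, 0.5, 2.0])                            # R_t
baseline_values = torch.tensor([0.8, 0.8, 1.5])                    # b(s_t) from the old fit
policy_loss(log_probs, returns, baseline_values).backward()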

For more on this, see Sutton & Barto, chapter 13, section 13.4.

",1847,,,,,5/22/2020 14:23,,,,0,,,,CC BY-SA 4.0 21411,1,,,5/22/2020 17:17,,2,194,"

I wanted to implement the policy gradient method for Tic-Tac-Toe. I tried to apply the code that worked for environments like CartPole-v0 to my Tic-Tac-Toe game. But it is not learning. There are no errors, just very bad results.

RandomPlayer (""Player X"") vs PolicyAgent (""Player O"")

So one can see that the policy agent is not learning after 500 battles. Each battle consists of 100 games against the random player, so 500 * 100 games in total.

Can someone tell me the problem or the bug in my code? I cannot figure it out. Or tell me what I have to improve? That would be great.

Here is also a project which does the same thing that I want to do, but with success: https://github.com/fcarsten/tic-tac-toe/blob/master/tic_tac_toe/DirectPolicyAgent.py. I do not see what I am doing differently.

Code:

Packages:

import torch
import torch as T
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

import numpy as np
import gym
from gym import wrappers

Neural Net:

class PolicyNetwork(nn.Module):
    def __init__(self, lr, input_dims, fc1_dims, fc2_dims, n_actions):
        super(PolicyNetwork, self).__init__()
        self.input_dims = input_dims
        self.lr = lr
        self.fc1_dims = fc1_dims
        self.fc2_dims = fc2_dims
        self.n_actions = n_actions

        self.fc1 = nn.Linear(self.input_dims, self.fc1_dims)
        self.fc2 = nn.Linear(self.fc1_dims, self.fc2_dims)
        self.fc3 = nn.Linear(self.fc2_dims, self.n_actions)

        self.optimizer = optim.Adam(self.parameters(), lr=lr)

    def forward(self, observation):
        state = T.Tensor(observation)
        x = F.relu(self.fc1(state))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

Policy Agent:

class PolicyAgent:
    def __init__(self, player_name):
        self.name = player_name
        self.value = PLAYER[self.name]

    def board_to_input(self, board):
        input_ = np.array([0] * 27)
        for i, val in enumerate(board):
            if val == self.value:
                input_[i] = 1  
            if val == self.value * -1:
                input_[i+9] = 1
            if val == 0:
                input_[i+18] = 1
        return np.reshape(input_, (1,-1))


    def start(self, learning_rate=0.001, gamma=0.1):
        self.lr = learning_rate
        self.gamma = gamma
        self.all_moves = list(range(0,9))
        self.policy = PolicyNetwork(self.lr, 27, 243, 91, 9)
        self.reward_memory = []
        self.action_memory = []

    def turn(self, board, availableMoves):
        state = self.board_to_input(board.copy())
        prob = F.softmax(self.policy.forward(state))
        action_probs = torch.distributions.categorical.Categorical(prob)
        action = action_probs.sample()

        while action.item() not in availableMoves:
            state = self.board_to_input(board.copy())
            prob = F.softmax(self.policy.forward(state))
            action_probs = torch.distributions.categorical.Categorical(prob)
            action = action_probs.sample()

        log_probs = action_probs.log_prob(action)
        self.action_memory.append(log_probs)

        self.reward_memory.append(0)
        return action.item()

    def learn(self, result):
        if result == 0:
            reward = 0.5
        elif result == self.value:
            reward = 1.0
        else:
            reward = 0

        self.reward_memory.append(reward)
        #print(self.reward_memory)

        self.policy.optimizer.zero_grad()
        #G = np.zeros_like(self.action_memory, dtype=np.float64)
        G = np.zeros_like(self.reward_memory, dtype=np.float64)


        #running_add = reward
        #for t in reversed(range(0, len(self.action_memory))):
        #    G[t] = running_add
        #    running_add = running_add * self.gamma

        #'''
        running_add = 0
        for t in reversed(range(0, len(self.reward_memory))):
            if self.reward_memory[t] != 0:
                running_add = 0
            running_add = running_add * self.gamma + self.reward_memory[t]
            G[t] = running_add
        for t in range(len(self.reward_memory)):
            G_sum = 0
            discount = 1
            for k in range(t, len(self.reward_memory)):
                G_sum += self.reward_memory[k] * discount
                discount *= self.gamma
            G[t] = G_sum
        mean = np.mean(G)
        std = np.std(G) if np.std(G) > 0 else 1
        G = (G-mean)/std
        #'''

        G = T.tensor(G, dtype=T.float)

        loss = 0
        for g, logprob in zip(G, self.action_memory):
            loss += -g * logprob

        loss.backward()
        self.policy.optimizer.step()

        self.reward_memory = []
        self.action_memory = []
",37287,,,,,9/11/2021 4:12,Policy Gradient on Tic-Tac-Toe not working,,1,1,,,,CC BY-SA 4.0 21412,2,,20283,5/22/2020 19:12,,0,,"

$Q(s,a)$ denotes the Q-value for the state-action pair. It means the expected return if we start from state $s$, take action $a$, and act according to whatever policy we are currently following.

Suppose we are in state $s_0$ and take action $a_0$. To compute the return, we would need to follow our current policy from whichever state we land in after taking $a_0$ until the end of the episode, and sum up the rewards (or discounted rewards) that we get along the way.

Why average of returns?
Because we would want to do this multiple times for a state-action pair and compute the average of all such episodes.

Why multiple times?
Generally, the environments and the transition function would have some randomness and we don't get the same reward every time.

Why would you want to compute this?
The idea is simple. Since our goal is to maximize the average return, if we compute Q-values for all the possible actions starting from state $s_0$, then we can compare between the values and decide which action is going to be most beneficial to take from state $s_0$.

Since this is a tabular approach, when they say update the Q-function, they just mean to update the Q-values.

As an example, suppose we are in state $s_0$ and can take actions $a_0$, $a_1$, and $a_2$. We first compute the Q-values for $(s_0, a_0), (s_0,a_1), (s_0, a_2)$ pairs, and then we would choose the action which has the maximum Q-value out of these three.
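
A tiny sketch of that idea (toy numbers of my own) could look like this:

import numpy as np

returns = {}                                   # (state, action) -> list of sampled returns

def record_return(s, a, G):
    returns.setdefault((s, a), []).append(G)

def q_value(s, a):
    return np.mean(returns[(s, a)])            # Q(s, a) as the average of the returns

def greedy_action(s, actions):
    return max(actions, key=lambda a: q_value(s, a))

# Pretend we ran a few episodes starting from s0 with each action
for a, samples in {0: [1.0, 0.5], 1: [2.0, 1.5], 2: [0.0, 0.5]}.items():
    for G in samples:
        record_return('s0', a, G)

print(greedy_action('s0', [0, 1, 2]))          # -> 1, the action with the highest Q-value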

",36074,,,,,5/22/2020 19:12,,,,0,,,,CC BY-SA 4.0 21413,1,,,5/22/2020 19:20,,3,91,"

I thought about an algorithm that twists the standard Q-learning slightly, but I am not sure whether convergence to the optimal Q-value could be guaranteed.

The algorithm starts with an initial policy. Within each episode, the algorithm conducts policy evaluation and does NOT update the policy. Once the episode is done, the policy is updated using the greedy policy based on the current learnt Q-values. The process then repeats. I attached the algorithm as a picture.

Just to emphasize: the policy used for the updates does not change within each episode. The policy at each state is updated AFTER one episode is done, using the Q-tables.

Has anyone seen this kind of Q-learning before? If so, could you please kindly guide me to some resources regarding the convergence? Thank you!

",37291,,37291,,5/23/2020 0:53,5/23/2020 0:53,Convergence of a delayed policy update Q-learning,,0,8,,,,CC BY-SA 4.0 21415,1,21419,,5/23/2020 0:28,,3,167,"

I am running into an issue in which the targets (label columns) of my dataset contain a mixture of binary labels (yes/no) and numeric value labels.

The values of the numeric labels (the resource 1 and resource 2 columns) vary over a large range. Sometimes a value can be something like 0.389, but sometimes it can be on the order of 0.389 x 10^-4 or so.

My goal is to predict the binary decision and the amount of resources allocated to a new user who has input feature 1 (numeric) and input feature 2 (numeric).

My initial thought was that the output neuron corresponding to the 0-1 decision would use a logistic (sigmoid) activation function. But for the neurons corresponding to the resources, I am not quite sure.

What would be the appropriate way to tackle such a situation in terms of network structure or data pre-processing strategy?

Thank you for your enthusiasm!

",37297,,37297,,5/24/2020 6:47,5/26/2020 3:48,How to train a neural network with a data set that in which the target is a mix of 0-1 label and numeric real value label?,,2,0,,,,CC BY-SA 4.0 21417,2,,21415,5/23/2020 1:24,,0,,"

In neural networks, the activation function that provides the largest output interval is tanh, with results between -1 and 1.

You can use it to train your model: when the label has the value false, it should be -1, and when true, it should be 1.

At prediction time, you check which value the output is closer to; for example, if you get 0.4, it is closer to 1, so it will be interpreted as true.

",37243,,,,,5/23/2020 1:24,,,,0,,,,CC BY-SA 4.0 21418,1,21445,,5/23/2020 4:39,,4,830,"

I am working on a scheduling problem that has inherent randomness. The dimensions of the action and state spaces are 1 and 5, respectively.

I am using DDPG, but it seems extremely unstable, and so far it isn't showing much learning. I've tried to

  1. adjust the learning rate,
  2. clip the gradients,
  3. change the size of the replay buffer,
  4. different neural net architectures, using SGD and Adam,
  5. change the $\tau$ for the soft-update.

So, I'd like to know what people's experience is with this algorithm, for the environments where it was tested in the paper, but also for other environments. What values of hyperparameters worked for you? Or what did you do? How cumbersome was the fine-tuning?

I don't think my implementation is incorrect, because I pretty much replicated this, and every other implementation I found did exactly the same.

(Also, I am not sure this is necessarily the best website to post this kind of question, but I decided to give a shot.)

",36341,,2444,,5/24/2020 10:36,5/26/2020 13:35,What made your DDPG implementation on your environment work?,,1,5,,,,CC BY-SA 4.0 21419,2,,21415,5/23/2020 5:10,,1,,"

Your question is missing some details, so I will assume some scenarios.

  • If you have a classification problem: you can try grouping the values into intervals that make sense (you should analyze your data and decide on this setup), if possible. For example: 0.000-0.250 (0), 0.251-0.500 (1), 0.501-0.750 (2) and so on. Note that neural networks are sensitive to the distance between values (1 is closer to 0 than 2 is, so 1 is more similar to 0 than 2 is, and so on). If that is not your case, you should one-hot encode the values.
  • If you have a regression problem, you should be fine without anything else. You can try normalizing your outputs and observe the results, but generally it's not necessary for regression problems.
  • Make sure your dataset is as free of outliers and noisy data as possible.
  • It's important to choose activation functions that are adequate for the range of values of your attributes and outputs. This can depend on how you treat and set up your dataset, the range of values, normalization, etc.

Update after more details in question

Your neural network should have 3 neurons in the output layer, with linear activation. As said before, normalization usually is not necessary in regression problems, but if your values are too different (like the ranges of resource 1 and resource 2), maybe some kind of adjustment (normalization, standardization, etc.) can be helpful. But you need to try and see the results.
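
A minimal PyTorch sketch of that setup (the layer sizes are placeholders of mine, not a recommendation):

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(2, 32),      # 2 inputs: input feature 1 and input feature 2
    nn.ReLU(),
    nn.Linear(32, 3),      # 3 outputs: decision, resource 1, resource 2 (linear activation)
)

x = torch.randn(8, 2)          # a batch of 8 users
predictions = model(x)         # shape (8, 3)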

",37300,,37300,,5/26/2020 3:48,5/26/2020 3:48,,,,2,,,,CC BY-SA 4.0 21422,1,,,5/23/2020 6:00,,3,61,"

After reading a lot of articles (for instance, this one Understanding LSTM Networks), I know that the long short-term memory (LSTM) is an artificial recurrent neural network (RNN) architecture used in the field of deep learning.

How does backpropagation work in the specific case of LSTMs?

",30725,,2444,,5/23/2020 12:40,5/23/2020 12:40,How does backpropagation work in LSTMs?,,0,0,,,,CC BY-SA 4.0 21425,1,21426,,5/23/2020 9:28,,2,168,"

In a neural network, a neuron typically computes a linear function $f(x) = w*x$, where $w$ is the weight and $x$ is the input.

Why not replace the linear function with more complex functions, such as $f(x,w,a,b,c) = w*(x + b)^a + c$?

This would introduce much more diversity into neural networks.

Does this have a name? Has this been used?

",36420,,2444,,5/23/2020 11:06,5/23/2020 11:06,Why not replacing the simple linear functions that neurons compute with more complex functions?,,1,1,,,,CC BY-SA 4.0 21426,2,,21425,5/23/2020 10:01,,4,,"

It is definitely possible to make the links between neurons use more complex functions. Provided those functions are differentiable, backpropagation still works, and the resulting compound function might be able to learn something useful. The general name for such a thing is a computational graph, and the standardised structures used in most neural networks are a subset of all possible (and maybe useful) computational graphs.

When adding complex and non-linear functions into a neural network, this is usually alternated with simpler linear layers using the weights. A generalised function of a single neuron as used in most neural networks looks like this:

$$a = f(\sum_i w_i x_i + b)$$

Where $i$ indexes all inputs to the neuron, $x_i$ are the input values, $w_i$ the weights associated with each input, $b$ is a bias term and $f()$ is a differentiable non-linear activation function. The training process learns $w_i$ and $b$. The output $a$ is the neuron's activation value, which may be taken as an output of the neural network, or fed into some other neuron as one of that neuron's $x_i$.
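
In code, that single-neuron function is simply (a small NumPy sketch with arbitrary example values):

import numpy as np

def neuron(x, w, b, f=np.tanh):
    # Non-linear activation of a weighted sum of the inputs plus a bias
    return f(np.dot(w, x) + b)

a = neuron(x=np.array([0.5, -1.0, 2.0]),
           w=np.array([0.1, 0.4, -0.3]),
           b=0.2)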

A simple feed-forward network using this basic neuron function, with at least one hidden layer which has a non-linear activation function can already learn approximations to any given function - a result proved in the universal approximation theorem.

The practical result of the universal approximation theorem is that you need motivation other than increasing diversity in order to make neural network functions more complex. If you were considering altering one of the $w_i x_i$ multiplications, and replacing with a more complex learnable function, you can effectively achieve the same thing by adding another neuron whose output $a$ is used as $x_i$ - or simply adding a layer in most neural network libraries.

In some situations there may be good reasons to make lower-level changes:

  • If you know the function you are learning relates to a theoretical model with a specific mathematical form, you can deliberately set up functions that mirror that form, with learnable parameters. Typically that is done as transforms on inputs, but it could also be part of a more complex computational graph if necessary.

  • In neural network architectures, you can consider things such as gate combinations in LSTM cells, or skip connections in residual networks, as examples where the functions have deliberately been made more complex to achieve a specific goal - in both cases in order to increase the effectiveness of backpropagation in deep structures.

",1847,,,,,5/23/2020 10:01,,,,0,,,,CC BY-SA 4.0 21427,1,21428,,5/23/2020 14:39,,0,165,"

AIXI is a mathematical framework for artificial general intelligence developed by Marcus Hutter since the year 2000. It's based on many concepts, such as reinforcement learning, Bayesian statistics, Occam's razor, or Solomonoff induction. The blog post What is AIXI? — An Introduction to General Reinforcement Learning provides an accessible overview of the topic for those of you not familiar with it.

Are there any other mathematical frameworks of artificial general intelligence apart from AIXI?

I am aware of projects such as OpenCog, but that's not really a mathematical framework, but more a cognitive science framework.

",2444,,2444,,1/17/2021 15:29,1/17/2021 16:04,Are there other mathematical frameworks of artificial general intelligence apart from AIXI?,,1,0,,,,CC BY-SA 4.0 21428,2,,21427,5/23/2020 14:39,,1,,"

Gödel machine

There is another mathematical framework for AGI: the Gödel machine, which was proposed by Jürgen Schmidhuber (who also worked with Marcus Hutter). In the paper Gödel Machines: Self-Referential Universal Problem Solvers Making Provably Optimal Self-Improvements (2003), Schmidhuber describes the Gödel machine as follows

They are universal problem solving systems that interact with some (partially observable) environment and can in principle modify themselves without essential limits apart from the limits of computability. Their initial algorithm is not hardwired; it can completely rewrite itself, but only if a proof searcher embedded within the initial algorithm can first prove that the rewrite is useful, given a formalized utility function reflecting computation time and expected future success (e.g., rewards). We will see that self-rewrites due to this approach are actually globally optimal (Theorem 4.1, Section 4), relative to Gödel's well-known fundamental restrictions of provability. These restrictions should not worry us; if there is no proof of some self-rewrite's utility, then humans cannot do much either.

The initial proof searcher is $O$()-optimal (has an optimal order of complexity) in the sense of Theorem 5.1, Section 5. Unlike hardwired systems such as Hutter's and Levin's (Section 6.4), however, a Gödel machine can in principle speed up any part of its initial software, including its proof searcher, to meet arbitrary formalizable notions of optimality beyond those expressible in the $O$()-notation. Our approach yields the first theoretically sound, fully self-referential, optimal, general problem solvers.

Space-time embedded agents

This work combines Russell's bounded rationality with Legg's and Hutter's definition of universal intelligence to provide a framework for AGI that takes into account resource (space and time) constraints and other issues (such as that the environment can modify or even destroy the agent).

",2444,,2444,,1/17/2021 16:04,1/17/2021 16:04,,,,0,,,,CC BY-SA 4.0 21430,1,21431,,5/23/2020 16:03,,1,126,"

Similarly to What are the scientific journals dedicated to artificial general intelligence?, are there any conferences dedicated to artificial general intelligence?

",2444,,,,,7/13/2020 12:19,Are there any conferences dedicated to artificial general intelligence?,,1,0,,,,CC BY-SA 4.0 21431,2,,21430,5/23/2020 16:03,,2,,"

There are several conferences dedicated to AGI or human-level intelligence.

The conferences focus on topics such as cognitive architectures, autonomy, creativity, lifelong learning, and formal models of general intelligence. There are also journals associated with these conferences (see [1], [2], [3]). If you want to know more about them, I suggest that you go to their websites.

",2444,,2444,,7/13/2020 12:19,7/13/2020 12:19,,,,1,,,,CC BY-SA 4.0 21432,1,,,5/23/2020 17:57,,1,97,"

What is the relation between multi-agent learning and reinforcement learning?

Is one a sub-field of the other? For instance, would it make sense to state that your research interest are multi-agent learning and reinforcement learning, or would that be weird as one includes most of the topics of the other?

",36116,,2444,,5/23/2020 18:17,5/23/2020 19:14,What is the relation between multi-agent learning and reinforcement learning?,,1,0,,,,CC BY-SA 4.0 21433,1,,,5/23/2020 19:03,,2,281,"

I'm reading an article on reinforcement learning, and I don't understand why the agent's policy $\pi$ is not part of the definition of a Markov decision process (MDP):

Bu, Lucian, Robert Babu, and Bart De Schutter. "A comprehensive survey of multiagent reinforcement learning." IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 38.2 (2008): 156-172.

My question is:

Why the policy is not a part of the MDP definition?

",36175,,-1,,6/17/2020 9:57,5/24/2020 10:24,Why is the policy not a part of the MDP definition?,,2,0,,,,CC BY-SA 4.0 21434,2,,21432,5/23/2020 19:14,,2,,"

I think there is an intersection. There are problems that are both in reinforcement learning and in learning in multi-agent systems. There are problems in reinforcement learning that are not really in multi-agent systems. And there is learning in multi-agent systems that does not happen through reinforcement learning. For the intersection, you can say: multi-agent reinforcement learning. I recommend taking a look at these references:

Bu, Lucian, Robert Babu, and Bart De Schutter. "A comprehensive survey of multiagent reinforcement learning." IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 38.2 (2008): 156-172.

Busoniu, Lucian, Robert Babuska, and Bart De Schutter. "Multi-agent reinforcement learning: A survey." 2006 9th International Conference on Control, Automation, Robotics and Vision. IEEE, 2006.

",36175,,-1,,6/17/2020 9:57,5/23/2020 19:14,,,,2,,,,CC BY-SA 4.0 21435,2,,18567,5/23/2020 19:42,,0,,"

Could you figure out a workaround for this problem of the steady-state error using DDPG?

I'm currently facing the same problem. My application is Satellite attitude control, and no matter which cost function variation I use, the resulting controlled system maintains a constant steady-state error.

It seems to be a limitation of the algorithm. But I can't understand why this happens. Hence, I've decided to use PPO for now. But would like to know if you could solve this problem using DDPG.

All the best,

Wilson

",37316,,,,,5/23/2020 19:42,,,,2,,,,CC BY-SA 4.0 21436,2,,3889,5/23/2020 20:05,,1,,"

In the context of reinforcement learning, the idea of modeling your goal-oriented problem as a hierarchy of multiple sub-problems is called hierarchical reinforcement learning, which gives rise to concepts such as semi-Markov decision processes and options (aka macro actions). The article The Promise of Hierarchical Reinforcement Learning presents and describes the topic quite well, so I suggest you read it.

However, this idea of solving multiple sub-tasks in order to solve a bigger task isn't limited to RL. For example, in the paper Neural Programmer-Interpreters (NPIs), without referring to the traditional RL topics, a model is proposed to write programs by composing simpler ones.

Another example is genetic programming with automatically defined functions (ADFs), where sub-programs (the ADFs) can be re-used in different parts of the program if that's beneficial according to the fitness.

So, there are different ways you can design a system to solve a big task by solving multiple sub-tasks and then compose them. The approach that you choose depends on your use case. If you want to build programs, then NPIs can be a start. If you want to incorporate a time component in your system, then HRL is probably a viable approach. There are several HRL algorithms. Some of them (e.g. MAXQ-OP) have been successfully used to solve the RoboCup challenge.

",2444,,2444,,2/17/2021 22:34,2/17/2021 22:34,,,,0,,,,CC BY-SA 4.0 21437,2,,21433,5/23/2020 20:24,,7,,"

The MDP defines the environment (which corresponds to the task that you need to solve), so it defines e.g. the states of the environment, the actions that you can take in those states, the probabilities of transitioning from one state to the other and the probabilities of getting a reward when you take a certain action in a certain state.

The policy corresponds to a strategy that the RL agent can follow to act in that environment. Note that the MDP doesn't define what the agent does in each state. That's why you need the policy! An optimal policy for a specific MDP corresponds to the strategy that, if followed, is guaranteed to give you the highest amount of reward in that environment. However, there are multiple strategies, most of them are not optimal. This should clarify why the policy is not part of the definition of the MDP.

",2444,,,,,5/23/2020 20:24,,,,0,,,,CC BY-SA 4.0 21438,1,,,5/23/2020 23:20,,2,35,"

I'm trying to detect an object in a video (with slight camera movement), and then augment another video on top of it. What is the simplest approach to do that?

For instance, let's assume I have this simple video of a couch HERE. Now I want to augment the right cushion with a human or a dog. The dog or human is itself a video (let's assume transparency is not an issue).

What's the simplest approach to do that?

",9053,,2444,,12/21/2021 15:12,12/21/2021 15:12,Detect object in video and augment another video on top of it,,0,0,,,,CC BY-SA 4.0 21439,1,21446,,5/24/2020 2:03,,3,70,"

There are many factors that cause the results of ML models to be different for every run of the same piece of code. One factor could be different initialization of weights in the neural network.

Since results might be stochastic, how would researchers know what their best-performing model is? I know that a seed can be set to incorporate more determinism into the training. However, couldn't there be other pseudo-random sequences that produce slightly better results?

",32780,,2444,,5/24/2020 10:23,5/24/2020 12:31,How would researchers determine the best deep learning model if every run of the code yields different results?,,1,1,,,,CC BY-SA 4.0 21440,1,,,5/24/2020 6:54,,2,89,"

I have read about the Q-learning algorithm, and I also know value iteration (where you update action values). I think the PyTorch example is value iteration rather than Q-learning.

Here is the link: https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html

",36107,,1847,,5/24/2020 10:19,5/24/2020 18:19,Is the PyTorch official tutorial really about Q-learning?,,1,0,,,,CC BY-SA 4.0 21441,2,,21440,5/24/2020 8:50,,4,,"

TL;DR: It is Q learning. However Q learning is basically sample-based value iteration, so not surprising you see a similarity.

Q learning* and value iteration are very strongly related. When considering action values, both approaches use the same Bellman equation for optimal policy, $q^*(s,a) = \sum_{r,s'}p(r,s'|s,a)(r+\gamma \text{max}_{a'} q^*(s', a'))$ as the basis for update steps. The differences are:

  • Value iteration makes updates using a model of the environment, Q learning works from samples from the environment made by an active agent.

    • By working from a simulated environment rather than a real one, it may not be clear when an agent is model-free or model-based (or planning rather than acting). However, the way that the simulated environment in the PyTorch example is used is consistent with a model-free method.
  • Value iteration loops through all possible states and actions for updates independently of any action an agent might take (in fact the agent need not exist). Q learning works with whichever states the agent experienced.

    • By adding experience replay memory in DQN, Q learning becomes a little bit closer to value iteration, as you can frame the memory as a learned model, plus consider it to be a type of planning (or a ""sweep"" through states). This is how it is described for instance in DynaQ which is an almost identical algorithm to experience replay as used in DQN when both are used in the simplest versions - see Sutton & Barto chapter 8.
  • Value iteration value update steps are over an expectation of next states and rewards - it processes the weighted sum $\sum_{r,s'}p(r,s'|s,a)$. Q learning update steps are over sampled next states and rewards - it ends up approximating the same expectation over many separate updates.

    • Even using large amounts of experience replay memory does not get Q learning the same as value iteration on this issue, samples are not guaranteed perfect. However, in a deterministic environment, this difference is not meaningful. So if you have a deterministic environment, Q learning and value iteration may also be considered a little closer in nature.

* Technically this applies to single-step Q-learning. n-step Q-learning and Q($\lambda$) use different estimates of future expected return, that are related but not the same as the single-step version shown here.
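
To make the relationship concrete, here is a toy sketch of my own (a randomly generated tabular MDP) contrasting a value-iteration sweep, which uses the model $p(r, s'|s, a)$, with a single-step Q-learning update, which uses one sampled transition:

import numpy as np

n_s, n_a, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))   # P[s, a, s']: random transition model
R = rng.standard_normal((n_s, n_a, n_s))           # R[s, a, s']: random rewards
Q = np.zeros((n_s, n_a))

def value_iteration_sweep(Q):
    # Expectation over next states, for every (s, a), using the model
    return np.einsum('ijk,ijk->ij', P, R + gamma * Q.max(axis=1)[None, None, :])

def q_learning_update(Q, s, a, r, s_next, alpha=0.1):
    # Sample-based update for a single experienced transition, no model needed
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    return Q

Q = value_iteration_sweep(Q)
Q = q_learning_update(Q, s=0, a=1, r=1.0, s_next=2)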

",1847,,1847,,5/24/2020 18:19,5/24/2020 18:19,,,,6,,,,CC BY-SA 4.0 21444,2,,21433,5/24/2020 10:24,,2,,"

Aside from the points raised in nbro's answer, I'd like to point out that for a single MDP (a single instance of a ""problem""), it may be sensible to study it from perspectives that include no policy at all, or multiple different policies.

For instance, if I have an MDP, I may be interested in studying it by looking at various inherent properties of the environment. And if I then have multiple different MDPs, all without any policies or anything like that, I could compare them based on those properties. For example, I might simply want to measure the sizes of the state and action spaces. Or write out something like a game tree, and measure properties like the branching factor and the average / min / max / median depth at which we can find a terminal state.

On the other hand, it can also be interesting sometimes to study multiple different policies all for the same MDP. A very common example would be any off-policy learning algorithm (like $Q$-learning): they all involve at least one ""target policy"" (for which they're learning the $Q(s, a)$ values -- usually the greedy policy with respect to the values learned so far), and at least one ""behaviour policy"" (which they're using to generate experience -- often something like an $\epsilon$-greedy policy). A more complex example would be population-based training setups, like the one DeepMind used for their StarCraft 2 training; here they have a large population of different policies that they're all using in a complex training setup (and technically I suppose we should say they also have many different MDPs, where every combination of StarCraft 2 level + training opponent would formally be a different MDP).

",1641,,,,,5/24/2020 10:24,,,,0,,,,CC BY-SA 4.0 21445,2,,21418,5/24/2020 10:29,,1,,"

Below are some tweaks that helped me accelerate the training of DDPG on a Reacher-like environment:

  • Reducing the neural network size, compared to the original paper. Instead of:

2 hidden layers with 400 and 300 units respectively

I used 128 units for both hidden layers. I see in your implementation that you used 256, maybe you could try reducing this.

  • As suggested in the paper, I added batch normalization:

... to manually scale the features so they are in similar ranges across environments and units. We address this issue by adapting a recent technique from deep learning called batch normalization (Ioffe & Szegedy, 2015). This technique normalizes each dimension across the samples in a minibatch to have unit mean and variance.

The implementation you used does not seem to include this.

  • Reducing the value of $\sigma$, which is a parameter of the Ornstein-Uhlenbeck process used to enable exploration. Originally, it was $0.2$, I used $0.05$. (I can't find where this parameter is set in your code.)

I am not entirely sure if this will help in your environment, but it was just to give you some ideas.
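
For reference, here is a minimal sketch of the Ornstein-Uhlenbeck exploration noise with the smaller $\sigma$; the $\theta$ and $dt$ values are common defaults that I am assuming, not necessarily the ones used in the implementation you followed:

import numpy as np

class OUNoise:
    def __init__(self, size, mu=0.0, theta=0.15, sigma=0.05, dt=1e-2, seed=0):
        self.mu, self.theta, self.sigma, self.dt = mu, theta, sigma, dt
        self.size = size
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        self.x = np.full(self.size, self.mu)

    def sample(self):
        dx = (self.theta * (self.mu - self.x) * self.dt
              + self.sigma * np.sqrt(self.dt) * self.rng.standard_normal(self.size))
        self.x = self.x + dx
        return self.x

noise = OUNoise(size=1)                                  # 1-dimensional action space
noisy_action = np.clip(0.3 + noise.sample(), -1.0, 1.0)  # add noise to the actor's output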

PS: Here is a link to the code I followed to build DDPG and here is a plot of rewards per episode.

",34010,,34010,,5/26/2020 13:35,5/26/2020 13:35,,,,6,,,,CC BY-SA 4.0 21446,2,,21439,5/24/2020 12:31,,3,,"

I know that a seed can be set to incorporate more determinism into the training. However, there could be other pseudo-random sequences that produce slightly better results?

That is correct. If you fix the seed for a process which inherently has stochastic behaviour by design (such as initialising neural network params), then what you know about the model is that it is the best one given the hyperparameters you have selected and that specific seed. Sometimes the value of the seed is highly relevant, other times less so.

Since results might be stochastic, how would researchers know what their best performing model is?

In general, as with any experiment where measurements are variable, by running the experiment multiple times and taking statistics over the set of results. This will give you a much better sense of how the algorithm does in general, independently of specific seeds. You can still fix your RNG seeds for repeatability, but you will need multiple sets of them.
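
A sketch of what that looks like in practice (with a placeholder standing in for a real training run):

import numpy as np

def train_and_evaluate(seed):
    rng = np.random.default_rng(seed)
    return 0.9 + 0.05 * rng.standard_normal()    # placeholder for a real training + evaluation

scores = [train_and_evaluate(seed) for seed in range(10)]
print(f'mean={np.mean(scores):.3f}, std={np.std(scores):.3f}, best={np.max(scores):.3f}')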

For certain goals, such as making the best possible model that you can, independently of whether the approach you take is ""best in general"", this is not necessary. A single run which creates a state-of-the-art performance is still of interest, for instance. Or if you are creating a model that you want to use in production, you may care less about the stability of the technique (and being ""lucky"") than being in possession of a high performing agent for the task.

",1847,,,,,5/24/2020 12:31,,,,0,,,,CC BY-SA 4.0 21448,1,,,5/24/2020 12:37,,1,39,"

I'm looking for someone who can help me clarify a few details regarding the architecture of the BERT model. Those details are necessary for me to reach a full understanding of the model, so your help would be really appreciated. Here are the questions:

  • Does the self-attention layer of the BERT model have parameters? Do the embeddings of words change ONLY according to the actual embeddings of other words when the sentence is passed through the self-attention layer?

  • Are the parameters of the embedding layer of the model (the layer which transforms the sequence of indices passed as input into a sequence of embeddings whose size equals the model size) trainable or not?

",37328,,30725,,6/1/2020 12:22,6/1/2020 12:22,Two questions about the architecture of Google Bert model (in particular about parameters),,0,0,,,,CC BY-SA 4.0 21449,1,,,5/24/2020 20:42,,1,124,"

I am using Q-learning to solve an engineering problem. The objective is to generate a Q-table associating state to Q-values.

I created a State vector DS = [s1, s2, ..., sN] containing all ""desired"" states. So the Q-table has the form of Q-table =[DS, Q-values].

On the other hand, my agent follows a trajectory. Playing action a at state s (which is a point of the trajectory) leads the agent to another state s' (another point of the trajectory). However, I don't have the s' state in the initially desired states vector DS.

One solution is to add new states to the DS vector while the Q-learning algorithm is running, but I do not want to add new states.

Any other ideas on how to handle this problem?

",37337,,30725,,5/31/2020 13:38,5/31/2020 13:38,Handle non-existing states in q-learning,,1,7,,,,CC BY-SA 4.0 21451,2,,8509,5/25/2020 4:50,,2,,"

From what I understand, you don't need to bother with a CNN: you have essentially perfectly structured images.

You can hand-code detectors to measure how filled in a circle is.

Basically, do template alignment and then search over the circles.

For example, a simple detector would measure the average blackness of the circle, which you could then threshold.
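
A minimal sketch of such a detector (my own toy example, assuming the image is already aligned and you know each circle's centre and radius):

import numpy as np

def fill_ratio(gray_image, row, col, radius, dark_threshold=128):
    # Fraction of dark pixels inside the circle centred at (row, col)
    rr, cc = np.ogrid[:gray_image.shape[0], :gray_image.shape[1]]
    mask = (rr - row) ** 2 + (cc - col) ** 2 <= radius ** 2
    return np.mean(gray_image[mask] < dark_threshold)

image = np.full((50, 50), 255, dtype=np.uint8)   # white page
image[18:33, 18:33] = 0                          # a pencilled-in region
print(fill_ratio(image, row=25, col=25, radius=8) > 0.5)   # True -> the bubble is filled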

",32390,,,,,5/25/2020 4:50,,,,0,,,,CC BY-SA 4.0 21453,2,,16714,5/25/2020 9:42,,1,,"

If you search Papers with Code for python ""machine learning"" (or a more specific query) you will get numerous results. Note these will be mostly scientific applications or methods.

",23503,,,,,5/25/2020 9:42,,,,1,,,,CC BY-SA 4.0 21454,2,,7488,5/25/2020 9:53,,2,,"

In addition to the ways the term topology is itself used generically to describe the ""shape"" of various aspects of Machine Learning, the term appears in the field Topological Data Analysis:

In applied mathematics, topological data analysis (TDA) is an approach to the analysis of datasets using techniques from topology. Extraction of information from datasets that are high-dimensional, incomplete and noisy is generally challenging. TDA provides a general framework to analyze such data in a manner that is insensitive to the particular metric chosen and provides dimensionality reduction and robustness to noise. Beyond this, it inherits functoriality, a fundamental concept of modern mathematics, from its topological nature, which allows it to adapt to new mathematical tools.

Some examples of its use in ML:

",23503,,23503,,5/25/2020 10:15,5/25/2020 10:15,,,,0,,,,CC BY-SA 4.0 21456,1,,,5/25/2020 10:28,,2,180,"

Reading through the CS229 lecture notes on generalised linear models, I came across the idea that a linear regression problem can be modelled as a Gaussian distribution, which is a form of the exponential family. The notes state that $h_{\theta}(x)$ is equal to $E[y | x; \theta]$. However, how can $h_{\theta}(x)$ be equal to the expectation of $y$ given input $x$ and $\theta$, since the expectation would require a sort of an averaging to take place?

Given x, our goal is to predict the expected value of $T(y)$ given $x$. In most of our examples, we will have $T(y) = y$, so this means we would like the prediction $h(x)$ output by our learned hypothesis h to satisfy $h(x) = E[y|x]$.

To show that ordinary least squares is a special case of the GLM family of models, consider the setting where the target variable y (also called the response variable in GLM terminology) is continuous, and we model the conditional distribution of y given x as a Gaussian $N(\mu,\sigma^2)$. (Here, $\mu$ may depend on $x$.) So, we let the ExponentialFamily($\eta$) distribution above be the Gaussian distribution. As we saw previously, in the formulation of the Gaussian as an exponential family distribution, we had $\mu = \eta$. So, we have $$h_{\theta}(x) = E[y|x; \theta] = \mu = \eta = \theta^Tx.$$

EDIT

Upon reading other sources, $y_i \sim N(\mu_i, \sigma^2)$, meaning that each individual output has its own normal distribution with mean $\mu_i$, and $h_{\theta}(x_i)$ is set as the mean of the normal distribution for $y_i$. In that case, it makes sense for the hypothesis to be assigned the expectation.

",32780,,-1,,6/17/2020 9:57,5/26/2020 2:49,Why is the hypothesis function $h_{\theta}(x)$ equivalent to $E[y | x; \theta]$ in generalised linear models?,,1,0,,,,CC BY-SA 4.0 21457,1,21476,,5/25/2020 11:19,,5,799,"

The AIMA book has an exercise about showing that an MDP with rewards of the form $r(s, a, s')$ can be converted to an MDP with rewards $r(s, a)$, and to an MDP with rewards $r(s)$ with equivalent optimal policies.

In the case of converting to $r(s)$ I see the need to include a post-state, as the author's solution suggests. However, my immediate approach to transform from $r(s,a,s')$ to $r(s,a)$ was to simply take the expectation of $r(s,a,s')$ with respect to s' (*). That is:

$$ r(s,a) = \sum_{s'} r(s,a,s') \cdot p(s'|s,a) $$

The authors however suggest a pre-state transformation, similar to the post-state one. I believe that the expectation-based method is much more elegant and shows a different kind of reasoning that complements the introduction of artificial states. However, another resource I found also talks about pre-states.

Is there any flaw in my reasoning that prevents taking the expectation of the reward and allow a much simpler transformation? I would be inclined to say no since the accepted answer here seems to support this. This answer mentions Sutton and Barto's book, by the way, which also seems to be fine with taking the expectation of $r(s, a, s')$.

This is the kind of existential question that bugs me from time to time and I wanted to get some confirmation.

(*) Of course, that doesn't work in the $r(s, a)$ to $r(s)$ case, as we do not have a probability distribution over the actions (that would be a policy, in fact, and that's what we are after).

",37359,,2444,,1/20/2021 17:05,1/20/2021 17:05,"How do I convert an MDP with the reward function in the form $R(s,a,s')$ to and an MDP with a reward function in the form $R(s,a)$?",,1,6,,,,CC BY-SA 4.0 21458,2,,21392,5/25/2020 11:51,,1,,"

There is a recent paper: Continuous-Discrete Reinforcement Learning for Hybrid Control in Robotics published by DeepMind that aims to solve this problem, as stated in the abstract:

Many real-world control problems involve both discrete decision variables – such as the choice of control modes, gear switching or digital outputs – as well as continuous decision variables – such as velocity setpoints, control gains or analogue outputs. However, when defining the corresponding optimal control or reinforcement learning problem, it is commonly approximated with fully continuous or fully discrete action spaces. These simplifications aim at tailoring the problem to a particular algorithm or solver which may only support one type of action space. Alternatively, expert heuristics are used to remove discrete actions from an otherwise continuous space. In contrast, we propose to treat hybrid problems in their ‘native’ form by solving them with hybrid reinforcement learning, which optimizes for discrete and continuous actions simultaneously.

The idea is that they use a hybrid policy that uses a Gaussian distribution for the continuous decision variables and a categorical distribution for the discrete decision variables. Then, they extend the Maximum a Posteriori Policy Optimisation (MPO) algorithm (also by DeepMind) to allow it to handle hybrid policies.
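
As a rough illustration of what such a hybrid policy head can look like (this is my own sketch, not the paper's implementation), the continuous and discrete parts are sampled from their own distributions and their log-probabilities are simply added:

import torch
from torch.distributions import Normal, Categorical

mean, log_std = torch.zeros(1, 2), torch.zeros(1, 2)   # continuous part: a 2-D Gaussian
logits = torch.zeros(1, 3)                             # discrete part: 3 control modes

cont_dist = Normal(mean, log_std.exp())
disc_dist = Categorical(logits=logits)

cont_action = cont_dist.sample()
disc_action = disc_dist.sample()
log_prob = cont_dist.log_prob(cont_action).sum(-1) + disc_dist.log_prob(disc_action)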

Here is a video showing how they used the resulting hybrid MPO policy in a robotics task, where in addition to the continuous actions, the robot can choose a discrete action which is the control mode to be used (coarse vs. fine).

",34010,,,,,5/25/2020 11:51,,,,0,,,,CC BY-SA 4.0 21460,2,,10644,5/25/2020 13:45,,2,,"

I found someone that has done this thing! You can hear a good explanation in Marcus Hutter's answer to this question about rewards given to AIXI. He describes a work that seems to be referring to this paper:

Universal Knowledge-Seeking Agents for Stochastic Environments

I'll edit this answer later with a full explanation of the approach, but essentially the idea is that you use an AIXI model that does optimal reinforcement learning, giving it a reward that is based on information gained (phrased in a careful way to avoid a few common pitfalls). As a result, it learns to choose actions that give it the most information possible to predict the impacts of it's actions. This results in a ""scientist"" like behaviour, and you could imagine it doing things like turing the entire earth into a supercollider to better understand some physics laws if it decides that is the best approach for gaining maximum information. It would probably also do plenty of very unethical psycology experiements, for example, if it ended up deciding that human actions were important to predict and understand.

It's not a ""safe"" singularity in that sense, but that's okay, I didn't require that. It's at least a formal definition. It requires doing some uncomputable things, but the hope of future research is that we can make close enough approximations to those uncomputable things to be good enough anyway.

I feel this theory is lacking any explanation of how feasible such a task is, since it uses uncomputable agents, so I won't accept it yet, but it's the best answer I've seen so far. I'll be watching future research closely to see if they can get a better handle on feasibility; there seems to be quite a bit of work that has gone on in finding computable approximations to AIXI. The reason I care about feasibility is that it is very relevant for mathematically deciding how plausible something like an ""intelligence explosion"" actually is. So if a theory doesn't talk about feasibility, it is missing out on a big piece of this question. Still, this theory seems hopeful. For example, maybe there are fundamental computational limits to maximizing some reward functions, and we can prove that even certain levels of approximation for this reward function aren't computable. That would be a really interesting negative result.

In general, I think the idea of using reinforcement learning and then choosing a reward function that tries to capture something intrinsic (such as ""curiosity"") is a very good approach to trying to formally define the singularity. I look forward to seeing other potential reward functions defined in the future; I don't expect this to be the only one.

",6378,,6378,,5/25/2020 14:23,5/25/2020 14:23,,,,1,,,,CC BY-SA 4.0 21461,1,,,5/25/2020 15:33,,1,25,"

Let's assume that we forecast some metric partially based on weather features, e.g. temperature and pressure. We can potentially obtain forecasts of those features from one of the public weather APIs, so we have some information about their future values, which could allow a more precise prediction of the parameter taken as the label.

What approach should be used when one or more of the features in a multivariate forecasting problem have forecasted values available for the prediction horizon? It seems that, in this case, not only historical values but also the predicted values could be used, though it is not clear how to organize the model (e.g. a multivariate LSTM) to take advantage of them.

",37364,,,,,5/25/2020 15:33,"How to make a multivariate forecasting if one of features becomes known for the future with some confidence level, e.g. weather forecast data",,0,0,,,,CC BY-SA 4.0 21462,2,,21449,5/25/2020 16:13,,2,,"

You have two options, either interpolate or restrict the actions only to values that produce states which are in your state vector.

The simplest interpolation scheme is a linear interpolation, which works as follows (assuming DS contains a set of grid points in increasing order). For a state $s'$ you can locate its closest neighbours from the array DS and the value in state $s'$ will, then, be a weighted average of the values in those neighbouring states. Formally, $$Q(s') = \frac{s_{i+1} - s'}{s_{i+1}-s_i}Q(s_i) + \frac{s'-s_i}{s_{i+1}-s_i}Q(s_{i+1}),$$ for $s_i < s'< s_{i+1}$, where $s_i$ and $s_{i+1}$ are the neighbours, $s_i$ is the $i$-th element of DS (i.e. si = DS[i] = Q-table[i,1]), and $Q(s_i)$ is related to the Q-table as Qsi = Q-table[i,2] (assuming array indexing starts from 1).
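For illustration, here is a minimal NumPy sketch of this lookup. It assumes the grid states and their values are stored in two aligned 1-D arrays (named DS and Q_values here) instead of the two-column Q-table above:

import numpy as np

def interpolate_q(s_prime, DS, Q_values):
    # DS       : 1-D array of grid states in increasing order
    # Q_values : 1-D array where Q_values[i] is the value stored for DS[i]
    # Clamp to the grid boundaries so we never extrapolate
    if s_prime <= DS[0]:
        return Q_values[0]
    if s_prime >= DS[-1]:
        return Q_values[-1]
    # Index i such that DS[i] <= s_prime <= DS[i + 1]
    i = np.searchsorted(DS, s_prime) - 1
    w = (s_prime - DS[i]) / (DS[i + 1] - DS[i])
    # Weighted average of the two neighbouring values, as in the formula above
    return (1 - w) * Q_values[i] + w * Q_values[i + 1]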

Restricting the actions would work as follows. For simplicity, assume that the agent chooses the next state directly, i.e. $s'=a$. Then, if you have an array of actions A = [a1, a2, ... , aM], $M \le N$ then each $a_i$ needs to be present in the state array DS (i.e. A is a subset of DS, formally $A \subseteq S$). This may not be desirable but it is an option.

",11495,,11495,,5/27/2020 7:43,5/27/2020 7:43,,,,1,,,,CC BY-SA 4.0 21464,1,,,5/25/2020 18:21,,2,154,"

I am a beginner in TensorFlow as well as in AI. I am basically from Pharma background and learning AI from scratch.

I have data with 5038 input features (Float64) and 826 output columns (categorical, with multiple labels in each column). I have used one-hot encoding, but the neural network tackles only one output at a time.

[1] How can I process all 826 outputs (which give 6689 one-hot outputs) at once in a neural network? Here is the code that I am using. [2] I am getting only 31% accuracy, and I reach this accuracy already in the second epoch; from the second or third epoch onwards, the accuracy and the other metrics stay constant. Am I doing something wrong in the code here?

import tensorflow as tf  # tf is used below but was not imported in the original snippet

dataset = df.values  # df is assumed to be a pandas DataFrame loaded earlier
X = dataset[:,0:5038]/220
Y_smile = dataset[:,5038 :5864]

from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder(handle_unknown='ignore')
enc.fit(Y_smile)
OneHotEncoder(handle_unknown='ignore')
enc.categories_
Y = enc.transform(Y_smile).toarray()
print(Y,Y.shape, Y.dtype)

from sklearn.model_selection import train_test_split
X_train, X_val_and_test, Y_train, Y_val_and_test = train_test_split(X, Y, test_size=0.3)
X_val, X_test, Y_val, Y_test = train_test_split(X_val_and_test, Y_val_and_test, test_size=0.5)

import numpy as np
X_train = np.asarray(X_train).astype(np.float64)
X_val = np.asarray(X_val).astype(np.float64)
X_test = np.asarray(X_test).astype(np.float64)

filepath = ""bestmodelweights.hdf5""
checkpoint = [tf.keras.callbacks.ModelCheckpoint(filepath, monitor='val_accuracy', mode='auto', save_best_only=True, save_weights_only=True, verbose=1), 
              tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=5, verbose =1)]

model = tf.keras.Sequential([
                             tf.keras.layers.Dense(1024, activation='relu', input_shape=(5038,)),
                             tf.keras.layers.Dense(524, activation='relu'),
                             tf.keras.layers.Dense(524, activation='relu'),
                             tf.keras.layers.Dense(1024, activation='relu'),
                             tf.keras.layers.Dense(1024, activation='relu'),
                             tf.keras.layers.Dense(1024, activation='relu'),
                             tf.keras.layers.Dense(1024, activation='relu'),
                             tf.keras.layers.Dense(1024, activation='relu'),
                             tf.keras.layers.Dense(1024, activation='relu'),
                             tf.keras.layers.Dense(1024, activation='relu'),
                             tf.keras.layers.Dense(1024, activation='relu'),
                             tf.keras.layers.Dense(1024, activation='relu'),
                             tf.keras.layers.Dense(1024, activation='relu'),
                             tf.keras.layers.Dense(1024, activation='relu'),                        
                             tf.keras.layers.Dense(6689, activation= 'softmax')])

model.compile(optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.BinaryCrossentropy(from_logits = True), metrics=['accuracy'])

hist = model.fit(X_train, Y_train, epochs= 200, callbacks=[checkpoint],validation_data=(X_val, Y_val))
",34540,,30725,,5/30/2020 11:04,5/30/2020 11:04,How to use one-hot encoding for multiple columns (multi-class) with varying number of labels in each class?,,0,0,,,,CC BY-SA 4.0 21465,1,,,5/25/2020 19:59,,1,50,"

I was working on a CNN for HDR image generation from LDR images. I used an encoder-decoder architecture and merged the input with the decoder output. However, I'm getting some banding artifacts in the model prediction, as shown below.

1) Input LDR image

2) Ground Truth HDR

3) Predicted output

Notice the fine bands in the prediction. What might be causing these bands? Also, I have trained for only 20 epochs so far. Is the problem due to inadequate training? Here's my model:

class TestNet2(nn.Module):
    def __init__(self):
        super(TestNet2, self).__init__()

        def enclayer(nIn, nOut, k, s, p, d=1):
            return nn.Sequential(
                nn.Conv2d(nIn, nOut, k, s, p, d), nn.SELU(inplace=True)
            )

        def declayer(nIn, nOut, k, s, p):
            return nn.Sequential(
                nn.ConvTranspose2d(nIn, nOut, k, s, p), nn.SELU(inplace=True)
            )

        self.encoder = nn.Sequential(
            enclayer(3, 64, 5, 3, 1),
            #nn.MaxPool2d(2, stride=2),
            enclayer(64, 128, 5, 3, 1),
            #nn.MaxPool2d(2, stride=2),
            enclayer(128, 256, 5, 3, 1),
            #nn.MaxPool2d(2, stride=2),
        )
        self.decoder = nn.Sequential(
            declayer(256, 128, 5, 3, 1),
            declayer(128, 64, 5, 3, 1),
            declayer(64, 3, 5, 3, 1),
        )

    def forward(self, x):
        xc = x
        x = self.encoder(x)
        x = self.decoder(x)
        x = F.interpolate(
            x, (512, 512), mode='bilinear', align_corners=False
        )
        x = x + xc
        return x
",36767,,,,,5/25/2020 19:59,Banding artifacts in CNN,,0,0,,,,CC BY-SA 4.0 21466,1,,,5/25/2020 23:08,,1,58,"

I am working on stock price prediction project, I am using the support vector regression (SVR) model for it.

I am splitting my data into train and test sets, and I am getting high accuracy on the test data after fitting the model.

But now, when I try to use another dataset, which I separated out from the original dataset before doing anything else, the model gives me very bad results. Can anyone tell me what's happening?

Looking forward to your response.

",33670,,2444,,5/25/2020 23:47,5/25/2020 23:47,Why is the accuracy of my model very low on a separate dataset from the training and test datasets?,,0,2,,,,CC BY-SA 4.0 21468,2,,21456,5/26/2020 2:49,,1,,"

In generalised Linear models, each output variable $y_i$ is modelled as a distribution from the exponential family, with the hypothesis function $h_{\theta}(x)$ for a given $\theta$ as the expected value of $y_i$ and maximum likelihood estimation is usually the method used to solve GLM's.

",32780,,,,,5/26/2020 2:49,,,,0,,,,CC BY-SA 4.0 21469,1,,,5/26/2020 2:56,,2,64,"

Can weighted importance sampling (WIS) and importance sampling (IS) be applied to off-policy evaluation for continuous state spaces MDPs?

Given that I have trajectories of $(s_t,a_t)$ pairs and the behavior policy distribution $\pi_b(a_t | s_t)$ can be approximated with a neural network.

In a paper I came across, the authors say that IS can be used for continuous states, whereas WIS cannot be used with function approximation. I am not sure why WIS cannot be applied to the continuous case while IS can, since both of these techniques seem similar.

",32780,,2444,,5/26/2020 10:43,5/26/2020 10:43,Can weighted importance sampling be applied to off-policy evaluation for continuous state space MDPs?,,0,0,,,,CC BY-SA 4.0 21470,1,,,5/26/2020 5:12,,1,147,"

This is a general question.

Is there a general file type associated with AI projects?

Photoshop = .psd
Excel = .csv
Artificial Intelligence = ?
",37373,,2444,,5/26/2020 10:57,5/26/2020 10:57,Is there a general file type associated with AI projects?,,1,2,,,,CC BY-SA 4.0 21472,2,,21470,5/26/2020 6:25,,3,,"

No, there is no file type associated with AI projects in general.

Your examples of Photoshop and Excel are specific corporate branded products. These store bespoke data that only works with those products (plus maybe a few converters that can read the files for competitor products).

Even more general examples such as .jpg for images or .txt for text documents are not a good match to AI in general. AI is such a broad field, it is next to impossible to define a standard set of components of an ""AI project"" in order to build a single file format that could handle the contents of all AI projects.

That said, practical work in AI is likely to include use of specific file formats and extensions, depending on what you do. For instance if you work with Python and TensorFlow you are likely to use .py files for your source code, .ckpt for neural network training checkpoints, .pb for saved models. The last two extensions - ckpt and pb - are only semi-formal naming conventions though.

This lack of a single file extension, and use of maybe a dozen ones that you would be familiar with on a project, is true of most coding and software development work. If you work with an integrated development environment (IDE), then you might have a single ""master"" file that allows you to load up all the resources in the project to work on them. That is entirely optional though, and may have any of a number of extensions depending on which IDE you use.

As a quick example, the Transformers project on github is a popular resource for natural language processing, a topic often considered as part of AI. A quick look at that project shows:

  • Various development tool configuration files either starting with . or with various extensions - .cfg, .yaml
  • .py files for Python source code
  • Script files with .sh extension or with extension removed for convenience of using on command line
  • Documentation written with .txt, .md and .rst extensions.
  • Some .ipynb files used for Jupyter notebooks - a format that bundles documentation, Python scripts and their output for sharing work.
",1847,,1847,,5/26/2020 6:46,5/26/2020 6:46,,,,0,,,,CC BY-SA 4.0 21474,1,,,5/26/2020 9:43,,1,33,"

I am trying to implement an extractive text summarization model. I am using Keras and TensorFlow. I have used BERT sentence embeddings, which are fed into an LSTM layer and then into a Dense layer with a sigmoid activation function. I have used the Adam optimizer and binary cross-entropy as the loss function. The input to the model is the sentence embeddings.

The training labels (y) form a 2D array, i.e. [array_of_documents[array_of_binary_labels_for_each_sentence]].

The problem is that, during training, I am getting a training accuracy of around 0.22 and a loss of 0.6.

How can I improve my accuracy for the model?

",37006,,,,,5/26/2020 9:43,Low accuracy during training for text summarization,,0,0,,,,CC BY-SA 4.0 21475,1,,,5/26/2020 10:28,,1,59,"

I started thinking about the fairness of machine learning models recently. Wiki page for Fairness_(machine_learning) defines fairness as:

In machine learning, a given algorithm is said to be fair, or to have fairness if its results are independent of some variables we consider to be sensitive and not related to it (f.e.: gender, ethnicity, sexual orientation, etc.).

UC Berkley CS 294 in turn defines fairness as:

understanding and mitigating discrimination based on sensitive characteristics, such as, gender, race, religion, physical ability, and sexual orientation

Many other resources, like Google in the ML Fairness limit the fairness to these aforementioned categories and no other categories are considered.

But fairness has a much broader context than just these few categories: you could easily add a few more, like IQ, height or beauty, i.e., anything that could have a real impact on your credit score, school application or job application. Some of these categories may not be common in existing datasets nowadays, but, given the exponential growth of data, they soon will be, to the extent that we will have an abundance of data about every individual, with all their physical and mental attributes mapped into the datasets.

Then the question would be how to define fairness given all these categories present in the datasets. Will it even be possible to define fairness once all physical and mental dimensions are considered? It seems that, when we do so, all the weights in, say, the neural nets would have to be exactly the same, i.e., providing no discrimination in any way or form towards or against any physical or mental category of a human being. That means that a machine learning system that is fair across all possible dimensions would have no way of distinguishing one human being from another, which would render such machine learning models useless.

To wrap it up, while it does make perfect sense to remove bias towards or against any individual with respect to categories like gender, ethnicity, sexual orientation, etc., the set of categories is not closed, and with an increasing number of categories being added to it, we will inevitably arrive at a point where no discrimination (in a statistical sense) is possible.

And that's why my question: are fair machine learning models possible? Or perhaps the only possible fair machine learning models are those that arbitrarily include some categories but ignore others, which, of course, is far from being fair.

",15780,,30725,,6/1/2020 12:22,6/1/2020 12:22,Is it possible to create a fair machine learning system?,,0,7,,,,CC BY-SA 4.0 21476,2,,21457,5/26/2020 11:37,,3,,"

I think I may be in position to answer my own question. The Bellman equation (for the optimal policy) for a MDP with $r(s,a,s')$ rewards would look like this:

$$V(s) = \max_a \left\{ \sum_{s'} p(s'|s,a)(r(s,a,s') + \gamma V(s')) \right\} $$ $$V(s) = \max_a \left\{ \sum_{s'} p(s'|s,a) \cdot r(s,a,s') + \gamma \sum_{s'} p(s'|s,a) \cdot V(s') \right\} $$

Now, $ \sum_{s'} p(s'|s,a) \cdot r(s,a,s') $ is precisely $ \mathbb{E}\left[ r(s,a,s') | s,a \right] = r(s,a) $.

So all in all, the resulting Bellman equation looks like this:

$$V(s) = \max_a \left\{r(s,a) + \gamma \sum_{s'} p(s'|s,a) \cdot V(s') \right\} $$

It is clear, then, that a process with $ r(s,a,s') $ rewards can be transformed to a $ r(s,a) $ process without introducing artificial states and maintaining the optimal policies.
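As a small illustration of this reduction (my own sketch, with the dynamics assumed to be given as dense NumPy arrays), computing $r(s,a)$ from $r(s,a,s')$ and then running a value-iteration step on it looks like this:

import numpy as np

def reduce_reward(P, R3):
    # P[s, a, s2]  = p(s2 | s, a),  shape (S, A, S)
    # R3[s, a, s2] = r(s, a, s2),   shape (S, A, S)
    # returns R2[s, a] = E[ r(s, a, s') | s, a ] = sum_s' p(s'|s,a) r(s, a, s')
    return (P * R3).sum(axis=2)

def value_iteration_step(V, P, R2, gamma=0.9):
    # V(s) = max_a { r(s, a) + gamma * sum_s' p(s'|s,a) V(s') }
    return (R2 + gamma * (P @ V)).max(axis=1)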

As a side note unrelated to the question itself, that leads me to believe that $ r(s,a,s') $ functions may be convenient in some scenarios, but they do not add ""expressive power"", in the sense that they do not allow us to model problems more compactly (as happens when we extend $ r(s) $ to $ r(s,a) $).

",37359,,37359,,5/26/2020 13:41,5/26/2020 13:41,,,,3,,,,CC BY-SA 4.0 21477,1,21668,,5/26/2020 13:34,,2,1451,"

I was reading here tips & tricks for training in DRL and I noticed the following:

  • always normalize your observation space when you can, i.e., when you know the boundaries
  • normalize your action space and make it symmetric when continuous (cf potential issue below) A good practice is to rescale your actions to lie in [-1, 1]. This does not limit you as you can easily rescale the action inside the environment

I am working on a discrete action space, but it is quite difficult to normalize my states when I don't actually know the full range of each feature (only an estimate).

How does this affect training? And, more specifically, why do we also need to normalize the action values for continuous action spaces?

",35978,,2444,,5/26/2020 14:19,6/5/2020 16:53,Why do we also need to normalize the action's values on continuous action spaces?,,1,0,,,,CC BY-SA 4.0 21478,2,,12411,5/26/2020 15:51,,0,,"

Branch and Bound is similar to an exhaustive search, except it incorporates a method for computing lower bounds on branches. If the lower bound on a given branch is greater than the upper bound on the problem (i.e. the current best solution encountered), that branch can be discarded since it will never produce an optimal solution.

Hence, since you explore all options except those you know will produce values less optimal than your current best value, you are guaranteed to encounter the global optimum.

Note this is a generic algorithm, and you will need to reference a specific implementation if you want proof of why it satisfies these criteria.
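Purely as an illustration of the generic scheme (not a specific published implementation), a depth-first branch and bound for a minimisation problem can be sketched as follows; the branching rule, the lower bound and the objective are placeholders you would have to supply:

def branch_and_bound(root, lower_bound, branch, is_complete, value):
    # root        : the initial (empty) partial solution
    # lower_bound : partial solution -> lower bound on any completion of it
    # branch      : partial solution -> iterable of child partial solutions
    # is_complete : partial solution -> True if it is a full solution
    # value       : full solution -> its objective value
    best_value, best_solution = float("inf"), None
    stack = [root]
    while stack:
        node = stack.pop()
        # Prune: no completion of this branch can beat the incumbent
        if lower_bound(node) >= best_value:
            continue
        if is_complete(node):
            v = value(node)
            if v < best_value:
                best_value, best_solution = v, node
        else:
            stack.extend(branch(node))
    return best_solution, best_value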

",23503,,23503,,5/26/2020 16:26,5/26/2020 16:26,,,,0,,,,CC BY-SA 4.0 21479,2,,19849,5/26/2020 16:15,,3,,"

ReLU is non-linear by definition

In calculus and related areas, a linear function is a function whose graph is a straight line, that is a polynomial function of degree one or zero.

Since the graph of the ReLU function $f(x) = \max(0,x)$ is not a straight line (equivalently, it cannot be expressed in the form $f(x) = mx + c$), by definition it is not linear.

ReLU is piecewise linear

ReLU is piecewise linear on the intervals $(-\infty,0]$ and $[0,\infty)$:

$$ f(x) = \max(0,x) = \begin{cases} 0 & x \le 0\\ x & x \gt 0\\ \end{cases} $$

But the function is still non-linear on its entire domain, as the quick check below illustrates:
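(This numerical check is added here for illustration.) A function of the form $f(x) = mx + c$ would put any three sample points on a single straight line, which ReLU does not:

relu = lambda x: max(0.0, x)

# Three sample points; for f(x) = m*x + c they would have to be collinear
xs = [-1.0, 0.0, 1.0]
ys = [relu(x) for x in xs]                        # [0.0, 0.0, 1.0]

slope_left = (ys[1] - ys[0]) / (xs[1] - xs[0])    # 0.0
slope_right = (ys[2] - ys[1]) / (xs[2] - xs[1])   # 1.0
print(slope_left == slope_right)                  # False -> no single straight line fits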

",23503,,23503,,5/26/2020 16:34,5/26/2020 16:34,,,,0,,,,CC BY-SA 4.0 21480,1,,,5/26/2020 20:41,,2,99,"

The following mindmap gives an overview of multiple reasons for sample inefficiency. The list is definitely not complete. Can you see another reason not mentioned so far?

Some related links:

",35821,,,,,5/26/2020 20:41,Can you find another reason for sample inefficiency of model-free on-policy Deep Reinforcement Learning?,,0,0,,,,CC BY-SA 4.0 21481,2,,17883,5/26/2020 22:25,,0,,"

My immediate suggestion would be to zero-fill the missing values, but I recalled the below comment suggesting a more sophisticated method:

Karim: How to deal with different size of feature vectors?

Nabila: That's a problem I'm actually working on. I've seen that you can create separate networks for each type of node feature, and sort of project them - so train them separately, and project them to the same size.

Or you can do concatenation so you don't have to worry about that, but at some point they all need to be the same size to do classification at the end.

",23503,,-1,,6/17/2020 9:57,5/26/2020 22:25,,,,0,,,,CC BY-SA 4.0 21482,1,,,5/27/2020 0:52,,1,56,"

There are several different angles we can classify Reinforcement Learning methods from. We can distinguish three main aspects :

  • Value-based and policy-based
  • On-policy and off-policy
  • Model-free and model-based

Historically, due to their sample efficiency, model-based methods have been used in the robotics field and in other industrial control settings. That happened because of the cost of the hardware and the physical limitations on the number of samples that could be obtained from a real robot. Robots with a large number of degrees of freedom are not widely accessible, so RL researchers are more focused on computer games and other environments where samples are relatively cheap. However, the ideas from robotics are infiltrating, so, who knows, maybe model-based methods will come into focus quite soon.

As we know, ""model"" means the model of the environment, which could have various forms, for example, providing us with a new state and reward from the current state and action. From what I have seen so far, all the methods (i.e. A3C, DQN, DDPG) put zero effort into predicting, understanding, or simulating the environment. What we are interested in is proper behavior (in terms of the final reward), specified directly (a policy) or indirectly (a value), given the observation. The source of observations and reward is the environment itself, which in some cases could be very slow and inefficient.

In a model-based approach, we're trying to learn the model of the environment to reduce the ""real environment"" dependency. If we have an accurate environment model, our agent can produce any number of trajectories that it needs, simply by using the model instead of executing the actions in the real world.

Question:

I am interested in a day trading environment. Is it possible to use a model-based approach to build an accurate model of a day trading environment?

",35626,,35626,,5/27/2020 12:13,5/27/2020 12:13,Using a model-based method to build an accurate day trading environment model,,0,3,,,,CC BY-SA 4.0 21484,1,,,5/27/2020 9:50,,1,208,"

There are a lot of examples of balancing a pole (see image below) using reinforcement learning, but I find that almost all examples start close to the upright position.

Is there any good source (or paper) for when the pole actually starts all the way at the bottom?

",5344,,2444,,5/27/2020 11:40,5/27/2020 14:31,"Is there any good source for when the pole actually starts all the way at the bottom, in the cartpole problem?",,2,0,,,,CC BY-SA 4.0 21485,1,21486,,5/27/2020 10:19,,2,2426,"

I have recently watched David silver's course, and started implementing the deep Q-learning algorithm.

I thought I should make a switch between the Q-target and Q-current directly (meaning, every parameter of Q-current is copied to Q-target), but I found a repository on GitHub where the author updates the Q-target as follows:

$$Q_{\text{target}} = \tau \cdot Q_{\text{current}} + (1 - \tau) \cdot Q_{\text{target}}.$$

where $\tau$ is some number probably between 0 and 1.

Is that update correct, or am I missing something?

I thought that, after some number of iterations (e.g. every 2000 iterations), we should update the Q-target as: $Q_{\text{target}}=Q_{\text{current}}$.

",36107,,2444,,5/27/2020 19:35,5/27/2020 19:35,How and when should we update the Q-target in deep Q-learning?,,1,0,,,,CC BY-SA 4.0 21486,2,,21485,5/27/2020 11:43,,5,,"

The update form $\theta^{\prime} \leftarrow \tau \theta+(1-\tau) \theta^{\prime}$ (where $\theta'$ and $\theta$ represent the weights of the target network and the current network, respectively) does exist and is correct.

It is called soft update and it has been used in the Deep Deterministic Policy Gradient (DDPG) paper, which uses the concept of a target network like DQN. The authors of the paper state that:

The weights of these target networks are then updated by having them slowly track the learned networks: $\theta ' \leftarrow \tau \theta + (1 − \tau )\theta'$ with $\tau << 1$. This means that the target values are constrained to change slowly, greatly improving the stability of learning.

This update will be made in each time step as follows. For example, for $\tau= 0.001$, the new weights for the target network will take $0.1\%$ of the main network’s weights and $99.9 \%$ of the old target network weights. This does not go against the purpose of fixed target networks (which have been introduced to address the problem of “moving targets”). In fact, by keeping $99.9\%$ of the old target network weights, they can still be considered as fixed.
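In code, the two update schemes are often written roughly as follows (a PyTorch-style sketch, not taken from either paper; target_net and main_net are assumed to be torch.nn.Module instances with identical architectures):

def soft_update(target_net, main_net, tau=0.001):
    # theta' <- tau * theta + (1 - tau) * theta', applied at every time step
    for target_param, param in zip(target_net.parameters(), main_net.parameters()):
        target_param.data.copy_(tau * param.data + (1.0 - tau) * target_param.data)

def hard_update(target_net, main_net):
    # theta' <- theta, applied only every C steps (10000 in the DQN paper)
    target_net.load_state_dict(main_net.state_dict())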

Seeing that this resulted in improvements in DDPG, some DQN implementations/tutorials started to use soft updates for the target network.

This is opposed to the hard update scheme used in the original DQN paper, i.e. the weights are copied every $C$ steps. This means that the target network is kept fixed for $C$ steps (10000 in the paper) and then gets a significant update.

",34010,,34010,,5/27/2020 12:33,5/27/2020 12:33,,,,3,,,,CC BY-SA 4.0 21487,2,,21484,5/27/2020 11:57,,2,,"

It is difficult to prove a negative, but I doubt there will be a paper on that specific problem. It should be relatively easy to adjust the environment or write a new one that does this if you wished though.

A very similar environment that does have a lot more written about it is Acrobot, which has an OpenAI Gym version. Instead of a cart on a track with forces applied and a free joint to the pole, there is a longer pole fixed to a free joint, with an active joint in the middle (that the agent can apply forces to). It can be thought of as a very simple model of an acrobat on a trapeze swing (with poles instead of chains, so the swing is stable when balanced upside-down).

The degrees of freedom and difficulty of the Acrobot task are similar to CartPole - I would rate it as a harder problem overall, but if you started CartPole with the pole hanging down in fact the two problems are very similar. Usually the joint motor is made too weak to achieve the goal of balancing in a single clean action, and the agent must learn to build momentum over a few swings, before moving to the balance point. That also makes both your alternate start position CartPole and Acrobot similar to MountainCar.

For a low-effort look at a very similar problem you could try Acrobot. Otherwise you will likely need to do some custom work on CartPole.

",1847,,,,,5/27/2020 11:57,,,,0,,,,CC BY-SA 4.0 21488,1,21489,,5/27/2020 12:44,,6,299,"

The following paragraph about $\epsilon$-greedy policies can be found at the end of page 100, under section 5.4, of the book "Reinforcement Learning: An Introduction" by Richard Sutton and Andrew Barto (second edition, 2018).

but with probability $\varepsilon$ they instead select an action at random. That is, all nongreedy actions are given the minimal probability of selection, $\frac{\varepsilon}{|\mathcal{A}(s)|}$, and the remaining bulk of the probability, $1-\varepsilon+\frac{\varepsilon}{|\mathcal{A}(s)|}$, is given to the greedy action. The $\varepsilon$-greedy policies are

So, the non-greedy actions are given the probability $\frac{\varepsilon}{|\mathcal{A}(s)|}$, and the greedy action is given the probability $1-\varepsilon+\frac{\varepsilon}{|\mathcal{A}(s)|}$. All clear up to this point.

However, I have a doubt about the policy improvement theorem that is mentioned on page 101, under section 5.4. I have enclosed a copy of this proof for your convenience:

$$ \begin{aligned} q_{\pi}(s, \pi^{\prime}(s)) &=\sum_{a} \pi^{\prime}(a \mid s) q_{\pi}(s, a) \\ &=\frac{\varepsilon}{|\mathcal{A}(s)|} \sum_{a} q_{\pi}(s, a)+(1-\varepsilon) \max _{a} q_{\pi}(s, a) \\ & \geq \frac{\varepsilon}{|\mathcal{A}(s)|} \sum_{a} q_{\pi}(s, a)+(1-\varepsilon) \sum_{a} \frac{\pi(a \mid s)-\frac{\varepsilon}{|\mathcal{A}(s)|}}{1-\varepsilon} q_{\pi}(s, a)\\ &=\frac{\varepsilon}{|\mathcal{A}(s)|} \sum_{a} q_{\pi}(s, a)-\frac{\varepsilon}{|\mathcal{A}(s)|} \sum_{a} q_{\pi}(s, a)+\sum_{a} \pi(a \mid s) q_{\pi}(s, a) \\ &=v_{\pi}(s) . \end{aligned} $$

My question is shouldn't the greedy action be chosen with a probability of $1-\varepsilon+\frac{\varepsilon}{|\mathcal{A}(s)|}$?

The weighting factors are probability values, so they should add up to 1, but in the proof above they do not seem to. With this argument, the proof (with a slight modification) would be:

$$ \begin{aligned} q_{\pi}(s, \pi^{\prime}(s)) &=\sum_{a} \pi^{\prime}(a \mid s) q_{\pi}(s, a) \\ &=\frac{\varepsilon}{|\mathcal{A}(s)|} \sum_{a} q_{\pi}(s, a)+ \left( 1-\varepsilon+\frac{\varepsilon}{|\mathcal{A}(s)|} \right) \max _{a} q_{\pi}(s, a) \\ & \geq \frac{\varepsilon}{|\mathcal{A}(s)|} \sum_{a} q_{\pi}(s, a)+\left(1-\varepsilon+\frac{\varepsilon}{|\mathcal{A}(s)|} \right) \sum_{a} \frac{\pi(a \mid s)-\frac{\varepsilon}{|\mathcal{A}(s)|}}{1-\varepsilon+\frac{\varepsilon}{|\mathcal{A}(s)|} } q_{\pi}(s, a)\\ &=\frac{\varepsilon}{|\mathcal{A}(s)|} \sum_{a} q_{\pi}(s, a)-\frac{\varepsilon}{|\mathcal{A}(s)|} \sum_{a} q_{\pi}(s, a)+\sum_{a} \pi(a \mid s) q_{\pi}(s, a) \\ &=v_{\pi}(s) . \end{aligned} $$

Though the end result isn't changed, I just want to know what I am conceptually missing, in order to understand the proof that is originally provided.

",37181,,2444,,4/3/2022 15:59,4/3/2022 15:59,Is this proof of $\epsilon$-greedy policy improvement correct?,,1,0,,,,CC BY-SA 4.0 21489,2,,21488,5/27/2020 13:29,,6,,"

The weights do sum to one. Note that in the second line where we have $$\frac{\epsilon}{|\mathcal{A}(s)|} \sum_a q_{\pi}(s,a) + (1-\epsilon)\max_aq_{\pi}(s,a) \; ,$$ the sum is over the whole action space, including the greedy action, so the sum of the weights will be $\frac{\epsilon}{|\mathcal{A}(s)|} \times |\mathcal{A}(s)| + (1-\epsilon) = 1$.

",36821,,36821,,5/28/2020 13:39,5/28/2020 13:39,,,,0,,,,CC BY-SA 4.0 21490,1,,,5/27/2020 14:04,,2,35,"

I'm looking for a task that first predicts a discrete label (classification), and then predicts multiple continuous attributes of the predicted class. I found some papers about multi-output regression, but they weren't what I wanted. Perhaps such a task exists in robot control or in video games, but I haven't found it yet. At the same time, I want to train within a supervised learning framework, not reinforcement learning.

",37401,,,,,5/27/2020 14:04,Is there a classification task with multiple attribute regression?,,0,1,,,,CC BY-SA 4.0 21491,2,,21484,5/27/2020 14:31,,2,,"

Almost certainly, there is no such paper since that would be a trivial problem. The pole lying flat is the definition of failure, hence game over. If you started in that position, you would be permanently in the game-over state and you would never learn anything.

The reason is that if the pole is lying flat, then, if you apply a force on the cart (in the same direction as the pole is pointing, say), the pole head moves in exactly the same direction as the cart (i.e. the directional vectors of the cart and pole head movements are identical). Hence, the pole head never moves upwards.

In fact, I am fairly certain that, below a certain angle with the surface, the pole can no longer be stabilized. This should be possible to prove from the dynamic equations governing the movement of the cart and the pole. This may not be easy, though, and definitely not easy for me (these are second-order differential equations). Anyway, with this in mind, you can see why one needs to start close enough to the stationary point for the problem to have a solution.

If the pole could go below the surface and swing, however, as here, it would be similar to the acrobot problem mentioned by Neil and you could start anywhere.

",11495,,,,,5/27/2020 14:31,,,,3,,,,CC BY-SA 4.0 21493,1,,,5/27/2020 18:17,,1,115,"

I'm new to working with neural networks and have recently begun implementing neural networks for time series forecasting in some of my work. I've been using Echo State Networks in particular and have been doing some reading to understand how they work. For the most part, things seem pretty straightforward, but I'm confused as to why we use a 'delay' when feeding our input data (the delay concept mentioned in the paper Harnessing Nonlinearity: Predicting Chaotic Systems and Saving Energy in Wireless Communication).

I'm looking at some source code on GitHub, and they implement this delay as well (they feed two arrays, inputData and targetData, into the network, where one is delayed by one element relative to the other). I am noticing that the larger the delay, the worse the fit.

Why is this done? My eventual goal is to forecast past the end of the sample data.

",37403,,2444,,5/27/2020 19:32,10/26/2022 0:03,Why do we use a delay when feeding our input data to the echo state network?,,1,0,,,,CC BY-SA 4.0 21496,1,,,5/27/2020 19:30,,2,46,"

I was thinking about training a neural network to colourize images. The input would be the luminosity/value for each pixel, and the output would be a hue and/or saturation. Training data would be easily obtained just by selecting the luminosity/value channel from a full colour image.

Suppose all channels are scaled to 0.0-1.0, there is a problem with pixels whose hue is nearly 0.0 or nearly 1.0.

  • The input data may have sharp discontinuities in hue which are not visible to the human eye. This is an unstable, illusory boundary which seems like it would destabilize the training.
  • Also, if the network outputs a value of 1.001 instead of 0.001, then this should NOT be penalised, since the two values represent essentially the same hue.

Possible workarounds might be to preprocess the image to remap e.g. 0.99 to -0.01 if that pixel is near a region dominated by near-0 hues, or similarly to remap e.g. 0.01 to 1.01 if that pixel is near a region dominated by near-1 hues. This has its own problems. Similarly, outputs could be wrapped to the range 0-1 before being scored.

But is there a better way to encode cyclic values such as hue so that they will naturally be continuous?

One solution I thought of would be to treat (hue, saturation) as a (theta, r) polar coordinate and translate this to Cartesian (x, y) and have that be the training target, but I don't know how this change of colour space will affect things (it might be fine, I haven't tried it yet).

Are there alternative colour representations which are better suited to machine learning?

",22279,,,,,5/27/2020 20:37,How could a NN be trained to output a cyclic (e.g. hue) number?,,1,0,,,,CC BY-SA 4.0 21497,2,,21496,5/27/2020 20:37,,2,,"

Your solution is pretty much on spot. It corresponds to the YUV scheme used in television and designed to match human perception characteristics. As you already noticed, such an encoding wouldn't suffer from discontinuities.

",22993,,,,,5/27/2020 20:37,,,,1,,,,CC BY-SA 4.0 21498,1,,,5/28/2020 2:33,,1,53,"

I am reading through a paper (https://www.mitpressjournals.org/doi/pdf/10.1162/0891201053630273) where they describe log loss as a ranking function that can be simplified to the margin of the training data $X$. I am not sure what the transformation is at each step and could use a bit of help.

A precursor to this ranking loss is standard logloss which may clarify my understanding as well:

In this loss, I can only follow the derivation from step 2 to here:

$$-\sum_{i=1}^n log(\frac{{e^{y_iF(x_i,\overline{a})}}}{1 + e^{yF(x,\overline{a})}})$$ $$=-\sum_{i=1}^ny_iF(x_i, \overline{a}) - log(1 + e^{yF(x,\overline{a})})$$

And here is the full ranking loss I am having trouble on:

",36486,,,,,5/28/2020 2:33,Simplifying Log Loss,,0,0,,,,CC BY-SA 4.0 21499,1,,,5/28/2020 5:54,,1,94,"

I am trying to implement conditional GAN using GAN-CLS loss as described in paper: https://arxiv.org/abs/1605.05396

So, while training the discriminator, I should have three batches of data:

  1. [Real_Image, Embeddings]
  2. [Generated_Image, Embeddings]
  3. [Wrong_Image,Embeddings]

And, while training the generator, I should have one batch of data, i.e. [Generated_Image, Embeddings].

Is this the correct way to train the model?

",37412,,,,,5/28/2020 5:54,Training Conditional DCGAN with GAN-CLS loss,,0,0,,,,CC BY-SA 4.0 21500,1,22845,,5/28/2020 8:07,,2,58,"

We have different kinds of algorithms to optimize the loss, like AdaGrad, SGD + momentum, etc., and some are more commonly used than others. Some algorithms wander around before they converge, reach the steepest slope and find the minima, while others are significantly faster. So my question is: is the speed the main deciding factor here, or is the route taken also important? Or is it just problem-dependent?

Here is a picture of what I mean by the Route.

",37414,,37414,,5/31/2020 5:47,8/3/2020 10:00,"Which one is more important in case of different loss optimization algorithms, Speed or the Route?",,1,10,,,,CC BY-SA 4.0 21501,1,,,5/28/2020 8:42,,1,128,"

I've found this interesting algorithm online:

From what I understand reading this algorithm, I can't figure out why I should ""perform the opposite action"" and consequently store that second experience, since it is then never used for updates or for anything else. Is this algorithm incorrect?

",37169,,,,,5/28/2020 8:42,Understanding the role of the target network in this DQN algorithm,,0,4,,,,CC BY-SA 4.0 21504,1,21505,,5/28/2020 11:18,,4,1327,"

How should I decay the $\epsilon$ in Q-learning?

Currently, I am decaying epsilon as follows. I initialize $\epsilon$ to 1, then, after every episode, I multiply it by some constant $C$ (say, $0.999$) until it reaches $0.01$. After that, I keep $\epsilon$ at $0.01$ all the time. I think this has a terrible consequence.

So, I need an $\epsilon$-decay algorithm. I haven't found a script or formula for it, so can you tell me?

",36107,,2444,,5/28/2020 11:49,5/28/2020 11:49,How should I decay $\epsilon$ in Q-learning?,,1,0,,,,CC BY-SA 4.0 21505,2,,21504,5/28/2020 11:31,,4,,"

The way you have described tends to be the common approach. There are, of course, other ways that you could do this, e.g. using an exponential decay, or only decaying after a 'successful' episode, although in the latter case I imagine you would want to start with a smaller $\epsilon$ value and then decay by a larger amount.
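For concreteness, here is a rough sketch of the two schedules mentioned (the constants are arbitrary examples, not recommendations):

import math

EPS_START, EPS_MIN = 1.0, 0.01

def multiplicative_decay(episode, c=0.999):
    # what the question describes: eps = max(EPS_MIN, EPS_START * c**episode)
    return max(EPS_MIN, EPS_START * c ** episode)

def exponential_decay(step, decay_steps=10_000):
    # smooth exponential decay towards EPS_MIN
    return EPS_MIN + (EPS_START - EPS_MIN) * math.exp(-step / decay_steps)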

",36821,,,,,5/28/2020 11:31,,,,0,,,,CC BY-SA 4.0 21506,1,21507,,5/28/2020 11:37,,1,182,"

I was reading the paper How to Combine Tree-Search Methods in Reinforcement Learning published in AAAI Conference 2019. It starts with the sentence

Finite-horizon lookahead policies are abundantly used in Reinforcement Learning and demonstrate impressive empirical success.

What is meant by ""finite horizon look-ahead""?

",35679,,2444,,5/28/2020 11:48,5/28/2020 11:48,What are finite horizon look-ahead policies in reinforcement learning?,,1,0,,,,CC BY-SA 4.0 21507,2,,21506,5/28/2020 11:47,,1,,"

Per this paper a look ahead policy is a policy that will make decisions based on some 'horizon'. Here horizon means some time steps into the future, and so a finite horizon is simply a finite amount of time steps into the future. For example, as we are typically concerned with maximising returns in RL, a 10-step look ahead policy would choose an action at time $t$ that maximises the (expected) rewards at time $t+1, ... t+10$.

",36821,,,,,5/28/2020 11:47,,,,2,,,,CC BY-SA 4.0 21508,2,,21142,5/28/2020 13:15,,2,,"

Artificial Intelligence, as its name suggests, is intelligence made by humans. It's usually thought of as having human-like behaviors and characteristics. However, it doesn't have to resemble humans to be AI; it just has to be made by humans. Many common AI algorithms aren't even made to resemble humans, they may just have similarities. Reinforcement learning is present in humans, but also in many other creatures with intelligence.

Swarm Intelligence is basically a lot of small, simple things working together to do something complex. Take ants, for example. Each individual ant only follows a few very simple "instructions", like "if this chemical is present, follow it". Like Evolutionary AI, Swarm Intelligence mimics features of nature: we humans take a feature made by nature (evolution/swarming) and try to replicate some of its behaviors. Much like Evolutionary AI, Swarm Intelligence is a type of AI.

tl;dr:

  • AI: intelligence made by humans
  • SI: feature made by nature that humans are trying to copy
",25130,,25130,,1/26/2021 2:41,1/26/2021 2:41,,,,2,,,,CC BY-SA 4.0 21510,1,,,5/28/2020 14:22,,1,38,"

I'm trying to optimize some reflective properties of curves of the form: $a_1x^n+a_2x^{n-1}+a_3x^{n-2} + ... + a_n + b_1y^n+b_2y^{n-1}+b_3y^{n-2} + ... + b_n = 0$

which is basically the curve that you get when you sum two polynomials of same degree in different variables:

$f(x) + g(y) = 0$

Anyway, I was wondering: what would be a good way to do crossover on two such curves? I tried averaging the curves and then mutating them, but the problem is that the entire population quickly becomes homogeneous and the fitness starts to drop. Another typical method I tried is taking a cutoff point somewhere in the expansion above and mixing the left and right parts from both parents.

Of course, there are many ways to do a similar process. I could order the above expansion differently and then do a cutoff, or I could separate the $x$ and $y$ terms and do two separate cutoffs, etc. The question is: in the context of algebraic curves, which method of generating offspring would be a good option, considering that I want to optimize some property of the curve (in this case, I want it to have some reflective properties)?

",37422,,2444,,5/29/2020 19:42,5/29/2020 19:42,How to effectively crossover mathematical curves?,,0,0,,,,CC BY-SA 4.0 21511,1,,,5/28/2020 14:56,,1,162,"

At every node, MAX would always move to maximise the minimum payoff, while MIN would choose to minimise the maximum payoff, hence there is a Nash equilibrium.

By using backwards induction, at every node, the MAX and MIN players act optimally. Hence, there is a subgame-perfect Nash equilibrium.

How do I formally prove this?

",37258,,2444,,5/28/2020 15:34,5/28/2020 15:34,How do you prove that minimax algorithm outputs a subgame-perfect Nash equilibrium?,,0,1,,,,CC BY-SA 4.0 21512,1,,,5/28/2020 14:58,,2,47,"

I am currently trying to solve a classification task with a recurrent artificial neural network (RNN).

Situation

There are up to 350 inputs (X) mapped to one categorical output (y) with 13 different classes. The sequence to predict is deterministic in the sense that only specific state transitions are allowed based on the past. A simplified abstraction of my problem:

  • y - Ground Truth: 01020
  • y - Model Prediction: 01200
  • Valid Transitions: 01, 10, 02, 20

The predicted transition 12 is consequently not valid.

Question

What would be the best way to optimize a model so that it makes as few invalid transition predictions as possible (ideally none)? (Temporal shifts of the predictions in comparison to the ground truth are still acceptable.)

  • By integrating the knowledge about the valid transitions into the artificial neural network. Is it even possible to code hard restrictions into an RNN?
  • By a custom loss function which penalizes these invalid transitions
  • Another approach

Current Approach

With a bidirectional recurrent network (one gated recurrent unit (GRU) layer with 2000 neurons) used in a many-to-many fashion, an accuracy of 99.5% on the training set and 97.5% on the test set could be reached (implemented with TensorFlow / Keras 2.2).

",34155,,,,,5/28/2020 14:58,Incorporating domain knowledge into recurrent network,,0,0,,,,CC BY-SA 4.0 21514,1,21527,,5/28/2020 15:55,,2,91,"

There is some sort of art to using the right loss function. However, I was wondering if there is a way to derive the loss function if I gave you a neural network model (the weights) as well as the training data.

The point of this exercise is to see what family of loss functions we would get. And how that compares to the loss function that actually gave rise to the model.

",37423,,2444,,5/29/2020 21:01,5/29/2020 21:31,Is there a way of deriving a loss function given the neural network and training data?,,1,0,,,,CC BY-SA 4.0 21515,1,21517,,5/28/2020 15:55,,7,945,"

I am new in reinforcement learning, but I already know deep Q-learning and Q-learning. Now, I want to learn about double deep Q-learning.

Do you know any good references for double deep Q-learning?

I have read some articles, but some of them don't mention what the loss is and how to calculate it, so many articles are not complete. Also, Sutton and Barto (in their book) don't describe that algorithm either.

Please, help me to learn Double Q-learning.

",36107,,2444,,5/28/2020 16:06,11/1/2022 16:41,Is there any good reference for double deep Q-learning?,,2,0,,,,CC BY-SA 4.0 21516,2,,21515,5/28/2020 16:27,,5,,"

You should first read the introductory paper of Double DQN.

https://arxiv.org/abs/1509.06461

Then, depending on what you would like to do, search for other relevant papers that use this method.

I also propose studying the original Double Q-learning paper to understand important concepts/issues, such as the overestimation bias of Q-learning: https://proceedings.neurips.cc/paper/2010/file/091d584fced301b442654dd8c23b3fc9-Paper.pdf

",36055,,36055,,11/1/2022 16:41,11/1/2022 16:41,,,,0,,,,CC BY-SA 4.0 21517,2,,21515,5/28/2020 18:35,,7,,"

If you're interested in the theory behind Double Q-learning (not deep!), the reference paper would be Double Q-learning by Hado van Hasselt (2010).

As for Double deep Q-learning (also called DDQN, short for Double Deep Q-networks), the reference paper would be Deep Reinforcement Learning with Double Q-learning by Van Hasselt et al. (2016), as pointed out in ddaedalus's answer.

As for how the loss is calculated, it is not explicitly written in the paper. But, you can find it in the Dueling DQN paper, which is a subsequent paper where Van Hasselt is a coauthor. In the appendix, the authors provide the pseudocode for Double DQN. The relevant part for you would be:

$y_{j}=\left\{\begin{array}{ll}r & \text { if } s^{\prime} \text { is terminal } \\ r+\gamma Q\left(s^{\prime}, a^{\max }\left(s^{\prime} ; \theta\right) ; \theta^{-}\right), & \text {otherwise}\end{array}\right.$

Do a gradient descent step with loss $ \left\|y_{j}-Q(s, a ; \theta)\right\|^{2}$

Here, $y_j$ is the target, $\theta$ are the parameters of the regular network and $\theta^{-}$ are the target network parameters.

The most important thing to note here is the difference with the DQN target: $y_{i}^{D Q N}=r+\gamma \max _{a^{\prime}} Q\left(s^{\prime}, a^{\prime} ; \theta^{-}\right)$.

In DQN, we evaluate the Q-values based on parameters $\theta^{-}$ and we take the max over actions based on these Q-values parametrized with the same $\theta^{-}$. The problem with this is that it leads to an overestimation bias, especially at the beginning of the training process, where the Q-values estimates are noisy.

In order to address this issue, in double DQN, we instead take the max based on Q-values calculated using $\theta$ and we evaluate the Q-value of $a^{max}$ based on a different set of parameters i.e. $\theta^{-}$.
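A rough PyTorch-flavoured sketch of the difference (mine, not taken from the papers; q_net and target_net are assumed to map a batch of states to one row of Q-values per action):

import torch

@torch.no_grad()
def dqn_target(reward, next_state, done, target_net, gamma=0.99):
    # max and evaluation both use the target parameters (theta minus)
    next_q = target_net(next_state).max(dim=1).values
    return reward + gamma * (1.0 - done) * next_q

@torch.no_grad()
def double_dqn_target(reward, next_state, done, q_net, target_net, gamma=0.99):
    # argmax with the online parameters theta, evaluation with theta minus
    best_actions = q_net(next_state).argmax(dim=1, keepdim=True)
    next_q = target_net(next_state).gather(1, best_actions).squeeze(1)
    return reward + gamma * (1.0 - done) * next_q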

If you want to learn more about this, by watching a video lecture instead of reading a paper, I'd suggest you take a look at this lecture from UC Berkley's DRL course, where the professor (Sergey Levine) discusses this in detail with examples.

",34010,,34010,,5/28/2020 20:09,5/28/2020 20:09,,,,0,,,,CC BY-SA 4.0 21518,1,,,5/28/2020 19:47,,1,35,"

I have a question about the use of deep learning techniques with time-fixed features and images (setting 1) and time-dependent features (setting 2). (I am pretty new to the deep learning world so please excuse me if it's a basic question.)

Setting 1: Imagine having a training dataset composed of

  • some time-fixed features such as height, weight, and age of an individual at the first medical visit (these features are recorded once and therefore time-fixed, i.e., they do not change in time in the dataset).
  • some medical images for each individual, such as for example a CT scan.
  • a label defining if the patient has or not a specific disease.

Setting 2: Same as setting 1 but with some features that are repeated over time (time-dependent, longitudinal), such as for example blood pressure recorded twice a day for each individual for several days.

Let's say that the goal is to classify whether or not an individual has a specific disease, given the aforementioned features.

I have seen zillions of papers and blogs talking about convolutional neural networks to classify images and a few million about recurrent neural networks for time-dependent features. However, I am not very aware of what to use in case I have time-fixed, time-dependent, and imaging features altogether.

I am wondering how you would attack this problem.

",37426,,30725,,5/29/2020 13:48,5/29/2020 13:48,"Deep learning techniques with time-fixed, time-dependent and imaging data",,0,0,,,,CC BY-SA 4.0 21519,1,,,5/28/2020 19:56,,2,26,"

I have a course assignment to use an LSTM to predict the movement directions of stock prices. One of the things I am asked to do is provide a visualization to compare the predictive powers of a set of N features (e.g. 1-day return, volatility, moving average, etc.). Let's assume that we use a window of 50 days as input to the LSTM.

The first thing that came to my mind is to use a RadViz plot (check below image taken from https://www.scikit-yb.org/en/latest/api/features/radviz.html).

However, I soon realized this will not work for the features since each sample will have 50 values. So if we have M samples, the shape of the input data will be something like Mx50xN. This, unfortunately, is not something RadViz can deal with (it can handle 2D data).

Given this, I would be grateful if someone can point me to a viable way to visualize the data. Is it even possible when each feature comprises 50 values?

",37428,,30725,,5/29/2020 13:48,5/29/2020 13:48,Visualisation for Features to Predict Timeseries Data,,0,0,,,,CC BY-SA 4.0 21521,1,,,5/29/2020 10:07,,1,132,"

This GAN, being trained on the CelebA dataset, doesn't seem to mode-collapse, and the discriminator is not really overconfident, yet the quality is stuck at these rough, Picasso-like generator images. Using Leaky ReLU, strided convolutions instead of max-pooling, and dampened truth labels helped a little, but it is still no better than this. I am not sure what else to try. (See the training clip; the discriminator feed is in the top left corner.)

",37443,,,,,7/18/2021 14:12,Why is this GAN not converging?,,0,0,,,,CC BY-SA 4.0 21522,2,,11679,5/29/2020 10:12,,3,,"

There are three problems

  1. Limited capacity Neural Network (explained by John)
  2. Non-stationary Target
  3. Non-stationary distribution

Non-stationary Target

In tabular Q-learning, when we update a Q-value, other Q-values in the table don't get affected by this. But in neural networks, one update to the weights aiming to alter one Q-value ends up affecting other Q-values whose states look similar (since neural networks learn a continuous function that is smooth)

This is bad because when you are playing a game, two consecutive states of a game are always similar. Therefore, Q-value updates will increase or decrease the Q-values for both states together. So, when you take one as the target for the other, the target becomes non-stationary, since it moves along with you. This is analogous to a donkey running to catch a carrot that is attached to its head. Since the target is non-stationary, the donkey will never reach it. And, in our case, in trying to chase the target, the Q-values will explode.

In Human-level control through deep reinforcement learning, this problem is addressed by caching an OLD copy of the DQN for evaluating the targets, & updating the cache every 100,000 steps of learning. This is called a target network, and the targets remain stationary this way.

Non-stationary distribution

This is analogous to the "distribution drift" problem in imitation learning, which can be solved with the dataset aggregation technique called DAgger.

The idea is that, as we train, our DQN gets better and better and our policy improves. This causes our sampling distribution to change, since we are doing online learning, where we sample actions according to the current policy with $\epsilon$-greedy exploration. This is a problem for supervised learning, since it assumes a stationary distribution or i.i.d. data.

As an analogy, this is like training a Neural Network to identify cats and dogs but showing the network only dogs during the first 100 epochs, and then showing only cats for the remainder epochs. What happens is, the network learns to identify dogs, then forgets it and learns to identify cats.

This is what happens when the distribution changes and we care only about the current distribution during training. So, in order to solve this, the same paper aggregates data in a large buffer and samples a mini-batch of both new data and old data at each training step. This is called experience replay, since we don't throw away our past experience and keep re-using it in training.
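For concreteness, an experience replay buffer of the kind described can be sketched in a few lines (purely illustrative):

import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=100_000):
        # old experience is kept around until capacity is reached
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # uniformly mixes recent and old transitions, which counters the
        # non-stationary-distribution problem described above
        return random.sample(self.buffer, batch_size)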

",17954,,2444,,12/19/2020 13:14,12/19/2020 13:14,,,,2,,,,CC BY-SA 4.0 21523,1,21524,,5/29/2020 11:47,,3,60,"

In David Silver's 8th lecture he talks about model learning and says that learning $r$ from $s,a$ is a regression problem whereas learning $s'$ from $s,a$ is a kernel density estimation. His explanation for the difference is that if we are in a stochastic environment and we are in the tuple $s,a$ then there might be a 30% chance the wind will blow me left, and a 70% chance the wind will blow me right, so we want to estimate these probabilities.

Is the main difference between these two problems, and hence why one is regression and the other is kernel density estimation, because with the reward we are mainly concerned with the expected reward (hence regression) whereas with the state transitioning, we want to be able to simulate this so we need the estimated density?

",36821,,,,,5/29/2020 12:35,"Why is learning $s'$ from $s,a$ a kernel density estimation problem but learning $r$ from $s,a$ is just regression?",,1,1,,,,CC BY-SA 4.0 21524,2,,21523,5/29/2020 12:14,,2,,"

Is the main difference between these two problems, and hence why one is regression and the other is kernel density estimation, because with the reward we are mainly concerned with the expected reward (hence regression) whereas with the state transitioning, we want to be able to simulate this so we need the estimated density?

Yes.

An expected reward function from $s,a$ is all you need to construct valid Bellman equations for value functions. For example

$$q_{\pi}(s,a) = r(s,a) + \gamma\sum_{s'}p(s'|s,a)\sum_{a'}\pi(a'|s')q(s',a')$$

is a valid way of writing the Bellman equation for action values. You can derive this from $r(s,a) = \sum_{r,s'}rp(r,s'|s,a)$ and $q_{\pi}(s,a) = \sum_{r,s'}p(r,s'|s,a)(r + \gamma\sum_{a'}\pi(a'|s')q(s',a'))$ if you have the equations in that form.

However, in general there is no such thing as an ""expected state"" when there is more than one possible outcome (i.e. in environments with stochastic state transitions). You can take a mean of the state vector representations over the samples you see for $s'$ but that is not the same thing at all and could easily be a representation of an unreachable/nonsense state.

In some cases, the expectation $\mathbb{E}_{\pi}[x(S_{t+1})|S_t=s, A_t=a]$ where $x(s)$ creates a feature vector from any given state $s$, $x(s): \mathcal{S} \rightarrow \mathbb{R}^d$, can be meaningful. The broadest and most trivial example of this is for deterministic environments. You may be able to construct stochastic environments where there is a good interpretation of such a vector, even if it does not represent any reachable state.

Simple one-hot encoded states could maybe be made to work like this by representing a probability distribution over states (this would also require re-interpretations of the expected reward function and value functions). That is effectively a kernel density function over a discrete state space.

In general knowing this $\mathbb{E}_{\pi}[x(S_{t+1})|S_t=s, A_t=a]$ expected value does not help resolve future rewards, as they can depend arbitrarily on specific state transitions.
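
As a loose illustration of this difference (a minimal sketch, assuming a small discrete environment and some hypothetical logged transitions), the reward model only needs a scalar expectation per $(s,a)$, while the transition model needs a whole conditional distribution; in a continuous state space the empirical distribution below would be replaced by a kernel density estimate:

from collections import defaultdict

# Hypothetical logged transitions (s, a, r, s') from interacting with the environment
transitions = [(0, 1, 1.0, 2), (0, 1, 0.0, 3), (0, 1, 1.0, 2), (1, 0, 0.5, 0)]

reward_sum = defaultdict(float)                            # for the expected-reward model
count = defaultdict(int)
next_state_count = defaultdict(lambda: defaultdict(int))   # for p(s' | s, a)

for s, a, r, s_next in transitions:
    reward_sum[(s, a)] += r
    count[(s, a)] += 1
    next_state_count[(s, a)][s_next] += 1

# Expected reward: a single scalar per (s, a) is enough (a regression target)
r_hat = {sa: reward_sum[sa] / count[sa] for sa in count}

# Next state: we need the whole conditional distribution (a density estimate)
p_hat = {sa: {s2: c / count[sa] for s2, c in d.items()} for sa, d in next_state_count.items()}

print(r_hat[(0, 1)])   # ~0.67
print(p_hat[(0, 1)])   # {2: 0.666..., 3: 0.333...}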

",1847,,1847,,5/29/2020 12:35,5/29/2020 12:35,,,,3,,,,CC BY-SA 4.0 21526,1,,,5/29/2020 17:21,,3,54,"

There’s a lot of talk of undercover cops intentionally starting violence in otherwise peaceful protests. The evidence, primarily, consists of images like this.

https://images.app.goo.gl/4n3o2EXwFzMQfsKq6

It looks pretty convincing, but I’d like something more solid. Does anyone know of a model that can detect with a high level of certainty if the “mask” area of two photos represents the same person?

",37455,,,,,8/21/2021 0:06,Does anyone know of a model for comparing the eyes of people in two images to see if they match?,,1,0,,,,CC BY-SA 4.0 21527,2,,21514,5/29/2020 21:25,,1,,"

I don't think there's a way of doing what you want, at least, I've never seen such a thing (and, currently, I am not seeing how this could be done in the general case).

The same neural network model but with different (or the same) weights could have been trained with the same loss function or a different one. For example, although it may not be a good idea, you can train a neural network for classification with the mean squared error, as opposed to the typical cross-entropy. Moreover, even if you know the loss function that the neural network is trained with, the training data alone may not lead to the same set of weights, because the actual weights depend on different (possibly stochastic) factors, such as whether (or how) you shuffle the data, or the batch size.

",2444,,2444,,5/29/2020 21:31,5/29/2020 21:31,,,,5,,,,CC BY-SA 4.0 21528,2,,5840,5/29/2020 22:32,,0,,"

Note that ""algorithmically"" can refer to anything that uses an algorithm. Currently, ML systems are trained with algorithms and neural networks can be seen as algorithms (although black-box ones), so ML is also algorithmic. Everything that runs on a computer (a concrete version of a Turing machine) can be seen as an algorithm (or program)! In fact, computers were invented exactly for this purpose: to perform some algorithmic operation (i.e. a set of instructions, like a recipe).

So, by algorithmic, I assume you're referring to techniques that are typically taught in an ""Algorithms and Data Structures"" course for a computer science student, such as binary search (one of the simplest and yet most beautiful and useful algorithms!), which is an algorithm that, given some constraints (a sorted array), gives you an exact correct solution in $\mathcal{O}(\log n)$ time. However, I think that you are also referring to every program that is primarily based on if-then statements and loops (e.g. desktop applications, websites, etc.).
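
For instance, a minimal binary search, as a reminder of what such an exact, data-free algorithm looks like:

def binary_search(sorted_list, target):
    # Return the index of target in sorted_list, or -1 if absent, in O(log n) time
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 11], 7))   # 3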

To answer your question, you first need to understand the scope of the machine learning field.

Machine learning (like statistics) is a set of techniques that attempt to learn from data. So, every problem where data is available (and you can get insight from it) can potentially be solved with a machine learning technique. ML techniques typically produce approximate solutions and are typically used to solve problems where an exact solution is infeasible. However, note that machine learning isn't the only approach to solving hard problems (e.g. you can also use meta-heuristics, such as ant colony optimization algorithms).

If you have an algorithm that produces an exact solution (without requiring data) in polynomial time (preferably, in $\mathcal{O}(n^2)$ time), then machine learning (or any other technique that produces approximate solutions, e.g. heuristics) is quite useless.

",2444,,,,,5/29/2020 22:32,,,,0,,,,CC BY-SA 4.0 21530,1,,,5/30/2020 9:59,,1,354,"

What is an auto-associator, and how does it work? How can we design an auto-associator for a given pattern? I couldn't find a clear explanation for this anywhere on the internet.

Here's an example of a pattern.

",37467,,2444,,5/30/2020 11:05,5/30/2020 11:05,What is an auto-associator?,,0,3,,,,CC BY-SA 4.0 21531,1,21545,,5/30/2020 10:08,,3,566,"

The aim of weight initialization is to prevent layer activation outputs from exploding or vanishing during the course of a forward pass through a deep neural network

I am really having trouble understanding weights initialization technique and Xavier Initialization for deep neural networks (DNNs).

In simple words (and maybe with an example), what is the intuition behind the Xavier initialization for DNNs? When should we use Xavier's initialization?

",30725,,2444,,5/30/2020 11:38,5/31/2020 15:45,What is the intuition behind the Xavier initialization for deep neural networks?,,1,1,,,,CC BY-SA 4.0 21532,1,21534,,5/30/2020 10:16,,3,591,"

I know how pooling works, and what effect it has on the input dimensions - but I'm not sure why it's done in the first place. It'd be great if someone could provide some intuition behind it - while explaining the following excerpt from a blog:

A problem with the output feature maps is that they are sensitive to the location of the features in the input. One approach to address this sensitivity is to down sample the feature maps. This has the effect of making the resulting down sampled feature maps more robust to changes in the position of the feature in the image, referred to by the technical phrase “local translation invariance.”

What's local translation invariance here?

",35585,,2444,,1/1/2022 10:03,1/1/2022 10:03,What is the effect of using pooling layers in CNNs?,,2,0,,,,CC BY-SA 4.0 21533,1,21543,,5/30/2020 12:47,,1,88,"

From the David Silver's lecture 8: Integrating Learning and Planning - based on Sutton and Barto - he talks about using sample-based planning to use our model to take a sample of a state and then use model-free planning, such as Monte Carlo, etc, to run the trajectory and observe the reward. He goes on to say that this effectively gives us infinite data from only a few actual experiences.

However, if we only experience a handful of true state-action-rewards and then start sampling to learn more then we will surely end up with a skewed result, e.g., If I have 5 experiences but then create 10000 samples (as he says, infinite data). I am aware that as the experience set grows the Central Limit Theorem will come into play and the distribution of experience will more accurately represent the true environment's state-actions-rewards distribution but before this happens is sampled based planning still useful?

",36082,,2444,,6/1/2020 14:14,6/1/2020 14:38,Is the distribution of state-action pairs from sample based planning accurate for small experience sets?,,1,0,,,,CC BY-SA 4.0 21534,2,,21532,5/30/2020 16:04,,5,,"

Pooling has multiple benefits

  • Robust feature detection.
  • Makes it computationally feasible to have deeper CNNs

Robust Feature Detection

Think of max-pooling (the most popular variant) to understand this. Consider a 2x2 box of units in one layer that is mapped to a single unit in the next layer (this is basically what pooling does). Let's say the feature map (kernel) detects a petal of a flower. Then registering a petal whenever any of the 4 units of the previous layer fires makes the detection robust to noise. There is no strict requirement that all 4 units fire to detect a petal. Thus, the next layer (after pooling) captures the features with noise invariance. We can also say this is local translation invariance (in a close spatial sense), as a slightly shifted feature will also be captured. But also remember that translation invariance in general is captured by the convolution with kernels in the first place (see how one kernel is convolved with the whole image).
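
A minimal numpy sketch of this effect (a toy 4x4 activation map, with the petal response shifted by one pixel inside a pooling window):

import numpy as np

def max_pool_2x2(x):
    # Max-pool a 2D array with a 2x2 window and stride 2
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

a = np.zeros((4, 4)); a[0, 0] = 9   # 'petal' response in the top-left of a 2x2 cell
b = np.zeros((4, 4)); b[1, 1] = 9   # same response shifted by one pixel, still in that cell

print(max_pool_2x2(a))   # [[9. 0.] [0. 0.]]
print(max_pool_2x2(b))   # [[9. 0.] [0. 0.]]  -- identical output: local translation invariance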

Computational advantage

The dimensions of the inputs in image classification are so huge that the number of multiplication operations is in the billions, even with very few layers. Pooling a layer's output reduces the input dimension for the next layer, thus saving computation. It also means one can now aim for really deep networks (in number of layers) with the same complexity as before.

",26489,,35585,,5/31/2020 10:08,5/31/2020 10:08,,,,0,,,,CC BY-SA 4.0 21535,2,,21532,5/30/2020 17:32,,0,,"

In addition, pooling generally aids detection somewhat, because only the strongest feature-filter activation is kept, so in a sense it removes redundant information.

But it obviously has drawbacks, such as combinations of features being detected that don't correspond to actual objects.

",32390,,,,,5/30/2020 17:32,,,,0,,,,CC BY-SA 4.0 21536,1,22113,,5/30/2020 23:50,,1,267,"

Here's the code in question.

https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L491

class BertOnlyNSPHead(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.seq_relationship = nn.Linear(config.hidden_size, 2)

    def forward(self, pooled_output):
        seq_relationship_score = self.seq_relationship(pooled_output)
        return seq_relationship_score

I think it was just ranking how likely one sentence would follow another? Wouldn't it be one score?

",18358,,,,,6/23/2020 14:38,Why does the BERT NSP head linear layer have two outputs?,,1,0,,,,CC BY-SA 4.0 21537,1,,,5/31/2020 7:39,,1,78,"

I am learning to use an LSTM model to predict time series data. Specifically, I would like the network to output a sequence (with multiple time steps) only after the input sequence has finished being fed in, as shown in the left figure.

However, most of the LSTM sequence-to-sequence prediction tutorials I have read seem to correspond to the right figure (i.e. each time step of the output sequence is generated after each time step of the input sequence). What's more, as far as I understand, the LSTM implementation in PyTorch (and probably Keras) can only return an output sequence corresponding to each time step of the input sequence. It cannot make predictions after the input sequence is over.

I would like to know whether there is any way to make a sequence-to-sequence LSTM network that starts outputting only after the input sequence has finished being fed in. It would be even better if someone could show me some example implementation code.

",37482,,,,,3/6/2022 15:54,How to make a LSTM network to predict sequence only after input sequence is finished?,,2,0,,,,CC BY-SA 4.0 21538,1,21539,,5/31/2020 8:56,,2,564,"

I'm working on a deep Q-learning model in an infinite horizon problem, with a continuous state space and 3 possible actions. I'm using a neural network to approximate the action-value function. Sometimes it happens that, after a few steps, the algorithm starts choosing only one of the possible actions (apart from a few steps where I suppose it explores, given the epsilon-greedy policy it follows), leading to bad results in terms of cumulative rewards. Is this a sign that the algorithm diverged?

",37169,,2444,,5/31/2020 11:13,5/31/2020 11:13,"If deep Q-learning starts to choose only one action, is this a sign that the algorithm diverged?",,1,0,,,,CC BY-SA 4.0 21539,2,,21538,5/31/2020 9:48,,4,,"

Is this a sign that the algorithm diverged?

It is a common sign of a problem with the learning process. That includes divergence due to poor hyper-parameters, or even just bad luck. But it can also point to a design/architecture problem.

Other common causes of algorithm failing with a fixed action choice include:

  • Neural network inputs not scaled before use.

  • Large reward values causing large initial squared errors (either re-scale rewards or reduce learning rate to fix)

  • State representation too far from Markov property assumption

  • A bug in code (almost anywhere, unfortunately)

  • Catastrophic forgetting due to focusing too much on specific results and generalising from them incorrectly. Your agent might be suffering from this if it starts to learn correctly, reaching some level of competence at the task before failing.

",1847,,,,,5/31/2020 9:48,,,,1,,,,CC BY-SA 4.0 21540,1,,,5/31/2020 9:56,,4,337,"

SGD is able to jump out of local minima that would otherwise trap BGD

I don't really understand the above statement. Could someone please provide a mathematical explanation for why SGD (Stochastic Gradient Descent) is able to escape local minima, while BGD (Batch Gradient Descent) can't?

P.S.

While searching online, I read that it has something to do with ""oscillations"" while taking steps towards the global minimum. What's that?

",35585,,2444,,6/2/2020 15:41,6/2/2020 15:41,How does SGD escape local minima?,,0,6,,,,CC BY-SA 4.0 21542,1,,,5/31/2020 10:22,,2,162,"

I have recently solved the Cartpole problem using double deep Q-learning. When I saw how the agent was doing, it used to go right every time, never left, and it did similar actions all the time.

Did the model overfit the environment? It seems that the agent just memorized the environment.

What are the common techniques to prevent the agent to overfit like that? Is that a common problem?

",36107,,2444,,5/31/2020 11:16,5/31/2020 11:16,How to prevent deep Q-learning algorithms to overfit?,,0,1,,,,CC BY-SA 4.0 21543,2,,21533,5/31/2020 11:23,,2,,"

I am aware that as the experience set grows the Central Limit Theorem will come into play and the distribution of experience will more accurately represent the true environment's state-actions-rewards distribution

I believe here you mean the Law of Large Numbers which states that for a large enough sample ($n \rightarrow \infty$) the sample mean will converge to the true mean. The central limit theorem (CLT) states that if you take the sum/mean of a set of independent random variables then the distribution of this new RV will be approximately normal as $n \rightarrow \infty$.
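
A quick numerical illustration of both statements (a minimal sketch with a known distribution):

import numpy as np

rng = np.random.default_rng(0)
samples = rng.exponential(scale=2.0, size=100_000)   # true mean = 2.0, true std = 2.0

# Law of Large Numbers: the sample mean approaches the true mean as n grows
print(samples[:10].mean(), samples[:1000].mean(), samples.mean())

# Central Limit Theorem: means of many independent batches are approximately normal
batch_means = samples.reshape(1000, 100).mean(axis=1)
print(batch_means.mean(), batch_means.std())   # ~2.0 and ~0.2 (= 2 / sqrt(100))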

before this happens is sampled based planning still useful

As you mentioned, if you had only 5 full episodes of experience to choose from, then this likely would not represent the true underlying distributions and so the approximations would not be good -- of course, this will depend on how complex your MDP is; if you had a trivially simple one then it may be enough to represent it well. As David Silver says in his lecture, one of the disadvantages of planning with a model is that you introduce another source of uncertainty, mainly from approximating the properties of the model.

",36821,,36821,,6/1/2020 14:38,6/1/2020 14:38,,,,2,,,,CC BY-SA 4.0 21545,2,,21531,5/31/2020 15:45,,1,,"

Weight initialization is one of the most critical factors for successfully training a deep neural network. This explanation by deeplearning.ai is probably the best that one could give for the need for initializing a DNN with Xavier initialization. Here is what it talks about in a nutshell:

The problem of exploding and vanishing gradients has been long-standing in the DL community. Initialize all the weights as zeros and the model learns identical features across all hidden layers; initialize random but large weights and the backpropagated gradients explode; initialize random but small weights and the gradients vanish. The intuition is aptly captured by this simple mathematical observation: $1.1^{50} \approx 117.39$, while at the same time, $0.9^{50} \approx 0.00515$. Note that the difference between the two numbers is just $0.1$, but it has a tremendous effect when multiplied repeatedly! A typical NN is a series of function compositions involving weight matrices and linear/non-linear activation functions. When stripped to a bare minimum, it essentially is a series of matrix multiplications. Therefore, the way in which the elements of these weight matrices are initialized plays a major role in how the network learns.

The standard weight initialization methods come into the picture here. They reinforce what are the de-facto rules of thumb when it comes to weight initializations: (1) the mean of the activations should be zero, and (2) the variance of these activations across all the layers should be the same.
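
As a minimal sketch (assuming a plain fully-connected layer and the uniform variant of the Glorot/Xavier scheme), the weights are drawn with a variance scaled by the fan-in and fan-out of the layer:

import numpy as np

def xavier_uniform(fan_in, fan_out, rng):
    # Glorot/Xavier uniform: U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)),
    # which keeps the variance of activations roughly constant across layers
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

rng = np.random.default_rng(0)
W = xavier_uniform(512, 256, rng)
print(W.var(), 2.0 / (512 + 256))   # empirical variance close to 2 / (fan_in + fan_out)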

Note: The link given above has complete mathematical justification for why Xavier initialization works, along with an interactive visualization for the same.

",36971,,,,,5/31/2020 15:45,,,,0,,,,CC BY-SA 4.0 21546,1,,,5/31/2020 17:46,,1,167,"

I've implemented a vanilla actor-critic and have run into a wall. My model does not seem to be learning the optimal policy. The red graph below shows its performance in cartpole, where the algorithm occasionally does better than random but for the most part lasts between 10-30 timesteps.

I am reasonably sure that the critic part of the algorithm is working. Below is a graph of the delta value (r + Q_w(s',a') - Q_w(s,a)), which seems to show that for the most part the predicted future reward is quite similar to the approximate future reward.

Thus, I am at a loss for what the problem could be. I have an inkling it lies within the actor, but I am not sure. I have double checked the loss function and that seems to be correct to me as well. I would appreciate any advice. Thanks!

",36404,,,,,5/31/2020 17:46,Actor-Critic implementation not learning,,0,0,,,,CC BY-SA 4.0 21547,1,,,5/31/2020 23:01,,1,64,"

Considering the paper Disentangling by Factorising, in addition to introducing a new model for Disentangled Representation Learning, FactorVAE (see figure), what is the main theoretical contribution provided by the paper?

",1963,,2444,,6/1/2020 14:37,6/1/2020 14:37,What is the main contribution of the paper Disentangling by Factorising?,,1,0,,,,CC BY-SA 4.0 21548,2,,21547,5/31/2020 23:01,,1,,"

One of the core contributions presented in the paper consists of understanding at a deeper level the objective function (OF) used in Beta VAE and improving it.

More specifically, the authors started from the Beta VAE objective function

$$\frac{1}{N} \sum_{i=1}^{N}\left[\mathbb{E}_{q\left(z | x^{(i)}\right)}\left[\log p\left(x^{(i)} | z\right)\right]-\beta K L\left(q\left(z | x^{(i)}\right) \| p(z)\right)\right]$$

which consists of the classical reconstruction loss and a regularization term that steers the latent code PDF towards a target PDF.

Developing the second term

$$\mathbb{E}_{p_{\text{data}}(x)}[K L(q(z | x) \| p(z))]=I(x ; z)+K L(q(z) \| p(z))$$

they observed that there are 2 terms which push in different directions:

  • $I(x;z)$ is the mutual information between the input data and the code; we do not want to penalize this, as in fact we want it to be as high as possible, so that the code contains as much information as possible about the input.
  • $KL(q(z) \| p(z))$ is the KL divergence term which, if penalized, pushes the latent code PDF $q(z)$ towards the prior $p(z)$, which we choose to be factorized.

So penalizing the second term in the Beta VAE OF also means penalizing the mutual information, hence ultimately losing reconstruction capability.

The authors then propose a new OF which fixes this

$$\frac{1}{N} \sum_{i=1}^{N}\left[\mathbb{E}_{q\left(z | x^{(i)}\right)}\left[\log p\left(x^{(i)} | z\right)\right]-K L\left(q\left(z | x^{(i)}\right)|| p(z)\right)\right] -\gamma K L(q(z) \| \bar{q}(z))$$

In the rest of the paper, they run experiments and show that they have improved the disentanglement of the representation (according to the new metric they propose in the paper) while preserving reconstruction, hence improving the reconstruction vs. disentanglement trade-off with respect to the Beta VAE paper.

",1963,,2444,,6/1/2020 14:36,6/1/2020 14:36,,,,0,,,,CC BY-SA 4.0 21552,2,,17324,6/1/2020 6:20,,1,,"

From a mathematical point of view you are correct, as are your calculations. To catch all the patterns you would need that many filters, but this is where the whole idea of training comes in. The main objective of training in CNNs is to find just a few good patterns out of the billions of possible ones.

So the direct answer to your question is: yes, the standard layers of 64 to 1024 filters are only able to catch a small part of the (perhaps) useful patterns, but this assumes no training takes place. If you conduct training on the given data with the given model, then 64 to 1024 filters can already extract a lot of useful patterns, perhaps more than needed.

",22659,,,,,6/1/2020 6:20,,,,0,,,,CC BY-SA 4.0 21553,1,21555,,6/1/2020 8:40,,1,371,"

My understanding of tabular Q-learning is that it essentially builds a dictionary of state-action pairs, so as to maximize the Markovian (i.e., step-wise, history-agnostic?) reward. This incremental update of the Q-table can be done by a trade-off exploration and exploitation, but the fact remains that one ""walks around"" the table until it converges to optimality.

But what if we haven't ""walked around"" the whole table? Can the algorithm still perform well in those out-of-sample state-action pairs?

",30959,,2444,,6/2/2020 15:24,6/2/2020 17:36,Can tabular Q-learning converge even if it doesn't explore all state-action pairs?,,1,3,,,,CC BY-SA 4.0 21554,1,21557,,6/1/2020 9:23,,1,561,"

I have encountered the gym environment and decided to create AI that plays breakout. Here is the link: https://gym.openai.com/envs/Breakout-ram-v0/.

The documentation says that the state is represented as a RAM state, but what is the RAM in this context? Is it the random access memory? What does the RAM state represent?

",36107,,2444,,6/1/2020 10:47,6/1/2020 10:47,What is a RAM state in the gym's breakout-ram environment?,,1,0,,,,CC BY-SA 4.0 21555,2,,21553,6/1/2020 10:07,,1,,"

In the tabular case, then the Q table will only converge if you have walked around the whole of the table. Note that to guarantee convergence we need $\sum\limits_{n=1}^{\infty}\alpha_n(a) = \infty$ and $\sum\limits_{n=1}^\infty \alpha_n^2(a) < \infty$. These conditions imply that in the limit each state-action pair will have been visited an infinite number of times, thus we will have walked around the whole table, so there are no out-of-sample state-action pairs.
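
For a concrete example, the common step-size schedule $\alpha_n = \frac{1}{n}$ satisfies both conditions, since $\sum_{n=1}^\infty \frac{1}{n} = \infty$ (the harmonic series diverges) while $\sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6} < \infty$; a constant step size, by contrast, satisfies the first condition but not the second.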

However, in the case of function approximation, then convergence is no longer guaranteed. Generalisability is possible though - assuming we have an infinite state or action space then we will only ever visit the same state-action pair once, so the role of a function approximator is to allow us to generalise the state/action space.

NB that the convergence conditions I mentioned are only required in some proofs of convergence, depending on what type of convergence you are looking to prove. See this answer for more details.

",36821,,2444,,6/2/2020 17:36,6/2/2020 17:36,,,,2,,,,CC BY-SA 4.0 21556,2,,21537,6/1/2020 10:08,,1,,"

You should try an architecture with an encoder and a decoder. The encoder will consume all the data you give as input, and the decoder will then produce the output sequence.
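
A minimal PyTorch sketch of that idea (hypothetical layer sizes; note that the decoder is only unrolled after the encoder has consumed the whole input sequence):

import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, in_dim=1, hidden_dim=64, out_dim=1, out_steps=10):
        super().__init__()
        self.out_steps = out_steps
        self.encoder = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(out_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, out_dim)

    def forward(self, x):
        # Encode the whole input sequence first; keep only the final hidden state
        _, (h, c) = self.encoder(x)
        # Then unroll the decoder for out_steps steps, feeding back its own predictions
        step_in = torch.zeros(x.size(0), 1, self.proj.out_features, device=x.device)
        outputs = []
        for _ in range(self.out_steps):
            dec_out, (h, c) = self.decoder(step_in, (h, c))
            step_in = self.proj(dec_out)          # (batch, 1, out_dim)
            outputs.append(step_in)
        return torch.cat(outputs, dim=1)          # (batch, out_steps, out_dim)

model = Seq2Seq()
y = model(torch.randn(8, 20, 1))   # 20 input steps in, 10 predicted steps out
print(y.shape)                     # torch.Size([8, 10, 1])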

",37504,,,,,6/1/2020 10:08,,,,0,,,,CC BY-SA 4.0 21557,2,,21554,6/1/2020 10:25,,2,,"

Yes, it is the state of the memory; this would mainly involve variables, since the code would be in ROM. Since it is only 128 bytes in size, the screen memory would also not be included in this.

The idea is that all information relevant to the game is captured in these 128 bytes; they represent the state of the game world at any given time. Movements of the ball, the game controller position, etc are all encoded there.

For machine learning it is not actually relevant which bytes represent which value, as any optimisation of outcomes will treat all these 128 bytes as parameters describing the state. A machine learning system will pick up the optimal configuration through the learning process, e.g. that the 'racket' position should always be near the ball's x-coordinate. That will just be a correlation of two bytes, no matter which ones they actually are.
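
For example (a minimal check, assuming a classic gym installation where reset() returns only the observation):

import gym

env = gym.make('Breakout-ram-v0')
obs = env.reset()
print(obs.shape, obs.dtype)   # (128,) uint8 -- one value per byte of the Atari 2600 RAM
print(obs[:8])                # raw byte values in the range 0-255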

",2193,,2193,,6/1/2020 10:35,6/1/2020 10:35,,,,3,,,,CC BY-SA 4.0 21559,1,,,6/1/2020 11:25,,1,119,"

I am using an LSTM network for predicting IoT time-series data received from unreliable devices and networks.
This results in multiple bad sections [continuous streaks of bad data for several days, until the problem is fixed].
I need to exclude these bad-data sections before feeding the data into model training.
Since I am using an LSTM-RNN network, it needs to unroll the data based on the previous records.

How can I properly exclude this bad data?
I thought of an approach of training the model separately on each batch of good data, and using each subsequent good-data batch to fine-tune the model.
Please let me know if this is a good approach, or whether there is a better method.

example data:
""1-01"",266.0
""1-02"",145.9
""1-03"",183.1
""1-04"",0  [bad data]
""1-05"",0  [bad data]
""1-06"",0  [bad data]
""1-07"",0  [bad data]
""1-08"",224.5
""1-09"",192.8
""1-10"",122.9
""1-11"",0  [bad data]
""1-12"",0  [bad data]
""2-01"",194.3
""2-02"",149.5
",37505,,,,,7/6/2020 13:28,How to exclude sections of bad data from time-series data before training an LSTM network,,1,0,,,,CC BY-SA 4.0 21560,1,,,6/1/2020 11:42,,1,239,"

I stumbled across a question asking to draw a conceptual dependency for the following statement:

Place all ingredients in a bowl and mix thoroughly

My attempt so far

Explanation: Both the sender and recipient are the same, except that the states of their contents are different.

Something feels like it isn't right. I would appreciate it if you could correct any errors.

",37506,,2193,,6/1/2020 16:47,10/25/2021 1:02,"How can I draw a conceptual dependency for the statement ""Place all ingredients in a bowl and mix thoroughly""?",,1,0,,,,CC BY-SA 4.0 21562,1,,,6/1/2020 14:11,,3,156,"

I have a binary classifier (think of it as a content moderation system) that is deployed after having being trained via batch learning.

Once deployed, humans review and check for correctness only items predicted positive by the algorithm.

In other words, once in production if I group predictions of the model on unseen examples in the confusion matrix

+-----------+-----------------+
|           |   Ground-truth  |
|           +-----+-----------+
|           |     | Neg | Pos |
+-----------+-----+-----+-----+
|           | Neg | x11 | x12 |
| Predicted +-----+-----+-----+
|           | Pos | x21 | x22 |
+-----------+-----+-----+-----+
  • I have access to all the ground-truth labels of the elements counted in $x_{21}$, $x_{22}$ (the predicted-positive)
  • I know the sum of $x_{11}$ and $x_{12}$, but not their values
  • I do not have access to the ground-truth labels of the elements predicted-negative.

This (suboptimal) setup allows me to measure precision $\frac{x_{22}}{x_{21} + x_{22}}$, while recall stays unknown, as elements predicted negative are not examined at all (ground-truth labels of negatives can't be assigned due to resource constraints).

Information gathered from users about the (true and false) positive elements can be used to feed a retraining loop... but

  1. are there any ""smart"" learning recipes that are expected to make the algorithm improve its overall performance (say, the F1 score for the positive class) in this setting?
  2. what's a meaningful metric to monitor to ensure that the performance of the model is not degrading?* (given the constraint specified here, F1 score is unknown).

Thanks for any hint on how to deal with this!

* One solution could be to continuously monitor the F1 score on a labeled evaluation set, but maybe there's more one can do?

",33032,,33032,,6/2/2020 15:20,6/3/2020 19:02,How do I keep my system (online) learning if I can get ground truth labels only for examples flagged positive?,,1,0,0,,,CC BY-SA 4.0 21563,2,,21560,6/1/2020 17:02,,1,,"

As it's two clauses (""place in bowl"" and ""mix""), I would actually use two separate CD structures. The first one is a PTRANS (not ATRANS, as you don't change ownership, but location); the second one is a bit trickier.

Mix can be paraphrased as stir, or move around. You would typically do this with a spoon or similar implement. I would do it as a PROPEL with the instrument spoon and the object the ingredients. PROPEL, according to Schank (1975) is the ""application of a physical force to an object"".

So the first one is:

(PTRANS
  (ACTOR *you*)
  (OBJECT ""all ingredients"")
  (TO ""bowl""))

And the second is:

(PROPEL
  (ACTOR *you*)
  (OBJECT ""ingredients"")
  (INSTRUMENT ""a spoon"")
  (DIRECTION ""circular""))

It's debatable if you PROPEL the ingredients with the instrument spoon, or you PROPEL the object spoon with the target/location/direction ingredients.

If you draw them in a graphical representation, you can have both ACTs leading off the shared actor '*you*'; or you could have them as separate graphs linked by 'then' or some other indicator of sequentiality.

Schank, Roger (1975) The primitive ACTs of conceptual dependency, Proc of 1975 workshop on theoretical issues in natural language processing, 34-37

",2193,,,,,6/1/2020 17:02,,,,0,,,,CC BY-SA 4.0 21565,1,21568,,6/1/2020 18:49,,2,684,"

From Sutton and Barto's book Reinforcement Learning (Adaptive Computation and Machine Learning series) (p. 99), the following definition for first-visit MC prediction, for estimating $V \sim V_\pi$ is given:

Is determining the reward for each action a requirement of SARSA? Can the agent wait until the end of the episode to determine the reward?

For example, the reward for the game tic-tac-toe is decided at the end of the episode, when the player wins, loses, or draws the match. The reward is not available at each step $t$.

Does this mean then that depending on the task it is not always possible to determine reward at time step $t$, and the agent must wait until the end of an episode? If the agent does not evaluate reward until the end of an episode, is the algorithm still SARSA?

",12964,,2444,,6/1/2020 18:54,6/1/2020 20:49,Can the agent wait until the end of the episode to determine the reward in SARSA?,,1,0,,,,CC BY-SA 4.0 21566,1,,,6/1/2020 19:03,,1,76,"

What are the real applications of hierarchical temporal memory (HTM) in machine learning (ML) these days?

",4446,,2444,,6/1/2020 19:19,6/2/2020 17:52,What are the real applications of hierarchical temporal memory?,,1,0,,,,CC BY-SA 4.0 21567,2,,21566,6/1/2020 19:03,,1,,"

The following figure [1] is inspiring:

It means HTM can be used as a memory unit in different neural networks such as RNN. In the same reference, we can find the following advantages and applications of HTM:

  1. Sequence learning: Being able to model temporally correlated patterns represents a key property of intelligence, as it gives both biological and artificial systems the essential ability to predict the future. It answers the basic question “what will happen next?” based on what it has seen before. Every machine learning algorithm should be able to provide valuable predictions not just based on static spatial information but also grounding it in time.
  2. High-order predictions: Real-world sequences contain contextual dependencies that span multiple time steps, hence the ability to make high-order predictions becomes fundamental. The term “order” refers to Markov's order, specifically the minimum number of previous time steps the algorithm needs to consider in order to make accurate predictions. An ideal algorithm should learn the order automatically and efficiently.
  3. Multiple simultaneous predictions: For a given temporal context, there could be multiple possible future outcomes. With real-world data, it is often insufficient to only consider the single best prediction when information is ambiguous. A good sequence learning algorithm should be able to make multiple predictions simultaneously and evaluate the likelihood of each prediction online. This requires the algorithm to output a distribution of possible future outcomes.
  4. Continual Learning: Continuous data streams often have changing statistics. As a result, the algorithm needs to continuously learn from the data streams and rapidly adapt to changes. This property is important for processing continuous real-time perceptual streams, but has not been well studied in machine learning, especially without storing and reprocessing previously encountered data.
  5. Online learning: For real-time data streams, it is much more valuable if the algorithm can predict and learn new patterns on-the-fly without the need to store entire sequences or batch several sequences together as it normally happens when training gradient-based recurrent neural networks. The ideal sequence learning algorithm should be able to learn from one pattern at a time to improve efficiency and response time as the natural stream unfolds.
  6. Noise robustness and fault tolerance: Real world sequence learning deals with noisy data sources where sensor noise, data transmission errors and inherent device limitations frequently result in inaccurate or missing data. A good sequence learning algorithm should exhibit robustness to noise in the inputs.
  7. No hyperparameter tuning: Learning in the cortex is extremely robust for a wide range of problems. In contrast, most machine-learning algorithms require optimizing a set of hyperparameters for each task. It typically involves searching through a manually specified subset of the hyperparameter space, guided by performance metrics on a cross-validation dataset. Hyperparameter tuning presents a major challenge for applications that require a high degree of automation, like data stream mining. An ideal algorithm should have acceptable performance on a wide range of problems without any task-specific hyperparameter tuning.

One of the applications of HTM is in Anomaly detection. See, for example, this paper Unsupervised real-time anomaly detection for streaming data (2017) which used from a network of HTMs to solve this problem in streaming data.

The implementation of HTM (NuPIC) can help to see more applications of HTM in ML.

",4446,,4446,,6/2/2020 17:52,6/2/2020 17:52,,,,2,,,,CC BY-SA 4.0 21568,2,,21565,6/1/2020 20:39,,1,,"

There are a couple of things to break down here.

The first thing is to correct this:

For example, the reward for the game tic-tac-toe is decided at the end of the episode, when the player wins, loses, or draws the match. The reward is not available at each step $t$.

In a Markov Decision Process (MDP), there is always an immediate reward for each time $t$ from $t=1$ to $t=T$ (terminal state). This is the reward distribution that the algorithms refer to as $R_t$.

It does not matter if nearly all the rewards are $0$. That is still a reward value, even if it is not an interesting or informative one. Some environments will have non-zero rewards on nearly all time steps. Some environments might have zero rewards for every transition apart from one or two important exceptions that define the goal of the agent.

So in tic-tac-toe, if the game has not ended, the reward will be $0$ because neither player has won or lost, and obtaining a win is a learning objective. If you use a $0$ reward value for all incomplete time steps in tic-tac-toe, and the SARSA algorithm as written in Sutton & Barto, then it will work as expected. You don't need to wait until the end of the episode, and the algorithm will still learn to predict values of moves despite experiencing many $r_t = 0$ during the start and middle of the game.

You show the MC prediction algorithm, which does wait until the end of each episode until calculating estimated values, using the experienced return.

I think one of the things that you might want to review is the difference between immediate reward following a timestep (noted $R_{t+1}$) and return (noted $G_t$). There is a relationship between the two $G_t = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}$. All the value based methods, such as MC prediction, or SARSA, are ways to estimate the expected return $\mathbb{E}_{\pi}[G_t]$ given some context, such as the state or state and action at time $t$.
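
For example, in a tic-tac-toe episode where the rewards are $R_1 = 0$, $R_2 = 0$, $R_3 = +1$ (a win on the final move) and $\gamma = 0.9$, the return from the start is $G_0 = 0 + 0.9 \cdot 0 + 0.9^2 \cdot 1 = 0.81$, so even the early, zero-reward steps end up with non-zero value estimates.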

Given that, this should answer your other questions.

Is determining the reward for each action a requirement of SARSA?

It is a general requirement for MDPs, and is expected by all solvers. However, a default of $0$ is fine, if that's how the environment works. The reinforcement learning (RL) algorithms can all cope with delays to important reward values (such as those from winning or losing a game). Bigger delays - e.g. thousands of $0$ reward steps between more meaningful values - are often associated with harder problems to solve, so the simpler algorithms might not be practical. But they all cope in theory given enough experience.

Can the agent wait until the end of the episode to determine the reward?

No, you should always be able to calculate an immediate reward value $r_{t+1}$ after taking each action $a_t$ from state $s_t$.

However, it is OK in general to wait until you have summed up more of those rewards (e.g. $r_{t+1}$ to $r_{t+n}$) before making an estimate of the return $g_t$ to update a value estimate. Single step SARSA (sometimes called SARSA(0)) does not wait - it makes update steps using a single immediate reward value. However, other agent types including Monte Carlo (MC), and some variants of SARSA, can and do wait - sometimes just for some steps, sometimes until the end of each episode.

It is a choice you can make when designing agents, and there are consequences. Waiting until the end of the episode (as in MC) means you have unbiased estimates of value, but it takes longer and leads to higher variability in updates. Using reward values after each step (as in SARSA(0)) means you get to make more frequent updates, which may converge faster, but you start with biased, incorrect values.

",1847,,1847,,6/1/2020 20:49,6/1/2020 20:49,,,,0,,,,CC BY-SA 4.0 21569,1,,,6/1/2020 23:10,,1,76,"

I have the following problem.

I am given a graph with a lot (>30000) nodes. Nodes are associated with a low (<10)-dimensional feature vector, and edges are associated with a low (<10)-dimensional feature vector. In addition, all nodes start out having the color white.

At every time step until completion, I want to select a subset of the nodes in the graph and color them blue. Then I receive a reward based on my coloring. I continue until all nodes are colored blue, and the total reward is the sum (maybe with a gamma factor) of my total rewards.

Do you have suggestions of papers to read where the task was choosing an appropriate subgraph from a larger graph?

Just doing a node classification task using a Graph Convolutional Network doesn't seem to do well, I suspect because, given that a good heuristic for reward is connectivity, it would need to learn to choose an optimal neighborhood in the graph and upweight only that neighborhood.

To contextualize, each of the nodes of the graph represents a constraint that will be sent to an incremental SMT solver, and edges represent shared variables or other relationships between the constraints. I have found empirically that giving these constraints incrementally to the SMT solver when in a good order can be faster than just dumping the entire problem into an SMT solver, since the SMT solver doesn't have the best heuristics for this particular SMT problem. However, eventually, I want to add all the constraints, i.e., color the entire graph. The cost is the amount of time the solver takes on each set, with a reward at the end for completing all the constraints.

",37521,,2444,,7/10/2022 8:36,7/10/2022 8:36,How to learn how to select a subgraph via reinforcement learning?,,0,0,,,,CC BY-SA 4.0 21571,1,,,6/1/2020 23:40,,1,67,"

I have a simple question about model-free reinforcement. In a model I'm writing about, I want to know the value 'gain' we'd get for executing an action, relative to the current state. That is, what will I get if I moved from the current state $s$ taking action $a$.

The measure I want is:

$$G(s, a)=V(s^{\prime})-V(s)$$

where $s'$ is the state that I would transition to if the underlying MDP was deterministic. If the MDP has a stochastic transition function, the model I want is:

$$G(s, a)=\left[\sum_{s' \in S } P(s^{\prime} \mid a, s) V(s^{\prime})\right]-V(s)$$

In a model-free environment, we don't have $P(s' \mid a,s)$.

If we had a Q-function $Q(s,a)$, could we represent $G(s,a)$?

NOTE: This is not the same as an 'advantage function' as first proposed by Baird (Leemon C Baird. Reinforcement learning in continuous time: Advantage updating. In Proceedings of 1994 IEEE International Conference on Neural Networks, pages 448–2453. IEEE, 1994.), which means the advantage of actions relative to the optimal action. What I'm looking for is the gain of actions relative to the current state.

",37522,,37522,,6/2/2020 10:52,6/2/2020 10:52,Calculating the advantage 'gain' of actions in model-free reinforcement learning,,0,3,,,,CC BY-SA 4.0 21573,1,,,6/2/2020 1:29,,1,51,"

The problem I currently have is that I want to train an AI to produce music, like music that contains voices, etc. However, the problem is that with a WAV file, one second of audio can be up to 48,000 inputs, which is extremely detrimental to the AI's learning process and prevents it from really gaining any knowledge about context. I've tried the fast Fourier transform, but the amount of data coming in varies depending on what part of the song I'm training it on, which will not work since I can't know ahead of time what every single time unit of every single song will contain! And the max number of inputs is still 24,000.

Is there any other way of compressing data down in a way that I can give my ai something within the range of a couple hundred inputs for a second or two?

",36808,,,,,6/2/2020 1:29,What is the most compressed audio that I can feed an AI?,,0,0,,,,CC BY-SA 4.0 21576,1,21579,,6/2/2020 9:16,,1,80,"

I am doing an online course on Reinforcement Learning from the University of Alberta. It focuses too much on theory. I am an engineer and I am interested in applying RL to my applications directly.

My question is: is there any website which has sample programs for beginners? Small sample programs. I have seen several websites for other machine learning topics, such as CNNs/RNNs, etc., but the resources for RL are either limited or I couldn't find them.

",36710,,,,,6/2/2020 11:11,Is there any programming practice website for beginners in Reinforcement Learning,,1,2,,6/2/2020 11:31,,CC BY-SA 4.0 21577,1,21582,,6/2/2020 9:40,,1,53,"

I've been reading the paper Reinforcement Knowledge Graph Reasoning for Explainable Recommendation (by Yikun Xian et al.) lately, and I don't understand a particular section:

Specifically, the scoring function $f((r,e)|u)$ maps any edge $(r,e)$ to a real-valued score conditioned on user $u$. Then, the user-conditional pruned action space of state $s_t$ denoted by $A_t(u)$ is defined as:

$A_t(u) = \{(r,e)| rank(f((r,e)|u))) \leq \alpha ,(r,e) \in A_t\} $

where $\alpha$ is a predefined integer that upper bounds the size of the action space.

Details about the scoring function can be found in the attached paper.

What I don't understand is: What does rank mean, here? Is the thing inside of it a matrix?

It would be great if someone could explain the expression for the user conditional pruned action space in greater detail.

",35585,,2444,,12/26/2021 13:34,12/26/2021 13:34,What is meant by the rank of the scoring function here?,,1,0,,,,CC BY-SA 4.0 21579,2,,21576,6/2/2020 11:11,,1,,"

Firstly, since you are a beginner, I strongly recommend you start reading Sutton's book. It is a really great book.

Then, some tutorials:

udemy rl

udemy deep rl

rl-with-tensorflow

learndatasci

stackabuse

",36055,,,,,6/2/2020 11:11,,,,0,,,,CC BY-SA 4.0 21582,2,,21577,6/2/2020 12:33,,1,,"

$\text{rank}(f((r,e)|u))$ in $A_t(u)$ means to compute the value of the scoring function $f$ for all pairs $(r,e)\in A_t$, conditioned on $u$, and then sort them in descending order. The rank of $f((r,e)|u)$ in this ordering is $\text{rank}(f((r,e)|u))$. Hence $\text{rank}(f((r,e)|u)) \leqslant \alpha$ means selecting the $\alpha$ top-scored pairs.
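
A minimal sketch of that pruning step, with hypothetical edges and scores:

import numpy as np

# Hypothetical scores f((r, e) | u) for the edges currently in A_t
edges  = [('purchase', 'item_1'), ('mention', 'word_7'), ('belong_to', 'cat_3'), ('purchase', 'item_9')]
scores = np.array([0.9, 0.1, 0.5, 0.7])
alpha  = 2

# rank(f((r, e) | u)) <= alpha keeps the alpha highest-scoring edges
keep = np.argsort(-scores)[:alpha]
pruned_action_space = [edges[i] for i in keep]
print(pruned_action_space)   # [('purchase', 'item_1'), ('purchase', 'item_9')]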

",4446,,4446,,6/15/2020 10:07,6/15/2020 10:07,,,,0,,,,CC BY-SA 4.0 21583,1,,,6/2/2020 14:10,,2,135,"

I am reading this paper and in algorithm 3 they describe an $n$-step Q-Learning algorithm. Below is the pseudo-code.

From this pseudo-code, it looks as though the final tuples that they visit don't get added to the memory buffer $M$. They define a sample size $T$, but also say in the paper that an episode terminates when $|S| = b$.

This leaves me two questions:

  1. Have I understood the episode termination correctly? It seems from the pseudocode they are just running an episode for $T$ time steps but also in the paper they have a definition for when an episode terminates, so I'm not sure why they would want to truncate the episode size.
  2. As I mentioned, it seems as though the final state $S_T$ that you would be in won't get added to the experience buffer as we only append the $(T-n)$th tuple. Why would you want to exclude the information you get from the final tuples you visit?
",36821,,2444,,6/2/2020 15:48,6/2/2020 15:48,Are the final states not being updated in this $n$-step Q-Learning algorithm?,,0,0,,,,CC BY-SA 4.0 21584,1,21585,,6/2/2020 16:25,,10,2006,"

What is the difference between reinforcement learning (RL) and evolutionary algorithms (EA)?

I am trying to understand the basics of RL, but I do not yet have practical experience with RL. I know slightly more about EAs, but not enough to understand the difference between RL and EA, and that's why I'm asking for their main differences.

",37533,,2444,,1/20/2021 19:11,8/7/2021 12:30,What is the difference between reinforcement learning and evolutionary algorithms?,,1,0,,,,CC BY-SA 4.0 21585,2,,21584,6/2/2020 16:56,,8,,"

Evolutionary algorithms (EAs) are a family of algorithms inspired by the biological evolution that can be used to solve (constrained or not) optimization problems where the function that needs to be optimized does not necessarily need to be differentiable (or satisfy any strong constraint). In EAs, you typically only need to define

  • an encoding of the solution (aka chromosome or individual)
  • a fitness function that determines the relative quality of each solution
  • operations that stochastically change or combine solutions (e.g. the cross-over or the mutation operators, in genetic algorithms)

There are other parameters that you need to define (such as the number of solutions to consider at each generation or the number of generations to run the algorithm for), but these are the three most important things to take into account when attempting to solve an optimization problem with EAs (in particular, GAs).
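
To make those three ingredients concrete, here is a minimal (toy) genetic algorithm for the one-max problem, i.e. maximizing the number of 1s in a bit string:

import random

random.seed(0)

def fitness(bits):                       # relative quality of a candidate solution
    return sum(bits)                     # one-max: count the 1s

def mutate(bits, rate=0.05):             # stochastic change operator
    return [b ^ (random.random() < rate) for b in bits]

def crossover(a, b):                     # combine two solutions
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# Encoding: each individual is a list of 20 bits
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]            # selection of the fittest
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

print(fitness(max(population, key=fitness)))   # close to 20 after a few generations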

Reinforcement learning (RL) is the field that studies how agents can sequentially take actions in a certain environment in order to maximize some notion of long-term reward (aka return). The strategy that determines the behavior of the agent (i.e. which actions the agent takes) is called the policy. So, the goal of RL is to find a policy that maximizes the (expected) return, which depends on the reward function of the environment. For example, in the case of chess, a reward function may be any function that gives you a positive number if you win the game or a negative number if you lose it. The RL algorithms typically assume that the agent is able to interact with the environment in order to understand its dynamics.

RL is thus concerned with a specific type of optimization problem, i.e. finding policies (strategies) that maximize the return, while an agent interacts with an environment in time steps. On the other hand, EAs can be applied to any optimization problem where you can encode solutions, define a fitness function that compares solutions and you can stochastically change those solutions. Essentially, EAs can be applied to almost any optimization problem. In principle, you could use EAs to find policies, as long as you're able to compare them with a fitness function (e.g. the amount of reward that you obtain by following these policies).

Of course, this does not mean that EAs are the most efficient and appropriate approach to solve all optimization problems! You typically use EAs when you need to solve certain problems where better approaches do not exist. For example, when your objective function is not differentiable, then you cannot apply gradient-based solutions, so, in that case, EAs may be a viable option (but there are also other alternatives to EAs, such as simulated annealing).

",2444,,2444,,8/7/2021 12:30,8/7/2021 12:30,,,,0,,,,CC BY-SA 4.0 21586,1,,,6/2/2020 17:21,,2,79,"

I'm trying to understand DQN. I understand where the loss function comes from. I'm just unsure about why the target function works in practice. Given the loss function $$ L_i(\theta_i) = [(y_i - Q(s,a;\theta_i))^2] $$ where $$ y_i = r + \gamma * max_{a'}Q(s',a';\theta_{i-1}) $$ is the target value in the loss function.

From my understanding, experiences are pulled from the replay buffer, then the DQN is used to estimate the future sum of the discounted rewards for the next state (assuming it plays optimally) and adds this onto the current rewards $r$ to create the target value. Then the DQN is used again to estimate $Q$ value for the current state. Then the loss function is just the difference between the target and the estimated $Q$ value for the current state. Afterward, you optimize the loss function.

But, if the parameters of the DQN start off randomly, then surely the target value will be completely wrong since the parameters that define that target function are random. So, if the target function is wrong, then it will minimize the difference between the target value and the estimated value, but it will be learning to predict incorrect values, since the target values are wrong?

I don't understand why the target value works if the parameters of the DQN needed to create that target are completely random.

What obvious mistake am I making here?

",37535,,2444,,6/4/2020 14:41,6/4/2020 14:41,How can the target rely on untrained parameters?,,0,3,,,,CC BY-SA 4.0 21588,1,,,6/2/2020 18:33,,2,61,"

The attention-scoring mechanism seems to be a commonly-used component in various seq2seq models, and I was reading about the original ""Location-based Attention"" in Bahdanau's well-known paper at https://arxiv.org/pdf/1506.07503.pdf. (It seems this attention is used in various forms of GNMT and text-to-speech synthesizers like Tacotron-2, https://github.com/Rayhane-mamah/Tacotron-2).

Even after repeated readings of this paper and other articles about Attention-mechanism, I'm confused about the dimensions of the matrices used, as the paper doesn't seem to describe it. My understanding is:

  • If I have decoder hidden dim 1024, that means ${s_{i-1}}$ vector is 1024 length.

  • If I have encoder output dim 512, that means $h_{j}$ vector is 512 length.

  • If total inputs to encoder is 256, then number of $j$ can be from 1 to 256.

  • Since $W \times s_{i-1}$ is a matrix multiply, it seems $\text{cols}(W)$ should match $\text{rows}(s_{i-1})$, but $\text{rows}(W)$ still remains undefined. The same seems true for matrices $V, U, w, b$.

This is page-3/4 from the paper above that describes Attention-layer:

I'm unsure how to make sense of this. Am I missing something, or can someone explain this?

What I don't understand is:

  • What is the dimension of the previous alignment (denoted by $\alpha_{i-1}$)? Shouldn't it be the total values of $j$ in $h_{j}$ (which is 256 and means the total number of different encoder output states)?

  • What is the dimension of $f_{i,j}$ and the convolution filter $F$? (The paper says $F \in \mathbb{R}^{k \times r}$ but doesn't define $r$ anywhere.) What is $r$ and what does $k \times r$ mean here?

  • How are these unknown dimensions for the matrices $V, U, w, b$ described above determined in this model?

",33580,,33580,,6/2/2020 20:09,6/2/2020 20:09,How to understand the matrices used in the Attention layer?,,0,5,,,,CC BY-SA 4.0 21589,1,,,6/2/2020 18:47,,2,41,"

Apart from its use in word embeddings (e.g word2vec algorithm), are there any other applications of hierarchical softmax? If yes, can you please give me some reference papers?

",36055,,,,,6/2/2020 18:47,What are the applications of hierarchical softmax?,,0,0,,,,CC BY-SA 4.0 21590,1,21619,,6/2/2020 19:08,,0,53,"

To update the Q table, Q-learning takes the arg max of the Q values - the state-value mappings.

For example, in tic-tac-toe, the state XOX OXO -X- contains two available positions, each marked by the - character. In order to evaluate the arg max, should temporal difference learning calculate the arg max over just the available positions?

For the state XOX OXO -X-, should the arg max be taken at indexes 6 and 8 (assuming zero indexing)? If not, then how should the arg max indexes be chosen? The indexes 0, 1, 2, 3, 4, 5, 7 cannot be used, as they have already been taken by X or O, so they should not be evaluated for their value? This also means that indexes which are not available will not have their Q value updated - will this break the Q-learning procedure?

",12964,,34010,,6/3/2020 22:20,6/3/2020 22:20,What is correct update when the some indexes are not available?,,1,0,,,,CC BY-SA 4.0 21592,2,,18026,6/2/2020 21:27,,0,,"

The exhaustive nearest neighbor search performs an exhaustive search for the nearest neighbor (i.e. the closest image or matches, depending on your application). Here an exhaustive search means that you will compare your query image with every other image, or, in the case of feature matching, you will compare every feature of the query image with every feature of the target/reference image.

The $k$-nearest neighbor search finds the $k$ closest images. You typically do that by first building a KD-tree (over the images or features you are searching), so as to speed up the search for the $k$-nearest neighbours of your query image.

(Btw, in case you are interested in the concept of feature matching, in OpenCV, the class that you can use to perform an exhaustive search for the matches between two images is the BFMatcher (which stands for Brute Force Matcher), while the class to perform the search with a KD-tree for the k-nearest neighbours is FlannBasedMatcher).
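
A minimal OpenCV sketch of the two approaches (assuming SIFT descriptors and hypothetical image file names):

import cv2

img1 = cv2.imread('query.jpg', cv2.IMREAD_GRAYSCALE)       # hypothetical file names
img2 = cv2.imread('reference.jpg', cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Exhaustive (brute-force) matching: every descriptor is compared with every other one
bf_matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)

# k-nearest-neighbour matching backed by a KD-tree (FLANN_INDEX_KDTREE = 1)
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
knn_matches = flann.knnMatch(des1, des2, k=2)   # the 2 closest matches per query descriptor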

",2444,,2444,,6/2/2020 22:11,6/2/2020 22:11,,,,0,,,,CC BY-SA 4.0 21593,1,,,6/3/2020 0:19,,14,1341,"

Say we have a machine and we give it a task to do (vision task, language task, game, etc.), how can one prove that a machine actually know's what's going on/happening in that specific task?

To narrow it down, some examples:

Conversation - How would one prove that a machine actually knows what it's talking about or comprehending what is being said? The Turing test is a good start, but never actually addressed actual comprehension.

Vision: How could someone prove or test that a machine actually knows what it's seeing? Object detection is a start, but I'd say it's very inconclusive that a machine understands at any level what it is actually seeing.

How do we prove comprehension in machines?

",22840,,2444,,6/3/2020 2:52,6/4/2020 10:05,How does one prove comprehension in machines?,,2,1,,,,CC BY-SA 4.0 21594,1,,,6/3/2020 2:42,,-1,347,"

I am having trouble making a reinforcement algorithm than can win the 2048 game.

I have tried with deep Q (which I think is the simplest algorithm that should be able to learn a winning strategy).

My Q function is given by a NN of two hidden layers, 16 -> 8 -> 4. Weight initialization is Xavier. The activation function is ReLU. The loss function is the quadratic loss. Correction is via gradient descent.

To train the NN I used a reward given by :

$$r_t = \frac{1}{1024} \sum_{i=0}^{n}{p^i r_{((t-n)+i)}}$$

Where n is 20 or the amount of iterations since the last update if a game is lost and $p = 1.4$.

There is an epsilon for discovery, set at 100% at the start and it decreases by 10% until it reaches 1%.

I have tried to optimize the parameters but can't get better results than a ""256"" tile on the board. And the quadratic loss seems to get stuck at 0.25:

Is there something I am missing?

Code:


public enum GameAction {
    UP, DOWN, LEFT, RIGHT
}

public final class GameEnvironment {

    public final int points;
    public final boolean lost;
    public final INDArray boardState;

    public GameEnvironment(int points, boolean lost, int[] boardState) {
        this.points = points;
        this.lost = lost;
        this.boardState = new NDArray(boardState, new int[] {1, 16}, new int[] {16, 1});
    }
}

public class SimpleAgent {
    private static final Random random = new Random(SEED);

    private static final MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
            .seed(SEED)
            .weightInit(WeightInit.XAVIER)
            .updater(new AdaGrad(0.5))
            .activation(Activation.RELU)
            .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
            .weightDecay(0.0001)
            .list()
            .layer(new DenseLayer.Builder()
                    .nIn(16).nOut(8)
                    .build())
            .layer(new OutputLayer.Builder()
                    .nIn(8).nOut(4)
                    .lossFunction(LossFunctions.LossFunction.SQUARED_LOSS)
                    .build())
            .build();
    MultiLayerNetwork Qnetwork = new MultiLayerNetwork(conf);

    private GameEnvironment oldState;
    private GameEnvironment currentState;
    private INDArray oldQuality;

    private GameAction lastAction;

    public SimpleAgent() {
        Qnetwork.init();
        ui();
    }

    public void setCurrentState(GameEnvironment currentState) {
        this.currentState = currentState;
    }

    private final ArrayList<INDArray> input = new ArrayList<>();
    private final ArrayList<INDArray> output = new ArrayList<>();
    private final ArrayList<Double> rewards = new ArrayList<>();

    private int epsilon = 100;

    public GameAction act() {
        if(oldState != null) {
            double reward = currentState.points - oldState.points;

            if (currentState.lost) {
                reward = 0;
            }

            input.add(oldState.boardState);
            output.add(oldQuality);
            rewards.add(reward);

            if (currentState.lost || input.size() == 20) {
                // Compute a forward-looking discounted sum of rewards for each stored step
                for(int i = 0; i < rewards.size(); i++) {
                    double discount = 1.4;
                    double discountedReward = 0;

                    for(int j = i; j < rewards.size(); j++) {
                        discountedReward += rewards.get(j) * Math.pow(discount, j - i);
                    }

                    // Normalise by 1024 (the 1/1024 factor in the reward formula above)
                    rewards.set(i, lerp(discountedReward, 1024));
                }

                ArrayList<DataSet> dataSets = new ArrayList<>();

                for(int i = 0; i < input.size(); i++) {
                    INDArray correctOut = output.get(i).putScalar(lastAction.ordinal(), rewards.get(i));

                    dataSets.add(new DataSet(input.get(i), correctOut));
                }

                Qnetwork.fit(DataSet.merge(dataSets));

                input.clear();
                output.clear();
                rewards.clear();
            }

            epsilon = Math.max(1, epsilon - 10);
        }

        oldState = currentState;
        oldQuality = Qnetwork.output(currentState.boardState);

        GameAction action;


        // Epsilon-greedy: exploit (argmax of predicted Q-values) with probability (100 - epsilon)%,
        // otherwise pick a uniformly random action
        if(random.nextInt(100) < 100-epsilon) {
            action = GameAction.values()[oldQuality.argMax(1).getInt()];
        } else {
            action = GameAction.values()[new Random().nextInt(GameAction.values().length)];
        }

        lastAction = action;

        return action;
    }

    private static double lerp(double x, int maxVal) {
        return x/maxVal;
    }

    private void ui() {
        UIServer uiServer = UIServer.getInstance();
        StatsStorage statsStorage = new InMemoryStatsStorage();
        uiServer.attach(statsStorage);
        Qnetwork.setListeners(new StatsListener(statsStorage));
    }
}
",14892,,14892,,6/3/2020 2:54,6/13/2020 7:50,Help with deep Q learning for 2048 game getting stuck,,2,6,,,,CC BY-SA 4.0 21595,2,,21593,6/3/2020 2:45,,14,,"

This is one of the most important issues in the philosophy of artificial intelligence.

The most famous philosophical argument that attempts to address this issue is the Chinese Room argument published by the philosopher John Searle in 1980.

The argument is quite simple. Suppose that you are inside a room and you need to communicate (in a written form) with people outside the room in a certain language that you do not understand (in the particular example given by Searle, Chinese), but you are given the rules to manipulate the characters of this language (for a given input, you have the rules to produce the correct output). If you follow these rules, to the people outside the room, it will seem as if you understand this language, but you don't.

To be more concrete, when I say ""apple"", you understand that it refers to a specific fruit because you have eaten apples and you have a model of the world. That's understanding, according to Searle.

The most famous mathematical model of computers, the Turing machine, is essentially a system that manipulates symbols, so the Chinese Room argument directly applies to computers.

Many replies or counterarguments to the CR argument have been discussed, such as

  • the system reply (the symbol manipulator is only a part of the larger system).
  • the robot reply (the symbol manipulator does not understand the meaning of the symbols because it has not experienced the associated real-world objects, so it suggests that understanding requires a body with sensors and controllers)
  • the brain simulator reply (the symbol manipulator can actually simulate the activity in the brain of a person that understands the unknown language)

So, can we prove that machines really understand? Even before Searle, Turing had already asked the question ""Can machines think?"". To prove this, you need a rigorous definition of understanding and thinking that people agree on. However, many people do not want to agree on a definition of intelligence and understanding (hence the many counterarguments to the CR argument). So, if you want to prove that machines understand, you need to provide a proof with respect to a specific definition of understanding. For example, if you think that understanding is just a side effect of symbol manipulation, you can easily prove that machines understand many concepts (it just follows from the definition of a Turing machine). However, even if understanding was just a side effect (what does a side effect actually mean in this case?) of symbol manipulation, would a machine be able to understand the same concepts and in the same way that humans understand? It's harder to answer this question because we really do not know if humans only manipulate symbols in our brains.

",2444,,2444,,6/3/2020 20:39,6/3/2020 20:39,,,,1,,,,CC BY-SA 4.0 21596,1,,,6/3/2020 7:51,,3,44,"

There are libraries for recognizing individual video frames, but I need to recognize an object in motion. I can recognize a person in every single frame, but I need to know if the person is running or waving. I can recognize a tree in every single frame, but I need to find out if the tree is swaying in the wind. I can recognize a wind turbine in every frame, but I need to know if it's spinning right now.

So the question is: do technologies, libraries, concepts, or algorithms exist for recognizing objects over a certain period of time? For example, I have a series of frames, each of which contains a person, and I need to find out whether the person is walking or waving their hands.

",37545,,,,,6/3/2020 7:51,"Video recognition (specifically video, not individual frames)",,0,0,,,,CC BY-SA 4.0 21597,1,21602,,6/3/2020 8:50,,3,2017,"

Should I use minimax or alpha-beta pruning (or both)? Apparently, alpha-beta pruning prunes some parts of the search tree.

",37258,,2444,,6/3/2020 22:46,2/25/2021 11:21,Should I use minimax or alpha-beta pruning?,,2,0,,,,CC BY-SA 4.0 21599,2,,18567,6/3/2020 9:01,,0,,"

After some research on the subject, I found a solution to my problem of steady state errors in continuous control using DDPG:

I added a reward component based on the integral of the error. This component yields the maximum reward when the error integral is zero, and lower rewards (down to 0) as the magnitude of the integral grows.

This integral component of the reward was found to be very effective, but it introduces some overshoot after set-point changes. By limiting the integral to quite small values, this behavior could be overcome.
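To make the idea concrete, here is a minimal sketch of such a reward term (the names and constants are illustrative, not the exact code from the thesis):

INTEGRAL_LIMIT = 0.5   # clamping the integral to small values limits the overshoot

def update_integral(error_integral, error, dt):
    # accumulate the tracking error over time and clamp it (anti-windup)
    error_integral += error * dt
    return max(-INTEGRAL_LIMIT, min(INTEGRAL_LIMIT, error_integral))

def shaped_reward(error, error_integral):
    # the integral component is maximal (1) when the integral is zero and decays to 0 as it grows
    integral_component = 1.0 - abs(error_integral) / INTEGRAL_LIMIT
    return -abs(error) + integral_component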

All these findings are detailed in the ""Reward Engineering"" section of my master's thesis. Please have a look at https://github.com/opt12/Markov-Pilot/tree/master/thesis

I'll be glad to get feedback on it.

Regards, Felix

",25972,,,,,6/3/2020 9:01,,,,0,,,,CC BY-SA 4.0 21600,2,,17774,6/3/2020 9:05,,1,,"

After some research on the subject, I found a possible solution to my problem of high frequency oscillations in continuous control using DDPG:

I added a reward component based on the actuator movement, i.e. the delta of actions from one step to the next.

Excessive action changes are now punished, and this mitigates the tendency to oscillate. The solution is not really perfect, but it works for the moment.
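As a minimal sketch (the coefficient and names are illustrative, not the exact code from the thesis):

DELTA_PENALTY = 0.1   # illustrative weight for the action-change punishment

def shaped_reward(base_reward, action, previous_action):
    # punish large changes of the actuator command from one step to the next
    return base_reward - DELTA_PENALTY * abs(action - previous_action)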

This finding is detailed in the ""Reward Engineering"" section of my master's thesis. Please have a look at https://github.com/opt12/Markov-Pilot/tree/master/thesis

I'll be glad to get feedback on it. And I'll be glad to hear better solutions than adding a delta-punishment.

Regards, Felix

",25972,,,,,6/3/2020 9:05,,,,0,,,,CC BY-SA 4.0 21601,2,,21593,6/3/2020 9:22,,5,,"

I recently came across a neat definition of understanding in Roger Schank's Dynamic Memory:

Basically, you store everything you experience in your memory, but you need to index it in order to be able to use it for processing. Obviously, all experiences are slightly different, eg going to a restaurant is broadly the same, but the details vary. So you need to abstract away the details and store those only if necessary (eg if the food or service was particularly good or bad). Otherwise you just store a general template (or 'script') of the event.

In your memory (note: this is modeled, not neurologically correct) you thus have a whole set of event scripts that you can retrieve. So currently I would be accessing my reply-to-stack-exchange-question script to guide me how to best write this answer without getting downvoted for ludicrous claims etc.

Understanding, then, would be to receive (through sensory input, or language) an event, and putting it into the right area in your memory. So if I told you I just went to Burger King, you would understand it when this activates your fast-food-restaurant memory set. If I then told you I went there to wipe the floor, it should instead activate cleaning-job, rather than fast-food-restaurant. So you understand the sequence ""I went to Burger King to clean the floor"" by linking it to the correct memory region. If a computer then responded with ""What did you eat?"" it would clearly not have understood the input. But a response of ""Do you get free food for working there?"" would indicate some level of comprehension/understanding, as it might recognise that people working in food outlets might get free food as a work-related benefit.

If you experience something completely new, you recognise it as a new experience, and start a new cluster of experiences. For example, suppose you have been to restaurants before, but never to fast-food ones. The first time it will be strange and different, but you remember it as differences to the existing restaurant script. Over time it becomes strong enough (assuming you go to more fast-food restaurants), and it will become its own area, still linked to restaurants, but also not quite the same.

What I like about this is that it is a generic mechanism, rather than an explicit processing of content. It is based on learning and experience, which I believe are key aspects of intelligent behaviour.

UPDATE: This answer is more concerned with trying to find a workable definition of what it means to comprehend something, rather than trying to operationalise it in a dialogue system. You can probably pass the Turing test with some clever tricks, without any comprehension at all. But the point is, what does it mean to understand something? And in the current definition it means to classify related events together, and to recognise similarities and differences between similar experiences. The reaction (ie a response) is not the understanding itself, but only a reflection of the internal state that would demonstrate understanding.

The difference to a neural network is, I would guess, that it can cope with a broad range of experiences, where a NN would need vast amounts of training data (as it doesn't comprehend). Comprehension involves compression of information through abstraction and evaluating differences. This is still a hard problem, and I'd think difficult to achieve just with automated machine learning.

UPDATE 2: With regards to the Turing Test, in a way it goes back to deep philosophical points about empiricism. How do you know the world around you exists? You can see it. But how do you know your eyes tell you the true picture? You can quickly descend into a Matrix-like scenario where you don't know anything for certain.

The Turing Test is a proxy for showing understanding. You don't know the computer understands what you say, so you observe its responses and interpret them accordingly. Just like at school: the teachers asks a question, and from the pupils' answers infers whether they show understanding. If you simply regurgitate a memorised answer, that's not understanding. If you paraphrase in different words, that shows some sort of comprehension. If you draw analogies to similar issues and analyse why and how they are distinct, now there you show that you really get it.

Because we cannot inspect the internal state of a pupil, we cannot measure objectively whether they understood something. We only have communication as an interface between our mind and theirs, and so far chatbots have focused on getting that right. But I think what we really need is to work on memory and memory processing to get further towards comprehension or understanding. And I say this as a computational linguist who specialises in the language parts...

",2193,,2193,,6/4/2020 10:05,6/4/2020 10:05,,,,1,,,,CC BY-SA 4.0 21602,2,,21597,6/3/2020 10:35,,7,,"

Both algorithms should give the same answer. However, their main difference is that alpha-beta does not explore all paths, like minimax does, but prunes the branches that are guaranteed not to influence the decision of the current player (max or min). So, alpha-beta is a more efficient implementation of minimax.

Here are the time complexities of both algorithms

  • Minimax: $\mathcal{O}(b^d)$,
  • Alpha-beta (best-case scenario): $\mathcal{O}(b^{d/2}) = \mathcal{O}(\sqrt{b^d})$

where $b$ is an average branching factor with a search depth of $d$ plies.
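For reference, a minimal sketch of minimax with alpha-beta pruning (the state interface with is_terminal, evaluate and children is hypothetical):

def alphabeta(state, depth, alpha, beta, maximizing):
    # Returns the same value as plain minimax, but cuts off branches that
    # cannot influence the final decision.
    if depth == 0 or state.is_terminal():
        return state.evaluate()
    if maximizing:
        value = float('-inf')
        for child in state.children():
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break   # beta cut-off: the min player already has a better option elsewhere
        return value
    else:
        value = float('inf')
        for child in state.children():
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break   # alpha cut-off
        return value

# Typical initial call: alphabeta(root, max_depth, float('-inf'), float('inf'), True)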

",36055,,2444,,6/4/2020 11:19,6/4/2020 11:19,,,,0,,,,CC BY-SA 4.0 21603,1,,,6/3/2020 11:23,,1,115,"

Is there any good tutorials about training reinforcement learning agent from raw pixels using PyTorch?

I don't understand the official PyTorch tutorial. I want to train the agent on the Atari Breakout environment. Unfortunately, I failed to train the agent on the RAM version. Now, I am looking for a way to train the agent from raw pixels.

",36107,,2444,,6/3/2020 22:41,6/5/2020 6:12,Are there any good tutorials about training RL agent from raw pixels using PyTorch?,,2,0,,,,CC BY-SA 4.0 21604,1,,,6/3/2020 12:13,,0,158,"

I have an excel sheet filled with my own personal appreciations of movies I've watched, and I want to use it to train an AI model so that it can predict if I'll like a specific movie or not, based on the ones I've already seen.

My data is formatted as follows (this is just a sample; the spreadsheet is filled with hundreds of movies):

And I would like to use all the columns to train my model. Because I am going to say whether I liked the movie or not, I know it will be supervised learning. I have already cleaned the data, so there is no blank or missing data, but I do not know how to train my model using every column.

If required, I can be more specific on something, just ask and I'll edit the post.

",32862,,,,,10/28/2022 0:00,Train a model using a multi-column text-filled excel sheet,,1,1,,,,CC BY-SA 4.0 21605,2,,21603,6/3/2020 12:16,,1,,"

I am using Jetson reinforcement for my quadcopter reinforcement learning in simulation. Maybe it will help you, because you can create AI agents that learn from an interactive environment, gather experience, and use a reward system with deep RL. You can also use end-to-end neural networks that translate raw pixels into actions, as per your need, and use that RL-trained agent to complete complex tasks.

The best thing is that you can easily transfer an RL agent trained in the simulator to real-world robots. We are using multiple Jetson Nanos to perform the complex tasks of navigation and coordination for our quadcopter.

Webinar of Deep RL on Jetson

Google group to get help for Deep RL on jetson nano

Hope it helps you.

",32861,,32861,,6/5/2020 6:12,6/5/2020 6:12,,,,1,,,,CC BY-SA 4.0 21606,1,21640,,6/3/2020 12:22,,2,77,"

If a neural network has a limited number of parameters to find (let's say only 1000 parameters), is it generally better to spend the parameters on weights or on neuron biases?

For example, if each neuron has 2 weights and one bias, it uses 3 parameters per neuron, so only 333 neurons would be available.

But if each neuron uses no bias parameter, then 500 neurons are available with 1000 parameters.

I'm concerned about overfitting by using too many parameters, so I want to minimize the number of parameters while maximizing the quality of the result.

",37558,,,,,6/4/2020 17:26,Is better to spend parameters on weights or bias?,,1,3,,,,CC BY-SA 4.0 21607,1,,,6/3/2020 12:25,,1,46,"

Deep belief networks (DBNs) are generative models, where, usually, you sample by thermalising the deepest layer (as it's a restricted Boltzmann machine), and then forward propagating a sample towards the visible layer to get a sample from the learned distribution.

This is less flexible sampling than in a single-layer DBN, i.e. a restricted Boltzmann machine. There, we can start our sampling chain at any state we want, and get samples ""around"" that state. In particular, we can clamp some visible nodes $\{v_i\}$ and get samples from the conditional probability $p(v_j|\{v_i\})$.

Is there a way to do something similar in DBNs? When we interpret the non-RBM layers as RBMs by removing directionality, can we treat it as a deep Boltzmann machine and start sampling at e.g. a training example again?

",37557,,37557,,6/4/2020 11:57,6/4/2020 11:57,How do I sample conditionally from deep belief networks?,,0,0,,,,CC BY-SA 4.0 21608,2,,21603,6/3/2020 12:55,,1,,"

I have always found that Towards Data Science articles can be a good source of code for these types of problems. They usually have a Git repo with everything you need and walk through the less obvious steps in the article. This article may be of interest to you.

",36821,,,,,6/3/2020 12:55,,,,1,,,,CC BY-SA 4.0 21610,1,21611,,6/3/2020 15:50,,2,125,"

In the RL textbook by Sutton & Barto section 7.4, the author talked about the ""True online TD($\lambda$)"". The figure (7.10 in the book) below shows the algorithm.

At the end of each step, $V_{old} \leftarrow V(S')$ and also $S \leftarrow S'$. When we jump to the next step, $\Delta \leftarrow V(S') - V_{old}$, which then seems to be $V(S') - V(S') = 0$. It seems that $\Delta$ is always going to be 0 after step 1. If that is true, it does not make any sense to me. Can you please elaborate on how $\Delta$ is updated?

",36120,,2444,,6/4/2020 11:38,6/4/2020 11:38,How is $\Delta$ updated in true online TD($\lambda$)?,,1,0,,,,CC BY-SA 4.0 21611,2,,21610,6/3/2020 16:21,,4,,"

Let us denote the state we are in at time $t$ by $S_t$. Then at iteration $t$ we create a placeholder $V_{old} = V(S_{t+1})$ for the state we will transition into. We then update the value function $V(s) \; \forall s \in \mathcal{S}$ - i.e. we update the value function for all states in our state space. Let us denote this updated value function by $V'(S)$.

At iteration $t+1$ we calculate $\Delta = V'(S_{t+1}) - V_{old} = V'(S_{t+1}) - V(S_{t+1})$, which does not necessarily equal 0 because the placeholder $V_{old}$ was created using the value function before the last update.
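Here is a tiny runnable illustration of that ordering only (it deliberately omits the TD-error and eligibility-trace computations of the full algorithm):

V = {'s1': 0.0, 's2': 0.0}    # toy value table
S_next = 's2'

V_old = V[S_next]             # placeholder taken BEFORE this step's update
V[S_next] += 0.1              # stand-in for the update that touches all states
# --- next iteration ---
delta = V[S_next] - V_old     # 0.1, not 0: V[S_next] changed, the placeholder did not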

",36821,,36821,,6/3/2020 18:41,6/3/2020 18:41,,,,1,,,,CC BY-SA 4.0 21614,1,,,6/3/2020 18:33,,1,39,"

I'm working on a time series forecasting task, and, in some specific cases, I don't need perfect accuracy, but the network cannot, by any means, miss by a lot. So, even at the expense of a larger mean error, I want to have fewer big mistakes.

Any suggestions of loss functions or other methods to solve this issue?

",37572,,2444,,6/3/2020 22:37,6/3/2020 22:37,What are some good loss functions used to minimize extreme errors in regression and time series forecasting?,,0,1,,,,CC BY-SA 4.0 21615,2,,21562,6/3/2020 19:02,,3,,"

A first question that I think is important to consider is: do you expect the data that you're dealing with to be changing over time (i.e. do you expect there to be concept drift)? This could be any kind of change. Simply changes in how frequent certain inputs are, changes in how frequent positives/negatives are, or even changes in relations between inputs and ground-truth positive/negative labels.

If you do not expect there to be concept drift, I'd almost consider suggesting that you may not have that big of a problem. It might be worth not doing anything at all with the data you receive online, and just sticking to what you learned initially from offline data. Or you could try to use those few extra predicted-positive samples that you get for finetuning. You'd just have to be careful not to change your model too much based on this, because you know that you're not receiving a representative sample of all the data here anymore, so you might bias your model if you pay too much attention to only this online data relative to the offline data.


I guess the question becomes much more interesting if you do expect there to be concept drift, and it also seems likely that you are indeed dealing with this in most of the situations that would match the problem description. In this case, you will indeed want to make good use of the new data that you get online, because it can allow you to adapt to changes in the data that you're dealing with.

So, one ""solution"" could be to just... ignore the problem that you're only learning online from a biased sample of all your data (only from the predicted-positives), and just learn anyway. This might actually not perform too badly. Unless your model is really incredibly good already, you'll likely still get false positives, and so also still be able to learn from some of those -- you're not learning exclusively from positives. Still, the false positives won't be representative of all the negatives, so you still have bias.

The only better solution I can think of is relaxing this assumption:

Once deployed, humans review and check for correctness only items predicted positive by the algorithm.

You can still have the humans focus on predicted positives, but maybe have them inspect a predicted-negative also sometimes. Not often, just a few times. You can think of this as doing exploration like you would in reinforcement learning settings. You could do it randomly (randomly pick predicted negatives with some small probability), but you could also be smarter about it and explicitly target exploration of instances that your model is ""unsure"" about, or instances that are unlike data you've seen before (to specifically target concept drift).

I have a paper about something very similar to this right here: Adapting to Concept Drift in Credit Card Transaction Data Streams Using Contextual Bandits and Decision Trees. Here the assumption is that we're dealing with (potentially fraudulent) transactions, of which we can pick out and manually inspect a very small sample online. The only real difference in this paper is that we assumed that different transactions also had different monetary ""rewards"" for getting correctly caught as positives, based on the transaction amount. So a transaction of a very high amount could be worth inspecting even if we predicted a low probability of being fraudulent, whereas a transaction of a very low amount might be ignored even if it had a higher predicted probability of being fraudulent.


What's a meaningful metric to monitor to ensure that the performance of the model is not degrading? (Given the constraint specified here, the F1 score is unknown.)

Having a labelled evaluation set for this could be useful if possible... but it also might not be representative if concept drift is expected to be a major issue in your problem setting (because I suppose that the concept drift that you deal with online would not be reflected in an older, labelled evaluation set).

Just keeping track of things that you can measure online, like precision, and how it changes over time, could be useful enough already. With some additional assumptions, you could get rough estimates of other metrics. For instance, if you assume that the ratio $\frac{TP + FN}{FP + TN}$ between ground-truth-positives and ground-truth-negatives remains constant (remains the same as it was in your offline, labelled data), you could also try to extrapolate approximately how many positives you've missed out on. If your precision is dropping over time (your true positives are getting lower), you know -- assuming that fraction remains constant -- that your false negatives somewhere else in the dataset must be growing by approximately the same absolute number.

",1641,,,,,6/3/2020 19:02,,,,0,,,,CC BY-SA 4.0 21617,1,,,6/3/2020 20:22,,1,38,"

The Kantorovich-Rubinstein duality for the optimal transport problem implies that the Wasserstein distance between two distributions $\mu_1$ and $\mu_2$ can be computed as (equation 2 in section 3 in the WGAN paper)

$$W(\mu_1,\mu_2)=\underset{f\in \text{1-Lip.}}{\sup}\left(\mathbb{E}_{x\sim \mu_1}\left[f\left(x\right)\right]-\mathbb{E}_{x \sim \mu_2}\left[f\left(x\right)\right]\right).$$

Under what conditions can one find the optimal $f$ that achieves the maximum? Is it possible to have an analytical expression for $f$ that achieves the maximum in such scenarios?

Any help is deeply appreciated.

",28286,,2444,,1/25/2021 19:00,1/25/2021 19:00,Under what conditions can one find the optimal critic in WGAN?,,0,0,,,,CC BY-SA 4.0 21618,1,,,6/3/2020 20:55,,1,228,"

Can we solve an $8 \times 8$ sliding puzzle using a random-restart hill climbing technique (steepest-ascent)? If yes, how much computing power will this need? And what is the maximum $n \times n$ that can be solved normally (e.g. with a Google's colab instance)?

",37575,,37575,,6/4/2020 0:33,6/4/2020 0:33,Can we solve an $8 \times 8$ sliding puzzle using hill climbing?,,0,8,,,,CC BY-SA 4.0 21619,2,,21590,6/3/2020 22:13,,3,,"

What you are referring to as the situation where

some indexes are not available

is simply the situation where some actions are not available/valid in some state. So, yes, the ${\arg \max }$ will be calculated based only on the available actions in that state. More formally, $$\underset{a \in \mathcal{A}(s)}{\arg \max } \, Q(s, a)$$

where $Q(s,a)$ has been initialized for all $s \in \mathcal{S}^{+}$ and $a \in \mathcal{A}(s)$ and $\mathcal{A}(s)$ is defined as the set of all actions available in state $s$. See Sutton and Barto's Intro to RL book, chapter 6 (the part on Q-learning).

In the same book (chapter 3), the authors state that:

To simplify notation, we sometimes assume the special case in which the action set is the same in all states and write it simply as $\mathcal{A}$.

Therefore, in many sources, you may see the $\arg \max$ expressed as $\underset{a \in \mathcal{A}} {\arg \max}$ or even $\underset{a} {\arg \max}$. It's implicit that it is taken with respect to the set of actions $\mathcal{A}(s)$ available in state $s$.
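In code, this is typically implemented by masking the unavailable actions before taking the $\arg \max$; here is a small NumPy sketch (the names are just for illustration):

import numpy as np

def greedy_action(q_values, available_actions):
    # q_values: 1-D array over the full action set
    # available_actions: indices of the actions valid in the current state
    masked = np.full_like(q_values, -np.inf, dtype=float)
    masked[available_actions] = q_values[available_actions]
    return int(np.argmax(masked))

# e.g. greedy_action(np.array([1.0, 5.0, 3.0]), available_actions=[0, 2]) returns 2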

",34010,,,,,6/3/2020 22:13,,,,0,,,,CC BY-SA 4.0 21620,1,,,6/3/2020 23:57,,1,330,"

In the appendix of Representation Learning with Contrastive Predictive Coding, van den Oord et al. prove that optimizing InfoNCE is equivalent to maximizing the mutual information between the input image $x_t$ and the context latent $c_t$, as follows:

where $x_{t+k}$ is the image at time step $t+k$, $X_{neg}$ is a set of negative samples that do not appear in the sequence $x_t$ belongs to, and $N-1$ is the number of negative samples used to compute InfoNCE.

I'm confused about Equation $(8)$. van den Oord et al. stressed that Equation $(8)$ becomes more accurate as $N$ increases, but I cannot see why. Here's my understanding: for $x_j\in X_{neg}$, we have $p(x_j|c_t)\le p(x_j)$. Therefore, $\sum_{x_j\in X_{neg}}{p(x_j|c_t)\over p(x_j)}\le N-1$, and this does not become more accurate as $N$ increases. In fact, I think the gap between the left and right sides of $\le$ increases as $N$ increases. Am I making a mistake somewhere?

",8689,,-1,,6/17/2020 9:57,12/2/2022 1:04,Confusion about the proof that optimizing InfoNCE equals to maximizing mutual information,,1,0,,,,CC BY-SA 4.0 21628,1,21629,,6/4/2020 3:35,,6,2052,"

I am reading Sutton and Barto's book on reinforcement learning. I thought that reward and return were the same things.

However, in Section 5.6 of the book, 3rd line, first paragraph, it is written:

Whereas in Chapter 2 we averaged rewards, in Monte Carlo methods we average returns.

What does it mean? Are rewards and returns different things?

",36710,,2444,,6/4/2020 11:14,6/5/2020 17:34,Is there any difference between reward and return in reinforcement learning?,,2,0,,,,CC BY-SA 4.0 21629,2,,21628,6/4/2020 4:22,,5,,"

Return refers to the total discounted reward, starting from the current timestep.
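In Sutton and Barto's notation, the reward $R_{t+1}$ is the single scalar received after one time step, whereas the return is the (discounted) sum of all the rewards from that point on:

$$G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \dots = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1},$$

where $\gamma \in [0, 1]$ is the discount factor (for episodic tasks, the sum stops at the terminal time step).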

",35585,,,,,6/4/2020 4:22,,,,0,,,,CC BY-SA 4.0 21632,1,,,6/4/2020 7:18,,3,463,"

I have recently encountered the ""Attention Is All You Need"" paper on NLP. It is very new to me and I am still unable to see how it works. I have used all the resources out there, from the original paper to YouTube videos and the very famous ""Illustrated Transformer"".

Suppose I have a training example of ""I am a student"" and I have the respective French as ""Je suis etudient"".

I want to know how these 3 words are converted to 4 words. What are the query, keys, values?

This is my understanding of the topic so far.

The encoder part is:

  • Query: a single word embedded in vector form, such as ""I"" expressed as a vector of length 5, e.g. $[0.2, 0.1, 0.4, 0.9, 0.44]$.

  • Keys: the matrix of all the vectors or in simple words, a matrix that has all the words from a sentence in the form of embeddings.

  • Values = Keys

For decoder:

  • Query: the input word in the form of a vector (which is output given by the decoder from the previous pass).

  • Keys = values = outputs from the encoder's layers.

BUT there are 2 different attention layers in the decoder, one of which does not use the encoder's output at all. So, what are the keys and values there? (I think they are just like in the encoder, but computed only over the tokens generated up to that pass?)

",36062,,2444,,11/20/2020 12:55,12/10/2022 18:02,"What are the keys and values of the attention model for the encoder and decoder in the ""Attention Is All You Need"" paper?",,1,0,,,,CC BY-SA 4.0 21637,2,,7926,6/4/2020 16:06,,0,,"

You guys are missing the point completely.

Everything I read assumes that being is based on science. You are trying to fit the reality of being into a brain, that is, into a machine. You know that you exist. A billion sensors of every imaginable type cannot equate to reality. Even with a billion more neurons than what's in the brain.

It's like the mathematics you use. You believe you can be accurate, when the truth is that you cannot be. The position between 2 numbers can never be exact; they are infinite.

It's like time. You believe it to be organic, when the truth is that there is only the now.

Try as you may, you will never fit being into an algorithm. The real illusion is that you believe you can.

",37600,,,,,6/4/2020 16:06,,,,0,,,,CC BY-SA 4.0 21639,1,,,6/4/2020 17:05,,1,93,"

Most of the traditional machine learning algorithms need a feature vector of a constant dimension to predict the label.

Which algorithms can be used to predict a class label with a shorter or partial feature vector?

For example, consider a search engine. In search engines, when the user types a few letters, the search engine predicts the context of the query and suggests more queries to user.

Similarly, how can I predict a class label with an incomplete feature vector? I know one way is to pad the sequence, but I want a better solution.

",29645,,2444,,6/5/2020 23:55,6/5/2020 23:55,How can I predict the label given a partial feature vector?,,0,4,,,,CC BY-SA 4.0 21640,2,,21606,6/4/2020 17:26,,1,,"

First of all, your estimates are a bit off. If you have 300 neurons, you won't have just 2 weights per neuron, but many more, assuming full connectivity.

Bias isn't just an extra parameter to fit; it is an important adjustable parameter that sets the offset of the separating hyperplane represented by each neuron. Think of the simple equation $ax+b$: there's no way to shift the line unless you use the $b$ (bias) part.

This is especially important for a small number of nodes and for classification tasks (think of perceptrons, etc.).

",36518,,,,,6/4/2020 17:26,,,,2,,,,CC BY-SA 4.0 21641,1,21650,,6/4/2020 17:52,,4,343,"

My questions concern a particular formulation of the Bayes error rate from Wikipedia, summarized below.

For a multiclass classifier, the Bayes error rate may be calculated as follows: $$p = 1 - \sum_{C_i \ne C_{\max,x}} \int_{x \in H_i} P(C_i|x)p(x)\,dx$$ where $x$ is an instance, $C_i$ is a class into which an instance is classified, $H_i$ is the area/region that a classifier function $h$ classifies as $C_i$.

We are interested in the probability of misclassifying an instance, so we wish to sum up the probability of each unlikely class label (hence we want to look at $C_i \ne C_{\max, x}$).

However, the integral is confusing me. We want to integrate an area corresponding to the probability that we choose label $C_i$ given $x$. But we drew $x$ from $H_i$, the region covered/classified by $C_i$, so wouldn't $P(C_i|x) = 1$?

I think most of my confusion will be resolved if someone can help clarify the intention of the integral.

Is it to draw random samples from the total space of $h$ (the classifier function), and then sum the probabilities from each classified $C_i \ne C_{\max, x}$? How does $x$ exist in the outer summation before it has been sampled from $H_i$ in the integral?

",37604,,2444,,12/13/2021 9:17,12/13/2021 9:17,How is the formula for the Bayes error rate with an integral derived?,,1,0,,,,CC BY-SA 4.0 21643,1,21644,,6/4/2020 19:27,,4,131,"

In equation 3.17 of Sutton and Barto's book:

$$q_*(s, a)=\mathbb{E}[R_{t+1} + \gamma v_*(S_{t+1}) \mid S_t = s, A_t = a]$$

$G_{t+1}$ here has been replaced with $v_*(S_{t+1})$, but no reason has been provided for why this step has been taken.

Can someone provide the reasoning behind why $G_{t+1}$ is equal to $v_*(S_{t+1})$?

",37611,,2444,,6/4/2020 19:45,6/5/2020 17:08,Why is $G_{t+1}$ is replaced with $v_*(S_{t+1})$ in the Bellman optimality equation?,,2,0,,,,CC BY-SA 4.0 21644,2,,21643,6/4/2020 20:45,,3,,"

Can someone provide the reasoning behind why $G_{t+1}$ is equal to $v_*(S_{t+1})$?

The two things are not usually exactly equal, because $G_{t+1}$ is a probability distribution over all possible future returns whilst $v_*(S_{t+1})$ is a probability distribution derived over all possible values of $S_{t+1}$. These will be different distributions much of the time, but their expectations are equal, provided the conditions of the expectation match.

In other words,

$$G_{t+1} \neq v_*(S_{t+1})$$

But

$$\mathbb{E}[G_{t+1}] = \mathbb{E}[v_*(S_{t+1})]$$

. . . when the conditions that apply to the expectations on each side are compatible. The relevant conditions are

  • Same initial state or state/action at given timestep $t$ (or you could pick any earlier timestep)

  • Same state progression rules and reward structure (i.e. same MDP)

  • Same policy

More details

The definition of $v(s)$ can be given as

$$v(s) = \mathbb{E}_\pi[G_t \mid S_t = s]$$

If you substitute the state $s'$ and the index $t+1$, you get

$$v(s') = \mathbb{E}_\pi[G_{t+1} \mid S_{t+1} = s']$$

(This is the same equation, true by definition, the substitution just shows you how it fits).

In order to put this into equation 3.17, you need to note that:

  • It is OK to substitute terms inside an expectation if they are equal in separate expectations, and the conditions $c$ and $Y$ apply to both (or are irrelevant to either one or both). So if, for example, $\mathbb{E}_c[Z] = \mathbb{E}_c[X \mid Y]$ where $X$ and $Z$ are random variables, and you know $Z$ is independent of $Y$, then you can say $\mathbb{E}_c[W + 2X \mid Y] = \mathbb{E}_c[W + 2Z \mid Y]$ even if $X$ and $Z$ are different distributions.

  • $A_{t+1} = a'$ does not need to be specified because it is decided by the same $\pi$ in both $q(s,a)$ and $v(s')$, making the conditions on the expectation compatible already. So the condition of following $\pi$ is compatible with $\mathbb{E}_\pi[G_{t+1} \mid S_{t} = s, A_{t}=a] = \mathbb{E}_\pi[v_*(S_{t+1}) \mid S_{t} = s, A_{t}=a]$

  • The expectation over possible $s'$ in $\mathbb{E}_\pi[v_*(S_{t+1})|S_t=s, A_t=a] = \sum p(s'|s,a)v_*(s')$ is already implied by conditions on the original expectation that the functions are evaluating the same environment - something that is not usually shown in the notation.

Also worth noting, in 3.17 $\pi$ is the optimal policy $\pi^*$, but actually the equation holds for any fixed policy.

",1847,,1847,,6/5/2020 17:08,6/5/2020 17:08,,,,0,,,,CC BY-SA 4.0 21645,1,21652,,6/4/2020 20:57,,2,90,"

While I was studying the equations for the computation inside GRU and LSTM units, I realized that although the different gates have different Weight matrices, their overall structure is the same. They are all dot products of a weight matrix and their inputs, plus bias, followed by a learned gating activation. Now, the difference between computation depends on the weight matrices being different from each other, that is, those weight matrices are specifically for specializing in the particular tasks like forgetting/keeping etc.

But these matrices are all initialized randomly, and it seems that there are no special tricks in the training scheme to make sure these weight matrices are learned in a manner such that the associated gates specialize in their desired tasks. They are all random matrices that keep getting updated with gradient descent.

So how does, for example, a forget gate learn to function as a forgetting unit? The same question applies to the other gates as well. Am I missing a part of the training for these networks? Can we ever say that these units learn functions that are truly disentangled from each other?

",37614,,,,,6/5/2020 4:21,How do LSTM or GRU gates learn to specialize in their desired tasks?,,1,0,,,,CC BY-SA 4.0 21646,1,,,6/4/2020 21:16,,1,138,"

Many multi-armed bandit (MAB) algorithms are used when the total reward is the (undiscounted) sum of all rewards. However, in RL, the discounted reward is mainly used. Why is the discounted reward not prevalent in the MAB problem, and in what cases is this type of modeling valid or even better?

",10191,,40671,,11/2/2020 9:50,3/5/2021 10:23,When discounted MAB is useful?,,1,0,,,,CC BY-SA 4.0 21647,2,,21312,6/4/2020 21:51,,1,,"

I've actually implemented this game before using deep reinforcement learning. You are dealing with a dynamic action space here, where the action space may change at each time step of the game (or more generally the MDP). First, let's discuss the actual action spaces in each one of the two phases of Crib (or Cribbage) and formalize the question.

Phase 1: The Discard: In this phase, you are concurrently discarding 2 cards without respect to order. Therefore, you have a fixed discrete action space of size ${{6}\choose{2}} = 15$.

Phase 2: The Play: In this phase, you and your opponent are sequentially playing one of each of your remaining 4 cards (the original 6 minus the 2 discarded from phase 1). Therefore, you have a discrete action space of size $4! = 24$. Here's the catch - not every one of these actions is legal. The current state of the game restricts which cards you are allowed to play (the sum of all cards currently played must not be greater than 31). Since you do not know your opponent's cards and/or policy, you do not know which of these 24 actions are valid. To remedy this, the action space should dictate which of your remaining cards may be played at the current time step. Thus, you have a dynamic discrete action space of size 1, 2, 3, or 4 at each time step.

How can I make an action space for these actions?

Since you didn't specify any implementation standard (e.g. OpenAI Gym), there are multiple paths to take, and they usually depend on your implementation of the state feature vector. In my own implementation, I tinkered with two possible state representations, which are fairly simple to describe.

Possibility 1: Separate State Representations for each Phase: In phase 1, you need to know the cards in your hand and the scores; i.e., the state feature vector could be encoded as a list of [card0, card1, card2, card3, card4, card5, your score, opponent score]. This state represents all information known about the game during phase 1 from a single player's viewpoint; after each time step, the current player may change, and the state must be updated according to the current player's viewpoint. Each card can be encoded as an integer from 1 to 52 (not starting from 0, as we will see in the next paragraph), and the score is an integer from 0 to 120 (tip: sort your cards for a reduced state space and faster convergence). Your action space can be the set of integers from 0 to 14 that maps to a 2-card combination in your hand. Alternatively, you could have a dynamic action space that sequentially asks for 1 of your 6 cards to discard and then for 1 of your remaining 5 cards to discard. The action space can be a subset of integers from 0 to 5 that maps to a single card in your hand. Be careful here - when choosing the second card to discard, your algorithm must know which card was discarded first. You can solve this by adding another component to your state vector that represents the first card that was discarded (set to 0 at the beginning of the phase), and therefore, make sure to update the state after the first discard.

In phase 2, you need to know the 4 cards in your hand, the cut card, the scores, and the cards currently played. Another helpful but unnecessary feature for learning is the current sum of played cards. A possible representation is a list of [card0, card1, card2, card3, card played0, card played 1, card played 2, …, card played 7, your score, opponent score, cut card, current sum of played cards]. The values of played cards should be initialized to 0 at the beginning of the phase. The state can be updated so that any card in your hand that you have played is set to 0, and any card played by your opponent can be set to its negative value. This will correctly encode which cards have been played from your hand, which cards played are your from your opponent, and all other available information. Consequently, the action space is dynamic and is a subset of integers from 0 to 3 that maps to a single card in your hand.

Possibility 2: Identical State Representations for each Phase: Another possibility is to have all of the above information for each phase encoded into a single state representation along with the phase number. State features that are unique to phase 2 that aren't relevant for phase 1 can be set to 0 during phase 1 and vice versa. With this representation, the state feature vector is of the same length at all time steps of the game, and the action space will change as described above. The important ideas for the encoding are exactly the same as above and will change based on your particular implementation, so I won't include the details here.

How do I model this?

If you are going to implement something similar to Possibility 1, then you may need two agents that each learn a policy for a separate phase. For starters, you could use Q-learning or DQN and take the action with greatest q-value at each timestep, making sure that the chosen action is always a member of the current action space.

If you are going to implement something similar to Possibility 2, then you may only need a single agent that learns a policy for each phase, simply because the phase is a feature of the state. Essentially you are trading off a more complicated state representation for a simpler learning algorithm.

I've read this post on using MultiDiscrete spaces, but I'm not sure how to define this space based on the previous chosen action. Is this even the right approach to be taking?

After reading the OpenAI Gym documentation, it looks like the MultiDiscrete space is a product of Discrete spaces. Therefore, it is a fixed action space and inherently not what you want here (a dynamic action space). I don't believe that OpenAI Gym standards will support dynamic action spaces natively. You would need to do some extra work such as providing a method that returns the current action space of the environment. Alternatively, if you want to follow the (state, reward, done, info) signal paradigm from OpenAI Gym, you may provide the current action space in the info dictionary. Finally, another idea is to allow the agent to always choose an action from a larger fixed action space (e.g. the set of integers from 0 to 3 for phase 2) and then penalize the agent through the reward signal whenever it chooses an action that is not a member of the current action space (e.g. if the chosen card was already played in phase 2). Afterward, you would return the current game state as the next state and make the agent try again.
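For instance, a minimal sketch of such a method for the play phase (the encoding of the hand is just one possible, illustrative choice):

def legal_actions_play_phase(hand_values, current_sum):
    # hand_values: pegging value of each of your 4 cards (face cards = 10), 0 if already played
    # returns the indices of the cards that may legally be played right now (running sum <= 31)
    return [i for i, v in enumerate(hand_values) if v > 0 and current_sum + v <= 31]

# e.g. legal_actions_play_phase([10, 0, 5, 7], current_sum=25) returns [2]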

My advice is to first determine the state representation, and the rest of your implementation should follow, using the ideas above.

",37607,,,,,6/4/2020 21:51,,,,1,,,,CC BY-SA 4.0 21648,2,,8554,6/4/2020 22:00,,1,,"

This means that the reward set is actually R={0,1,−3} (we assume that in each timestep, the robot can only collect one can).

@riceissa While I agree with the rest of your demonstration, I wouldn't assume that the robot can only collect 0 or 1 can. As Neil Slater suggests, I think the robot could pick up any number of cans between 0 and $N$.

Below is how I solve the problem for the more general case, assuming specific values for $s', s, a$. This generalization encompasses @riceissa's answer.


Let :

  • $S_t=s'=\texttt{high}$
  • $S_{t-1}=s=\texttt{high}$
  • $A_{t-1}=a=\texttt{search}$

We have the following equality : $$r_{search}=\sum_{r \in R}r\cdot\frac{p(s', r|s, a)}{p(s'|s, a)}$$ For these values of $s', s, a$ we also have:

  • $p(s'|s,a)=\alpha$.
  • $p(s', -3|s,a)=0$

Writing $\eta_r:=p(s',r|s,a)$ and taking $R=\{0, 1, 2, \dots,N\}$ (we omit $r=-3$, since the probability is 0 in this case), we then have:

\begin{align*} r_{\texttt{search}}&=\sum_{r=0}^N r\cdot\frac{\eta_r}{\alpha}\\ r_{\texttt{search}}\cdot\alpha&=\sum_{r=0}^N r\cdot\eta_r\\ r_{\texttt{search}}\cdot\alpha&=i\cdot\eta_i + \sum_{\substack{r=0 \\ r\neq i}}^N r\cdot\eta_r\\ i\cdot\eta_i&= r_{\texttt{search}}\cdot\alpha- \sum_{\substack{r=0 \\ r\neq i}}^N r\cdot\eta_r \end{align*}

For $i\neq 0$ :

$$\eta_i= \frac{1}{i}\cdot\left(r_{\texttt{search}}\cdot\alpha- \sum_{\substack{r=0 \\ r\neq i}}^N r\cdot\eta_r\right)$$

For $i=0$, we use the fact that \begin{align*} \alpha&=p(s'|s,a)=\sum_{r=0}^N p(s',r|s,a)=\sum_{r=0}^N\eta_r\\ &\Rightarrow\eta_0=\alpha - \sum_{r>0} \eta_r \end{align*}

So substituting $\eta_r$ by its probability formula we end up having, for these specific values of $s', s, a$ :

\begin{equation} \begin{cases} p(s',i|s,a)= \frac{1}{i}\cdot\left(r_{\texttt{search}}\cdot\alpha- \sum_{\substack{r=0 \\ r\neq i}}^N r\cdot p(s',r|s,a)\right) & \texttt{$\forall i \in [1,N]$}\\ p(s',0|s,a)=\alpha -\sum_{r>0} p(s',r|s,a) & \texttt{if $i=0$} \end{cases} \end{equation}

",37613,,37613,,6/5/2020 15:30,6/5/2020 15:30,,,,1,,,,CC BY-SA 4.0 21649,1,,,6/4/2020 22:14,,2,62,"

I've written a Double DQN-based stock trading bot using mainly time series stock data.

I've recently upgraded my Experience Replay (ER) code with a version of Prioritized Experience Replay (PER) similar to the one written by OpenAI. My DQN's reward function is the stock return over 30 days (the length of my test window).

The strange thing is that, once the bot has been trained using the same set of time series data and is let free to trade on unseen stock data, the version that uses PER actually produces worse stock returns than the version using a regular ER.

This is not quite what I'd expected but it's very hard to debug and see what might have gone wrong.

So my question is, will PER always perform better than a regular ER? If not, when/why not?

",37615,,,,,6/4/2020 22:14,"In a DQN, can Prioritized Experience Replay actually perform worse than a regular Experience Replay?",,0,0,,,,CC BY-SA 4.0 21650,2,,21641,6/4/2020 23:49,,3,,"

Bayes Error Rate

For the general case of $K$ different classes, the probability of classifying an instance $x$ correctly is:

\begin{equation} \label{eq1} \begin{split} P(correct) & = \sum_{i=1}^{K} p(x \in H_i, C_i) \\ & = \sum_{i=1}^{K} \int_{x \in H_i} p(x,C_i) \, dx\\ & = \sum_{i=1}^{K} \int_{x \in H_i} P(C_i|x)p(x)\,dx \\ \end{split} \end{equation}

where $H_i$ is the region where class $i$ has the highest posterior. So the Bayes Error Rate is:

\begin{equation} \label{eq2} \begin{split} P(error) & = 1 - p(correct) \\ & = 1 - \sum_{i=1}^{K} \int_{x \in H_i} P(C_i|x)p(x)\,dx \end{split} \end{equation}

Be careful: even if we drew $x$ from $H_i$, the region covered/classified as $C_i$, this does not mean that $P(C_i|x) = 1$, because that would mean that we surely predict correctly all the time. If $x$ belongs to a decision region, this does not imply that it belongs to the corresponding class.

",36055,,1641,,7/3/2020 18:07,7/3/2020 18:07,,,,1,,,,CC BY-SA 4.0 21651,2,,21646,6/5/2020 3:57,,1,,"

One of the reasons a discount factor is used is to make sure that the reward maximization is a well-defined problem and that the sum of all rewards is convergent.

In the MAB problem, the number of trials is typically finite, owing to some sort of budget on the number of trials. Hence, this is less of a problem. However, discounts are still valid and helpful in cases where the analysis is asymptotic in the number of trials.

",37618,,,,,6/5/2020 3:57,,,,1,,,,CC BY-SA 4.0 21652,2,,21645,6/5/2020 4:21,,2,,"

It comes down to the order they're computed in, and what they're used in. I will be referring to the LSTM in this answer.

Looking at the forget gate, you can see that it has the ability to manipulate the cell state. This gives it the ability to force a forget. Say (after training) it sees a super important input that means some previous data is irrelevant (say, like a full stop). This forget gate, while it might not force a forget, has the ability to force one, and will likely learn to.

The input gate ultimately adds to the cell state. This gate doesn't have direct influence over the cell state (it can't make it 0, like the forget gate can), but it can add to it and influence it that way. So it is an input gate.

The output gate is used to interpret the hidden state, and get it ready to be combined with the cell state for a final output at that time step.

While these gates all use sigmoid functions, are all initialised randomly and have the same dimensionality, what their output is used in, and the order they're computed in gives them a certain role to play. Initially, they won't conform to this role, but logically as they learn, they likely will.
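To make the role of the gate placement explicit, here is a minimal NumPy sketch of a single LSTM step (the weight and bias names are illustrative; real implementations usually fuse the four matrices into one):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W_f, W_i, W_o, W_c, b_f, b_i, b_o, b_c):
    z = np.concatenate([h_prev, x])
    f = sigmoid(W_f @ z + b_f)        # forget gate: multiplies (and can zero out) the old cell state
    i = sigmoid(W_i @ z + b_i)        # input gate: scales the candidate added to the cell state
    o = sigmoid(W_o @ z + b_o)        # output gate: decides what part of the cell state is exposed
    c_tilde = np.tanh(W_c @ z + b_c)  # candidate cell content
    c = f * c_prev + i * c_tilde      # the gates have identical forms but are used in different places
    h = o * np.tanh(c)
    return h, c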

",26726,,,,,6/5/2020 4:21,,,,2,,,,CC BY-SA 4.0 21655,1,,,6/5/2020 9:11,,2,65,"

I decided to train a deep Q-learning agent based on raw pixels from the environment. I have one particular problem: when I input a stack of frames, suppose 4 consecutive frames, and the action space is 6, then the output is a 4-by-6 matrix. So which one is the real Q-value? I mean, I input a batch of frames and it outputs a batch of values, and the question is which is the real Q-value out of those batch values?

",36107,,,,,7/5/2020 10:03,How to predict Q-values based on stack of frames,,1,0,,,,CC BY-SA 4.0 21656,2,,21655,6/5/2020 9:30,,2,,"

You do not output the batch of Q-values. Input frame stacking is needed to gain full observability of the environment. In your case the output would be 6 elements for your current frame. If $F$ is a frame then you would stack 4 frames $[F_{k-3}, F_{k-2}, F_{k-1}, F_k]$ and the output would be 6 Q-values for frame $F_k$.
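As a shape sketch (the network below is just a dummy stand-in for the real Q-network):

import numpy as np

def q_network(batch):                       # batch shape: (N, 4, 84, 84)
    return np.zeros((batch.shape[0], 6))    # one row of 6 Q-values per stacked observation

frames = [np.zeros((84, 84)) for _ in range(4)]   # F_{k-3}, F_{k-2}, F_{k-1}, F_k
obs = np.stack(frames)[None, ...]                 # shape (1, 4, 84, 84): ONE observation, not 4
q_values = q_network(obs)                         # shape (1, 6): Q-values for frame F_k
action = int(np.argmax(q_values[0]))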

",20339,,,,,6/5/2020 9:30,,,,2,,,,CC BY-SA 4.0 21659,2,,21643,6/5/2020 11:34,,2,,"

Note that for a general policy $\pi$ we have that $q_{\pi}(s,a) = \mathbb{E}_{\pi}[G_t | S_t = s, A_t = a]$, where in state $S_t$ we take action $a$ and thereafter follow policy $\pi$. Note that the expectation is taken with respect to the reward transition distribution $\mathbb{P}(R_{t+1} = r, S_{t+1} = s' | A_t = a, S_t = s)$, which I will denote as $p(s',r|s,a)$.

We can then rewrite the expectation as follows

\begin{align} q_{\pi}(s,a) &= \mathbb{E}_{\pi}[G_t | S_t = s, A_t = a] \\ & = \mathbb{E}_{\pi}[R_{t+1} + \gamma G_{t+1} | S_t = s, A_t = a] \\ & = \sum_{r,s'}p(s',r|s,a)(r + \gamma \mathbb{E}_\pi[G_{t+1} | S_{t+1} = s']) \\ & = \sum_{r,s'}p(s',r|s,a)(r + \gamma v_{\pi}(s')) \; . \end{align}

The key thing to note is that these two terms, $G_{t+1}$ and $v_{\pi}(s')$, are only equal in expectation, which is why in the equation you can exchange the terms because we are taking the expectation.

Note that I have shown this for a general policy $\pi$ not just the optimal policy.

",36821,,,,,6/5/2020 11:34,,,,0,,,,CC BY-SA 4.0 21661,1,,,6/5/2020 12:31,,0,3179,"

We hear this many times for different problems:

Train a model to solve this problem!

What do we really mean by training a model?

",26489,,26489,,6/5/2020 15:22,6/6/2020 5:39,What does it mean to train a model?,,2,0,,,,CC BY-SA 4.0 21662,1,,,6/5/2020 12:43,,0,61,"

I want to obtain SHAP values with Kernel SHAP without using Python, but I don't really understand the algorithm. If I have a kNN classifier, do I have to run the classifier for all the possible coalitions? For $n$ variables, $2^n$ predictions?

Also, how do I obtain the SHAP values after that? Linear regression?

",37628,,2444,,6/5/2020 14:58,6/5/2020 14:58,How to obtain SHAP values,,0,2,,,,CC BY-SA 4.0 21663,1,21664,,6/5/2020 12:58,,5,699,"

I'm trying to solve exercise 3.11 from the book Sutton and Barto's book (2nd edition)

Exercise 3.11 If the current state is $S_t$ , and actions are selected according to a stochastic policy $\pi$, then what is the expectation of $R_{t+1}$ in terms of $\pi$ and the four-argument function $p$ (3.2)?

Here's my attempt.

For each state $s$, the expected immediate reward when taking action $a$ is given in terms of $p$ by eq 3.5 in the book:

$$r(s,a) = \sum_{r \in R} r \, \sum_{s'\in S} p(s',r \mid s,a) = E[R_t \mid S_{t-1} = s, A_{t-1} = a] \tag{1}\label{1}$$

The policy $\pi(a \mid s)$, on the other hand, gives the probability of taking action $a$ given the state $s$.

Is it possible to express the expectation of the immediate reward over all actions $A$ from the state $s$ using (1) as

$$E[R_t \mid S_{t-1} = s, A] = \sum_{a \in A} \pi(a \mid s) r(a,s) \tag{2}\label{2}$$

?

If this is valid, is this also valid in the next time step

$$E[R_{t+1} \mid S_{t} = s, A] = \sum_{a \in A} \pi(a \mid s) r(a, s) \tag{3}\label{3}$$

?

If (2) and (3) are OK, then

$$E[R_{t+1} \mid S_{t} = s, A] = \sum_{a \in A} \pi(a \mid s) \sum_{r \in R} r \, \sum_{s'\in S} p(s',r \mid s,a)$$

",37627,,2444,,11/20/2021 12:33,11/20/2021 12:33,"If the current state is $S_t$ and the actions are chosen according to $\pi$, what is the expectation of $R_{t+1}$ in terms of $\pi$ and $p$?",,1,0,,,,CC BY-SA 4.0 21664,2,,21663,6/5/2020 14:17,,3,,"

First note that $\mathbb{E}[R_{t+1} |S_t=s] = \sum_{s',r}r\,m(s',r|s)$, where $m(\cdot)$ is the mass function for the joint distribution of $S_{t+1},R_{t+1}$.

If you are currently in state $S_t$ and we condition on taking action $a$ then the expected reward at time $t+1$ is given as follows:

\begin{align} \mathbb{E}[R_{t+1} | S_t = s, A_t=a] & = \sum_{s',r}r\,p(s',r|s,a)\;. \end{align}

However, action $A_t$ is taken according to some stochastic policy $\pi$ so we need to marginalise this out of our expectation by using the tower law - i.e. we take

$$\mathbb{E}_{A_t \sim \pi}[\mathbb{E}[R_{t+1} | S_t = s, A_t=a]|S_t = s] = \sum_a \pi(a|s)\sum_{s',r}r\,p(s',r|s,a) = \mathbb{E}[R_{t+1} | S_t = s]\;.$$

To see why this holds we can re-write using some arbitrary mass functions $f(\cdot),h(\cdot),g(\cdot),m(\cdot)$ as

$$\pi(a|s)p(s',r|s,a) = \frac{f(a,s)}{g(s)} \times \frac{h(s',r,a,s)}{f(a,s)} = m(s',r,a|s)\;,$$ so we end up with (after re-arranging the summations)

$$\sum_{s',r}r \sum_{a}m(s',r,a|s) = \sum_{s',r}r m(s',r|s) = \mathbb{E}[R_{t+1}|S_t = s]\;;$$ as required.

NB: what you have done is mostly correct except be careful when going from (2) to (3). They are exactly the same equation except for the time stamps, which means you would have to change the time stamps in your $r(s,a)$. Note that when you are at time step $t$ you take your action $A_t$ from your current $S_t$ to transition into state $S_{t+1}$ and then receive reward $R_{t+1}$ (and the next state).

",36821,,36821,,6/7/2020 10:29,6/7/2020 10:29,,,,2,,,,CC BY-SA 4.0 21665,2,,21661,6/5/2020 14:37,,3,,"

In machine learning, when you train a model, you adjust (or change) the parameters (or weights) of the model so that its performance in solving a certain task increases.

There's little difference between the idea of training a model and the idea of training an animal. In fact, here's the dictionary definition of the verb to train

teach (a person or animal) a particular skill or type of behaviour through practice and instruction over a period of time

If you train a model, you also teach a skill or type of behavior through practice and instruction. For example, if you train a model to solve an object classification problem, then you teach the model to classify certain objects according to their properties (which is the skill that the model learns).

There are different ways to train a model, depending on the problem you want to solve, the algorithms that you use to train the model, and the available data.

If you have a labeled dataset, then you train a model with a supervisory signal (the labels), i.e. you explicitly tell the model the output that it is supposed to produce for each input, and, if it does not produce it, then you adjust its parameters so that next time it is more likely to produce the correct output for that input. This is called supervised learning (or training).
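To make this concrete, here is a minimal sketch of supervised training (the data, model and learning rate are made up for the example): the parameters are repeatedly adjusted so that the model's outputs move closer to the labels.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=100)
    y = 3.0 * X + 2.0 + 0.1 * rng.normal(size=100)   # labels: the supervisory signal

    w, b = 0.0, 0.0          # the model's parameters (weights)
    lr = 0.1                 # learning rate
    for _ in range(200):     # training loop
        error = (w * X + b) - y              # how wrong the current predictions are
        w -= lr * (error * X).mean()         # adjust the parameters to reduce the squared loss
        b -= lr * error.mean()
    print(w, b)              # should end up close to 3.0 and 2.0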

In certain cases, you do not have the correct output that the model is supposed to produce for each input, but you only have a reward (or reinforcement) signal. So, your training (or learning) algorithm needs to adjust the parameters of the model only based on the reward signal. This is called reinforcement learning (or training).

Finally, there's also unsupervised learning (or training), where you are given a dataset without labels or rewards, but you want to learn e.g. a probability distribution that this data was likely sampled from or separate this data into groups. For example, in k-means (a clustering algorithm), you want to split the data into groups so that similar objects belong to the same group and dissimilar objects belong to different groups. Note that k-means is a learning algorithm, so it's not a model, but you could consider the centroids of the clusters the parameters of the model (a clustering model).

There are variations or subcategories of these learning paradigms and you can also combine them, so sometimes the difference between them is not so clear.

There are also different types of models. There are parametric (e.g. a linear regression model) and non-parametric (e.g. neural networks) models.

",2444,,2444,,6/5/2020 15:35,6/5/2020 15:35,,,,0,,,,CC BY-SA 4.0 21666,1,,,6/5/2020 15:21,,2,438,"

Consider the problems that can be solved algorithmically.

We have a good formal literature on which problems can be solved in polynomial or exponential time and which cannot (P, NP, NP-hard, and so on).

But do we know of problems in the machine learning paradigm for which no model can be trained (with or without infinite computation capacity)?

",26489,,2444,,6/5/2020 15:41,6/6/2020 16:40,What kind of problems cannot be solved using machine learning techniques?,,2,0,,,,CC BY-SA 4.0 21667,2,,21666,6/5/2020 16:33,,2,,"

You should be aware of at least two points:

  • P/NP/NP-hard (and all other complexity classes) are fully relevant to the machine learning area as well, because these concepts belong to the foundations of computation (the theory of computation), and machine learning is not an exception here.
  • Useful concepts for the complexity of the learning problem itself are the VC dimension, PAC learnability, and their related notions (such as sample complexity). Although these concepts are not enough to measure time complexity, they are useful for characterising the capacity of the learner model.
",4446,,4446,,6/5/2020 16:39,6/5/2020 16:39,,,,0,,,,CC BY-SA 4.0 21668,2,,21477,6/5/2020 16:53,,2,,"

Notably, these two tips/tricks are useful because we are assuming the context of deep reinforcement learning here, as you pointed out. In DRL, the RL algorithm is guided in some fashion by a deep neural network, and the reasons for normalizing stem from the gradient descent algorithm and the architecture of the network.

How does this affect training?

An observation from the observation space is often used as an input to a neural network in DRL algorithms, and normalizing the input to neural networks is beneficial for many reasons (e.g. increases convergence speed, aids computer precision, prevents divergence of parameters, allows for easier hyperparameter tuning, etc.). These are standard results in DL theory and practice, so I won't provide details here.

And more specifically, why on continuous action spaces we need to normalize also the action's values?

Most popular discrete action space DRL algorithms (e.g. DQN) have one output node for each possible action in the neural net. The value of the output node may be a q-value (value-based algorithm) or a probability of taking that action (policy-based algorithm).

In contrast, a continuous action space DRL algorithm simply cannot have an output node for each possible action, as the action space is continuous. The output is usually the actual action to be taken by the agent or some parameters that could be used to construct the action (e.g. PPO outputs a mean and standard deviation and then an action is sampled from the corresponding Gaussian distribution - this phenomenon is mentioned in your linked reference). Therefore, normalizing the action space of a DRL algorithm is analogous to normalizing the outputs of the corresponding neural network, which is known to increase training speed and prevent divergence. Again, a quick search will yield some good resources if you are interested in these results.
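As a concrete illustration of the last point (one common pattern, with made-up bounds): the policy network outputs actions in a normalised range such as $[-1, 1]$, and they are rescaled to the environment's true bounds only when the environment is stepped.

    import numpy as np

    def denormalise_action(a_norm, low, high):
        # map a normalised action in [-1, 1] back to the environment bounds [low, high]
        return low + 0.5 * (a_norm + 1.0) * (high - low)

    # e.g. a network output of 0.2 mapped to hypothetical torque bounds [-2, 2] gives 0.4
    print(denormalise_action(np.array([0.2]), np.array([-2.0]), np.array([2.0])))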

",37607,,,,,6/5/2020 16:53,,,,0,,,,CC BY-SA 4.0 21669,1,21671,,6/5/2020 17:30,,3,136,"

I'm new to RL and to deep Q-learning and I have a simple question about the architecture of the neural network to use in an environment with a continuous state space and a discrete action space.

I thought that the action $a_t$ should have been included as an input of the neural network, together with the state. It also made sense to me because, when you have to compute the argmax or the max w.r.t. $a_t$, it is like a ""standard"" function. Then I've seen some examples of networks that take only $s_t$ as input and that have as many outputs as the number of possible actions. I roughly understand the logic behind this (replicate the q-values of the state-action pairs), but is it really the correct way? If so, how do you compute the $argmax$ or the $max$? Do I have to associate to each output an action?

",37640,,,,,6/5/2020 17:54,What's the right way of building a deep Q-network?,,1,0,,,,CC BY-SA 4.0 21670,2,,21628,6/5/2020 17:34,,3,,"

As the accepted answer states, the return at the current timestep is equal to the sum of discounted rewards from all future timesteps until the end of the episode. In Chapter 5 of Sutton and Barto, returns must be used to estimate the state-value and action-value functions because episode lengths are unrestricted and may be greater than one. In contrast, Chapter 2 deals with the very special case of multi-armed bandits in which episode lengths are always equal to one: The agent begins each episode in a fixed start state, takes an action, receives a reward, and then the episode terminates and the agent begins the next episode at the same start state. Therefore, a return is equivalent to a reward in Chapter 2 because all episodes have length one.

",37607,,,,,6/5/2020 17:34,,,,0,,,,CC BY-SA 4.0 21671,2,,21669,6/5/2020 17:54,,1,,"

Do I have to associate to each output an action?

You are absolutely correct! In DQN, each output node of the neural network will be associated with one of your possible actions (assuming a finite discrete action space). After passing an input through the network, the value of each output node is the estimated q-value of the corresponding action. One benefit of this architecture is that you only need to pass the input through the neural network once to compute the q-value of each action, which is constant in the number of actions. If you were to include an action as an input to the neural network along with an observation, then you would need to pass an input for each action, which scales linearly in the number of actions. This is mentioned in paragraph 2 of Section 4.1 in the original DQN paper (https://arxiv.org/abs/1312.5602)

Is it really the correct way? If so, how do you compute the argmax or the max?

It's one possible way that is used in many popular algorithms such as DQN. To find the argmax, you simply take the action corresponding to the output node with highest q-value after passing an input through the network.
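To make the argmax/max computation concrete, here is a minimal PyTorch-style sketch (the layer sizes and dimensions are arbitrary assumptions):

    import torch
    import torch.nn as nn

    obs_dim, n_actions = 4, 3    # assumed sizes
    q_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))

    state = torch.randn(1, obs_dim)                  # a single observation
    q_values = q_net(state)                          # shape (1, n_actions): one q-value per action
    greedy_action = q_values.argmax(dim=1).item()    # the argmax over actions
    max_q = q_values.max(dim=1).values               # the max q-value, e.g. for a TD target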

",37607,,,,,6/5/2020 17:54,,,,0,,,,CC BY-SA 4.0 21672,1,21673,,6/5/2020 19:10,,2,988,"

How would you train a reinforcement learning agent from raw pixels?

For example, if you have 3 stacked images to sense motion, then how would you pass them to neural networks to output Q-learning values?

If you pass that batch output, it would be a batch of values, so from here it is impossible to deduce which ones are the true Q-values for that state.

Currently, I am watching a YouTuber (Machine Learning with Phil), and he did it very differently. At the 13th minute, he defined a network that outputs a batch of values rather than Q-values for 6 states. In short, he outputs a matrix rather than a vector.

",36107,,2444,,6/5/2020 23:06,6/5/2020 23:06,How to train a reinforcement learning agent from raw pixels?,,1,0,,,,CC BY-SA 4.0 21673,2,,21672,6/5/2020 20:11,,3,,"

How would you train a reinforcement learning agent from raw pixels? For example, if you have 3 stacked images to sense motion, then how would you pass them to neural networks to output Q-learning values?

A Convolutional Neural Network (CNN) structure is a standard neural network architecture when working with 2D pixel input in reinforcement learning, and it is the technique used in the original DQN paper (see paragraphs 1 & 3 of Section 4.1 of https://arxiv.org/abs/1312.5602). CNNs typically take 3-dimensional input, where the first two dimensions are height and width of your images, and the third is rgb color. The technique in the paper was to convert each RGB frame (or image) to greyscale format (so it has only 1 color channel/dimension instead of 3) and instead use the rgb_color dimension as a frames dimension that is indexed by each stacked frame.
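For illustration, a minimal sketch of this preprocessing (the 84x84 size matches the paper, but the greyscale conversion here is just a channel mean and the frames are random placeholders):

    import numpy as np

    def to_grey(frame_rgb):                  # frame_rgb: (H, W, 3) uint8
        return frame_rgb.mean(axis=2).astype(np.uint8)

    # stack the last 4 greyscale frames into one observation of shape (84, 84, 4)
    frames = [to_grey(np.random.randint(0, 256, (84, 84, 3), dtype=np.uint8))
              for _ in range(4)]
    observation = np.stack(frames, axis=-1)
    print(observation.shape)                 # (84, 84, 4)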

Currently, I am watching a YouTuber: Machine Learning with Phil, and he did it very differently. On the 13th minute, he defined a network that outputs a batch of values rather than Q-values for 6 states. In short, he outputs a matrix rather than a vector.

Later in the tutorial series, he most likely will discuss the training of the neural network. During training, you need to find the q-values of a batch of sets of stacked frames. Specifically, each element of the batch is a set of stacked frames. In other words, a set of stacked frames is treated as a single observation, so a batch of sets of stacked frames is a batch of observations.

To find these q-values, you will perform a forward pass of the batch of observations through the neural network. A forward pass of a single observation (set of stacked frames) through the neural network yields a vector of q-values (one for each action). Thus, a forward pass of a batch of observations (batch of stacked frames) will yield a matrix of q-values (one vector of q-values for each observation (or set of stacked frames)). This technique is used because many standard neural network libraries are designed to perform a forward pass on a batch of inputs through the neural network much faster than performing a forward pass on each input separately.

",37607,,37607,,6/5/2020 21:04,6/5/2020 21:04,,,,0,,,,CC BY-SA 4.0 21674,1,,,6/5/2020 20:32,,3,109,"

In deep Q learning, we execute the algorithm for each episode, and for each step within an episode, we take an action and record a reward.

I have a situation where my action is 2-tuple $a=(a_1,a_2)$. Say, in episode $i$, I have to take the first half of an action $a_1$, then for each step of the episode, I have to take the second half of the action $a_2$.

More specifically, say we are in episode $i$ and this episode has $T$ timesteps. First, I have to take $a_1(i)$. (Where $i$ is used to reference episode $i$.) Then, for each $t_i\in\{1,2,\ldots,T\}$, I have to take action $a_2(t_i)$. Once I choose $a_2(t_i)$, I get an observation and a reward for the global action $(a_1(i), a_2(t_i))$.

Is it possible to apply deep Q learning? If so, how? Should I apply the $\epsilon$-greedy twice?

",37642,,37642,,6/5/2020 21:43,6/5/2020 21:43,How to take actions at each episode and within each step of the episode in deep Q learning?,,0,9,,,,CC BY-SA 4.0 21675,1,21701,,6/5/2020 22:28,,3,194,"

From the AlphaGo Zero paper, AlphaGo Zero uses an exponentiated visit count from the tree search.

Why use visit count instead of the mean action value $Q(s, a)$?

",37645,,2444,,6/5/2020 22:57,6/7/2020 4:31,Why does AlphaGo Zero select move based on exponentiated visit count?,,1,0,,,,CC BY-SA 4.0 21676,1,,,6/6/2020 0:30,,1,144,"

Has anyone investigated ways to initialize a network so that everything is considered ""unknown"" at the start?

When you consider the ways humans learn, if something doesn't fit a class well enough, it falls into an ""unknown category"".

I would argue we all ultimately do some type of correlation matching internally with a certain threshold existing for recognition.

Deep networks don't currently have this ability, everything falls into a class. What I'm curious about is how might we force a network to initially classify things as ""unknown"" as the default state.

",32390,,2444,,6/6/2020 0:41,6/6/2020 0:41,"Can we force the initial state of a neural network to produce an ""unknown"" class?",,0,0,,,,CC BY-SA 4.0 21679,2,,21661,6/6/2020 5:39,,0,,"

Training a model simply means learning good values for all the weights and the bias from labeled examples. In supervised learning, a machine learning algorithm builds a model by examining many examples and attempting to find a model that minimizes loss; this process is called empirical risk minimization.

Loss is the penalty for a bad prediction. That is, loss is a number indicating how bad the model's prediction was on a single example. If the model's prediction is perfect, the loss is zero; otherwise, the loss is greater. The goal of training a model is to find a set of weights and biases that have low loss, on average, across all examples. (A typical illustration would show a high-loss model on the left and a low-loss model on the right.)
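For example (a minimal sketch, taking the squared error as one possible loss):

    import numpy as np

    def mse_loss(predictions, labels):
        # mean squared error: zero for perfect predictions, larger for worse ones
        return np.mean((predictions - labels) ** 2)

    print(mse_loss(np.array([2.5, 0.0]), np.array([3.0, -0.5])))   # 0.25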

",32861,,,,,6/6/2020 5:39,,,,0,,,,CC BY-SA 4.0 21680,2,,5814,6/6/2020 5:47,,1,,"

Typically, there are three main steps in object bounding box detection.

First, a model or algorithm is used to generate regions of interest (ROIs) or region proposals. These region proposals are a large set of candidate bounding boxes spanning the full image (that is, an object localization component).

In the second step, visual features (for faces, persons, etc., typically extracted with convolutions) are computed for each of the bounding boxes and evaluated. Based on these visual features, it is determined whether and which objects are present in the region of interest (that is, an object classification component).

In the final post-processing step, overlapping boxes are merged into a single bounding box (that is, non-maximum suppression); a minimal sketch of this step is given below.
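Here is that sketch of non-maximum suppression (my own simplified version; boxes are given as [x1, y1, x2, y2] together with confidence scores):

    import numpy as np

    def iou(a, b):
        # intersection over union of two boxes [x1, y1, x2, y2]
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
        return inter / union

    def nms(boxes, scores, iou_threshold=0.5):
        # keep the highest-scoring box, drop boxes that overlap it too much, repeat
        order = list(np.argsort(scores)[::-1])
        keep = []
        while order:
            best = order.pop(0)
            keep.append(best)
            order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
        return keep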

",32861,,,,,6/6/2020 5:47,,,,0,,,,CC BY-SA 4.0 21681,1,21685,,6/6/2020 7:34,,2,151,"

In equation 4.9 of Sutton and Barto's book on page 79, we have (for the policy iteration algorithm):

$$\pi'(s) = arg \max_{a}\sum_{s',r}p(s',r|s,a)[r+\gamma v_{\pi}(s')]$$

where $\pi$ is the previous policy and $\pi'$ is the new policy. Hence in iterations $k$ it must mean

$$\pi_{k+1}(s) = arg \max_{a}\sum_{s',r}p(s',r|s,a)[r+\gamma v_{\pi_{k}}(s')]$$

But in the example given in the same book on page 77 we have:

Now, for the concerned state marked in red -

  • $v_{\pi_{1}} = -1$ for all four surrounding states
  • $r = -1$ for all four surrounding states
  • $p(s',r|s,a) = 1$ for all four surrounding states
  • $\pi_{2}(s) = arg \max_{a}[1*[-1+1*-1],1*[-1+1*-1],1*[-1+1*-1],1*[-1+1*-1]]$
  • $\pi _{2}(s) = arg \max_{a}(-2,-2,-2,-2)$

Hence this should give us a criss-cross symbol (4 directional arrow) in $\pi_{2}$(s) but here a left arrow symbol is given.

What's wrong with my calculations?

",37611,,2444,,4/18/2022 8:31,4/18/2022 8:31,Why is there an inconsistency between my calculations of Policy Iteration and this Sutton & Barto's diagram?,,1,0,,,,CC BY-SA 4.0 21682,1,,,6/6/2020 7:57,,2,216,"

I have coded the Breakout RAM version, but, unfortunately, its highest reward was 5. I trained it for about 2 hours and it never reached a higher score. The code is huge, so I can't paste it here, but, in short, I used double deep Q-learning and trained it as if it were the CartPole or LunarLander environment. In CartPole, the observation is a vector of 4 components, and in that case my double deep Q-learning agent solved the environment, but in the Breakout-RAM version, whose observation is a vector of 128 elements, it was not even close.

Did I miss something?

",36107,,2444,,6/6/2020 10:28,11/26/2022 8:01,How to implement RAM versions of Atari games,,1,0,,,,CC BY-SA 4.0 21683,1,,,6/6/2020 8:18,,1,164,"

While training a GAN-based model, the discriminator's loss always settles at a nearly constant value of about 0.63, while the generator's loss keeps fluctuating between 0.5 and 1.5. I am not able to tell whether this happens because the generator is successfully fooling the discriminator or because of some instability in training. I have been stuck on this for many days.

",37648,,2444,,6/6/2020 17:19,6/6/2020 17:19,What does it mean when the discriminator's loss gets a constant value while the generator's loss keeps on changing?,,0,0,,,,CC BY-SA 4.0 21684,1,21687,,6/6/2020 8:55,,7,449,"

I often see that the state-action value function is expressed as:

$$q_{\pi}(s,a)=\color{red}{\mathbb{E}_{\pi}}[R_{t+1}+\gamma G_{t+1} | S_t=s, A_t = a] = \color{blue}{\mathbb{E}}[R_{t+1}+\gamma v_{\pi}(s') |S_t = s, A_t =a]$$

Why does expressing the future return at time $t+1$ as the state value function $v_{\pi}$ change the expected value under the policy into a general (unsubscripted) expected value?

",31324,,2444,,1/24/2022 11:38,1/25/2022 13:10,"Why does the state-action value function, defined as an expected value of the reward and state value function, not need to follow a policy?",,2,1,,,,CC BY-SA 4.0 21685,2,,21681,6/6/2020 9:36,,3,,"

Your calculations are correct, but you have misinterpreted the equations and the diagram. The index $k$ in $v_k$ for the diagram refers to the policy evaluation update iteration only, and is not related to the policy update step (which uses the notation $\pi'$ and does not mention $k$).

Policy iteration consists of multiple sweeps through the states to fully evaluate the current policy and estimate its value function (policy evaluation). After that, it updates the policy in a separate policy improvement step. There are two loops - an inner loop indexed by $k$ in the equations and diagram, plus an outer loop which is not given an index notation.

The diagram is not showing incremental $\pi'$ policies from outer loops over policy iteration. Instead it is showing ""Greedy Policy w.r.t. $v_k$"" steps in the inner loop - you can think of that as the policy $\pi'$ that you would get inside the first outer loop, if you terminated the policy evaluation stage after that iteration $k$ of the inner loop.

The diagram only shows behaviour of policy iteration for a single outer loop. It demonstrates at least two interesting things:

  • In the case of this very simple environment, if you ran a single outer loop with long enough policy evaluation stage ($k \ge 3$) you would find the optimal policy.

  • Even before the value function estimate is close to convergence (with a high $k$), the policy that could be derived from the new estimates could be used to improve the policy. That leads to the idea of the value iteration method.

",1847,,1847,,6/6/2020 10:07,6/6/2020 10:07,,,,0,,,,CC BY-SA 4.0 21687,2,,21684,6/6/2020 11:07,,9,,"

Let's first write the state-value function as

$$q_{\pi}(s,a) = \mathbb{E}_{p, \pi}[R_{t+1} + \gamma G_{t+1} | S_t = s, A_t = a]\;,$$

where $R_{t+1}$ is the random variable that represents the reward gained at time $t+1$, i.e. after we have taken action $A_t = a$ in state $S_t = s$, while $G_{t+1}$ is the random variable that represents the return, the sum of future rewards. This allows us to show that the expectation is taken under the conditional joint distribution $p(s', r \mid s, a)$, which is the environment dynamics, and future actions are taken from our policy distribution $\pi$.

As $R_{t+1}$ depends on $S_t = s, A_t = a$ and $p(s', r \mid s, a)$, the only random variable in the expectation that is dependent on our policy $\pi$ is $G_{t+1}$, because this is the sum of future reward signals and so will depend on future state-action values. Thus, we can rewrite again as $$q_{\pi}(s,a) = \mathbb{E}_{p}[R_{t+1} + \gamma \mathbb{E}_{\pi}[ G_{t+1} |S_{t+1} = s'] | S_t = s, A_t = a]\;,$$ where the inner expectation (coupled with the fact its inside an expectation over the state and reward distributions) should look familiar to you as the state value function, i.e. $$\mathbb{E}_{\pi}[ G_{t+1} |S_{t+1} = s'] = v_{\pi}(s')\;.$$ This leads us to get what you have $$q_{\pi}(s,a) = \mathbb{E}_{p}[R_{t+1} + \gamma v_{\pi}(s') | S_t = s, A_t = a]\;,$$ where the only difference is that we have made clear what our expectation is taken with respect to.

The expectation is taken with respect to the conditional joint distribution $p(s', r \mid s, a)$, and we usually include the $\pi$ subscript to denote that they are also taking the expectation with respect to the policy, but here this does not affect the first term as we have conditioned on knowing $A_t = a$ and only applies to the future reward signals.

",36821,,2444,,1/25/2022 13:10,1/25/2022 13:10,,,,0,,,,CC BY-SA 4.0 21688,1,,,6/6/2020 11:47,,1,51,"

I'm trying to optimize a neural network. For that, I'm changing parameters like the batch size, learning rate, weight initialization, etc.

Training a neural network is not a deterministic procedure, so, for each configuration, I train the neural network from scratch and stop when it has fully converged.

After training is complete, I calculate the performance of the neural network on a test dataset. The problem is that I trained the neural network from scratch 2 times with the same parameters, and the difference in performance was almost 5%, which is a BIG DIFFERENCE.

So, what is a reasonable number of training runs to obtain a credible performance estimate for a neural network?

",33545,,2444,,6/6/2020 11:54,6/6/2020 11:54,How many training runs are needed to obtain a credible value for performance?,,0,10,,,,CC BY-SA 4.0 21692,1,21714,,6/6/2020 14:32,,4,367,"

I recently read the DQN paper titled: Playing Atari with Deep Reinforcement Learning. My basic and rough understanding of the paper is as follows:

You have two neural networks; one stays frozen for a duration of time steps and is used in the computation of the loss function with the neural network that is updating. The loss function is used to update the neural network using gradient descent.

Experience replay is used, which basically creates a buffer of experiences. This buffer of experiences is randomly sampled and these random samples are used to update the non-frozen neural network.

My question pertains to the DQN algorithm illustrated in the paper (Algorithm 1), more specifically lines 4 and 9 of this algorithm. My understanding, which is also mentioned early on in the paper, is that the states are actually sequences of the game-play frames. I want to know, since the input is given to a CNN, how we would encode these frames to serve as input to the CNN.

I also want to know: since $s_{1}$ is equal to a set, as can be seen in line 4 of the algorithm, why is $s_{t+1}$ set equal to $s_{t}$, $a_{t}$, $x_{t+1}$?

",30174,,37607,,6/8/2020 14:07,6/8/2020 14:07,How to convert sequences of images into state in DQN?,,2,0,,,,CC BY-SA 4.0 21694,2,,21666,6/6/2020 16:40,,0,,"

Unsupervised disentanglement learning with arbitrary generative models is impossible without inductive biases [1].

In fact, in general, any kind of learning is impossible without inductive biases.

[1]: Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations

",37618,,,,,6/6/2020 16:40,,,,1,,,,CC BY-SA 4.0 21695,1,21697,,6/6/2020 17:48,,3,1011,"

I have a problem which I believe can be described as a contextual bandit.

More specifically, in each round, I observe a context from the environment consisting of five continuous features, and, based on the context, I have to choose one of the ten available actions. The actions do not influence the next context.

Based on the above I have the following questions:

  1. Is this a contextual bandit or an MDP with a discount equal to zero (one step RL)? I have read that, in contextual bandits, we receive a different context for each action and I am a little bit confused.

  2. Can I use the DQN algorithm with TD Target only the observed reward instead of the reward plus the predicted value of the next state?

  3. Can I use a policy gradient algorithm, like REINFORCE or A2C? If yes, should I use a baseline and what this baseline should be?

  4. I have seen in the literature that there are some algorithms for contextual bandits such as LinUCB, LinRel, NeuralBandit, etc. And I am wondering why the DQN, A2C and REINFORCE algorithms, which seem to work well in MDP setting, are not used in contextual bandits, given the fact that this problem can be described as an MDP with a discount equal to zero?

",37663,,2444,,6/6/2020 19:42,6/6/2020 21:11,Can I apply DQN or policy gradient algorithms in the contextual bandit setting?,,1,1,,,,CC BY-SA 4.0 21697,2,,21695,6/6/2020 20:52,,6,,"

MDPs are strict generalisations of contextual bandits, adding time steps and state transitions, plus the concept of return as a measure of agent performance.

Therefore, methods used in RL to solve MDPs will work to solve contextual bandits. You can either treat a contextual bandit as a series of 1-step episodes (with start state chosen randomly), or as a continuous problem with discount factor zero.

Can I use the DQN algorithm with TD Target only the observed reward instead of the reward plus the predicted value of the next state?

Yes. That's identical mathematically to having a discount of zero, or having 1-step episodes.

Can I use a policy gradient algorithm, like REINFORCE or A2C? If yes, should I use a baseline and what this baseline should be?

Yes. Once converted to an MDP, you can use the same baselines in these algorithms as normal (A2C's use of advantage instead of action value is already a baseline). Generally adding a baseline can help reduce variance, so it may still help when applying RL to contextual bandit problems.

I have seen in the literature that there are some algorithms for contextual bandits such as LinUCB, LinRel, NeuralBandit, etc. And I am wondering why the DQN, A2C and REINFORCE algorithms, which seem to work well in MDP setting, are not used in contextual bandits

There are a couple of reasons that contextual bandit problems are not solved using RL techniques more often:

  • The goal in contextual bandits is commonly focused on creating a highly efficient online learner that minimises regret. Regret is the long term difference in total reward between always exploiting the best action choice compared to the exploration necessary to find it. Some RL solvers - e.g. DQN - are poor by this metric.

  • The lack of timesteps and state transitions can be used in algorithm design to be more efficient.

  • Improvements to RL methods designed to help with sparse rewards and the credit assignment problem in MDPs are pointless for contextual bandits and may be wasteful or even counter-productive.

Some RL algorithms do resolve to be nearly identical to their contextual bandit counterparts, and have the same performance characteristics e.g. REINFORCE with baseline for 1-step episodes is essentially the Contextual Gradient Bandit algorithm.

It is also worth noting that many problem domains where contextual bandit algorithms do well - e.g. website recommendations and advertising - have research showing that a more sophisticated MDP model and RL-like approach can do even better. Although that is not quite the same as your question, it typically means extending the model so that timesteps and state transitions are meaningful.

",1847,,1847,,6/6/2020 21:11,6/6/2020 21:11,,,,4,,,,CC BY-SA 4.0 21699,1,21703,,6/6/2020 21:38,,3,1157,"

In policy gradient algorithms the output is a stochastic policy - a probability for each action.

I believe that if I follow the policy (sample an action from the policy) I make use of exploration because each action has a certain probability so I will explore all actions for a given state.

Is it beneficial or is it common to use extra exploration strategies, like UCB, Thompson sampling, etc. with such algorithms?

",37663,,2444,,6/16/2020 16:19,6/16/2020 16:19,Should I use exploration strategy in Policy Gradient algorithms?,,2,0,,,,CC BY-SA 4.0 21700,1,21705,,6/7/2020 2:17,,3,95,"

In Sutton-Barto's book on page 63 (81 of the pdf): $$\mathbb{E}[R_{t+1} + \gamma v_\pi(S_{t+1}) \mid S_t=s,A_t=\pi'(s)] = \mathbb{E}_{\pi'}[R_{t+1} + \gamma v_\pi(S_{t+1}) \mid S_{t} = s]$$

How does $\mathbb{E}$ suddenly change to $\mathbb{E}_{\pi'}$, and why does the $A_t = \pi'(s)$ term disappear?

Also, in general, in the conditional expectation, which distribution do we compute the expectation with respect to? From what I have seen, in $\mathbb{E}[X \mid Y]$, we always calculate the expected value over the distribution of $X$.

",37611,,2444,,6/7/2020 12:55,6/7/2020 13:33,How does $\mathbb{E}$ suddenly change to $\mathbb{E}_{\pi'}$ in this equation?,,1,2,,,,CC BY-SA 4.0 21701,2,,21675,6/7/2020 4:31,,3,,"

The answer is surprisingly hidden in the original AlphaGo paper:

At the end of search AlphaGo selects the action with maximum visit count; this is less sensitive to outliers than maximizing action value.

Unfortunately, there did not appear to be further details in the paper or in the related reference. The root child node (corresponding to an action) with maximum visit count is fittingly known as the robust child, as described here and referenced in an MCTS survey here.

",37607,,,,,6/7/2020 4:31,,,,1,,,,CC BY-SA 4.0 21703,2,,21699,6/7/2020 9:27,,2,,"

I believe that if I follow the policy (sample an action from the policy) I make use of exploration because each action has a certain probability so I will explore all actions for a given state.

Yes, having a stochastic policy function is the main way that a lot of policy gradient methods achieve exploration, including REINFORCE, A2C, A3C.

Is it beneficial or is it common to use an extra exploration strategy like UCB, Thompson sampling, etc. in such algorithms?

It can be, but needs to be done carefully, as the gradient sampling for the policy function is different. Many policy gradient methods are strictly on-policy and will not work if you simply add extra exploration. It is relatively straightforward to adjust the critic part of actor-critic methods by using e.g. Q learning update rules for it. However, the gradient of the policy function is trickier.

There are some policy gradient methods that do work with a separate, tunable, exploration function. Deep Deterministic Policy Gradient (DDPG) may be of interest to you - as per the title, it works with a deterministic policy function, and exploration is achieved by adding a separate noise function on top. The sampling for policy gradient is then corrected for being off-policy.

",1847,,,,,6/7/2020 9:27,,,,2,,,,CC BY-SA 4.0 21704,2,,21692,6/7/2020 10:23,,1,,"

I read the DQN paper titled Playing Atari with Deep Reinforcement Learning again.

In the pre-processing and model architecture section (Section 4.1), I read that each state that is input to the CNN is actually a stack of game frames. So, to my understanding, what has to be done is: at each time step, you stack 4 frames (the current frame and the 3 previous frames), and this stack serves as the input to the CNN. The input dimensions would be side * side * 4 - 4 because the frames are converted to grey-scale and 4 frames are being used.

",30174,,2444,,6/7/2020 13:09,6/7/2020 13:09,,,,1,,,,CC BY-SA 4.0 21705,2,,21700,6/7/2020 11:40,,3,,"

Also, in general, in the conditional expectation, which distribution do we compute the expectation with respect to? From what I have seen, in $\mathbb{E}[X|Y]$, we always calculate the expected value over distribution $X$.

No, for $\mathbb{E}[X|Y]$ we take expectation of $X$ with respect to the conditional distribution $X|Y$, i.e.

$$\mathbb{E}[X|Y] = \int_\mathbb{R} x p(x|y)dx\;;$$

where $p(x|y)$ is the density function of the conditional distribution. If your random variables are discrete then replace the integral with a summation. Also note that $\mathbb{E}[X|Y]$ is still a random variable in $Y$.

How does $\mathbb{E}$ suddenly change to $\mathbb{E}_{\pi '}$ and the $A_t = \pi '(s)$ term disappears?

This is because in this instance $\pi '(s)$ is a deterministic policy, i.e. in state $s$ the policy will take action $b$ with probability 1 and all other actions with probability 0. NB: this is the convention used in Sutton and Barto to denote a deterministic policy.

Without loss of generality, assume that $\pi'(s) = b$. The implication of this is that in the first expectation we have $$\mathbb{E}[R_{t+1} + \gamma v(S_{t+1}) | S_t = s, A_t = \pi'(s) = b] = \sum_{s',r}p(s',r|s,a=b)(r + \gamma v(s'))\;,$$ and in the second expectation we have $$\mathbb{E}_{\pi'}[R_{t+1} + \gamma v(S_{t+1}) | S_t = s] = \sum_a\pi'(a|s)\sum_{s',r}p(s',r|s,a)(r + \gamma v(s'))\;;$$ However, we know that $\pi'(a|s) = 0 \; \forall a \neq b$, so this sum over $a$ would equal 0 for all $a$ except when $a=b$, in which case we know that $\pi'(b|s) = 1$, and so the expectation becomes

$$\mathbb{E}_{\pi'}[R_{t+1} + \gamma v(S_{t+1}) | S_t = s] = \sum_{s',r}p(s',r|s,a=b)(r + \gamma v(s'))\;;$$

and so we have equality of the two expectations.

",36821,,36821,,6/7/2020 13:33,6/7/2020 13:33,,,,1,,,,CC BY-SA 4.0 21706,2,,21493,6/7/2020 14:41,,0,,"

It is not a 'delay' but a 'look ahead' as some people call it. This is how far in the future the model can forecast (or predict).

In reference to the GitHub source, the code uses a look ahead ('delay') of one. The echo state network learns on the 'inputData' using a future version of itself called 'targetData'.

Here is an analogy. It is early on Friday, June 5, and I have the last 51 days worth of Amazon stock prices. I can use the first 50 days of prices, which go up to and include Wednesday, June 3, to train a network to predict Thursday, June 4. If I have a lot of historical data on Amazon, I can train many 50 day windows. Each 50 day window would have as its target a stock price one day in the future.

Performance generally degrades the further into the future you try to forecast. One reason is due to the build up of error over time. This is true in life. We usually know fairly accurately what we will do tomorrow, less so next week, and it would be a wild guess to predict what we will be doing in ten years.

",5763,,5763,,6/7/2020 14:52,6/7/2020 14:52,,,,0,,,,CC BY-SA 4.0 21707,1,21725,,6/7/2020 15:19,,2,91,"

In Batch Normalisation, are the sample mean and standard deviation we normalise by the mean/sd of the original data put into the network, or of the inputs in the layer we are currently BN'ing over?

For instance, suppose I have a mini-batch size of 2 which contains $\textbf{x}_1, \textbf{x}_2$. Suppose now we are at the $k$th layer and the outputs from the previous layer are $\tilde{\textbf{x}}_1,\tilde{\textbf{x}}_2$. When we perform batch norm at this layer would be subtract the sample mean of $\textbf{x}_1, \textbf{x}_2$ or of $\tilde{\textbf{x}}_1,\tilde{\textbf{x}}_2$?

My intuition tells me that it must be the mean,sd of $\tilde{\textbf{x}}_1,\tilde{\textbf{x}}_2$ otherwise I don't think it would be normalised to have 0 mean and sd of 1.

",36821,,36821,,6/7/2020 18:39,6/8/2020 17:26,"In Batch Normalisation, are $\hat{\mu}$, $\hat{\sigma}$ the mean and stdev of the original mini-batch or of the input into the current layer?",,1,0,,,,CC BY-SA 4.0 21708,1,,,6/7/2020 16:24,,2,210,"

I'm training a dueling double DQN agent with prioritized replay buffer and notice that the min Q values are decreasing, while the max Q values are increasing.

Is this a sign that it is diverging?

Or should be just be looking at the mean Q value, which has a slight uptrend?

1 There are 2 different colored lines because the initial training (in orange) was stopped at around 1.3M time steps, and resumed (in blue) from a checkpoint at around 1.1M time steps

2 Plots are from Tensorboard, visualizing data generated by Ray/RLlib

3 Epsilon starts at 1.0 and anneals to 0.02 over 10000 time steps. The sudden increase in magnitude of Q-values appear to come after resuming from checkpoint, but might just be a coincidence.


After training for more steps...

",37519,,37519,,6/8/2020 3:52,6/8/2020 3:52,"If the minimum Q value is decreasing and the maximum Q value increasing, is this a sign that dueling double DQN is diverging?",,0,0,,,,CC BY-SA 4.0 21714,2,,21692,6/7/2020 21:28,,3,,"

I want to know, since the input is given to a CNN, how would we encode these frames to serve as input to the CNN?

As nbro mentioned in a comment to your answer, this question has very recently been asked and answered here.

I also want to know since $s_1$ is equal to a set, which can be seen in line 4 of the algorithm, then why is $s_{t+1}$ equal to $s_t$, $a_t$, $x_{t+1}$?

The algorithm presented in the original DQN paper is relatively simple and written to express the main ideas of their approach (e.g. experience replay, preprocessing histories, gradient descent, etc.); in fact, it isn't even the exact algorithm used in the experiments! For example, the experiments use frame-skipping to reduce computation - this is not mentioned in Algorithm 1 in the paper. With that in mind, setting $s_{t+1}$ equal to $s_t, a_t, x_{t+1}$ in the algorithm signifies a general notion of constructing the next raw state $s_{t+1}$ from the previous preprocessed state $s_t$, previous action $a_t$, and current frame $x_{t+1}$. For example:

  • If the action space at the next timestep is constrained by the state, then the state may need additional parameters to encode the action space.
  • The algorithm needs some indication if a state is terminal, and such an indication may need to be encoded in the state.
  • If there is frame skipping, then multiple frames will be needed to construct the next state, possibly using the previous state as well.

The above examples should display how the encoding of the state cannot always simply be a stack of raw frames, or even a function of $s_t$, $a_t$ and $x_{t+1}$, and therefore a more general approach is often required.

",37607,,,,,6/7/2020 21:28,,,,0,,,,CC BY-SA 4.0 21715,1,,,6/7/2020 22:13,,1,60,"

When I say template matching, I'm referring to finding occurrences of a small image (the template) in a larger image.

The OpenCV library provides the trivial solution, which slides the template over every possible location of the image. While this implementation provides translation invariance, it provides no stretching/scaling invariance. To get stretch and translation invariance, one would need to iteratively stretch the template (or shrink the main image), running the original template check over the image at each scale. This increases complexity to $O(S * n^2)$, where $S$ is the number of different resolutions one checks - if one wants to check every possible resolution, the overall complexity is $O(n^4)$. Effectively, you generate $O(n^4)$ subsections and check if they're equal to the template.

From what I was taught, Image Segmentation Networks do just this - however, instead of using the basic template matching (i.e. checking the pixels match) the generated subsection is put through a classifier network - so this would be more expensive than standard templating.

My question is: are my calculations correct - for complete subsection generation, is the complexity $O(n^4)$ - and is there really no better algorithm, used by both template matching and image detection algorithms, for generating these subsections?

",37690,,2444,,6/8/2020 20:01,6/8/2020 20:01,Is subsection generation $O(n^4)$,,0,1,,,,CC BY-SA 4.0 21717,2,,21684,6/8/2020 3:27,,4,,"

David Ireland gives a fantastic answer, and I will provide an intuitive and gentle (but less rigorous) answer for those who are unfamiliar with the relevant statistical concepts.

Next reward $R_{t+1}$: The next reward $R_{t+1}$ is solely dependent on the current state $S_t$ and action $A_t$. It is only dependent on the policy because the policy details the probability distribution of actions given a state. Since we assume that the current state and action are given when calculating the expectation $\left(S_t = s, A_t = a\right)$, then the policy does not give us any new information, and therefore the next reward is independent of the policy.

Return $G_{t+1}$: By definition, $v_{\pi}(s') = \mathbb{E}_{\pi}[G_{t+1}|S_{t+1} = s']$. The value function is unaffected by sampling actions from the policy in the outer expectation $\left(\mathbb{E}_{\pi}[v_{\pi}(s')] = \mathbb{E}[v_{\pi}(s')]\right)$ since the value function is an expectation under the policy, and hence samples actions from the policy already.

Dropping $\pi$ from $\mathbb{E}_{\pi}$: The expectation under the current policy samples next states and rewards from the environment and also samples actions from our policy $\pi$. Because the next reward is independent of the policy given the current state and action, and because the value function is unaffected by sampling actions from the policy in the outer expectation, we can simply drop the policy from the outer expectation (the outer expectation will still sample next states and rewards from the environment).

",37607,,,,,6/8/2020 3:27,,,,0,,,,CC BY-SA 4.0 21719,1,21727,,6/8/2020 5:34,,9,3073,"

Lately, there are lots of posts on one-shot learning. I tried to figure out what it is by reading some articles. To me, it looks like similar to transfer learning, in which we can use pre-trained model weights to create our own model. Fine-tuning also seems a similar concept to me.

Can anyone help me and explain the differences between all three of them?

",32861,,2444,,6/8/2020 14:22,6/8/2020 15:51,"What is the difference between one-shot learning, transfer learning and fine tuning?",,1,0,,,,CC BY-SA 4.0 21723,1,21724,,6/8/2020 12:26,,4,608,"

The task (exercise 3.13 in the RL book by Sutton and Barto) is to express $q_\pi(s,a)$ as a function of $p(s',r|s,a)$ and $v_\pi(s)$.

$q_\pi(s,a)$ is the action-value function, that states how good it is to be at some state $s$ in the Markov Decision Process (MDP), if at that state, we choose an action $a$, and after that action, the policy $\pi(s,a)$ determines future actions.

Say that we are at some state $s$, and we choose an action $a$. The probability of landing at some other state $s'$ is determined by $p(s',r|s,a)$. Each new state $s'$ then has a state-value function that determines how good is it to be at $s'$ if all future actions are given by the policy $\pi(s',a)$, therefore:

$$q_\pi(s,a) = \sum_{s' \in S} p(s',r|s,a) v_\pi(s')$$

Is this correct?

",37627,,55968,,6/29/2022 4:15,6/29/2022 4:15,"How do we express $q_\pi(s,a)$ as a function of $p(s',r|s,a)$ and $v_\pi(s)$?",,1,0,,,,CC BY-SA 4.0 21724,2,,21723,6/8/2020 12:42,,6,,"

Not quite. You are missing the reward at time step $t+1$.

The definition you are looking for is (leaving out the $\pi$ subscripts for ease of notation)

$$q(s,a) = \mathbb{E}[R_{t+1} + \gamma v(s') | S_t=s,A_t=a] = \sum_{r,s'}(r +\gamma v(s'))p(s',r|s,a)\;.$$

Because $q(s,a)$ relates to the expected return at time $t$, and the return is defined as $G_t = \sum_{b = 0}^\infty \gamma ^b R_{t+b+1}$, the reward $R_{t+1}$ is also a random variable at time $t$ that we need to take the expectation with respect to, not just the state that we transition into.

",36821,,55968,,6/28/2022 13:12,6/28/2022 13:12,,,,0,,,,CC BY-SA 4.0 21725,2,,21707,6/8/2020 12:42,,1,,"

Your intuition is correct. We will be normalizing the inputs of the layer under consideration (just right before applying the activation function).

So, if this layer receives an input $\mathrm{x}=\left(x^{(1)} \ldots x^{(d)}\right)$, the formula for normalizing the $k^{th}$ dimension of $\mathrm{x}$ would look as follows: $$\widehat{x}^{(k)}=\frac{x^{(k)}-\mathrm{E}\left[x^{(k)}\right]}{\sqrt{\operatorname{Var}\left[x^{(k)}\right]}}$$

Note that in practice a constant $\epsilon$ is also added under the square root in the denominator to ensure stability.

Source: The original Batch Normalization paper (Section 3).

Andrew Ng's video on this topic might also be useful for illustration.

",34010,,34010,,6/8/2020 17:26,6/8/2020 17:26,,,,2,,,,CC BY-SA 4.0 21726,1,21733,,6/8/2020 13:06,,6,256,"

I'm having difficulty understanding the distinction between a bandit problem and a non-bandit problem.

An example of the bandit problem is an agent playing $n$ slot machines with the goal of discovering which slot machine is the most probable to return a reward. The agent learns to find the best strategy of playing and is allowed to pull the lever of one slot machine per time step. Each slot machine obeys a distinct probability of winning.

In my interpretation of this problem, there is no notion of state. The agent potentially can utilise the slot results to determine a state-action value? For example, if a slot machine pays when three apples are displayed, this is a higher state value than a state value where three apples are not displayed.

Why is there just one state in the formulation of this bandit problem? As there is only one kind of action (""pulling the slot machine lever""), there is effectively one action: the agent pulls a lever, which starts the game.

I am taking this a step further now. An RL agent purchases $n$ shares of an asset, and it's not observable whether the purchase will influence the price. The next state is the price of the asset after the purchase of the shares. If $n$ is sufficiently large, then the price will be affected; otherwise, there is a minuscule effect, if any, on the share price. Depending on the number of shares purchased at each time step, it's either a bandit problem or not.

It is not a bandit problem if $n$ is large and the share price is affected? It is a bandit problem if $n$ is small and the share price is not affected?

Does it make sense to have a mix of a bandit and non-bandit states for a given RL problem? If so, then the approach to solving should be to consider the issue in its entirety as not being a bandit problem?

",12964,,2444,,6/8/2020 17:15,6/8/2020 17:18,How do I recognise a bandit problem?,,1,0,,,,CC BY-SA 4.0 21727,2,,21719,6/8/2020 14:14,,6,,"

They are all related terms.

From top to bottom:

One-shot learning aims to achieve results with one or very few examples. Imagine an image classification task. You may show an apple and a knife to a human and no further examples are needed to continue classifying. That would be the ideal outcome - but achieved by an algorithm.

In order to achieve one-shot learning (or close) we can rely on knowledge transfer, just like the human in the example would do (we are trained to be amazing at image processing, but here we would also exploit other knowledge like abstract reasoning abilities, and so on).

This brings us to transfer learning. Generally speaking, transfer learning is a machine learning paradigm where we train a model on one problem and then try to apply it to a different one (after some adjustments, as we'll see in a second).

In the example above, classifying apples and knives is not at all trivial. However, if we are given a neural network that already excels at image classification, with super-human results in over 1000 categories... perhaps it is easy to adapt this model to our specific apples vs knives situation.

This ""adapting"", those ""adjustments"", are essentially what we call fine-tuning. We could say that fine-tuning is the training required to adapt an already trained model to the new task. This is normally much less intensive than training from scratch, and many of the characteristics of the given model are retained.

Fine-tuning usually covers more steps. A typical pipeline in deep learning for computer vision would be this:

  1. Get trained model (image classifier champion)
  2. Note the head of our model does not match our needs (there's probably one output per category, and we only need two categories now!)

  3. Swap the very last layer(s) of the model, so that the output matches our needs, but keeping the rest of the architecture and already trained parameters intact.

  4. Train (fine-tune!) our model on images that are specific to our problem (only a few apples and knives in our silly example). We often only allow the last layers to learn at first, so they ""catch up"" with the rest of the model (in this case we talk about freezing and unfreezing and discriminative learning rates, but that's a bit beyond the question).
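For illustration, a minimal PyTorch-style sketch of steps 3 and 4 (the torchvision ResNet, the 2-class apples-vs-knives head and the optimiser are just assumptions for the example):

    import torch.nn as nn
    import torch.optim as optim
    from torchvision import models

    model = models.resnet18(pretrained=True)       # a trained model (step 1)

    for param in model.parameters():               # freeze the pretrained backbone
        param.requires_grad = False

    model.fc = nn.Linear(model.fc.in_features, 2)  # step 3: swap the head (2 categories now)

    # step 4: fine-tune, at first letting only the new head learn
    optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)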

Note that some people may sometimes use fine-tuning as a synonym for transfer learning, so be careful about that!

",14744,,2444,,6/8/2020 15:51,6/8/2020 15:51,,,,6,,,,CC BY-SA 4.0 21733,2,,21726,6/8/2020 14:48,,4,,"

The bandit problem has one state, in which you are allowed to choose one lever among $n$ levers to pull.

Why is there just one state in the formulation of this bandit problem?

There is one state because the state does not change over time. Two notable consequences are that (i) pulling a lever does not change the internals of any slot machine (e.g. the distribution of rewards) and (ii) you are allowed to choose any lever without restrictions. More generally, there is no sequential aspect of the state in this problem, as future states are unaffected by past states, actions, and rewards.

It is not a bandit problem if $n$ is large and the share price is affected?

Correct! If the share price is affected, then future states would be influenced by past actions. This is because the price per share is affected, which is one aspect of the state. Thus, you would need to plan a sequential strategy for your purchases.

It is a bandit problem if $n$ is small and the share price is not affected?

It all depends on the problem: as long as the state before buying shares remains completely the same after you purchase some shares, then yes. Share price being unaffected is only one of the requirements; another example requirement is that the maximum number of shares purchased is fixed at each time step, independent of the shares purchased previously.

Does it make sense to have a mix of a bandit and non-bandit states for a given RL problem? If so, then the approach to solving should be to consider the issue in its entirety as not being a bandit problem?

It makes sense to allow the share price to either be affected or unaffected based on $n$ in the same problem. Since some actions (large $n$) change the state, then there are multiple states, and actions sequentially affect the next state. Hence it is not a bandit problem as a whole, as you correctly stated.

The agent potentially can utilise the slot results to determine a state-action value?

Correct! I suggest reading Chapter 2 of Sutton and Barto to learn some fundamental algorithms of developing such strategies.

Nice work on analyzing this problem! To help solidify your understanding and formalize the arguments above, I suggest that you rewrite the variants of this problem as MDPs and determine which variants have multiple states (non-bandit) and which variants have a single state (bandit).

",37607,,2444,,6/8/2020 17:18,6/8/2020 17:18,,,,0,,,,CC BY-SA 4.0 21734,2,,20997,6/8/2020 15:56,,0,,"

Are you sure the image quality in your test set and your phone camera images is similar? I once trained a CNN model on poor-quality images with very good validation accuracy, but when I tested it on an image from my camera it didn't work at all. I degraded the image quality by resizing the image from my camera to a small size and then back to the required size, and it worked perfectly.
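In case it helps, this is roughly what that degradation step can look like (a sketch with made-up file names and sizes, using Pillow):

    from PIL import Image

    img = Image.open('phone_photo.jpg')        # hypothetical file name
    small = img.resize((64, 64))               # throw away detail
    degraded = small.resize((224, 224))        # back to the model's input size
    degraded.save('phone_photo_degraded.jpg')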

",37512,,,,,6/8/2020 15:56,,,,0,,,,CC BY-SA 4.0 21735,2,,21682,6/8/2020 17:21,,0,,"

To my knowledge from reading about model-based and model-free reinforcement learning,

DQN and Double DQN are model-free reinforcement learning methods (see below for why I am mentioning this):

https://link.springer.com/referenceworkentry/10.1007%2F978-1-4614-7320-6_674-1

Under the heading "Definition" in that web page (you may need to scroll down; it can be accessed via the above link), it states that "model-free techniques require extensive experience."

Now, gathering extensive experience, depending on how fast you can go through states, can take a few days or even weeks: model-free methods require a lot of samples to learn.

Also, there are a lot of states: there are $256^{128}$ possible RAM states. That is a really big number (I'm just emphasizing that training may take a long while).

Since implementation specifics were not supplied in the question, I am assuming your implementation is correct, although you are using RAM. DQN used image data, and, having quickly skimmed the Double DQN paper (https://arxiv.org/pdf/1509.06461.pdf), I am assuming it used image data as well, because Double DQN was compared to DQN.

",30174,,2444,,7/4/2021 2:31,7/4/2021 2:31,,,,0,,,,CC BY-SA 4.0 21737,1,,,6/9/2020 1:28,,5,623,"

I have taken an algorithms course where we talked about LP significantly, and also many reductions to LPs. As I recall, normal LP is not NP-Hard. Integer LP is NP-Hard. I am currently taking an introduction to AI course, and I was wondering if CSP is the same as LP.

There seems an awful lot of overlap, and I haven't been able to find anything that confirms or denies my suspicions.

If they are not the same (or one cannot be reduced to the other), what are the core differences in their concepts?

",37718,,2444,,6/9/2020 18:34,7/9/2020 19:00,What are the differences between constraint satisfaction problems and linear programming?,,1,0,,,,CC BY-SA 4.0 21738,1,,,6/9/2020 2:25,,0,173,"

I have two sets of data: a training set and a test set. I use the training set to train a deep Q-network variant. I also continuously evaluate the agent's Q values on the test set every 5000 epochs, and I find that the Q values obtained on the test set do not converge, and neither do the policies.

iteration $x$: Q values for the first 5 test data are [15.271439, 13.013742, 14.137051, 13.96463, 11.490129] with policies: [15, 0, 0, 0, 15]

iteration $x+10000$: Q values for the first 5 test data are [15.047309, 15.5233555, 16.786497, 16.100864, 13.066223] with policies: [0, 0, 0, 0, 15]

This means that the weights of the neural network are not converging. Although I can manually test each policy at each iteration and decide which of the policies performs best, I would like to know whether correct training of the network should lead to weight convergence.

Training loss plot:

You can see that the loss decreases over time; however, there are occasional spikes in the loss which do not seem to go away.

",32780,,32780,,6/9/2020 12:50,6/9/2020 12:50,Should the network weights converge when training Deep Q networks?,,0,13,,,,CC BY-SA 4.0 21740,1,,,6/9/2020 7:09,,-1,93,"

Recently, I have been working on an action recognition project where my input data comes from a video stream. I have read about some of the relevant concepts, like ConvLSTM (convolutional LSTM), etc. I am looking for someone who has already worked on this kind of task and can share their work with me; that would be a really good help.

",28048,,,,,6/9/2020 7:46,Action recognition using video stream data,,1,0,,,,CC BY-SA 4.0 21741,1,,,6/9/2020 7:22,,1,290,"

I am building an LSTM for predicting a price chart. MAPE turned out to be the best loss function compared to MSE and MAE. The MAPE function looks like this:

$$\frac{1}{N}\sum^N_{i=1}\frac{|P_i - A_i|}{A_i}$$

where $P_i$ is the current predicted value and $A_i$ is the corresponding actual value. In neural networks, it is usually advised to scale the data to a small range close to zero, such as [0, 1]. In this case, a scaling range of [0.001, 1] is necessary to avoid a possible division by zero.
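For reference, the loss above is simply (a minimal sketch):

    import numpy as np

    def mape(pred, actual):
        # mean absolute percentage error, as in the formula above
        return np.mean(np.abs(pred - actual) / actual)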

Due to the MAPE denominator, the closer the scaling range is to zero, the larger the loss function becomes for a given $|P_i - A_i|$. If, on the other hand, the data is de-scaled just before it is passed to the MAPE function, the same $|P_i - A_i|$ would give a smaller MAPE.

Consider a hypothetical example with a batch size of 1, $|P_i - A_i| = 2$ (this is scale independent) and $A_i = 200$. Therefore, scaled $A_i = 0.04$. The MAPE error loss for the scaled version would be $\frac{2}{0.04} = 50$, and for the unscaled version $\frac{2}{200} = 0.01$.
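
To make the arithmetic concrete, here is a minimal sketch of the per-sample MAPE term with the hypothetical numbers above (these are illustrative values, not real data):

def mape_term(abs_err, actual):
    # |P_i - A_i| / A_i: the per-sample contribution to the MAPE loss
    return abs_err / actual

print(mape_term(2, 0.04))  # 50.0  (scaled A_i)
print(mape_term(2, 200))   # 0.01  (de-scaled A_i)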

This will mean that the derivative w.r.t each weight of the scaled version will also be larger, therefore making the weights themselves even smaller. Is this correct?

I am concluding that scaling the data when using MAPE will effectively shrink the weights down more than necessary. Is that a good reason why I am seeing significantly better performance with de-scaled MAPE calculation?

Note: I am not keeping the same hyperparameters for scaled and de-scaled MAPE, but a Bayesian optimization is performed for both runs. In the latter (de-scaled) case a deeper network was preferred, while in the scaled MAPE case more regularisation was preferred.

Some expertise on this would be helpful.

",37728,,,,,7/10/2020 17:36,LSTM - MAPE Loss Function gives Better Results when Data is De-Scaled before Loss Calculation,,0,0,,,,CC BY-SA 4.0 21742,2,,21740,6/9/2020 7:46,,1,,"

If you want to build a stream recognition pipeline, then you can use OpenCV, Kafka, and Spark together: how to build pipeline of video stream.

And for action recognition using ConvLSTM, this link will help you: action recognition using Convlstm.

",32861,,,,,6/9/2020 7:46,,,,0,,,,CC BY-SA 4.0 21743,1,22577,,6/9/2020 8:43,,1,200,"

Not sure if this is the right place, but I was wondering if someone could briefly explain to me the differences & similarities between simulated annealing and deterministic annealing?

I know that both methods are used for optimization and both originate from statistical physics with the intuition of reaching a minimum energy (cost) configuration by cooling (i.e. slowly reducing the temperature in the Boltzmann distribution to calculate probabilities for configurations).

Unfortunately, Wikipedia has no article about deterministic annealing and the one about simulated annealing does not mention any comparison.

This resource has a brief comparison section between the two methods, however, I do not understand why the search strategy of DA is

based on the steepest descent algorithm

and how

it searches the local minimum deterministically at each temperature.

Any clarification appreciated.

",37120,,2444,,6/11/2020 13:26,7/19/2020 14:58,What is the difference between simulated annealing and deterministic annealing?,,1,3,,,,CC BY-SA 4.0 21758,1,,,6/9/2020 12:02,,2,48,"

I am interested in capsule neural networks. I have already read the paper Dynamic Routing Between Capsules, but it is a little bit difficult to follow. Is there a tutorial for beginners on capsule neural networks?

",32861,,2444,,6/9/2020 14:51,6/9/2020 14:51,Is there a tutorial for beginners on capsule neural networks?,,0,0,,,,CC BY-SA 4.0 21759,2,,21737,6/9/2020 12:52,,3,,"

LP is a mathematical problem to optimize a linear function subject to linear (in)equality constraints of the sort: $$ min_x w^t x $$ $$ Ax<b $$

where $x$ is a continuous variable (vector). If the problem is feasible and bounded, then a globally optimal solution exists for $x$ (though it need not be unique).

A constraint satisfaction problem on the other hand is just the constraints part of the above LP. So we are not interested in the optimal solution, just a set of values that will satisfy the constraints. If the domain of your variables is continuous, then it can be recast as a dummy LP with a fake objective e.g. $min_x 0, s.t. Ax<b$, and then solved with an LP solver (e.g. simplex algorithm)
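
As an illustration, here is a minimal sketch (the constraints are made up for the example) of recasting a continuous constraint satisfaction problem as a dummy LP with a zero objective, solved with scipy:

import numpy as np
from scipy.optimize import linprog

# Hypothetical continuous CSP: find any x with x1 + x2 <= 4 and -x1 + x2 <= 1
A = np.array([[1.0, 1.0],
              [-1.0, 1.0]])
b = np.array([4.0, 1.0])
c = np.zeros(2)  # fake objective: min 0, we only care about feasibility

res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None), (None, None)])
print(res.status, res.x)  # status 0 means a feasible point was found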

However, many CSPs have discrete variables, so they are not LPs. Also, CSP problems may have too many variables for plain LP algorithms to handle, so there are problem-specific shortcut algorithms. We are often happy to find a feasible solution and forget about ""optimality"".

",36518,,,,,6/9/2020 12:52,,,,2,,,,CC BY-SA 4.0 21760,1,,,6/9/2020 13:09,,2,143,"

I have some trouble with the probability densities described in the original paper. My question is based on Goodfellow's paper and tutorial, respectively: Generative Adversarial Networks and NIPS 2016 Tutorial: Generative Adversarial Networks.

When Goodfellow et al. talk about probability distributions/densities in their paper, are they talking about discrete or continuous probability distributions? I don't think it's made clear.

In the continuous case, it would imply, for instance, that both $p_{data}$ and $p_g$  must be differentiable since the optimal discriminator (see Prop. 1) is essentially a function of their ratio and is assumed to be differentiable. Also, the existence of a continuous $p_g$ is non-trivial. One sufficient condition would be that $G$ is a diffeomorphism (see normalising flows), but this is rarely the case. So it seems that much stronger assumptions are needed.

In the case that the answer is discrete distributions: the differentiability of $G$ implies continuous outputs of the generator. How can this work together with a discrete distribution of its outputs? Does the answer have something to do with the fact that we can only represent a finite set of numbers with computers anyway?

",37736,,2444,,6/9/2020 20:33,6/9/2020 20:33,Is the generator distribution in GAN's continuous or discrete?,,0,0,,,,CC BY-SA 4.0 21761,1,,,6/9/2020 13:52,,1,401,"

While implementing the Shi-Tomasi corner detection algorithm, I got stuck in deciding a suitable threshold for corner detection.

In the Shi-Tomasi algorithm, all those points that qualify $\min( \lambda_1, \lambda_2) > \text{threshold}$ are considered as corner points. (where $\lambda_1, \lambda_2$ are eigenvalues).

My question is: what is a suitable criterion to decide that threshold?

",36484,,2444,,12/2/2021 8:34,5/1/2022 9:08,How to choose a suitable threshold value for the Shi-Tomasi corner detection algorithm?,,1,1,,,,CC BY-SA 4.0 21764,1,,,6/9/2020 19:42,,5,1209,"

In Reinforcement Learning, when the reward function is not differentiable, a policy gradient algorithm is used to update the weights of a network. In the paper Neural Architecture Search with Reinforcement Learning, they use the accuracy of one neural network as the reward signal and then choose a policy gradient algorithm to update the weights of another network.

I cannot wrap my head around the concept of accuracy as a non-differentiable reward function. Do we need to find the function and then check if it is mathematically non-differentiable?

I was wondering if I can use another value, for example silhouette score (in a different scenario) as the reward signal?

",37744,,1847,,6/10/2020 13:03,6/10/2020 15:45,Non-differentiable reward function to update a neural network,,1,5,,,,CC BY-SA 4.0 21767,1,,,6/10/2020 2:46,,4,414,"

Why are the state-value and action-value functions sometimes written in lower-case letters and other times in capitals? For instance, in the Q-learning algorithm (page 131 of Barto and Sutton's book, but not only there), the capitals are used, as in $Q(S, A)$, while in the Bellman equation it is $q(s,a)$. Why is that?

",2254,,2444,,6/10/2020 10:13,6/10/2020 10:17,Why are the value functions sometimes written with capital letters and other times with lower-case letters?,,2,0,,,,CC BY-SA 4.0 21769,2,,21767,6/10/2020 4:45,,1,,"

Ordinary variables vs Random Variables

The difference is whether you're talking about an ordinary variable or a random variable.

For instance, the q-function (lowercase) is an expectation value (i.e. not a random variable), conditioned on a specific state-action pair: $$ q(s,a)\ =\ \mathbb{E}_t\left\{ R_t+\gamma R_{t+1} + \gamma^2R_{t+2}+\dots\,\Big|\, S_t=s, A_t=a \right\} $$ Then, in some cases, some authors may abuse notation slightly by feeding a random variable into the q-function, e.g. $q(S_t,a)$, $q(s,A_t)$ or even $q(S_t,A_t)$, thereby undoing some or all of the conditioning in the definition of the q-function as an expectation value.

Feeding a random variable into a function like the q-function results in an output that is a random variable in its own right. It is for this reason that some authors choose to give the function itself an uppercase letter as well.

My advice would be to think to yourself, is this a random variable? For the rest, I would interpret upper/lowercase as no more than a hint to the reader.

",37751,,,,,6/10/2020 4:45,,,,3,,,,CC BY-SA 4.0 21771,1,,,6/10/2020 8:12,,1,60,"

I have tried implementing a basic version of the Shi-Tomasi corner detection algorithm. The algorithm works fine for corners, but I came across a strange issue: the algorithm also gives high values for slanted (tilted) edges.

Here's what I did:

  • Took the grayscale image
  • Computed dx and dy of the image by convolving it with sobel_x and sobel_y
  • Took a window of size 3 and moved it across the image to compute the sum of the elements in the window.
  • Computed the sum of the window elements from the dx image and the sum of the window elements from the dy image, and saved them in sum_xx and sum_yy respectively.
  • Created a new image (call it result) where the pixel for which the window sums were computed was replaced with min(sum_xx, sum_yy), as the Shi-Tomasi algorithm requires.

I expected it to give maximum values for corners, where dx and dy are both high, but I found it giving high values even for tilted edges.

Here are some of the outputs I received:

Result:

So far so good: corners have high values.

Another Image:

Result:

Here's where the problem lies: edges have high values, which is not expected by the algorithm. I can't fathom how edges can have high values for both x and y gradients (Sobel being a close approximation of the gradient).

I would like to ask for your help in fixing this issue for edges. I am open to any suggestions and ideas.

Here's my code (if it helps):

import cv2

def shi_tomasi(image, w_size):
    ans = image.copy()
    dy, dx = sy(image), sx(image)

    ofset = int(w_size/2)
    for y in range(ofset, image.shape[0]-ofset):
        for x in range(ofset, image.shape[1]-ofset):

            s_y = y - ofset
            e_y = y + ofset + 1

            s_x = x - ofset
            e_x = x + ofset + 1

            w_Ixx = dx[s_y: e_y, s_x: e_x]
            w_Iyy = dy[s_y: e_y, s_x: e_x]

            sum_xx = w_Ixx.sum()
            sum_yy = w_Iyy.sum()

            ans[y][x] = min(sum_xx, sum_yy)
    return ans

def sy(img):
    t = cv2.Sobel(img,cv2.CV_8U,0,1,ksize=3)
    return t
def sx(img):
    t = cv2.Sobel(img,cv2.CV_8U,1,0,ksize=3)
    return t
",36484,,,,,6/10/2020 12:04,Corner detection algorithm gives very high value for slanted edges?,,1,1,,,,CC BY-SA 4.0 21772,2,,21767,6/10/2020 10:17,,4,,"

In the Sutton and Barto book $q(s,a)$ is used to denote the true expected value of taking action $a$ in state $s$, whereas capital $Q(s,a)$ is used to denote an estimate of $q(s,a)$. However, there is likely to be a lot of inconsistency in the literature as each author has their own preference on how to denote things. I would encourage you to consider whether the value you are reading is to denote an estimate or the true value.

",36821,,,,,6/10/2020 10:17,,,,0,,,,CC BY-SA 4.0 21776,2,,21771,6/10/2020 12:04,,1,,"

I can't go into details of the algorithm but here's some intuition about what's apparently going wrong:

The Sobel transformation identifies mostly-vertical and mostly-horizontal edges. For slanted edges, it also shows a response, just a bit weaker.

By using a window and taking the minimum of vertical and horizontal response, you identify points where you have both a vertical and a horizontal response. However, this is also the case at the slanted edges.

Given the structure of this simplistic corner detection algorithm, you can only expect to get meaningful results for strictly horizontal and vertical lines/edges and their intersections/corners. It is just not suitable for slanted edges.

",22993,,,,,6/10/2020 12:04,,,,2,,,,CC BY-SA 4.0 21780,2,,21382,6/10/2020 13:14,,3,,"

Mathematical equations are generally expressed in a sequential form known as 'infix notation'. It is characterised by the placement of operators between operands. To make the order of the operations in infix notation unambiguous, a lot of parentheses are needed. Infix notation is more difficult to parse by computers than prefix notation (e.g. + 2 2) or postfix notation (e.g. 2 2 +).

There is a deep learning approach to symbolic mathematics recommended in the research paper by Guillaume Lample and François Charton. They have found an interesting approach to use deep neural networks for symbolic integration and differentiation equations. This paper proposes a syntax for representing mathematical problems, and methods for generating large datasets that can be used to train sequence-to-sequence models.

Deep Learning for Symbolic Mathematics

This approach essentially represents mathematical problems in prefix notation. First, a symbolic syntax tree is constructed that captures the order and values of the operations in the expression. Second, the tree is traversed from top to bottom and from left to right. If the current node is a primitive value (a number), add it to the sequence string. If the current node is a binary operation, add the operation's symbol to the sequence string. Then, add the representation of the left child node (this can be recursive). Then, add the representation of the right child node. This procedure results in a prefix-notation sequence for the expression.
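
As a small illustration of this traversal (the expression and the tuple encoding below are made-up examples, not taken from the paper), a syntax tree can be serialised to prefix notation like this:

# A node is either a leaf (number or symbol) or a tuple (operator, left, right).
# Hypothetical expression tree for 2 * (3 + x):
expr = ('*', 2, ('+', 3, 'x'))

def to_prefix(node):
    # Traverse top-down, left-to-right: operator first, then children.
    if not isinstance(node, tuple):
        return [str(node)]  # primitive value: number or symbol
    op, left, right = node
    return [op] + to_prefix(left) + to_prefix(right)

print(' '.join(to_prefix(expr)))  # prints: * 2 + 3 x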

We can expect further advances in this area with the emergence of better symbolic learning models leveraging attention-based transformers and other neuro-symbolic learning models. Recent work by MIT, DeepMind and IBM has shown the power of combining connectionist techniques like deep neural networks with symbolic reasoning. Please find the details in the following article.

The Neuro-Symbolic Concept Learner

",12861,,,,,6/10/2020 13:14,,,,0,,,,CC BY-SA 4.0 21783,2,,21764,6/10/2020 13:21,,4,,"

I cannot wrap my head around the concept of accuracy as a non-differentiable reward function. Do we need to find the function and then check if it is mathematically non-differentiable?

In reinforcement learning (RL), the reward function is often not differentiable with respect to any learnable parameters. In fact it is quite common to not know the function at all, and apply a model-free learning method based purely on sampling many observations of state, action, reward and next state. For reward, you don't need to know the reward function as a function of your parameters, but you do need to know how to calculate it from each observation. That means that the reward is either provided as part of the environment, or it is clear how to calculate it from the initial state, action or next state. In the case of an agent that creates a neural network and then reports the accuracy, you can probably view the NN training process and its results including accuracy on a validation set as a ""black box"" that reports back an arbitrary reward signal.

In Reinforcement Learning, when reward function is not differentiable, a policy gradient algorithm is used to update the weights of a network

Policy gradient methods are one broad family of RL methods which are also often model-free. Alternatively, value-based methods are broadly the other choice (e.g. Q-learning) or can be used in combination with policy gradient methods (e.g. Actor-Critic). All of these can be model-free, and often are.

Unless you want to apply a model-based RL method, then you have no need to find the reward function in an explicit form based on any parameters. Even if you did want to use a model-based approach, the reward function does not need to be differentiable.

I was wondering if I can use another value, for example silhouette score (in a different scenario) as the reward signal?

Yes this would likely be viable, and could be made to work similarly to the paper, provided there is some meaningful connection between the parameters you manipulate and the end result.

RL can be made to work as a generic way of optimising a numeric measure influenced by a vector of parameters. That includes indirect optimisation of neural networks by non-differentiable metrics such as accuracy. There is nothing special about the choice of accuracy here; the metric just needs to be related to the parameters being learned.
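
As a rough illustration of this idea (not the method from the paper), here is a minimal sketch of a score-function (REINFORCE-style) update, where black_box_reward is an assumed function returning any scalar metric, e.g. validation accuracy, for a candidate parameter vector:

import numpy as np

def reinforce_step(theta, black_box_reward, lr=0.01, sigma=0.1, n_samples=32):
    # theta parameterises the mean of a Gaussian 'policy' over candidate
    # parameter vectors; the reward itself never needs to be differentiable.
    eps = np.random.randn(n_samples, theta.size)
    candidates = theta + sigma * eps
    rewards = np.array([black_box_reward(c) for c in candidates])
    advantages = rewards - rewards.mean()  # baseline to reduce variance
    grad = (advantages[:, None] * eps).mean(axis=0) / sigma
    return theta + lr * grad  # gradient ascent on the expected reward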

There is a catch. Indirect learning using RL can produce very noisy training data, due to e.g. covariance between all the parameters, and it can take many samples to get a clear gradient signal through all the noise and make meaningful updates. It can be inefficient compared to other optimisation methods if they are available - however, the ability to extract a gradient for the parameters even when rules are complex and non-differentiable is a nice feature of RL.

",1847,,1847,,6/10/2020 15:45,6/10/2020 15:45,,,,0,,,,CC BY-SA 4.0 21784,1,,,6/10/2020 13:38,,2,331,"

I am following the OpenAI's spinning up tutorial Part 3: Intro to Policy Optimization. It is mentioned there that the reward-to-go reduces the variance of the policy gradient. While I understand the intuition behind it, I struggle to find a proof in the literature.

",37770,,2444,,10/10/2020 15:51,10/10/2020 15:51,"What is the proof that ""reward-to-go"" reduces variance of policy gradient?",,0,3,,,,CC BY-SA 4.0 21786,2,,15371,6/10/2020 15:31,,1,,"

Mask RCNN can be a very heavy model for a simple classification task. It is designed to handle multiple objects in a single image. So I would suggest you use much simpler models, like VGGNet or ResNet, which are backbones of Mask RCNN. The biggest hurdle you might face is the dataset. If you are trying to capture even small differences between knives and classify them, you will need at least 2000 images per knife type, which might be difficult to get. You may have to do data augmentation.

",37773,,,,,6/10/2020 15:31,,,,0,,,,CC BY-SA 4.0 21787,1,,,6/10/2020 15:48,,1,31,"

I want to predict how open the mouth is, given a face image. It's a regression problem (0 = mouth not open, 1 = mouth completely open), and values between 0 and 1 are also allowed. A ConvNet works fine for one person, but when I train with many people, in the hope that it will generalize to an unseen person, the model suffers from not knowing the limits of a person's mouth.

For example, if a new person uses the model to predict, the model doesn't have a clue whether this person has completely opened the mouth or not. Because it's hard to know how much a person can open the mouth from one image. People's mouth openness capability is not the same. Some guys cannot open their mouth that much, but some guys can open the mouth like they can swallow an apple. The only way you can know how much a person can open the mouth is to look at multiple images of their mouth movements, especially when they open the mouth completely.

I want to know how to make the model know the limit of a person's mouth by using the info from past images.

Is there a way for me to use a few unlabeled images of a new person in order to help the model calibrate its prediction? How do I do it?

This should help the model know the min/max of the person's mouth and also knows the intermediate values between 0 and 1. If you run the model continuously on a webcam, I expect the prediction to be smooth (not noisy).

My idea is to encode many images into an embedding that can be used as a calibration vector. The vector will be fed into the model along with the person's image. But I am not sure how to do it. Any suggestions/tutorials would be welcomed.

",20819,,,,,6/10/2020 15:48,How to calibrate model's prediction given past images?,,0,0,,,,CC BY-SA 4.0 21789,1,,,6/10/2020 18:17,,1,459,"

If my understanding of an LSTM is correct then the output from each LSTM unit is the hidden state from that layer. For the final layer if I wanted to predict e.g. a scalar real number, would I want to add a dense layer with 1 neuron or is it recommended to have a final LSTM layer where the output has just one hidden unit (i.e. the output dimension of the final hidden state is 1)? If we didn't add the dense layer, then the output from the hidden layer I believe would be between (-1,1), if you use the traditional activations in the LSTM unit.

Apologies if I've used wrong terminology, there seems to be some inconsistency with LSTM's when going between literature and definition in TensorFlow etc.

",36821,,,,,6/10/2020 18:17,Do you have to add a dense layer onto the final layer of an LSTM?,,0,3,,,,CC BY-SA 4.0 21790,2,,18068,6/10/2020 21:50,,2,,"

If you used your five $X_{test}$ sets multiple times (to measure the average AUC) to decide on the best set of hyperparameters (i.e. optimizer, learning rate, batch size, dropout, activation) then yes, you successfully conducted hyper-parameter optimization. However, the AUC you received for the best set of hyperparameters found (by manual tuning) is not representative of the real performance of your model.

This is because the fact of using a test set to tune the parameters of your model degenerates it back to a ""training"" set, because the data is not being used to measure the performance of the model but to improve it instead (although on a different level of abstraction, i.e. not to directly influence the parameters of the model, such as neural network weights), making the resulting AUC an overly optimistic biased estimator of the real AUC (that it could have resulted in for an unseen test dataset).

That's why, if you care both about hyperparameter optimisation and being able to measure the ""real"" performance of your model, you need to split your dataset into three ""buckets"": training set $X_{train}$, validation set $X_{val}$ and test set $X_{test}$. $X_{test}$ should only be used once (after you've trained and tuned the model), and assuming it has enough samples and the samples are representative of the unseen data, you should get a good estimate of your model performance. $X_{val}$, on the other hand, is your validation set which you can reuse as many times as you want to find the optimal set of hyperparameters that result in the highest performance (i.e. AUC).
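
For example, a minimal sketch of such a three-way split (the 60/20/20 proportions and the dummy data are arbitrary choices for illustration):

import numpy as np
from sklearn.model_selection import train_test_split

X, y = np.random.rand(100, 5), np.random.randint(0, 2, 100)  # dummy data

# First carve out the training bucket, then split the remainder into val/test.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)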

",22835,,,,,6/10/2020 21:50,,,,1,,,,CC BY-SA 4.0 21791,2,,18111,6/11/2020 2:18,,1,,"

How does policy evaluation work for continuous state space model-free approaches? ... Let's say you use a DQN to find another policy, how does model-free policy evaluation work then?

Policy evaluation is the process of determining state-value $v_{\pi}(s)$ or action-value $q_{\pi}(s, a)$ functions for the current policy. In the context of continuous state and action spaces without a model of the environment, policy evaluation must incorporate the agent's past experience instead of the model dynamics and will generally use a function approximator such as a neural network to estimate the action-values. Many popular approaches apply online updates to the function approximator; e.g., DQN combines Temporal-Difference targets and gradient descent to change the weights of the neural network and the resultant action-value estimates. Since

  • we gradually change the weights of the neural network at each gradient descent step,
  • the estimated action-values are solely dependent on the weights of the neural network,
  • the current policy is solely dependent on the estimated action-values (e.g. DQN takes the action with largest action-value),

then policy evaluation (updating the estimated action-value function to better match the true action-value function under the current policy) and policy improvement (greedily changing the current policy based on the new estimated action-value function) occur simultaneously at each gradient descent step. In DQN, a gradient descent step occurs at each time step.

I am thinking of Monte Carlo simulation, but that would require many many episodes.

After changing the action-value function, we may get a new policy. We assume that the action-value function of the old policy is similar to that of the new policy (although it is not guaranteed) because the change in weights of the neural network was small. Therefore, we use the estimated action-value function of the old policy as an initial estimate of the action-value function of the new policy. Specifically, we use the same neural network with the same weights as the initial approximation. This is computationally convenient, as it prevents the need to start the next policy evaluation update from scratch (e.g. with a Monte Carlo simulation over a painfully large number of episodes).

Theoretically, a model-based approach for the discrete state and action space can be computed via dynamic programming and solving the Bellman equation.

This same technique of using the estimated action-values of the old policy as the initial estimates for the action-values of the new policy is employed by some Dynamic Programming methods such as value iteration, albeit with known dynamics. Generalized Policy Iteration (GPI) is the notion of letting policy evaluation and policy iteration interact on whatever granularity deemed necessary for the problem at hand. A consequence of adopting the GPI paradigm is the choice to halt policy evaluation before convergence of the action-value function. Many deep reinforcement learning algorithms take this to an extreme and perform policy evaluation and policy improvement simultaneously during a single gradient descent step. For reference, Chapter 4 of Sutton and Barto provides a brief summary of these ideas.

",37607,,,,,6/11/2020 2:18,,,,3,,,,CC BY-SA 4.0 21792,1,,,6/11/2020 3:31,,0,133,"

For traditional neural networks, I know that we can't constrain the output to be strict integers. My question is: what technique do GANs use to produce integer outputs that can then be converted to RGB colored pictures?

",37178,,2444,,6/11/2020 10:23,6/11/2020 10:23,How GAN generator produce integer RGB colored picture?,,0,2,,,,CC BY-SA 4.0 21793,1,21796,,6/11/2020 4:05,,1,125,"

I have read some blogs (like 1, 2 or 3) about what the difference between the three of them is. I am trying to build an open-domain conversational agent using natural language AI, one that can hold casual conversations, like a friend. So, for that, I want to know the importance of NLP, NLG, and NLU, so that I can learn that part first.

",32861,,2444,,6/11/2020 10:31,6/11/2020 10:31,"When to use NLP, NLG and NLU in conversation agents?",,1,0,,,,CC BY-SA 4.0 21796,2,,21793,6/11/2020 8:30,,1,,"

They're all important. NLP is an umbrella term that includes the other two; NLG is only concerned with generating language, i.e. transforming some internal data structure into human language. NLU is about processing information contained in language, and putting it into relation with a knowledge base, etc.

If you don't know anything about any of these fields, then I suggest your aim is far too optimistic. I work for a company that provides a conversational AI platform for businesses to develop their own agents, and it is a complex area.

If you want to have a quick go and pick up some experience, I suggest you start with ELIZA. Even though this is 'ancient', many modern chatbots still work on the same principles. There are many implementations in a number of programming languages, so you should be able to find one that suits you and you can tinker around with it.

",2193,,,,,6/11/2020 8:30,,,,0,,,,CC BY-SA 4.0 21797,1,,,6/11/2020 10:16,,5,2548,"

During my readings, I have seen many authors using the two terms interchangeably, i.e. as if they refer to the same thing. However, we all know that Google first coined the term "knowledge graph" to refer to their new way of making use of their knowledge base. Afterward, other companies started claiming to use knowledge graphs.

What are the technical differences between the two? Concrete examples will be very useful to understand better the nuances.

",32217,,2444,,12/1/2021 10:27,10/25/2022 6:31,What are the differences between a knowledge base and a knowledge graph?,,3,0,,,,CC BY-SA 4.0 21798,1,,,6/11/2020 10:24,,0,74,"

I want to make a drone which can follow static and dynamic waypoints. I am a total beginner in the drone field, so I can't figure out whether I should use Reinforcement Learning or other learning methods to make the drone follow both static and dynamic waypoints. If RL is the best choice for the task, then how would I go about training the model and uploading it to the flight controller? And if RL is not required, then what should I use in order to achieve this task?

Please let me know how should I begin with this task

",32071,,,,,6/11/2020 10:24,Can Reinforcement Learning be used for UAV waypoint control?,,0,3,,,,CC BY-SA 4.0 21801,2,,7088,6/11/2020 12:25,,3,,"

Take my answer as a side note to that given by cantordust:

If one can verify that an activation function performs well in some cases, that good behavior often extrapolates to other problems. Thus, by testing activation functions on a few different problems, one can often infer how well (or badly) it will perform on most problems. The following video shows how different activation functions perform in different problems:

https://www.youtube.com/watch?v=Hb3vIYUQ_I8

One can verify that an activation function usually performs well in all cases, or the other way around: it performs poorly in all cases. As cantordust says, I would recommend always starting with leaky ReLU: it is simple, efficient, and generally produces nice results in a wide variety of problems. It also evades the dying ReLU problem, and does not suffer from the vanishing gradient problem. The only thing to keep in mind is the exploding gradient problem if the neural network is too deep, or if it is a recurrent neural network, which is essentially the same issue.

The video shows that other activation functions worth trying (in addition to leaky ReLU) are Gaussian, Sinusoid, or Tanh.

",37800,,37800,,6/11/2020 17:41,6/11/2020 17:41,,,,3,,,,CC BY-SA 4.0 21802,2,,21797,6/11/2020 12:27,,1,,"

Based on the related Wikipedia, a knowledge base (KB) is:

a technology used to store complex structured and unstructured information used by a computer system. The initial use of the term was in connection with expert systems which were the first knowledge-based systems.

As there are different representation models for a KB, we can find different terminology in different domains. For example, in some AI articles, it's called an ontology.

A knowledge graph (KG) is another object model for KB realization, which was introduced by Google for its search engine (as you have mentioned). Hence, a KG is a specialization of a KB. You can find more information in the paper Knowledge Graphs, such as more history about KGs or a formal definition of them:

knowledge graph is a graph of data intended to accumulate and convey knowledge of the real world, whose nodes represent entities of interest and whose edges represent relations between these entities.

Moreover, you can find some articles about contextual KGs (CKGs), such as the papers Learning Contextual Embeddings for Knowledge Graph Completion and KG$^2$: Learning to Reason Science Exam Questions with Contextual Knowledge Graph Embeddings.

",4446,,2444,,6/11/2020 13:59,6/11/2020 13:59,,,,3,,,,CC BY-SA 4.0 21803,1,,,6/11/2020 16:35,,1,77,"

When I was learning about neural networks, I saw that a complex neural network can learn the MNIST dataset, and that a simple convolutional network can also learn it. So I would like to know if we can achieve a CNN's functionality by just using a simple neural network without the convolution layers, and, if we can, how to convert a CNN into an ANN.

",17024,,2444,,6/11/2020 16:41,6/12/2020 7:59,Can we achieve what a CNN can do with just a normal neural network?,,2,0,,,,CC BY-SA 4.0 21805,2,,21803,6/11/2020 16:47,,1,,"

The convolutional aspect of a CNN comes purely from the connections between layers. Instead of a fully-connected network, which can be difficult to train and tends to overfit more, the convolutional network utilizes hierarchical patterns in the data to limit the number of connections - a local edge detection feature in an image analysis network, for example, only needs input from a small number of local pixels, not the entire image. But in principle, you could assign weights to a fully-connected network to perfectly mimic a convolutional one - you just set the weights of the unneeded connections to zero. Because a general ANN has all the connections present in a CNN plus more, it can do anything a CNN can do plus more, although the training can be more difficult.
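
To make the 'zeroed weights' idea concrete, here is a minimal 1-D sketch (the kernel and input values are arbitrary): a convolution is equivalent to multiplying by a fully-connected weight matrix whose entries outside each local window are zero.

import numpy as np

w1, w2, w3 = 0.2, 0.5, 0.3                 # a 3-tap convolution kernel
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])    # a 5-element input

# Fully-connected weight matrix with the unneeded connections set to zero:
W = np.array([
    [w1, w2, w3, 0.0, 0.0],
    [0.0, w1, w2, w3, 0.0],
    [0.0, 0.0, w1, w2, w3],
])

conv_as_dense = W @ x
conv_direct = np.convolve(x, [w3, w2, w1], mode='valid')
print(conv_as_dense, conv_direct)  # identical outputs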

",2841,,,,,6/11/2020 16:47,,,,2,,,,CC BY-SA 4.0 21807,1,,,6/11/2020 19:19,,1,42,"

In section 6.1 of the paper Neural Networks in Economics, the authors say

this leads to the problem, that no risk can be formulated which shall be minimized by a Neural Network learning algorithm.

So, why can't neural networks be applied to preference learning problems?

See sections 6.0 of the same paper for a definition of preference learning.

",37807,,2444,,6/11/2020 23:43,6/11/2020 23:43,Why can't neural networks be applied to preference learning problems?,,0,1,,,,CC BY-SA 4.0 21808,2,,20831,6/11/2020 21:52,,2,,"

Sutton and Barto state, ""The reward signal is your way of communicating to the robot [agent] what you want it to achieve, not how you want it achieved."" Since you stated that the goal is to reach the finish line first, then a reward of $1$ for winning, $0$ for losing, and $0$ at all other time steps seems to fit that narrative. If a draw is identical to a loss, then it should provide reward $0$; otherwise, a reward of $0.5$ seems reasonable. These rewards provide model interpretability: an expected return of $p$ (estimated with a state-value or action-value) at a certain state under the current policy would signify a $p$ chance of winning. Also, keeping the rewards in absolute value at most 1 can aid in training speed and prevent divergence, but it often isn't necessary for deep reinforcement learning problems. You most certainly can add other rewards based on partial progress towards the goal, but as it seems you found out, they may lead to incorrect results.

That being said, I would focus on the training process instead of a finely-tuned reward signal. Since there is a known goal state in the racing game (the finish line), I suggest training the RL agent by first initializing all racer agents only a few steps away from the goal state at the beginning of each episode. These episodes are shorter and therefore should provide a more dense reward signal. When your RL agent has learned a winning policy (e.g. wins more often than not), then initialize the agents slightly further from the goal state at the beginning of each episode. Also, continue to use and train the same neural network. Since the neural network presumably knows a winning policy at states near the goal state, then by initializing the agents only a few states further back, the RL agent is given a warm start and only needs to learn a policy for a few more states. The policy encoded by the neural network essentially contains a refined reward signal for the states close to the goal state since it is based on a winning policy; this helps prevent the sparsity problem caused by only supplying a reward at episode completion. You can repeat this process by initializing the agents slightly further from the goal state once the RL agent has learned a winning policy while continuing to use and train the same neural network.
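
A minimal sketch of this curriculum idea (make_env and agent.train are hypothetical helpers, not part of any particular library):

def curriculum_train(agent, make_env, max_distance, start=3, step=3, threshold=0.5):
    # Start episodes only a few steps from the finish line, and move the
    # starting position further back once the agent wins more often than not.
    distance = start
    while distance <= max_distance:
        env = make_env(start_distance_from_goal=distance)
        win_rate = agent.train(env, episodes=1000)  # keep training the same network
        if win_rate >= threshold:
            distance += step  # harder: initialize further from the goal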

Depending on your access to the environment internals, you may need other analogous approaches. For example, you could initialize the agents at the original starting line (i.e. not partway down the map) and then see which agent makes it $n$ units down the map first to determine the winner. Once a winning policy is learned by the RL agent, then gradually increase $n$ until $n$ matches the distance from the starting line to the finish line. Since it seems like you have distance traveled and distance to the opponent as features, you may instead try this method if you are unable to initialize the agents wherever you want on the map and instead can only initialize them on the starting line.

A notable benefit of the overall approach is that you can more quickly debug your algorithm on the easier environments (i.e. ones with shorter episode lengths) to be confident that the learning process is correct and focus your efforts elsewhere (e.g. the training process, including the reward signal).

",37607,,,,,6/11/2020 21:52,,,,0,,,,CC BY-SA 4.0 21809,1,,,6/11/2020 22:02,,0,150,"

I've written a Double DQN-based stock trading bot using mainly time series stock data. The internal network of the Double DQN is an LSTM which handles the time series data. An experience replay buffer is also used. The objective function is the cumulative stock return over the test period. My epsilon used for exploration is 0.1 (which I think is already very high).

My trading bot has a very simple action space, trade or no-trade.

-- When it decides to trade, it sends a signal to buy and own a stock for a day. I'd get a positive return if the stock price has gone up from today to tomorrow; equally, I would get a negative return if the stock price has gone down.

-- When it decides to not trade, I own no stock and the daily return is 0 because there is no trading. Strangely, my algorithm gives a daily 'no trade' signal most of the time when I run the algo through a number of different test periods.

Very often, after giving a 'no trade' signal for many days, the algo would finally give a 'trade' signal but the next day reverse back to giving 'no trade' right away.

My questions:

Why am I getting this phenomenon? Most importantly, what can I do so that the algo doesn't get stuck giving out a 'no trade' signal most of the time?

",37615,,,,,3/2/2021 21:47,My Double DQN with Experience Replay produces a no-action decision most of the time. Why?,,0,5,,,,CC BY-SA 4.0 21810,1,21824,,6/12/2020 1:35,,16,24003,"

I was surveying some literature related to Fully Convolutional Networks and came across the following phrase,

A fully convolutional network is achieved by replacing the parameter-rich fully connected layers in standard CNN architectures by convolutional layers with $1 \times 1$ kernels.

I have two questions.

  1. What is meant by parameter-rich? Is it called parameter rich because the fully connected layers pass on parameters without any kind of ""spatial"" reduction?

  2. Also, how do $1 \times 1$ kernels work? Doesn't $1 \times 1$ kernel simply mean that one is sliding a single pixel over the image? I am confused about this.

",30910,,2444,,6/12/2020 15:47,6/18/2020 12:35,What is a fully convolution network?,,1,0,,,,CC BY-SA 4.0 21811,1,,,6/12/2020 3:26,,2,414,"

I was wondering if the BERT or T5 models can do the task of generating sentences in English. Most of the models I have mentioned are trained to translate from English to German or French. Is it possible for me to use the output of BERT as an input to my decoder? My theory is that, since I already have the trained embeddings, I do not need to train the encoder part; I can just feed the outputs for the sentences to the decoder to generate the sentences.

In place of computing the loss value from the translated version, can I compute the loss on the reply to a sentence?

Can someone point me toward a tutorial where I can use the BERT output for the decoder part? I have conversation data with me. I want to build a chatbot from that data.

I have already implemented an LSTM-based sequence-to-sequence model, but it is not providing satisfactory answers.

After some research, I found 2 such models, T5 and BART, which are based on the same idea.

If possible, can someone tell me how I can use BART or T5 to make a conversational bot?

",36062,,52355,,10/31/2022 5:58,11/30/2022 6:07,"Can we use a pre trained Encoder (BERT, XLM ) with a Decoder (GPT, Transformer-XL) to build a Chatbot instead of Language Translation?",,1,0,,,,CC BY-SA 4.0 21813,2,,21811,6/12/2020 6:29,,0,,"

=> I don't have many ideas about BART and T5 right now, but I have created a chatbot based on a GPT-2 model, namely Microsoft DialoGPT, which is fine-tuned on millions of Reddit conversations. You can fine-tune it on your own data using DialoGPT. I had not found a public decoding method, so we tried to generate diverse responses using simple nucleus sampling at each time step. We used greedy nucleus sampling multiple times in parallel to generate multiple candidate responses. We generated 30 such responses for each turn and also used the last 3 turns of conversation history as extra context for the generator, so we can generate responses related to the current context of the conversation.

We used the code reference from here. You need to implement the parallel greedy nucleus sampling on top of this code.

=> Now, to choose the best-matched response from the generated responses, I created a reranker. I used two sub-components.

=> Component one computes the cosine similarity between the response and the query using a sentence encoder.

=> In component two, we compute the cross-entropy error for regenerating the original query from the response. For that, I used the reverse generator of DialoGPT.

We combined the scores of both components as Score = norm(comp1Score) + norm(1 - comp2Loss). Using this score, you can find the best response.
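
A minimal sketch of this reranking scheme (embed and reverse_loss are assumed helpers standing in for the sentence encoder and the reverse generator; norm is min-max normalisation):

import numpy as np

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def rerank(candidates, query, embed, reverse_loss):
    q_vec = embed(query)
    sims = np.array([cosine_similarity(embed(c), q_vec) for c in candidates])  # component one
    losses = np.array([reverse_loss(c, query) for c in candidates])            # component two

    def norm(x):
        return (x - x.min()) / (x.max() - x.min() + 1e-8)

    scores = norm(sims) + norm(1.0 - losses)  # Score = norm(comp1Score) + norm(1 - comp2Loss)
    return candidates[int(np.argmax(scores))]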

For tutorials, my favourite is Hugging Face (they have BERT, GPT, GPT-2, XLNet, etc.), and my second favourite is Facebook ParlAI; try the Blender model as well.

",32861,,32861,,6/12/2020 11:09,6/12/2020 11:09,,,,3,,,,CC BY-SA 4.0 21814,2,,21803,6/12/2020 7:59,,0,,"

It can be argued that a CNN will outperform a fully connected network if they have the same structure (number of neurons).

Normal neural networks can probably learn to detect things like CNNs do, but the task would be a lot more computationally expensive. In a CNN, all neurons in a feature map share the same parameters, so if a CNN learns to recognize a pattern in one location, it can detect the pattern in any other location. Furthermore, CNNs take into account the fact that pixels that are closer to each other are more heavily related than pixels that are further apart; this information is lost in a normal neural network.

Read More here.

",34170,,,,,6/12/2020 7:59,,,,0,,,,CC BY-SA 4.0 21817,1,,,6/12/2020 10:03,,1,85,"

I was reading the AlexNet paper, and the authors noted that

the kernels on one GPU were ""largely color agnostic,"" whereas the kernels on the other GPU were largely ""color-specific.""

The upper GPU operates on the filters in the top half and the lower GPU deals with the lower half. But what is the reason for each of them learning a different set of features, i.e. the top half of the kernels mostly learning edges and the bottom kernels learning color variation? Is there any reason behind it?

",37821,,2444,,6/13/2020 14:18,6/13/2020 14:18,What is the reason for different learned features in upper and lower half in AlexNet?,,0,0,,,,CC BY-SA 4.0 21824,2,,21810,6/12/2020 13:25,,21,,"

Fully convolution networks

A fully convolution network (FCN) is a neural network that only performs convolution (and subsampling or upsampling) operations. Equivalently, an FCN is a CNN without fully connected layers.

Convolution neural networks

The typical convolution neural network (CNN) is not fully convolutional because it often contains fully connected layers too (which do not perform the convolution operation), which are parameter-rich, in the sense that they have many parameters (compared to their equivalent convolution layers), although the fully connected layers can also be viewed as convolutions with kernels that cover the entire input regions, which is the main idea behind converting a CNN to an FCN. See this video by Andrew Ng that explains how to convert a fully connected layer to a convolutional layer.

An example of an FCN

An example of a fully convolutional network is the U-net (called in this way because of its U shape, which you can see from the illustration below), which is a famous network that is used for semantic segmentation, i.e. classify pixels of an image so that pixels that belong to the same class (e.g. a person) are associated with the same label (i.e. person), aka pixel-wise (or dense) classification.

Semantic segmentation

So, in semantic segmentation, you want to associate a label with each pixel (or small patch of pixels) of the input image. Here's a more suggestive illustration of a neural network that performs semantic segmentation.

Instance segmentation

There's also instance segmentation, where you also want to differentiate different instances of the same class (e.g. you want to distinguish two people in the same image by labeling them differently). An example of a neural network that is used for instance segmentation is mask R-CNN. The blog post Segmentation: U-Net, Mask R-CNN, and Medical Applications (2020) by Rachel Draelos describes these two problems and networks very well.

Here's an example of an image where instances of the same class (i.e. person) have been labeled differently (orange and blue).

Both semantic and instance segmentations are dense classification tasks (specifically, they fall into the category of image segmentation), that is, you want to classify each pixel or many small patches of pixels of an image.

$1 \times 1$ convolutions

In the U-net diagram above, you can see that there are only convolutions, copy and crop, max-pooling, and upsampling operations. There are no fully connected layers.

So, how do we associate a label to each pixel (or a small patch of pixels) of the input? How do we perform the classification of each pixel (or patch) without a final fully connected layer?

That's where the $1 \times 1$ convolution and upsampling operations are useful!

In the case of the U-net diagram above (specifically, the top-right part of the diagram, which is illustrated below for clarity), two $1 \times 1 \times 64$ kernels are applied to the input volume (not the images!) to produce two feature maps of size $388 \times 388$. They used two $1 \times 1$ kernels because there were two classes in their experiments (cell and not-cell). The mentioned blog post also gives you the intuition behind this, so you should read it.

If you have tried to analyze the U-net diagram carefully, you will notice that the output maps have different spatial (height and width) dimensions than the input images, which have dimensions $572 \times 572 \times 1$.

That's fine because our general goal is to perform dense classification (i.e. classify patches of the image, where the patches can contain only one pixel), although I said that we would have performed pixel-wise classification, so maybe you were expecting the outputs to have the same exact spatial dimensions of the inputs. However, note that, in practice, you could also have the output maps to have the same spatial dimension as the inputs: you would just need to perform a different upsampling (deconvolution) operation.

How $1\times 1$ convolutions work?

A $1 \times 1$ convolution is just the typical 2d convolution but with a $1\times1$ kernel.

As you probably already know (and if you didn't know this, now you know it), if you have a $g \times g$ kernel that is applied to an input of size $h \times w \times d$, where $d$ is the depth of the input volume (which, for example, in the case of grayscale images, it is $1$), the kernel actually has the shape $g \times g \times d$, i.e. the third dimension of the kernel is equal to the third dimension of the input that it is applied to. This is always the case, except for 3d convolutions, but we are now talking about the typical 2d convolutions! See this answer for more info.

So, in the case we want to apply a $1\times 1$ convolution to an input of shape $388 \times 388 \times 64$, where $64$ is the depth of the input, then the actual $1\times 1$ kernels that we will need to use have shape $1\times 1 \times 64$ (as I said above for the U-net). The way you reduce the depth of the input with $1\times 1$ is determined by the number of $1\times 1$ kernels that you want to use. This is exactly the same thing as for any 2d convolution operation with different kernels (e.g. $3 \times 3$).
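
As a concrete illustration (a minimal PyTorch sketch, not taken from the U-net code), a $1 \times 1$ convolution with two kernels reduces the depth of a $388 \times 388 \times 64$ volume to $2$ while leaving the spatial dimensions untouched:

import torch
import torch.nn as nn

x = torch.randn(1, 64, 388, 388)  # batch of 1, depth 64, spatial size 388x388

# Two 1x1 kernels, each of actual shape 1x1x64 (in_channels=64, out_channels=2)
conv1x1 = nn.Conv2d(in_channels=64, out_channels=2, kernel_size=1)

y = conv1x1(x)
print(y.shape)  # torch.Size([1, 2, 388, 388]): depth reduced from 64 to 2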

In the case of the U-net, the spatial dimensions of the input are reduced in the same way that the spatial dimensions of any input to a CNN are reduced (i.e. 2d convolution followed by downsampling operations). The main difference (apart from not using fully connected layers) between the U-net and other CNNs is that the U-net performs upsampling operations, so it can be viewed as an encoder (left part) followed by a decoder (right part).

",2444,,2444,,6/18/2020 12:35,6/18/2020 12:35,,,,5,,,,CC BY-SA 4.0 21825,2,,21699,6/12/2020 15:15,,2,,"

Neil Slater's answer is very nice, but I have a couple more suggestions:

  • You can use entropy regularization. Basically, you modify your loss function to penalize low policy entropy (so less loss for more entropy) which should prevent your policy from becoming ""too deterministic"" too early.

  • You can also try maximum-entropy methods, like SAC, which employ a different strategy for promoting policy entropy.

",37829,,,,,6/12/2020 15:15,,,,0,,,,CC BY-SA 4.0 21828,2,,13821,6/12/2020 15:56,,3,,"

Are fully connected layers necessary in a CNN?

No. In fact, you can simulate a fully connected layer with convolutions. A convolutional neural network (CNN) that does not have fully connected layers is called a fully convolutional network (FCN). See this answer for more info.

An example of an FCN is the u-net, which does not use any fully connected layers, but only convolution, downsampling (i.e. pooling), upsampling (deconvolution), and copy and crop operations. Nevertheless, u-net is used to classify pixels (more precisely, semantic segmentation).

Moreover, you can use CNNs only for the purpose of feature extraction, and then feed these extracted features in another classifier (e.g. an SVM). In fact, transfer learning is based on the idea that CNNs extract reusable features.

",2444,,2444,,6/12/2020 16:03,6/12/2020 16:03,,,,2,,,,CC BY-SA 4.0 21829,1,21830,,6/12/2020 16:04,,0,259,"

I have two questions

  1. When we use our network to approximate our Q values, is the Q target a single value?

  2. During backpropagation, when the weights are updated, does it automatically update the Q values, shouldn’t the state be passed in the network again to update it?

",37831,,2444,,6/12/2020 16:15,6/12/2020 16:39,"When we use a neural network to approximate the Q values, is the Q target a single value?",,2,0,,,,CC BY-SA 4.0 21830,2,,21829,6/12/2020 16:30,,1,,"

When we use our network to approximate our Q values, is the Q target a single value?

Yes, the target Q value is a single value if you are just updating a single training example. The loss function of a vanilla DQN for a single experience tuple $(s_t,a_t,r_t,s_{t+1})$ is calculated as $$L(\theta) = \left[r_t + \gamma \max_{a_{t+1}} Q(s_{t+1},a_{t+1};\theta) - Q(s_t,a_t;\theta)\right]^2$$ where $r_t + \gamma \max_{a_{t+1}} Q(s_{t+1},a_{t+1};\theta)$ is the target Q value. However, when using mini-batch gradient descent, you would have to compute multiple target Q values, one for each experience in the batch.

During backpropagation, when the weights are updated, does it automatically update the Q values, shouldn’t the state be passed in the network again to update it?

During backpropagation of the loss function, the weights $\theta$ are automatically updated. You do not need to pass in the state again, because in the first place you would have computed $Q(s_t,a_t;\theta)$ by passing the state as input to the neural network. That is how backpropagation works for deep Q networks.

Training for the DQN is as follows (a minimal sketch of steps 3-5 is given after the list):

  1. Collect experience tuples of $(s_t,a_t,r_t,s_{t+1})$ and store them in a replay buffer.
  2. Sample mini-batch of experiences from the replay buffer.
  3. From this sampled batch of experiences, compute $Q(s_{t+1},a_{t+1};\theta)$ by passing $s_{t+1}$ into the network and take the maximum Q value over actions.
  4. Compute $Q(s_t,a_t;\theta)$ by passing $s_t$ into the network.
  5. Compute the Loss for this experience and propagate the loss back to the network, hence updating the weights.
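
A minimal PyTorch sketch of steps 3-5 for a sampled mini-batch (the names q_net, s_t, a_t, r_t, s_t1 are assumptions for illustration, and no target network is used, matching the vanilla loss above):

import torch
import torch.nn as nn

def dqn_loss(q_net, s_t, a_t, r_t, s_t1, gamma=0.99):
    # Step 3: max over actions of Q(s_{t+1}, a; theta); no gradient flows
    # through the target in this vanilla formulation.
    with torch.no_grad():
        target = r_t + gamma * q_net(s_t1).max(dim=1).values

    # Step 4: Q(s_t, a_t; theta) for the actions actually taken
    q_taken = q_net(s_t).gather(1, a_t.unsqueeze(1)).squeeze(1)

    # Step 5: squared error to be backpropagated into the weights theta
    return nn.functional.mse_loss(q_taken, target)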

Also, since the weights have changed after backpropagation, the Q values for the same state would also be updated if you pass the same state into the network again. Check out this paper, as it explains how the Deep Q Network works.

",32780,,-1,,6/17/2020 9:57,6/12/2020 16:39,,,,2,,,,CC BY-SA 4.0 21831,2,,21829,6/12/2020 16:33,,0,,"

What the network represents can be a little confusing - in tabular Q-learning you have a $Q$ function that you pass a state $s$ and action $a$ into and receive a scalar value. In the Human Level Control paper where the DQN gained its popularity, the network is a little different to the tabular function. You pass into the network your current state and it outputs values for every action in your action space. You then choose the action which has the highest value.

The way we then train it is as follows - first we get our scalar value which corresponds to the target $r + \gamma \max_{a'} Q(s',a')$; I will denote this as $y$ for short. As our network outputs a vector of dimension $|\mathcal{A}|$ where $\mathcal{A}$ is our action space, we get our prediction for the current state which is a vector of the dimension I just mentioned. We will now assume our action space has two possible actions and say that for state $s$ our network outputs the vector $[\hat{y}_1,\hat{y}_2]$. Assume that when we calculated $y$ earlier we took the first action, so we only want to update our prediction of $\hat{y}_1$. To do this, if we were working in e.g. PyTorch I would put into the loss function, which is mean squared error,

input = $[\hat{y}_1,\hat{y}_2]$, target = $[y, \hat{y}_2]$

so essentially all we change is the position in the vector corresponding to the action we took with the Q-target we observed. The weights will thus be updated accordingly to move more into the direction of producing this output for state $s$.

So to explicitly answer your first question, the Q-target is a scalar but the target we pass into the network is a vector.

",36821,,,,,6/12/2020 16:33,,,,2,,,,CC BY-SA 4.0 21832,2,,9934,6/12/2020 17:17,,2,,"

It should not be much more difficult to predict a rotated rectangle compared to a bounding box.

A bounding box can be parameterized with 4 floats: $x_c$, $y_c$, width, height.

A rotated rectangle can be parameterized with 5 floats: $x_c$, $y_c$, width, height, angle.

However, to avoid the wrap-around issue with predicting the angle with one value (0° is same as 360°), it should be better to predict sine and cosine instead.

It is actually useful to predict rotated rectangles for text detection (each text field is a rotated rectangle). Indeed, in the wild, text can be in any orientation, and it is important to predict precisely rotated rectangles for OCR to work well. It is especially true for long text boxes near 45° (an axis-aligned bounding box around this would be useless because too big).

Here are 2 links I found about this topic:

",37830,,2444,,1/28/2021 23:41,1/28/2021 23:41,,,,0,,,,CC BY-SA 4.0 21835,2,,21594,6/12/2020 18:18,,1,,"

There is an epsilon for discovery, set at 100% at the start and it decreases by 10% until it reaches 1%.

After looking through your code on the linked GitHub repository, I think that the annealing of the epsilon parameter is a major issue. As clarified in the above comments, the act() method is called once per episode time step to determine the agent's choice of action. Within this method, it seems that epsilon is decreased extremely rapidly. The code states that epsilon = Math.max(1, epsilon - 10), which means epsilon is decreased to 1 after 10 time steps. Also as clarified in the comments, epsilon is never reset to a larger number. Therefore, it seems that epsilon will be reduced from 100 to 1 after 10 time steps after (most likely) a single episode, which in my opinion is much too quick and will stifle exploration.

As a first guess, I suggest annealing epsilon more slowly from 100 to 1 after roughly one million time steps. If you want to anneal the parameter linearly, then each of the first million time steps could reduce epsilon by (starting epsilon - ending epsilon) / annealing time steps, where starting epsilon = 100, ending epsilon = 1, and annealing time steps = 1000000.
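
A minimal sketch of the suggested linear schedule (in Python for illustration, even though the linked project is in JavaScript; the 100-to-1 range matches the percentages used above):

def epsilon_at(step, start=100.0, end=1.0, anneal_steps=1_000_000):
    # Linear annealing: reduce epsilon by (start - end) / anneal_steps per step,
    # then hold it at `end` once the annealing period is over.
    if step >= anneal_steps:
        return end
    return start - (start - end) * step / anneal_steps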

Other issues may crop up after making the above change, but I think this is a good starting point. This seems like a very fun project, and it would be fun if you kept us posted about the results!

",37607,,,,,6/12/2020 18:18,,,,4,,,,CC BY-SA 4.0 21838,1,,,6/12/2020 20:25,,0,41,"

I am currently studying this model speech generation known as WaveNet model by Google using the linked original paper and this implementation.

I find the model's inputs and outputs very confusing, and some of the layer dimensions don't seem to match what I understood from the WaveNet paper. Or am I misunderstanding something?

  1. What is the input to the WaveNet, isn't this a mel-spectrum input and not just 1 floating point value for raw audio? E.g. the input kernel layer shows as shaped 1x1x128. Isn't the input to the input_convolution layer the mel-spectrum frames, which are 80 float values * 10,000 max_decoder_steps, so the in_channels for this conv1d layer should be 80 instead of 1?

inference/input_convolution/kernel:0 (float32_ref 1x1x128) [128, bytes: 512]

  2. Is there a reason for the upsampling stride values to be [11, 25]? Are the specific numbers 11 and 25 special or relevant in affecting other shapes/dimensions?
inference/ConvTranspose1D_layer_0/kernel:0 (float32_ref 1x11x80x80) [70400, bytes: 281600]
inference/ConvTranspose1D_layer_1/kernel:0 (float32_ref 1x25x80x80) [160000, bytes: 640000]
  3. Why are the input channels in residual_block_causal_conv 128 and residual_block_cin_conv 80? What exactly are their inputs (e.g. is it a mel-spectrum or just a raw floating point value)? Is the wavenet-vocoder generating just 1 float value per 1 input mel-spectrum frame of 80 floats?
inference/ResidualConv1DGLU_0/residual_block_causal_conv_ResidualConv1DGLU_0/kernel:0 (float32_ref 3x128x256) [98304, bytes: 393216]
inference/ResidualConv1DGLU_0/residual_block_cin_conv_ResidualConv1DGLU_0/kernel:0 (float32_ref 1x80x256) [20480, bytes: 81920]

I was able to print the whole WaveNet network using print(tf.trainable_variables()), but the model still seems very confusing.

",33580,,4709,,7/5/2022 5:21,7/5/2022 5:21,Studying the speech-generation model and have question about the confusing nature of model input and outputs,,1,0,,,,CC BY-SA 4.0 21839,1,26678,,6/12/2020 20:34,,5,498,"

I was trying to solve an XOR problem, and the dataset seems like the one in the image.

I plotted the tree and got this result:

As I understand, the tree should have depth 2 and four leaves. The first comparison is annoying, because it is close to the right x border (0.887). I've tried other parameterizations, but the same result persists.

I used the code below:

import matplotlib.pyplot as plt
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier

# X holds the two features (V1, V2) and y the class labels of the XOR dataset
clf = DecisionTreeClassifier(criterion='gini')
clf = clf.fit(X, y)

fn = ['V1', 'V2']

fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(3, 3), dpi=300)

tree.plot_tree(clf, feature_names=fn, class_names=['1', '2'], filled=True);

I would be grateful if anyone can help me to clarify this issue.

",37836,,37836,,6/13/2020 12:01,1/20/2023 17:14,Why isn't my decision tree classifier able to solve the XOR problem properly?,,2,0,,,,CC BY-SA 4.0 21840,1,,,6/12/2020 23:26,,0,103,"

Given equation 7.3 of Sutton and Barto's book for convergence of TD(n):

$\max_s|\mathbb{E}_\pi[G_{t:t+n}|S_t = s] - v_\pi(s)| \leqslant \gamma^n \max_s|V_{t+n-1}(s) - v_\pi(s)|$

$\textbf{PROBLEM 1}$ : Why is the error $|\mathbb{E}_\pi[G_{t:t+n}|S_t = s] - v_\pi(s)|$ compared with the error $|V_{t+n-1}(s) - v_\pi(s)|$?

There can be two other logical comparisons for the convergence of algorithm($TD(n)$):

1) We could compare and say that $V_{t+n-1}(s)$ is closer to $v_\pi(s)$ than $V_{t+n-2}(s)$, i.e. we compare $|V_{t+n-1}(s) - v_\pi(s)|$ with $|V_{t+n-2}(s) - v_\pi(s)|$

2) We can also compare $|\mathbb{E}_\pi[G_{t:t+n}|S_t = s] - v_\pi(s)|$ with $|\mathbb{E}_\pi[V_{t+n-1}(S_t)|S_t = s] - v_\pi(s)|$ to show that $\mathbb{E}_\pi[G_{t:t+n}|S_t = s]$ is better than $\mathbb{E}_\pi[V_{t+n-1}(S_t)|S_t = s]$ hence moving $V_{t+n-1}(S_t)$ towards $G_{t:t+n}$(as done in eq 7.2) can lead to convergence.

$\textbf{PROBLEM 2}$ : Are the above 2 methods of comparison for testing convergence correct?

Equations for reference:

Eq 7.1: $G_{t:t+n} = R_{t+1} + \gamma R_{t+2} +......+\gamma^{n-1}R_{t+n} + \gamma^{n}V_{t+n-1}(S_{t+n})$

Eq 7.2: $V_{t+n}(S_t) = V_{t+n-1}(S_t) + \alpha [G_{t:t+n} - V_{t+n-1}(S_t)]$

",37611,,37611,,6/13/2020 12:14,9/10/2020 19:41,Problem in understanding equation given for convergence of TD(n) algorithm,,0,3,,,,CC BY-SA 4.0 21844,1,,,6/13/2020 6:29,,1,64,"

In the paragraph given between eq 7.12 and 7.13 in Sutton & Barto's book:

$G_{t:h} = R_{t+1} + G_{t+1:h} , t < h < T$

where $G_{h:h} = V_{h-1}(S_h)$. (Recall that this return is used at time h, previously denoted t + n.) Now consider the effect of following a behavior policy $b$ that is not the same as the target policy $\pi$. All of the resulting experience, including the first reward $R_{t+1}$ and the next state $S_{t+1}$ must be weighted by the importance sampling ratio for time $t, \rho_t = \frac{\pi(A_t|S_t)}{b(A_t|S_t)}$ . One might be tempted to simply weight the righthand side of the above equation, but one can do better. Suppose the action at time t would never be selected by $\pi$, so that $\rho_t$ is zero. Then a simple weighting would result in the n-step return being zero, which could result in high variance when it was used as a target.

Why does the n-step return being zero result in high variance?

Also, why is the experience weighted by $\rho_t$? Shouldn't it be weighted by $\rho_{t:t+h}$?

",37611,,-1,,6/17/2020 9:57,6/13/2020 13:44,Why does the n-step return being zero result in high variance in off policy n-step TD?,,0,0,,,,CC BY-SA 4.0 21845,1,,,6/13/2020 6:43,,1,48,"

Seung et al. recently published the GameGAN paper. GameGAN learned and stored the whole Pacman game and was able to reproduce it without a game engine. The uniqueness of GameGAN is that it added memory to its discriminator/generator, which helped it to store the game states.

In the Bayesian interpretation, a supervised learning system learns by optimizing weights which maximize a likelihood function.

$$\hat{\boldsymbol{\theta}} = \mathop{\mathrm{argmax}} _ {\boldsymbol{\theta}} P(X \mid \boldsymbol{\theta})$$

Will adding memory which can store prior information make GameGAN a Bayesian learning system?

Can GameGAN or a similar neural network with memory be considered a Bayesian learning system? If yes, then which of these two equations (or some other) correctly describes this system (considering the prior as memory)?

  1. $$\mathop{\mathrm{argmax}} \frac{P(X \mid \boldsymbol{\theta})P(\boldsymbol{\theta})}{P(X)}$$

or

  2. $$\mathop{\mathrm{argmax}} \frac{P(X_t \mid \boldsymbol{X^{t+1}})P(\boldsymbol{X^{t+1}})}{P(X_t)}$$

PS: I understand GANs are unsupervised learning systems, but we can consider the discriminator and generator models separately, each trying to find weights that maximize its individual likelihood function.

",39,,2444,,6/13/2020 13:50,6/13/2020 13:50,Will adding memory to a supervised learning system makes it into a Bayesian learning system?,,0,2,,,,CC BY-SA 4.0 21846,2,,21594,6/13/2020 7:50,,2,,"

Q-learning on its own isn't enough to learn a winning strategy for a game like 2048, which requires predictive thinking about possible outcomes and good positional awareness.

Performance of the agent is heavily dependent on the reward function. The approach of giving a reward proportional to the points obtained after every move is naive, since it might sacrifice better positional play for short-term rewards. The way it's posed, it seems like the agent will try to maximize its performance over the last 20 moves or so, which could lead it into a positional situation that results in a loss a couple of moves later. A possibly better strategy would be to give a positive reward only when the agent actually completes the 2048 tile and a negative reward if it loses. Such sparse rewards would make training harder, since they require sophisticated exploration strategies, which $\epsilon$-greedy certainly isn't.

The stochastic nature of the game poses a difficulty for the agent as well. Similar positions might lead to different outcomes, because in some cases the new tile spawns in a bad position for the continuation, while in other cases it might be beneficial.

The suggested approach would be to include a Monte Carlo tree search along with some RL algorithm, as was successfully done in agents like AlphaZero and AlphaGo. MCTS would sample moves ahead, and the agent would get a better representation of how good certain actions are.

",20339,,,,,6/13/2020 7:50,,,,1,,,,CC BY-SA 4.0 21847,1,,,6/13/2020 9:42,,2,26,"

I've been going through ""Online Interactive Collaborative Filtering Using Multi-Armed Bandit with Dependent Arms"" by Wang et al. and am unable to understand how the update equations for the hyperparameters (section 4.3, equation set (23)) were derived. I'd deeply appreciate it if anyone could provide a full or partial derivation of the updates. Any general suggestions regarding how to proceed with the derivation would also be appreciated.

ICTR Graphical Model

The variables are sampled as below

$$\mathbb{p}_m|\lambda \sim \text{Dirichlet}(\lambda)$$

$$\sigma^2_n|\alpha,\beta \sim \text{Inverse-Gamma}(\alpha,\beta)$$

$$\mathbb{q}_n |\mu_{\mathbb{q}}, \Sigma_{\mathbb{q}}, \sigma_n^2 \sim \mathcal{N}(\mu_{\mathbb{q}}, \sigma_n^2\Sigma_{\mathbb{q}})$$

$$\mathbb{\Phi}_k |\eta \sim \text{Dirichlet}(\eta)$$

$$z_{m,t} | \mathbb{p}_m \sim \text{Multinomial}(\mathbb{p}_m)$$

$$x_{m,t} | \mathbb{\Phi}_k \sim \text{Multinomial}(\mathbb{\Phi}_k) $$

$$y_{m,t} \sim \mathcal{N}(\mathbb{p}_m^T\mathbb{q}_n, \sigma_n^2)$$

And the update equations are below

",37848,,37849,,6/13/2020 10:29,6/13/2020 10:29,Deriving hyperparameter updates in Online Interactive Collaborative Filtering,,0,0,,,,CC BY-SA 4.0 21849,1,23500,,6/13/2020 11:03,,2,620,"

In per-decison importance sampling given in Sutton & Barto's book:

Eq 5.12 $\rho_{t:T-1}R_{t+k} = \frac{\pi(A_{t}|S_{t})}{b(A_{t}|S_{t})}\frac{\pi(A_{t+1}|S_{t+1})}{b(A_{t+1}|S_{t+1})}\frac{\pi(A_{t+2}|S_{t+2})}{b(A_{t+2}|S_{t+2})}......\frac{\pi(A_{T-1}|S_{T-1})}{b(A_{T-1}|S_{T-1})}R_{t+k}$

Eq 5.13 $\mathbb{E}\left[\frac{\pi(A_{k}|S_{k})}{b(A_{k}|S_{k})}\right] = \displaystyle\sum_ab(a|S_k)\frac{\pi(A_{k}|S_{k})}{b(A_{k}|S_{k})} = \displaystyle\sum_a\pi(a|S_k) = 1$

Eq.5.14 $\mathbb{E}[\rho_{t:T-1}R_{t+k}] = \mathbb{E}[\rho_{t:t+k-1}R_{t+k}]$

As full derivation is not given, how do we arrive at Eq 5.14 from 5.12?

From what i understand :

1) $R_{t+k}$ is only dependent on action taken at $t+k-1$ given state at that time i.e. only dependent on $\frac{\pi(A_{t+k-1}|S_{t+k-1})}{b(A_{t+k-1}|S_{t+k-1})}$

2) $\frac{\pi(A_{k}|S_{k})}{b(A_{k}|S_{k})}$ is independent of $\frac{\pi(A_{k+1}|S_{k+1})}{b(A_{k+1}|S_{k+1})}$ , so $\mathbb{E}\left[\frac{\pi(A_{k}|S_{k})}{b(A_{k}|S_{k})}\frac{\pi(A_{k+1}|S_{k+1})}{b(A_{k+1}|S_{k+1})}\right] = \mathbb{E}\left[\frac{\pi(A_{k}|S_{k})}{b(A_{k}|S_{k})}\right]\mathbb{E}\left[\frac{\pi(A_{k+1}|S_{k+1})}{b(A_{k+1}|S_{k+1})}\right], \forall \, k\in [t,T-2]$

Hence, $\mathbb{E}[\rho_{t:T-1}R_{t+k}]= \mathbb{E}\left[\frac{\pi(A_{t}|S_{t})}{b(A_{t}|S_{t})}\frac{\pi(A_{t+1}|S_{t+1})}{b(A_{t+1}|S_{t+1})}\frac{\pi(A_{t+2}|S_{t+2})}{b(A_{t+2}|S_{t+2})}......\frac{\pi(A_{T-1}|S_{T-1})}{b(A_{T-1}|S_{T-1})}R_{t+k}\right] \\= \mathbb{E}\left[\frac{\pi(A_{t}|S_{t})}{b(A_{t}|S_{t})}\frac{\pi(A_{t+1}|S_{t+1})}{b(A_{t+1}|S_{t+1})}\frac{\pi(A_{t+2}|S_{t+2})}{b(A_{t+2}|S_{t+2})}....\frac{\pi(A_{t+k-2}|S_{t+k-2})}{b(A_{t+k-2}|S_{t+k-2})}\frac{\pi(A_{t+k}|S_{t+k})}{b(A_{t+k}|S_{t+k})}......\frac{\pi(A_{T-1}|S_{T-1})}{b(A_{T-1}|S_{T-1})}\right]\mathbb{E}\left[\frac{\pi(A_{t+k-1}|S_{t+k-1})}{b(A_{t+k-1}|S_{t+k-1})}R_{t+k}\right] \\= \mathbb{E}\left[\frac{\pi(A_{t}|S_{t})}{b(A_{t}|S_{t})}\right]\mathbb{E}\left[\frac{\pi(A_{t+1}|S_{t+1})}{b(A_{t+1}|S_{t+1})}\right]\mathbb{E}\left[\frac{\pi(A_{t+2}|S_{t+2})}{b(A_{t+2}|S_{t+2})}\right]....\mathbb{E}\left[\frac{\pi(A_{t+k-2}|S_{t+k-2})}{b(A_{t+k-2}|S_{t+k-2})}\right]\mathbb{E}\left[\frac{\pi(A_{t+k}|S_{t+k})}{b(A_{t+k}|S_{t+k})}\right]......\mathbb{E}\left[\frac{\pi(A_{T-1}|S_{T-1})}{b(A_{T-1}|S_{T-1})}\right]\mathbb{E}\left[\frac{\pi(A_{t+k-1}|S_{t+k-1})}{b(A_{t+k-1}|S_{t+k-1})}R_{t+k}\right] \\= \mathbb{E}[\frac{\pi_{t+k-1}}{b_{t+k-1}}R_{t+k}]\\=\mathbb{E}[\rho_{t+k-1}R_{t+k}]$

which is not equal to eq 5.14. What's the mistake in the above calculations? Are 1 and 2 correct?

",37611,,37611,,6/13/2020 22:10,9/11/2020 10:54,How is per-decision importance sampling derived in Sutton & Barto's book?,,2,4,0,,,CC BY-SA 4.0 21851,1,,,6/13/2020 12:34,,1,733,"

SegNet and U-Net were created for segmentation problems, and EfficientNet was created for classification problems. I have a task that says to train these models on the same dataset and compare the results. Is that possible?

",37854,,,,,6/13/2020 12:34,"How to compare SegNet, U-Net and EfficientNet?",,0,5,,,,CC BY-SA 4.0 21854,1,21856,,6/13/2020 16:10,,3,208,"

We assume an infinite horizon and a discount factor $\gamma = 1$. At each step, after the agent takes an action and gets its reward, there is a probability $\alpha = 0.2$ that the agent will die. The assumed maze looks like this:

Possible actions are go left, right, up, down or stay in a square. The reward has a value 1 for any action done in the square (1,1) and zero for actions done in all the other squares.

With this in mind, what is the value of a square (1,1)?

The correct answer is supposed to be 5, and is calculated as $1/(1\cdot 0.2) = 5$. But why is that? I didn't manage to find any explanation on the net, so I am asking here.

",37858,,2444,,6/14/2020 15:53,6/14/2020 15:53,What is the value of a state when there is a certain probability that agent will die after each step?,,2,2,,,,CC BY-SA 4.0 21855,2,,21854,6/13/2020 16:49,,3,,"

The value of a state depends on the policy that you use, so I'll make the assumption here that you're talking about value using the optimal policy.

According to the optimal policy, the agent would choose to stay in the square (1,1) every time, but since it has a 0.8 probability of actually staying (and 0.2 probability of dying), we can compute the value of the agent using the Bellman equation as:

$$ V(1,1) = 1 + 0.8 V(1,1) + 0.2 V(\text{death state}) \\ \implies V(1,1) = 1 + 0.8 V(1,1) \\ \implies V(1,1) = \frac{1}{1 - 0.8} \\ \implies V(1,1) = 5 $$

There are other ways of deriving the same number (value function has multiple definitions) but they are equivalent.

",20714,,20714,,6/13/2020 17:03,6/13/2020 17:03,,,,0,,,,CC BY-SA 4.0 21856,2,,21854,6/13/2020 18:40,,2,,"

I will fill in some details in shaabhishek's answer for people who are interested.

With this in mind, what is the value of a square (1,1)?

First of all, the value function is dependent on a policy. The supposed correct answer you provided is the value of $(1, 1)$ under the optimal policy, so from now on, we will assume that we are finding the value function under the optimal policy. Also, we will assume that the environment dynamics are deterministic: choosing to take an action will guarantee that the agent moves in that direction.

Possible actions are go left, right, up, down or stay in a square. Reward has a value 1 for any action done in square (1,1) and zero for actions done in all the other squares.

Based on this information, the optimal policy at $(1, 1)$ should be to always stay in that square. The agent doesn't receive any reward for being in another square, and the probability of dying is the same for each square, so choosing the action to stay in square $(1, 1)$ is best.

The correct answer is supposed to be 5, and is calculated as $\frac{1}{1 \cdot 0.2} = 5$. But why is that?

By the Bellman Equation, the value function under the optimal policy $\pi_*$ at $(1,1)$ can be written as follows:

$$v_{\pi_*}((1, 1)) = \mathbb{E}_{\pi_*}\left[R_t + \gamma v_{\pi_{*}}(s') | s = (1,1)\right],$$

where $R_t$ denotes the immediate reward, $s$ denotes the current state, and $s'$ denotes the next state. By the problem statement, $\gamma = 1$. The next state is the $\texttt{dead}$ terminal state $\alpha = 20\%$ of the time. Terminal states have value $0$, as they do not accrue future rewards. The next state $s'$ is equal to $(1, 1)$ the remaining $(1-\alpha) = 80\%$ of the time because our policy dictates to remain in the same state and we assumed the dynamics were deterministic. Since expectation is linear, we can rewrite the expectation as follows (replacing $\gamma$ with $1$):

\begin{align*} v_{\pi_*}((1,1)) &= \mathbb{E}_{\pi_*}\left[R_t + v_{\pi_{*}}(s') | s = (1,1)\right]\\ &= \mathbb{E}_{\pi_*}\left[R_t |s=(1, 1)\right]+ \mathbb{E}_{\pi_*}\left[v_{\pi_{*}}(s') | s = (1,1)\right].\qquad (*) \end{align*}

We have

$$\mathbb{E}_{\pi_*}\left[R_t |s=(1, 1)\right] = 1\qquad (**)$$

because we are guaranteed an immediate reward of $1$ when taking an action in state $(1, 1)$. Also, from the comments above regarding the next state values and probabilities, we have the following:

\begin{align*}\mathbb{E}_{\pi_*}\left[v_{\pi_{*}}(s') | s = (1,1)\right] &= (1-\alpha) \cdot v_{\pi_{*}}((1,1)) + \alpha \cdot v_{\pi_*}(\texttt{dead})\\ &= 0.8 \cdot v_{\pi_{*}}((1,1)) + 0.2 \cdot 0\\ &= 0.8 \cdot v_{\pi_{*}}((1,1)).\qquad (***) \end{align*}

Substituting $(**)$ and $(***)$ into $(*)$ yields the following:

\begin{align*} v_{\pi_*}((1,1)) &= 1 + 0.8 \cdot v_{\pi_{*}}((1,1))\\ v_{\pi_*}((1,1)) - 0.8 \cdot v_{\pi_{*}}((1,1)) &= 1\\ (1-0.8)v_{\pi_*}((1,1)) &= 1\\ v_{\pi_*}((1,1)) &= \frac{1}{1-0.8} = \frac{1}{0.2} = 5. \end{align*}
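
As a quick numerical sanity check of this fixed point (my own toy sketch, not part of the original question), iterating the Bellman backup $V \leftarrow 1 + 0.8V$ converges to the same value:

V = 0.0
for _ in range(100):
    V = 1.0 + 0.8 * V   # Bellman backup for the policy that always stays in (1,1)
print(V)                # ~5.0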

",37607,,,,,6/13/2020 18:40,,,,0,,,,CC BY-SA 4.0 21859,1,21882,,6/13/2020 19:18,,3,562,"

Why don't we use an importance sampling ratio in Q-Learning, even though Q-Learning is an off-policy method?

Importance sampling is used to calculate expectation of a random variable by using data not drawn from the distribution. Consider taking a Monte Carlo average to calculate $\mathbb{E}[X]$.

Mathematically an expectation is defined as $$\mathbb{E}_{x \sim p(x)}[X] = \sum_{x = -\infty}^\infty x p(x)\;;$$ where $p(x)$ denotes our probability mass function, and we can approximate this by $$\mathbb{E}_{x \sim p(x)}[X] \approx \frac{1}{n} \sum_{i=1}^n x_i\;;$$ where $x_i$ were simulated from $p(x)$.

Now, we can re-write the expectation from earlier as

$$\mathbb{E}_{x \sim p(x)}[X] = \sum_{x = -\infty}^\infty x p(x) = \sum_{x = -\infty}^\infty x \frac{p(x)}{q(x)} q(x) = \mathbb{E}_{x\sim q(x)}\left[ X\frac{p(X)}{q(X)}\right]\;;$$ and so we can calculate the expectation using Monte Carlo averaging $$\mathbb{E}_{x \sim p(x)}[X] \approx \frac{1}{n} \sum_{i=1}^n x_i \frac{p(x_i)}{q(x_i)}\;;$$ where the data $x_i$ are now simulated from $q(x)$.
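
As a small numerical illustration of this estimator (a toy example I made up, with Bernoulli distributions for $p$ and $q$):

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.binomial(1, 0.7, size=n)   # samples from q = Bernoulli(0.7)

p = np.where(x == 1, 0.2, 0.8)     # p(x_i) under the target p = Bernoulli(0.2)
q = np.where(x == 1, 0.7, 0.3)     # q(x_i) under the behaviour distribution

print(np.mean(x * p / q))          # close to E_p[X] = 0.2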

Typically importance sampling is used in RL when we use off-policy methods, i.e. the policy we use to calculate our actions is different from the policy we want to evaluate. Thus, I wonder why we don't use the importance sampling ratio in Q-learning, even though it is considered to be an off-policy method?

",36821,,36821,,6/15/2020 16:07,6/15/2020 16:07,Why we don't use importance sampling in tabular Q-Learning?,,1,6,,,,CC BY-SA 4.0 21862,2,,18157,6/13/2020 20:47,,0,,"

FCNs can and typically have downsampling operations. For example, u-net has downsampling (more precisely, max-pooling) operations. The difference between an FCN and a regular CNN is that the former does not have fully connected layers. See this answer for more info.

Therefore, FCNs inherit the same properties of CNNs. There's nothing that a CNN (with fully connected layers) can do that an FCN cannot do. In fact, you can even simulate a fully connected layer with a convolution (with a kernel that has the same shape as the input volume).

",2444,,,,,6/13/2020 20:47,,,,0,,,,CC BY-SA 4.0 21863,2,,9569,6/13/2020 21:11,,0,,"

$\mathbb{E}_{X\thicksim P}[f(X)] = \sum P(X)f(X) = \sum Q(X)\frac{P(X)}{Q(X)}f(X) = \mathbb{E}_{X\thicksim Q}\left[\frac{P(X)}{Q(X)}f(X)\right]$

Here $P = \pi, Q = b ,f = G_t|S_t $

",37611,,,,,6/13/2020 21:11,,,,0,,,,CC BY-SA 4.0 21866,1,,,6/14/2020 2:35,,1,82,"

I was reading this report: https://www.theverge.com/2017/4/12/15271874/ai-adversarial-images-fooling-attacks-artificial-intelligence

Researchers used noise to trick machine learning algorithms to misidentify or misclassify an image of a fish as a cat. I was wondering if something like that can be used to create subliminals.

What I mean by subliminals: the United Nations has defined subliminal messages as messages perceived without being aware of them; it is unconscious perception, or perception without awareness. That is, you may register a message but cannot consciously perceive that message in the form of text, etc.

All the reports about the noise trick said the noise was so transparent that humans couldn't detect it. This can be changed to make it noticeable unconsciously but unnoticeable at a conscious level so a human can register the subliminal but not be aware of it.

Is it possible to take an output from a hidden layer to construct such a subliminal for humans, where with trial and error one could find the right combination? Could ML be used to come up with a pixel pattern or noise that allows one to impose subliminals?

",31307,,2444,,6/15/2020 1:15,11/12/2020 4:03,Can the addition of unnoticeable noise to images be used to create subliminals?,,1,1,,,,CC BY-SA 4.0 21867,1,,,6/14/2020 3:56,,2,81,"

I have a stream of data coming in like below (random numbers 0-9)

7, 7, 0, 0, 8, 9, 2, 7, 3, 8, 2, 8, 5, 7, 0, 8, 7, 8, 5, 3, 2, 6, 1, 9, 5, 7, 5, 3, 4, 9, 1, 3, 5, 5, 0, 7, 7, 5, 2, 8, 8, 7, 5, 5, 5, 2, 9, 7, 2, 1, 0, 0, 5, 7, 1, 4, 2, 7, 8, 8, 5, 2, 7, 5, 7, 1, 7, 2, 0, 5, 7, 5, 2, 6, 3, 6, 3, 6, 1, 9, 1, 9, 7, 2, 3, 9, 8, 8, 4, 9, 8, 2, 5, 3, 4, 0, 3, 1, 0, 7, 2, 3, 8, 7, 5, 7, 3, 6, 0, 3, 3, 3, 6, 3, 1, 3, 0, 6, 9, 8, 0, 1, 4, 4, 9, 9, 3, 7, 4, 1, 0, 5, 0, 6, 8, 8, 8, 1, 7, 6

The task is to predict the next numbers (at least 3-10 of them).

Which approach would be helpful in getting through this problem?

",37871,,2444,,6/14/2020 9:46,6/14/2020 9:46,Which machine learning approach can be used to predict a univariate value?,,1,0,,,,CC BY-SA 4.0 21868,1,,,6/14/2020 8:07,,1,20,"

I want to improve the quality of translations for open-source projects in the Ukrainian language. We have multiple translations from different authors. We can also translate messages using machine translation. Sometimes machine translation is even better than human translation.

Given multiple variants of translation of the same original text, I want to create AI which will be able to ""translate"" from Ukrainian to Ukrainian, using these multiple variants in parallel as the source, to produce one variant of higher quality.

So, in general, given multiple similar input sequences, the neural network needs to ""understand"" them, and produce a single output sequence.

$$S_1, S_2, \dots \rightarrow S$$

For a simple example, we may want to train a NN to recognize a sequence of natural numbers: $1,2,3,4, \dots$. We give two sequences to the NN: $23,4,24,6,8$ and $3,65,5,6,23$; the trained NN is then expected to produce $3,4,5,6,7$.

How to modify an existing neural network to achieve that? Is it possible at all?

",37863,,2444,,6/14/2020 9:58,6/14/2020 9:58,How to autocorelate multiple variants of same text into one?,,0,0,,,,CC BY-SA 4.0 21869,2,,21866,6/14/2020 8:30,,-1,,"

I would say no, at least not now. To achieve that, you would need a model that represents the structure of a human brain at a very detailed level, so that it inherits the same flaws.

",28041,,,,,6/14/2020 8:30,,,,2,,,,CC BY-SA 4.0 21871,2,,21867,6/14/2020 9:16,,2,,"

Assuming your data is not purely random (otherwise it will be difficult to make useful predictions), you can try the following:

  • Hidden Markov Models: Treating your discrete numbers as states, you can use HMMs to predict the next state (next number) by looking at the last or the $n$ last states.
  • Recurrent Neural Networks: Assuming you have a lot of number sequences, you can treat your numbers as time series. Train a model and predict the next number(s). Here is a nice tutorial which uses LSTM cells.
  • Autoregressive models: AR models are quite powerful as well and have already been successfully used in finance and signal processing before the rise of deep learning.

Further this post (Problem in discrete valued time series forecasting) on Cross Validated might answer your question as well.
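
If the data does turn out to have structure, a very rough first-order Markov sketch (an assumption of mine, much simpler than the models above) could serve as a baseline:

from collections import Counter, defaultdict

data = [7, 7, 0, 0, 8, 9, 2, 7, 3, 8]   # replace with the full stream
transitions = defaultdict(Counter)
for current, nxt in zip(data, data[1:]):
    transitions[current][nxt] += 1       # count digit -> next-digit transitions

def predict_next(current):
    followers = transitions[current]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next(7))                   # predicts the most common follower of 7 seen so far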

",37120,,,,,6/14/2020 9:16,,,,0,,,,CC BY-SA 4.0 21872,1,21878,,6/14/2020 9:56,,1,236,"

Using the tutorial from: SentDex - Python Programming I added Q Learning to my script that was previously just picking random actions. His script uses the MountainCar Environment so I had to amend it to the CartPole env I am using. Initially, the rewards seem sporadic but, after a while, they just drop off and oscillate between 0-10. Does anyone know why this is?

import gym
import numpy as np
import matplotlib.pyplot as plt

Learning_rate = 0.1
Discount_rate = 0.95
episodes = 200

# Exploration settings
epsilon = 1  # not a constant, qoing to be decayed
START_EPSILON_DECAYING = 1
END_EPSILON_DECAYING = episodes//2
epsilon_decay_value = epsilon/(END_EPSILON_DECAYING - START_EPSILON_DECAYING)

env = gym.make(""CartPole-v0"") #Create the environment. The name of the environments can be found @ https://gym.openai.com/envs/#classic_control
#Each environment has a number of possible actions. In this case there are two discrete actions, left or right

#Each environment has some integer characteristics of the state.
#In this case we have 4:

#env = gym.wrappers.Monitor(env, './', force=True)

DISCRETE_OS_SIZE = [20, 20, 20, 20]

discrete_os_win_size = (env.observation_space.high - env.observation_space.low)/ DISCRETE_OS_SIZE 

def get_discrete_state(state):
    discrete_state = (state - env.observation_space.low)/discrete_os_win_size
    return tuple(discrete_state.astype(np.int))

q_table = np.random.uniform(low = -2, high = 0, size = (20, 20, 20, 20, env.action_space.n))

plt.figure() #Instantiate the plotting environment
rewards_list = [] #Create an empty list to add the rewards to which we will then plot
for i in range(episodes):
    discrete_state = get_discrete_state(env.reset())
    done = False
    rewards = 0
    frames = []

    while not done:
        #frames.append(env.render(mode = ""rgb_array""))

        if np.random.random() > epsilon:
            # Get action from Q table
            action = np.argmax(q_table[discrete_state])

        else:
            # Get random action
            action = np.random.randint(0, env.action_space.n)

        new_state, reward, done, info = env.step(action)

        new_discrete_state = get_discrete_state(new_state)

        # If simulation did not end yet after last step - update Q table
        if not done:

            # Maximum possible Q value in next step (for new state)
            max_future_q = np.max(q_table[new_discrete_state])

            # Current Q value (for current state and performed action)
            current_q = q_table[discrete_state, action]

            # And here's our equation for a new Q value for current state and action
            new_q = (1 - Learning_rate) * current_q + Learning_rate * (reward + Discount_rate * max_future_q)

            # Update Q table with new Q value
            q_table[discrete_state, action] = new_q

        else:
            q_table[discrete_state + (action,)] = 0

        discrete_state = new_discrete_state

        rewards += reward
        rewards_list.append(rewards)
    #print(""Episode:"", i, ""Rewards:"", rewards)
    #print(""Observations:"", obs)

    # Decaying is being done every episode if episode number is within decaying range
    if END_EPSILON_DECAYING >= i >= START_EPSILON_DECAYING:
        epsilon -= epsilon_decay_value

plt.plot(rewards_list)
plt.show()
env.close()

It becomes even more pronounced when I increase episodes to 20,000 so I don't think it's related to not giving the model enough training time.

If I set START_EPSILON_DECAYING to say 200 then it only drops to < 10 rewards after episode 200 which made me think it was the epsilon that was causing the problem. However, if I remove the epsilon/exploratory then the rewards at every episode are worse as it gets stuck in picking the argmax value for each state.

",36082,,2444,,6/14/2020 14:40,6/14/2020 15:17,Why do my rewards fall using tabular Q-learning as I perform more episodes?,,1,10,,,,CC BY-SA 4.0 21873,1,,,6/14/2020 12:09,,0,158,"

Is there any sanity check to know whether the Q functions learnt are appropriate in deep Q networks? I know that the Q values for end states should approximate the terminal reward. However, is it normal that Q values for the non-terminal states have higher values than those of the terminal states?

The reason why I want to know whether Q values learnt are appropriate is because I want to apply the doubly robust estimator for off-policy value evaluation. Using doubly robust requires a good Q value estimate to be learnt for each state.

",32780,,2444,,6/14/2020 13:04,6/15/2020 7:27,How do I know that the DQN has learnt an appropriate Q function?,,1,5,,,,CC BY-SA 4.0 21874,2,,11172,6/14/2020 12:51,,5,,"

To show how the convolution (in the context of CNNs) can be viewed as matrix-vector multiplication, let's suppose that we want to apply a $3 \times 3$ kernel to a $4 \times 4$ input, with no padding and with unit stride.

Here's an illustration of this convolutional layer (where, in blue, we have the input, in dark blue, the kernel, and, in green, the feature map or output of the convolution).

Now, let the kernel be defined as follows

$$ \mathbf{W} = \begin{bmatrix} w_{0, 0} & w_{0, 1} & w_{0, 2} \\ w_{1, 0} & w_{1, 1} & w_{1, 2} \\ w_{2, 0} & w_{2, 1} & w_{2, 2} \end{bmatrix} \in \mathbb{R}^{3 \times 3} $$

Similarly, let the input be defined as

$$ \mathbf{I} = \begin{bmatrix} i_{0, 0} & i_{0, 1} & i_{0, 2} & i_{0, 3} \\ i_{1, 0} & i_{1, 1} & i_{1, 2} & i_{1, 3} \\ i_{2, 0} & i_{2, 1} & i_{2, 2} & i_{2, 3} \\ i_{3, 0} & i_{3, 1} & i_{3, 2} & i_{3, 3} \\ \end{bmatrix} \in \mathbb{R}^{4 \times 4} $$

Then the convolution above (without padding and with stride 1) can be computed as a matrix-vector multiplication as follows. First, we redefine the kernel $\mathbf{W}$ as a sparse matrix $\mathbf{W}' \in \mathbb{R}^{4 \times 16}$ (which is a circulant matrix because of its circular nature) as follows.

$$ {\scriptscriptstyle \mathbf{W}' = \begin{bmatrix} w_{0, 0} & w_{0, 1} & w_{0, 2} & 0 & w_{1, 0} & w_{1, 1} & w_{1, 2} & 0 & w_{2, 0} & w_{2, 1} & w_{2, 2} & 0 & 0 & 0 & 0 & 0 \\ 0 & w_{0, 0} & w_{0, 1} & w_{0, 2} & 0 & w_{1, 0} & w_{1, 1} & w_{1, 2} & 0 & w_{2, 0} & w_{2, 1} & w_{2, 2} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & w_{0, 0} & w_{0, 1} & w_{0, 2} & 0 & w_{1, 0} & w_{1, 1} & w_{1, 2} & 0 & w_{2, 0} & w_{2, 1} & w_{2, 2} & 0 \\ 0 & 0 & 0 & 0 & 0 & w_{0, 0} & w_{0, 1} & w_{0, 2} & 0 & w_{1, 0} & w_{1, 1} & w_{1, 2} & 0 & w_{2, 0} & w_{2, 1} & w_{2, 2} \end{bmatrix} } $$ Similarly, we reshape the input $\mathbf{I}$ as a 16-dimensional vector $\mathbf{I}' \in \mathbb{R}^{16}$.

$$ {\scriptstyle \mathbf{I}' = \begin{bmatrix} i_{0, 0} & i_{0, 1} & i_{0, 2} & i_{0, 3} & i_{1, 0} & i_{1, 1} & i_{1, 2} & i_{1, 3} & i_{2, 0} & i_{2, 1} & i_{2, 2} & i_{2, 3} & i_{3, 0} & i_{3, 1} & i_{3, 2} & i_{3, 3} \end{bmatrix}^T } $$

Then the convolution of $\mathbf{W}$ and $\mathbf{I}$, that is

$$\mathbf{W} \circledast \mathbf{I} = \mathbf{O} \in \mathbb{R}^{2 \times 2},$$ where $\circledast$ is the convolution operator, is equivalently defined as $$\mathbf{W}' \cdot \mathbf{I}' = \mathbf{O}' \in \mathbb{R}^{4},$$ where $\cdot$ is the matrix-vector multiplication operator. The produced vector $\mathbf{O}'$ can then be reshaped as a $2 \times 2$ feature map.

You can easily verify that this representation is correct by multiplying e.g. the 16-dimensional input vector $\mathbf{I}'$ with the first row of $\mathbf{W}'$ to obtain the top-left entry of the feature map.

$$w_{0, 0} i_{0, 0} + w_{0, 1} i_{0, 1} + w_{0, 2} i_{0, 2} + 0 i_{0, 3} + w_{1, 0} i_{1, 0} + w_{1, 1} i_{1, 1} + w_{1, 2}i_{1, 2} + 0 i_{1, 3} + w_{2, 0} i_{2, 0} + w_{2, 1}i_{2, 1} + w_{2, 2} i_{2, 2} + 0 i_{2, 3} + 0 i_{3, 0} + 0 i_{3, 1} + 0 i_{3, 2} + 0 i_{3, 3} = \\ w_{0, 0} i_{0, 0} + w_{0, 1} i_{0, 1} + w_{0, 2} i_{0, 2} + w_{1, 0} i_{1, 0} + w_{1, 1} i_{1, 1} + w_{1, 2}i_{1, 2} + w_{2, 0} i_{2, 0} + w_{2, 1}i_{2, 1} + w_{2, 2} i_{2, 2} = \\ \mathbf{O}'_{0} \in \mathbb{R} ,$$ which is equivalent to an element-wise multiplication of $\mathbf{W}$ with the top-left $3 \times 3$ sub-matrix of the input followed by a summation over all elements (i.e. convolution), that is

$$ \sum \left( \begin{bmatrix} w_{0, 0} & w_{0, 1} & w_{0, 2} \\ w_{1, 0} & w_{1, 1} & w_{1, 2} \\ w_{2, 0} & w_{2, 1} & w_{2, 2} \end{bmatrix} \odot \begin{bmatrix} i_{0, 0} & i_{0, 1} & i_{0, 2} \\ i_{1, 0} & i_{1, 1} & i_{1, 2} \\ i_{2, 0} & i_{2, 1} & i_{2, 2} \end{bmatrix} \right) = \mathbf{O}_{0, 0} = \mathbf{O}'_{0} \in \mathbb{R}, $$ where $\odot$ is the element-wise multiplication and $\sum$ is the summation over all elements of the resulting matrix.
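
For a quick numerical check of this equivalence, here is a small sketch I put together for this specific 3x3-kernel, 4x4-input case (it uses scipy's correlate2d, since the element-wise-multiply-and-sum operation above is cross-correlation, which is what deep learning libraries call convolution):

import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 3))   # kernel
I = rng.standard_normal((4, 4))   # input

# Build the 4x16 matrix W': one row per output position, with W placed at that offset
Wp = np.zeros((4, 16))
for out_r in range(2):
    for out_c in range(2):
        row = np.zeros((4, 4))
        row[out_r:out_r + 3, out_c:out_c + 3] = W
        Wp[out_r * 2 + out_c] = row.ravel()

O_matmul = (Wp @ I.ravel()).reshape(2, 2)
O_conv = correlate2d(I, W, mode='valid')
print(np.allclose(O_matmul, O_conv))  # True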

The advantage of this representation (and computation) is that back-propagation can be computed more easily by just transposing $\mathbf{W}'$, i.e. with $\mathbf{W}'^T$.

Take also a look at this Github repository that explains how the convolution can be implemented as matrix multiplication.

",2444,,2444,,11/7/2020 0:33,11/7/2020 0:33,,,,4,,,,CC BY-SA 4.0 21875,1,,,6/14/2020 13:38,,0,299,"

I have worked with different classification, regression and clustering approaches for predicting values, etc. I was wondering if there is a machine learning approach for distributing a whole based on some features (I do not know if such an approach exists; I just could not find one in my research).

As an easy example, let's consider that we have height and weight data for many children and we have to distribute a given number of pizza slices amongst them so that skinny children get more pizza than obese ones, because pizza is more beneficial for the skinny than for the obese. So we might have to find the optimum number of slices for each child out of the total number of slices, so that each child gets the maximum possible nutrients. A more complex version could incorporate more features like age, overall health, blood sugar content, physical activity index, daily calorie consumption, and others.

A similar example might be to find out the optimal value of fuel to be allocated to each vehicle if we have a total of 100 gallons. Features might be distance they have to travel, mpg, driver competency, engine horsepower, etc., so that all of them might travel the maximum distance possible.

So, can we achieve a task like this with machine learning/deep learning approaches? If not what are the hurdles achieving this?

",6635,,2444,,6/15/2020 10:23,6/15/2020 10:47,Can a machine learning approach solve this constrained optimisation problem?,,1,2,,,,CC BY-SA 4.0 21876,1,21932,,6/14/2020 13:55,,0,770,"

I'm trying to implement a soft actor-critic algorithm for financial data (stock prices), but I have trouble with the losses: no matter what combination of hyper-parameters I enter, they do not converge, and this leads to poor reward returns as well. It seems like the agent is not learning at all.

I already tried to tune some hyperparameters (learning rate for each network + number of hidden layers), but I always get similar results. The two plots below represent the losses of my policy and one of the value functions during the last episode of training.

My question is, would it be related to the data itself (nature of data) or is it something related to the logic of the code?

",37880,,2444,,11/23/2020 1:48,11/23/2020 1:48,Why is my Soft Actor-Critic's policy and value function losses not converging?,,1,0,,,,CC BY-SA 4.0 21878,2,,21872,6/14/2020 14:49,,1,,"

The problem here is likely related to the state approximations you are using.

Unfortunately, OpenAI's gym does not always give reasonable bounds when using env.observation_space, and that seems to be the case for CartPole:

>>> env = gym.make('CartPole-v0')
>>> env.observation_space.high
array([4.8000002e+00, 3.4028235e+38, 4.1887903e-01, 3.4028235e+38],
      dtype=float32)
>>> env.observation_space.low
array([-4.8000002e+00, -3.4028235e+38, -4.1887903e-01, -3.4028235e+38],
      dtype=float32)

Processing this, similarly to your code:

>>> discrete_os_win_size = (env.observation_space.high - env.observation_space.low)/ DISCRETE_OS_SIZE
__main__:1: RuntimeWarning: overflow encountered in subtract
>>> discrete_os_win_size
array([0.48000002,        inf, 0.0418879 ,        inf])

>>> discrete_state = (state - env.observation_space.low)/discrete_os_win_size
>>> discrete_state
array([11.27318768,  0.        , 19.50682776,  0.        ])

That means that all the velocities will get squashed down to $0$ in your approximation. Your agent cannot tell the difference between a nearly static balancing position (generally the goal) and transitioning through it really fast - it will think that both are equally good. It is also not able to tell the difference between moving towards the balance point and moving away from it.

I suggest you check for what reasonable bounds are on the space (a quick look suggests +/- 2.0 might be a reasonable starting point) and use that instead.

The approximation approach of discrete grid is also very crude, although it does allow you do use tabular approaches. If you want to stick with a linear system (and avoid trying neural networks and DQN) then the next step up would be some form of tile coding, which uses multiple offset grids to obtain smoother interpolation between states.

",1847,,1847,,6/14/2020 15:17,6/14/2020 15:17,,,,1,,,,CC BY-SA 4.0 21879,1,,,6/14/2020 14:56,,0,372,"

Why is it useful to track loss while the model is being trained?

Options are:

  1. Loss is only useful as a final metric. It should not be evaluated while the model is being trained.
  2. Loss dictates how effective the model is.
  3. Loss can help understand how much the model is changing per iteration. When it converges, that's an indicator that further training will have little benefit.
  4. None of the above
",37882,,2444,,6/14/2020 15:04,6/14/2020 20:01,Why is it useful to track loss while model is being trained?,,1,2,,,,CC BY-SA 4.0 21880,1,21884,,6/14/2020 15:16,,3,244,"

Deepfakes (a portmanteau of ""deep learning"" and ""fake"") are synthetic media in which a person in an existing image or video is replaced with someone else's likeness.

Nowadays most of the news circulating in the media and on social networks is fake/gossip/rumours, which may be false positives or false negatives, except WikiLeaks.

I know there has been a Deepfake Detection Challenge Kaggle competition with a whopping $1,000,000 in prize money.

I would like to know how deepfakes work and how they might be dangerous.

",30725,,30725,,6/14/2020 15:59,6/14/2020 19:41,How do deepfakes work and how they might be dangerous?,,1,0,,,,CC BY-SA 4.0 21881,1,21895,,6/14/2020 15:29,,1,123,"

I'm using Q-learning (off-policy TD-control as specified in Sutton's book on pg 131) to train an agent to play connect four. My goal is to create a strong player (superhuman performance?) purely by self-play, without training models against other agents obtained externally.

I'm using neural network architectures with some convolutional layers and several fully connected layers. These train surprisingly efficiently against their opponent, either a random player or another agent previously trained through Q-learning. Unfortunately the resulting models don't generalise well. 5000 episodes seems enough to obtain a high (> 90%) win rate against whichever opponent, but after > 20 000 episodes, they are still rather easy to beat by myself.

To solve this, I now train batches of models (~ 10 models per batch), which are then used in group as a new opponent, i.e.:

  • I train a batch of models against a completely random agent (let's call them the generation one)
  • Then I train a second generation of agents against this first generation
  • Then I train a third generation against generation two
  • ...

So far this helped in creating a slightly stronger/more general connect four model, but the improvement is not as good as I was hoping for. Is it just a matter of training enough models/generations or are there better ways for using Q-learning in combination with self-play?

I know the most successful techniques (e.g. AlphaZero) rely on MCTS, but I'm not sure how to integrate this with Q-learning, nor how MCTS helps to solve the problem of generalisation.

Thanks for your help!

",34059,,,,,6/15/2020 7:54,Generalising performance of Q-learning agent through self-play in a two-player game (MCTS?),,2,0,,,,CC BY-SA 4.0 21882,2,,21859,6/14/2020 16:21,,3,,"

In Tabular Q-learning the update is as follows

$$Q(s,a) = Q(s,a) + \alpha \left[R_{t+1} + \gamma \max_aQ(s',a) - Q(s,a) \right]\;.$$

Now, as we are interested in learning about the optimal policy, this would correspond to the $\max_aQ(s',a)$ term in the TD target because that is how the optimal policy chooses its actions - i.e. $\pi_*(a|s) = \arg\max_aQ_*(s,a)$, so eventually the greedy TD update would be greedy with respect to the optimal state-action value function due to the guaranteed convergence of Q-learning.

The action $a$ in the update rule, i.e. the action we chose in state $s$ to receive the reward $R_{t+1}$, was chosen according to some non-optimal policy, e.g. $\epsilon$-greedy. However, the $Q$ function is defined as the expected return assuming we are in state $s$ and have taken action $a$. We therefore don't need an importance sampling ratio for the $R_{t+1}$ term, even though it was generated from an action that the optimal policy might not have taken: we are only updating the $Q$ function for state $s$ and action $a$, and by the definition of the $Q$ function it is assumed that we have taken action $a$, since we condition on it.

",36821,,,,,6/14/2020 16:21,,,,1,,,,CC BY-SA 4.0 21883,1,21945,,6/14/2020 17:28,,3,1723,"

I'm currently trying to understand the difference between a vanilla LSTM and a fully connected LSTM. In a paper I'm reading, the FC-LSTM gets introduced as

FC-LSTM may be seen as a multivariate version of LSTM where the input, cell output and states are all 1D vectors

But is not really expanded further upon. Google also didn't help me much in that regard as I can't seem to find anything under that keyword.

What is the difference between the two? Also, I'm a bit confused by the quote - aren't inputs, outputs, etc. of a vanilla LSTM already 1D vectors?

",37883,,37883,,6/15/2020 14:10,6/16/2020 13:49,What is the difference between LSTM and fully connected LSTM?,,1,0,,,,CC BY-SA 4.0 21884,2,,21880,6/14/2020 17:32,,2,,"

In general, deepfakes rely on advanced context-aware digital signal manipulations - usually image, video or audio - that allow for very natural looking modifications of content that previously have been costly or near impossible to produce in high quality.

The AI models, often based on generative adversarial networks (GANs), style transfer, pose estimation and similar technologies, are capable of tasks such as transferring facial features from subject A to replace those of subject B in a still image or video, whilst copying subject B's pose, expression, and matching the scene's lighting. Similar technologies exist for voices.

A good example of this might be these Star Wars edits, where actors faces have been changed. It is not perfect, you can in a few shots see a little instability if you study the frames - but the quality is still pretty good, and it was done with a relatively inexpensive setup. The work was achieved using freely-available software, such as DeepFaceLab on Github.

The technology is not limited to simple replacements - other forms of puppet-like control over output are possible, where an actor can directly control the face of a target in real time using no more than a PC and webcam.

Essentially, with the aid of deepfakes, it becomes possible to back up slanderous or libelous commentary with convincing media, at a low price point. Or the reverse, to re-word or re-enact an event that would otherwise be negative publicity for someone, in order to make it seem very different yet still naturally captured.

The danger of this technology is it puts tools for misinformation into a lot of people's hands. This leads to potential problems including:

  • Attacks on integrity of public figures, backed by realistic-looking ""evidence"". Even with the knowledge that this fakery is possible (and perhaps likely given a particular context), then damage can still be done especially towards feeding people with already-polarised opinions with manufactured events, relying on confirmation bias.

  • Erosion of belief in any presented media as proof of anything. With deepfakes out in the wild, someone confronted with media evidence that went against any narrative can claim ""fake"" that much more easily.

Neither of these issues are new in the domains of reporting, political bias, propaganda etc. However, it adds another powerful tool for people willing to spread misinformation to support any agenda, alongside things such as selective statistics, quoting out of context, lies in media that is text-only or crudely photoshopped etc.

A search for papers studying impact of deep fakes should find academic research such as Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News.


Opinion: Video content, presented feasibly as a live capture or report, is especially compelling, as unlike text and still image media, it directly interfaces to two key senses that humans use to understand and navigate the world in real time. In short, it is more believable by default at an unconscious and emotional level compared to a newspaper article or even a photo. And that applies despite any academic knowledge of how it is produced that you might possess as a viewer.

",1847,,1847,,6/14/2020 19:41,6/14/2020 19:41,,,,4,,,,CC BY-SA 4.0 21885,1,,,6/14/2020 18:21,,-1,279,"

After a GAN is trained, which parts of it are used to generate new outputs from data?

Options are:

  1. Neither
  2. Discriminator
  3. Generator
  4. Both Generator and Discriminator
",37882,,2444,,6/14/2020 19:32,11/12/2020 11:00,"After a GAN is trained, which parts of it are used to generate new outputs from data?",,1,1,,,,CC BY-SA 4.0 21886,2,,21879,6/14/2020 19:49,,1,,"

The loss function (aka cost function) measures the correctness of the predictions of the model.

For example, a simple cost function could be $|y - f(x)|$, where

  • $\hat{y} = f(x)$ is the prediction of the model $f$ when the input is $x$,
  • $y$ is the ground-truth label for input $x$ (i.e. what the model is supposed to output when the input is $x$), and
  • $|\cdot|$ is the absolute value
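
To tie the symbols together, here is a toy sketch of this loss with a made-up model $f$ (purely for illustration):

def f(x):
    return 2.0 * x + 1.0       # hypothetical model

def abs_loss(x, y):
    return abs(y - f(x))       # |y - f(x)|

print(abs_loss(3.0, 7.5))      # |7.5 - 7.0| = 0.5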

The loss function is also used to understand whether a model is overfitting (and underfitting) or not. More specifically, if the loss of the model evaluated on the training data (the training loss) keeps decreasing while the loss of the model evaluated on the validation data (validation loss) starts to increase, that's a good sign that the model is overfitting (i.e. just memorizing the training data, but it will likely perform poorly on non-training data). If the training loss does not decrease, that's a good sign of underfitting (i.e. the model is not capable of learning the patterns in the training data), so you may need to use a model with a bigger capacity, change the loss, or do something else.

The loss is typically not directly used as a measure of the performance of the model because the loss doesn't directly represent the performance of the model. In fact, the loss can be relatively big (with respect to another model's loss) and the model still performs well, but typically a lower loss corresponds to higher performance. The performance of the model is calculated differently depending on the task, model, and your goals. The most common performance measure is the accuracy (the number of correct predictions over the total number of predictions), but there are many other performance measures, such as the precision, recall, f1 score, AUC, etc., that emphasize different behavior that you expect from your model.

",2444,,2444,,6/14/2020 20:01,6/14/2020 20:01,,,,0,,,,CC BY-SA 4.0 21889,2,,21885,6/15/2020 2:20,,0,,"

It seems that you are new to GANs.

As the name itself implies, the generator's task is to generate new images that match the distribution of the input data, hence it is the generator that is used to generate new data, while the discriminator just tries to differentiate between the images produced by the generator and real data.

",37648,,2444,,6/15/2020 10:04,6/15/2020 10:04,,,,0,,,,CC BY-SA 4.0 21893,1,,,6/15/2020 6:27,,1,30,"

I have a chromosome where each gene contain s set of values. Like the following:

chromosome = [[A,B,C],[C,B,A],[C,D,],[],[E,F]]

  • The order of values within each gene matters. (A,B,C is different from A,C,B)
  • Each value should not appear more than once in a gene. ([A,B,B] is not desirable, B is repeated.)

In my current two-point crossover method, the gene values that are crossed over are the whole sets of values (e.g. the whole of [A,B,C] is crossed over to another chromosome).

I quickly realized that my population lacks variation, because the values within a gene always remain the same. Hence, my algorithm evolves very slowly and is limited by the variation of gene values at the initialization stage.

What crossover can I implement to cross values within the set as well?

I am pretty new to genetic algorithm. Any help will be much appreciated. Thank you.

",37898,,,,,6/15/2020 6:27,Crossover method for gene value containing a set of values,,0,2,,,,CC BY-SA 4.0 21894,2,,21873,6/15/2020 7:27,,1,,"

DQN is famous for over-estimating the Q function. However, an over-estimated Q does not imply that the agent performs poorly in the environment (unless the values look ridiculously high). From my experience, an over-estimated Q is usually caused by a high learning rate or by mistakes in the code. The best way to check is to plot the Q function while running in the environment and see if it makes any sense (e.g. it should be lower for bad states and vice versa).

",37899,,,,,6/15/2020 7:27,,,,4,,,,CC BY-SA 4.0 21895,2,,21881,6/15/2020 7:39,,1,,"

To solve this, I now train batches of models (~ 10 models per batch), which are then used in group as a new opponent,

This seems quite a reasonable approach on the surface, but possibly the agents will still lose generalisation if the solutions in each generation are too similar. It also looks like from your experiment that learning progress is too slow.

One simple thing you could do is progress through the generations faster. You don't need to train until agents win 90% of games before upping the generation number. You could set the target as low as 60% or even 55%.

For generalisation, it may also help to train against a mix of previous generations. E.g. if you use ten opponents, have five from previous generation, two from each of two iterations before that, and one even older one.

Although the setup you have created plays an agent you are training against another agent that you have created, it is not quite self-play. In self-play, an agent plays against itself, and learns as both players simultaneously. This requires a single neural network function that can switch its evaluation to score for each player - you can either make it learn to take the current player into account and make the change in viewpoint itself, or in zero-sum games (which Connect 4 is one) it can be more efficient to have it evaluate the end result for player 1 and simply take the negative of that as the score for player 2. This is also equivalent to using $\text{max}_a$ and $\text{argmax}_a$ for player 1's action choices and $\text{min}_a$ and $\text{argmin}_a$ for player 2's action choices - applying the concept of minimax to Q learning.

You can take minimax further to improve your algorithm's learning rate and performance during play. Essentially what Q learning and self-play does is learn a heuristic for each state (or state/action pair) that can guide search. You can add search algorithms to your training and play in multiple ways. One simple approach during training is to perform some n-step look ahead using negamax with alpha-beta pruning (an efficient variant of minimax in zero-sum games), and if it finds the end of the game:

  • when training, use the result (win/draw/lose) as your ground truth value instead of the normal Q-learning TD target.

  • when evaluating/playing vs human, prefer the action choice over anything the Q function returns. In practice, only bother with the Q function if look-ahead search cannot find a result.
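
To make the negamax look-ahead mentioned above a bit more concrete, here is a generic sketch (the helpers game_result, q_value, legal_moves and apply_move are dummy placeholders I am assuming, to be replaced by your own game logic; all values are from the perspective of the side to move):

def game_result(state): return None         # dummy placeholder: +1/-1/0 from the side to move, or None if not finished
def q_value(state): return 0.0              # dummy placeholder: learned Q/heuristic value from the side to move
def legal_moves(state): return []           # dummy placeholder: list of legal moves in this state
def apply_move(state, move): return state   # dummy placeholder: state after playing a move

def negamax(state, depth, alpha, beta):
    result = game_result(state)
    if result is not None:
        return result                  # ground-truth game outcome found by the search
    if depth == 0:
        return q_value(state)          # fall back to the learned value function
    best = -float('inf')
    for move in legal_moves(state):
        value = -negamax(apply_move(state, move), depth - 1, -beta, -alpha)
        best = max(best, value)
        alpha = max(alpha, value)
        if alpha >= beta:
            break                      # alpha-beta cutoff
    return best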

In the last few months, Kaggle have been running a ""Connect X"" challenge (which is effectively only Connect 4 at the moment). The forums and example scripts (called ""Kernels"") are a good source of information for writing your own agents, and if you choose to compete, then the leaderboard should give you a sense for how well your agent is performing. The top agents are perfect players, as Connect 4 is a solved game. I am taking part in that competition, and have trained my agent using self-play Q-learning plus negamax search as above - it is not perfect, but is close enough that it can often beat a perfect playing opponent when playing as player 1. It was trained on around 100,000 games of self-play as I described above, plus extra training games versus previous agents.

I know the most successful techniques (e.g. alpha zero) rely on MCTS, but I'm not sure how to integrate this with Q-learning? Neither how MCTS helps to solve the problem of generalisation?

MCTS is a variant of search algorithm, and could be combined with Q-learning similarly to negamax, although in Alpha Zero it is combined with something more like Actor-Critic. The combination would be similar - from each position in play use MCTS to look ahead, and instead of picking the direct action with the best Q value, pick the one with the best MCTS score. Unlike negamax, MCTS is stochastic, but you can still use its evaluations as ground truth for training.

MCTS does not solve generalisation issues for neural networks, but like negamax it will improve the performance of a game-playing agent by looking ahead. Its main advantage over negamax in board games is a capability to scale to large branching factors. MCTS does work well for Connect 4. Some of the best agents in the Kaggle competition are using MCTS. However, it is not necessary for creating a ""superhuman"" Connect 4 agent; Q-learning plus negamax can do just as well.

",1847,,1847,,6/15/2020 7:54,6/15/2020 7:54,,,,1,,,,CC BY-SA 4.0 21896,2,,21881,6/15/2020 7:41,,0,,"

MCTS does not help with generalization directly, but it enables the agent to plan ahead (see depth-first search or breadth-first search). Having the state space search embedded in the algorithm is very important for playing zero sum games (we also plan ahead in our head when making moves right?). Now Q-learning is generally good for simple environments, but to achieve superhuman performance on board games you would need HUGE amounts of data without using any planning algorithm. I don't even know if practically achieving superhuman performance by only Q-learning is even possible.

",37899,,,,,6/15/2020 7:41,,,,0,,,,CC BY-SA 4.0 21897,1,21904,,6/15/2020 9:14,,1,673,"

The goal is to find an optimal deterministic policy for this MDP:

There are two possible policies: left (L) and right (R). What is the optimal policy, when different discounts are used:

A $\gamma = 0$

B $\gamma = 0.9$

C $\gamma = 0.5$

The optimal policy $\pi_* \ge \pi$ if $v_{\pi^*}(s) \ge v_{\pi}(s), \forall s \in S$, so to find the optimal policy, the goal is to check which one of those results in the largest state value function for all states in the system given discount factors (A,B,C).

The Bellman equation for the state value function is

$v(s) = E_\pi[G_t | S_t= s] = E_\pi[R_{t+1} + \gamma v(S_{t+1}) | S_t = s]$

The suffix $_n$ marks the current iteration, and $_{n+1}$ marks the next iteration. The following is valid if the value function is initialized to $0$ or some random $x \ge 0$.

A) $\gamma = 0$

$v_{L,n+1}(S_0) = 1 + 0 v_{L,n}(S_L) = 1$

$v_{R,n+1}(S_0) = 0 + 0 v_{R,n}(S_R) = 0$

$L$ is optimal in case A.

B) $\gamma = 0.9$

$v_{L,n+1}(S_0) = 1 + 0.9 v_{L,n}(S_L) = 1 + 0.9(0 + 0.9 v_{L,n}(S_0)) = 1 + 0.81v_{L,n}(S_0)$

$v_{R,n+1}(S_0) = 0 + 0.9 v_{R,n}(S_R) = 0 + 0.9(2 + 0.9 v_{R,n}(S_0)) = 1.8 + 0.81v_{R,n}(S_0)$

$R$ is optimal in case B.

C) $\gamma = 0.5$

$v_{L,n+1}(S_0) = 1 + 0.5 v_{L,n}(S_L) = 1 + 0.5(0 + 0.5 v_{L,n}(S_0)) = 1 + 0.25v_{L,n}(S_0)$

$v_{R,n+1}(S_0) = 0 + 0.5 v_{R,n}(S_R) = 0 + 0.5(2 + 0.5 v_{R,n}(S_0)) = 1 + 0.25v_{R,n}(S_0)$

Both $R$ and $L$ are optimal in case C.

Question: Is this correct?

",37627,,37627,,6/15/2020 10:24,6/15/2020 12:57,Solution to exercise 3.22 in the RL book by Sutton and Barto,,1,4,,,,CC BY-SA 4.0 21898,1,21908,,6/15/2020 9:41,,1,215,"

While transitioning from simple policy gradient to the actor-critic algorithm, most sources begin by replacing the ""reward to go"" with the state-action value function (see this slide 5).

I am not able to understand how this is mathematically justified. It seems intuitive to me that the ""reward to go"" when sampled through multiple trajectories should be estimated by the state-value function.

I feel this way since nowhere in the objective function formulation or resulting gradient expression do we tie down the first action after reaching a state. Alternatively, when we sample a bunch of trajectories, these trajectories might include different actions being taken from the state reached in timestep $t$.

So, why isn't the estimation/approximation for the ""reward to go"" the state value function, in which the expectation is also over all the actions that may be taken from that state as well?

",17143,,2444,,10/10/2020 15:53,10/10/2020 15:53,"Why is the ""reward to go"" replaced by Q instead of V, when transitioning from PG to actor critic methods?",,1,0,,,,CC BY-SA 4.0 21899,1,21901,,6/15/2020 9:53,,2,366,"

Named entity recognition (NER), also known as entity chunking/extraction, is a popular technique used in information extraction to identify and segment the named entities and classify or categorize them under various predefined classes.

Briefly, how does NER work? What are the main ideas behind it? And which algorithms are used to perform NER?

",30725,,2444,,6/15/2020 10:30,6/15/2020 10:53,What are the main ideas behind NER?,,2,0,,,,CC BY-SA 4.0 21900,2,,21875,6/15/2020 10:06,,1,,"

This sounds more like an optimization problem than a deep learning / machine learning problem to me.

For machine learning, you would have the features of every child / vehicle and the optimal amount of pizza / fuel already given, but you would not know how exactly the optimal amount is computed. The goal would then be to find a function which maps features to targets.

However, in your case you don't know the optimal value; you just have a number of constraints. So my suggestion would be to use linear programming.

Here is a simple example: drive the maximum total distance with a given amount of fuel

$$\text{max }\ m_1 \cdot x_1 + ... + m_n \cdot x_n$$ $$\text{s.t. }\ x_1 + ... + x_n \leq F$$ $$m_i \cdot x_i \geq T_i\ \forall\ 1 \leq i \leq n$$

$x_i$ is the amount of fuel (in gallons) given to car $i$, and $m_i$ is the miles-per-gallon of car $i$. $F$ is the total amount of fuel available; this is our first constraint: we cannot distribute more fuel than we have. The second constraint says that we would like to drive at least $T_i$ miles with car $i$. This ensures we do not just give all the fuel to the most efficient car.

You can come up with more constraints for driver competency, engine horsepower, etc. In order to solve your problem with linear programming, you just have to make sure it is still linear and convex. Problems of this kind can be solved using the Simplex algorithm (scipy.optimize.linprog).
If your objective or your constraints are more complex, you can use numerical methods for non-linear constrained optimization problems (see scipy.optimize for algorithms).
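
As a minimal sketch of how the example above could be solved in practice with scipy.optimize.linprog (the mileages, fuel budget and minimum distances below are made-up numbers, purely for illustration):

    import numpy as np
    from scipy.optimize import linprog

    m = np.array([30.0, 22.0, 15.0])   # assumed miles-per-gallon of 3 cars
    T = np.array([100.0, 80.0, 50.0])  # assumed minimum miles per car
    F = 40.0                           # assumed total fuel budget (gallons)

    # linprog minimises c @ x, so negate m to maximise the total miles driven
    c = -m

    # Inequality constraints in the form A_ub @ x <= b_ub:
    #   x_1 + ... + x_n <= F     -> a row of ones
    #   m_i * x_i >= T_i         -> -m_i * x_i <= -T_i
    A_ub = np.vstack([np.ones(3), -np.diag(m)])
    b_ub = np.concatenate([[F], -T])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
    print('fuel per car:', res.x)      # gallons allocated to each car
    print('total miles :', -res.fun)   # value of the maximised objective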

",37120,,37120,,6/15/2020 10:47,6/15/2020 10:47,,,,1,,,,CC BY-SA 4.0 21901,2,,21899,6/15/2020 10:11,,3,,"

There are different algorithms, each with their advantages and disadvantages.

  1. Gazetteers: these have lists of the entities to be recognised, eg list of countries, cities, people, companies, whatever is required. They typically use a fuzzy matching algorithm to capture cases where the entity is not written in exactly the same way as in the list. For example, USA or U.S.A., United States, United States of America, US of A, etc. Advantage: generally good precision, ie can identify known entities. Disadvantage: can only find known entities

  2. Contextual Clues: here you have patterns that you find in the text, eg [PERSON], chairman of [COMPANY]. In this case, sentences like Jeff Bezos, chairman of Amazon, will match, even if you have never come across either Bezos or Amazon. Advantage: can find entities you didn't know about. Disadvantages: could end up with false positives, might be quite labour-intensive to come up with patterns; patterns depend on the domain (newspapers vs textbooks vs novels, etc.)

  3. Structural description: this is basically a 'grammar' describing what your entities look like, eg (in some kind of pseudo-regex): title? [first_name|initial] [middle_name|initial]? surname would match ""Mr J. R. Ewing"" or ""Bob Smith"". Similar descriptions could match typical company names; you'd probably still need lists of possible surnames or first names. Advantages: some flexibility, and potentially good precision. Disadvantages: patterns need to be developed and maintained.

Ideally you would want to combine all three approaches for a hybrid one to get the advantages of recognising unknown entities while keeping excess false positives in check.

There might also be other machine-learning approaches, but I'm not too familiar with those. The main problem is that they are hard to fine-tune or work out why they do what they do.

UPDATE: A good starting point would be to use a gazetteer-based approach to annotate some training data, and use that to identify contextual patterns. You can then use that data to train a machine learning approach (see OmG's answer on CRF) to broaden the approach; and then you add newly recognised entities to your list.

Ideally, you would want to have a gazetteer as your main database to avoid false positives, and use machine learning or contextual patterns just to capture previously unseen entities.
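
As a small illustration of the gazetteer idea with fuzzy matching (point 1 above), here is a sketch using Python's standard difflib; the entity list and the cutoff value are just arbitrary choices for the example:

    from difflib import get_close_matches

    # Tiny assumed gazetteer of country names (a real list would be much larger)
    gazetteer = ['United States of America', 'United Kingdom', 'Germany', 'France']

    def lookup(candidate, cutoff=0.6):
        # Return the closest gazetteer entry, if any is similar enough
        return get_close_matches(candidate, gazetteer, n=1, cutoff=cutoff)

    print(lookup('United States'))  # matches 'United States of America'
    print(lookup('Germny'))         # a typo still matches 'Germany'
    print(lookup('Amazon'))         # no match: not a known country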

",2193,,2193,,6/15/2020 10:53,6/15/2020 10:53,,,,0,,,,CC BY-SA 4.0 21902,2,,21899,6/15/2020 10:11,,3,,"

One of the renowned learning algorithms for NER tagging is the conditional random field (CRF). As you can see in the provided link, sequence labeling algorithms such as an RNN with LSTM can be used for named entity recognition as well. By the way, you can find an implementation of the CRF for NER tagging in this source.

Notice that the way the training data is provided can be helpful for passing the data into standard CRF libraries without any extra preprocessing. One of the standard schemes is the BIO format (B = Begin, I = Inside, O = Outside). You can find more about it in this post.
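
For concreteness, here is a tiny hand-made illustration of what BIO-tagged training data looks like (the sentence and the entity types are invented for the example, not taken from any real dataset):

    # Each token is paired with a tag: B-XXX begins an entity, I-XXX continues
    # it, and O marks tokens outside any entity.
    sentence = [
        ('Jeff',     'B-PER'),
        ('Bezos',    'I-PER'),
        (',',        'O'),
        ('chairman', 'O'),
        ('of',       'O'),
        ('Amazon',   'B-ORG'),
        (',',        'O'),
    ]

    for token, tag in sentence:
        print(token, tag)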

",4446,,4446,,6/15/2020 10:21,6/15/2020 10:21,,,,2,,,,CC BY-SA 4.0 21904,2,,21897,6/15/2020 11:38,,2,,"

Your answer is correct, but I am not sure exactly how you arrived at it, as e.g. in the last case you don't know that $v_{L,n}(S_0) = v_{R,n}(S_0)$.

I will show for case B when $\gamma = 0.9$ as case A is trivial and hopefully you can apply what I've done in case B to case C so that you get exact answers.

Now, as you stated $v(s) = \mathbb{E}[R_{t+1} + \gamma v(S_{t+1}) | S_t = s]$. Assuming that $\gamma = 0.9$ we can calculate the values for each state under the policy of taking the left action. Note that because we are looking for deterministic policies and the environment is deterministic then a lot of the expectations can be disregarded as nothing random is happening.

'Left Policy'

\begin{align}v(s_0) &= 1 + 0.9 \times v(s_L) \\ v(s_L) &= 0 + 0.9 \times v(s_0) \\ v(s_R) &= 2 + 0.9 \times v(s_0) \end{align} We can solve this set of linear equations to get $v(s_0) = \frac{100}{19}, v(s_L) = \frac{90}{19}, v(s_R) = \frac{128}{19}\;.$

'Right Policy'

\begin{align}v(s_0) &= 0 + 0.9 \times v(s_R) \\ v(s_L) &= 0 + 0.9 \times v(s_0) \\ v(s_R) &= 2 + 0.9 \times v(s_0) \end{align} We can again solve these to obtain $v(s_0) = \frac{180}{19}, v(s_L) = \frac{162}{19}, v(s_R) = \frac{200}{19}\;.$

As we can see, for each of the states the value function is larger for all of the states under the policy 'go right', thus this is the optimal policy for the case of $\gamma = 0.9$.

It is important to note that if we take the 'left' action in state $s_0$ then our policy would never take us to state $s_R$, and the same holds for the right action and state $s_L$. However, because the definition of an optimal policy requires $v_{\pi ^*}(s) \geq v_{\pi}(s)\; \forall s \in \mathcal{S}$, we must evaluate the value function for all states, even ones that would not be visited under the policy being evaluated. This means that the value of $s_R$ will change depending on whether we go right or left, because the value of this state depends on the value of $s_0$, which clearly changes depending on whether we go right or left.
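
The two linear systems above can also be checked numerically. A short sketch with numpy, using the (arbitrary) state ordering $(s_0, s_L, s_R)$:

    import numpy as np

    gamma = 0.9
    # Solve (I - gamma * P) v = r for each deterministic policy.

    # 'Left' policy: s0 -> sL (reward 1), sL -> s0 (reward 0), sR -> s0 (reward 2)
    P_left = np.array([[0., 1., 0.],
                       [1., 0., 0.],
                       [1., 0., 0.]])
    r_left = np.array([1., 0., 2.])

    # 'Right' policy: s0 -> sR (reward 0), sL -> s0 (reward 0), sR -> s0 (reward 2)
    P_right = np.array([[0., 0., 1.],
                        [1., 0., 0.],
                        [1., 0., 0.]])
    r_right = np.array([0., 0., 2.])

    v_left = np.linalg.solve(np.eye(3) - gamma * P_left, r_left)
    v_right = np.linalg.solve(np.eye(3) - gamma * P_right, r_right)

    print('left :', v_left)   # [100, 90, 128] / 19
    print('right:', v_right)  # [180, 162, 200] / 19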

",36821,,36821,,6/15/2020 12:57,6/15/2020 12:57,,,,5,,,,CC BY-SA 4.0 21905,1,,,6/15/2020 13:07,,1,36,"

I am working in the field of Machine Vision, where accuracy and performance both play a major factor in deciding the approach towards a problem. Traditional rule based approaches work quite well in such cases.

I am gradually migrating towards deep learning, due to its umpteen advantages, where the results seem promising albeit with two huge caveats:

  1. Lack of Training data in this field. To be precise, the lack of erroneous data.

  2. Performance issues on inference. Accuracy and speed are required in equal proportion, and cannot be compromised.

In industrial settings, point 1 is a strong factor. I have been dabbling with transfer learning techniques and using pre-trained models to overcome this situation. For simpler applications, such as classification, this works well and gives good results. In other cases, such as detection and localization, I have tried using Mask R-CNN, which gives really good results but poor inference speed, meaning it is not production-ready.

The worrying factor in both cases is how slow detection and inference are, compared to traditional vision algorithms. A solution would be to buy machine vision software specifically from companies such as Cognex, HALCON, etc., who sell deep learning bundles. They are quite expensive and are to be used out-of-the-box with minimal modifications, which does not suit me currently.

Point 2 is highly necessary in production lines, where each iteration/image may take less than 500 ms to execute.

Deep learning gives a lot of opportunities for getting state-of-the-art results with very little data in most situations, but, in general, without inference optimization using tools such as TensorRT, the ""time"" metric does not give good results.

Is there an approach using open source that can solve both point 1 and point 2? Creating a CNN from scratch is out of the question.

This post is meant to discuss ideas if possible; I know a concrete solution is not really possible within the scope of this question. I am the only person working on this problem at my company, so any discussion ideas would be highly appreciated!

",37906,,2444,,6/16/2020 10:08,6/16/2020 10:08,Overcome caveats on using Deep Learning for faster inference on limited performance availability,,0,2,,,,CC BY-SA 4.0 21908,2,,21898,6/15/2020 15:55,,0,,"

When you say simple policy gradient, I assume you mean something like REINFORCE.

The main difference between actor-critic and REINFORCE-like algorithms is in how they estimate the reward to go:

  • in REINFORCE, you wait until a trajectory terminates to make any updates, and your estimator of the reward to go is the actual reward to go that was observed in the trajectory.

  • in actor critic algorithms, instead of using an entire trajectory to estimate a reward to go, you use the action value function (note that this way, you can compute updates at every time step). The reason why you use the action-value function is because at each transition, you have already committed to some action.

The benefit of the actor critic estimator is that it exhibits much less variance. REINFORCE estimators are Monte Carlo estimators, which are known to exhibit extremely high variance.

Now, the actor critic method in the slides you posted takes variance reduction one step further. Instead of estimating reward to go, it's estimating advantage -- that is, the difference between the current action value and your estimated state value. The state value function term doesn't depend on the action, so it doesn't affect the expected value of the policy gradient. However, it serves as a 'baseline', which helps reduce the variance of the reward to go estimator even further.
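
A compact way to see the three estimators side by side (written in the usual policy-gradient notation, not necessarily the exact notation of the linked slides), where $\hat{G}_t$ is the observed reward to go of a sampled trajectory:

$$\begin{aligned} \text{REINFORCE:} \quad & \nabla_\theta J(\theta) \approx \sum_t \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \hat{G}_t \\ \text{Actor-critic (Q):} \quad & \nabla_\theta J(\theta) \approx \sum_t \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, Q^{\pi}(s_t, a_t) \\ \text{Actor-critic (advantage):} \quad & \nabla_\theta J(\theta) \approx \sum_t \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \left(Q^{\pi}(s_t, a_t) - V^{\pi}(s_t)\right) \end{aligned}$$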

",37829,,,,,6/15/2020 15:55,,,,6,,,,CC BY-SA 4.0 21910,1,21912,,6/15/2020 17:43,,1,544,"

From Sutton and Barto's book Reinforcement Learning (Adaptive Computation and Machine Learning series), are the following definitions:

To aid my learning of RL and gain an intuition, I'm focusing on the differences between some algorithms. I've selected Sarsa (on-policy TD control) for estimating Q ≈ q * and Q-learning (off-policy TD control) for estimating π ≈ π *.

For conciseness I'll refer to Sarsa (on-policy TD control) for estimating Q ≈ q * and Q-learning (off-policy TD control) for estimating π ≈ π * as Sarsa and Q-learning respectively.

Are my following assertions correct?

The primary differences are how the Q values are updated.

Sarsa Q value update: $Q(S, A) \leftarrow Q(S, A) + \alpha [R + \gamma Q(S', A') - Q(S, A)]$

Q-learning Q value update: $Q(S, A) \leftarrow Q(S, A) + \alpha [R + \gamma \max_a Q(S', a) - Q(S, A)]$

Sarsa, in performing the td update subtracts the discounted Q value of the next state and action, S', A' from the Q value of the current state and action S, A. Q-learning, on the other hand, takes the discounted difference between the max action value for the Q value of the next state and current action S', a. Within the Q-learning episode loop the $a$ value is not updated, is an update made to $a$ during Q-learning?

Sarsa, unlike Q-learning, the current action is assigned to the next action at the end of each episode step. Q-learning does not assign the current action to the next action at the end of each episode step

Sarsa, unlike Q-learning, does not include the arg max as part of the update to Q value.

Sarsa and Q learning in choosing the initial action for each episode both use a ""policy derived from Q"", as an example, the epsilon greedy policy is given in the algorithm definition. But any policy could be used here instead of epsilon greedy? Q learning does not utilise the next state-action pair in performing the td update, it just utilises the next state and current action, this is given in the algorithm definition as $ Q ( S ′ , a ) $ what is $a$ in this case ?

",12964,,2444,,6/15/2020 19:26,6/15/2020 19:28,What are the differences between SARSA and Q-learning?,,1,2,,12/13/2021 17:05,,CC BY-SA 4.0 21911,1,25635,,6/15/2020 18:56,,3,875,"

In the textbook ""Reinforcement Learning: An Introduction"" by Richard Sutton and Andrew Barto, the concept of Maximization Bias is introduced in section 6.7, and how Q-learning ""over-estimates"" action-values is discussed using an example. However, a formal proof of the same is not presented in the textbook, and I couldn't get it anywhere on the internet as well.

After reading the paper on Double Q-learning by Hado van Hasselt (link), I could understand to some extent why Q-learning ""over-estimates"" action values. Here is my (vague, informal) construction of a mathematical proof:

We know that Temporal Methods (just like Monte Carlo methods), use sample returns instead of real expected returns as estimates, to find the optimal policy. These sample returns converge to the true expected returns over infinite trials, provided all the state-action pairs are visited. Thus the following notation is used,

$$\mathbb{E}[Q()] \rightarrow q_\pi()$$ where $Q()$ is calculated from the sample return $G_t$ observed at every time-step. Over infinite trials, this sample return when averaged converges to it's expected value which is the true $Q$-value under the policy $\pi$. Thus $Q()$ is really an estimate of the true $Q$-value $q_\pi$.

In section 3 on page 4 of the paper, Hasselt describes how the quantity $\max_a Q(s_{t+1}, a)$ approximates $\mathbb{E}[\max_a Q(s_{t+1}, a)]$ which in turn approximates the quantity $\max_a(\mathbb{E}[Q(s_{t+1},a)])$ in Q-learning. Now, we know that the $\max[]$ function is a convex function (proof). From Jensen's inequality, we have $$\phi(\mathbb{E}[X]) \leq \mathbb{E}[\phi(X)]$$ where $X$ is a random variable, and the function $\phi()$ is a convex function. Thus, $$\max_a(\mathbb{E}[Q(s_{t+1},a)]) \leq \mathbb{E}[\max_a(Q(s_{t+1}, a)]$$

$$\therefore \max_a Q(s_{t+1}, a) \approx \max_a(\mathbb{E}[Q(s_{t+1},a)]) \leq \mathbb{E}[\max_a(Q(s_{t+1}, a)]$$

The quantity on the LHS of the above equation appears (along with $R_{t+1}$) as an estimate of the next action-value in the Q-learning update equation: $$Q(S_t,A_t) \leftarrow (1-\alpha)Q(S_t, A_t) + \alpha[R_{t+1} + \gamma\max_aQ(S_{t+1}, a)] $$

Lastly, we note that the bias of an estimate $T$ is given by: $$b(T) = \mathbb{E}[T] - T$$ Thus the bias of the estimate $\max_a Q(s_{t+1},a)$ will always be positive: $$b(\max_a Q(s_{t+1},a)) = \mathbb{E}[\max_a Q(s_{t+1},a)] - \max_a Q(s_{t+1},a) \geq 0$$ In statistics literature, any estimate whose bias is positive is said to be an ""over-estimate"". Thus the action values are over-estimated by the Q-learning algorithm due to the $\max[]$ operator, thus resulting in a $maximization$-$bias$.

Are the arguments made above valid? I am a student, with no rigorous knowledge of random processes. Thus, please forgive me if any of the steps above are totally unrelated and don't make sense in a more mathematically rigorous fashion. Please let me know if there is a much better proof than this failed attempt.

Thank you so much for your precious time. Any help/suggestions/corrections are greatly appreciated!

",37181,,37181,,6/16/2020 9:47,1/19/2021 11:22,Proof of Maximization Bias in Q-learning?,,1,6,,,,CC BY-SA 4.0 21912,2,,21910,6/15/2020 19:17,,1,,"

The main difference between the two is that Q-learning is an off policy algorithm. That is, we learn about an policy that is different to the one we choose to make actions. To see this, lets look at the update rule.

Q-Learning

$$Q(s,a) = Q(s,a) + \alpha (R_{t+1} + \gamma \max_aQ(s',a) - Q(s,a))$$

SARSA

$$Q(s,a) = Q(s,a) + \alpha (R_{t+1} + \gamma Q(s',a') - Q(s,a))$$

In SARSA we chose our $a'$ according to what our policy tells us to do when we are in state $s'$, so the policy we are learning about is also the policy that we choose to make our actions.

In Q-learning, we learn about the greedy policy whilst following some other policy, such as $\epsilon$-greedy. This is because when we transition into state $s'$ our TD-target becomes the maximum Q-value for whichever state we end up in, $s'$, where the max is taken over the actions.

Once we have actually updated our Q-function and we need to choose an action to take in $s'$, we do so using the policy that generates our actions. Thus we are learning about the greedy policy whilst following some other policy, hence off-policy. In SARSA, when we move into $s'$, our TD-target is given by the Q-value of the state we transition into and the action we would then choose based on our policy.
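
To make the difference concrete, here is a minimal sketch of the two update rules on a tabular Q-function (the $\epsilon$-greedy behaviour policy and the variable names are just illustrative choices, not tied to any particular library):

    import numpy as np

    def epsilon_greedy(Q, s, n_actions, eps=0.1):
        # Behaviour policy used by both algorithms to pick actions
        if np.random.rand() < eps:
            return np.random.randint(n_actions)
        return int(np.argmax(Q[s]))

    def sarsa_update(Q, s, a, r, s_next, a_next, alpha, gamma):
        # TD target uses the action a_next actually chosen by the behaviour policy
        Q[s, a] += alpha * (r + gamma * Q[s_next, a_next] - Q[s, a])

    def q_learning_update(Q, s, a, r, s_next, alpha, gamma):
        # TD target uses the greedy (max) action in s_next, whatever is done next
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])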

Within the Q-learning episode loop the $a$ value is not updated, is an update made to $a$ during Q-learning?

It will be, because the policy we use to choose our actions is ensured to explore sufficiently around all of the state-action pairs and so it is guaranteed to be encountered at some point.

Sarsa, unlike Q-learning, does not include the arg max as part of the update to Q value.

It is not an $\arg \max$, it is a $\max$. This is defined as $$\max_x f(x) = f(x^*) \quad \text{where } x^* \text{ satisfies } f(y) \leq f(x^*)\; \forall y\;.$$

Sarsa, unlike Q-learning, the current action is assigned to the next action at the end of each episode step. Q-learning does not assign the current action to the next action at the end of each episode step

Kind of - the action that you chose for your TD-target in SARSA becomes the next action that you consider in the next step of the episode. This is natural because essentially you are in state $s$, you take action $a$ and observe a new state $s'$, at which point you can use your policy to see which action you will take, call this $a'$, and then perform the SARSA update, and then execute that action in the environment.

Sarsa and Q learning in choosing the initial action for each episode both use a ""policy derived from Q"", as an example, the epsilon greedy policy is given in the algorithm definition. But any policy could be used here instead of epsilon greedy?

Yes, any policy can be used although you want to choose a policy that allows sufficient exploration of the state-space.

Q learning does not utilise the next state-action pair in performing the td update, it just utilises the next state and current action, this is given in the algorithm definition as $Q(S',a)$ what is $a$ in this case ?

In the algorithm it actually has $\max_a Q(S',a)$; if you refer back to my earlier definition of what the $\max$ operator does, that should answer this question.

",36821,,36821,,6/15/2020 19:28,6/15/2020 19:28,,,,4,,,,CC BY-SA 4.0 21914,1,21915,,6/15/2020 19:24,,3,4492,"

In computer vision, what are bag-of-features (also known as bag-of-visual-words)? How do they work? What can they be used for? How are they related to the bag-of-words model in NLP?

",2444,,,,,6/16/2020 16:55,What are bag-of-features in computer vision?,,1,0,,,,CC BY-SA 4.0 21915,2,,21914,6/15/2020 19:24,,4,,"

Introduction

Bag-of-features (BoF) (also known as bag-of-visual-words) is a method to represent the features of images (i.e. a feature extraction/generation/representation algorithm). BoF is inspired by the bag-of-words model often used in the context of NLP, hence the name. In the context of computer vision, BoF can be used for different purposes, such as content-based image retrieval (CBIR), i.e. find an image in a database that is closest to a query image.

Steps

The BoF can be divided into three different steps. To understand all the steps, consider a training dataset $D = \{x_1, \dots, x_N \}$ of $N$ training images. Then BoF proceeds as follows.

1. Feature extraction

In this first step, we extract all the raw features (i.e. keypoints and descriptors) from all images in the training dataset $D$. This can be done with SIFT, where each descriptor is a $128$-dimensional vector that represents the neighborhood of the pixels around a certain keypoint (e.g. a pixel that represents a corner of an object in the image).

If you are not familiar with this extraction of computer vision features (sometimes known as handcrafted features), you should read the SIFT paper, which describes a feature (more precisely, keypoint and descriptor) extraction algorithm.

Note that image $x_i \in D$ may contain a different number of features (keypoints and descriptors) than image $x_j \neq x_i \in D$. As we will see in the third step, BoF produces a feature vector of size $k$ for all images, so all images will be represented by a fixed-size vector.

Let $F= \{f_1, \dots, f_M\}$ be the set of descriptors extracted from all training images in $D$, where $M \gg N$. So, $f_i$ may be a descriptor that belongs to any of the training examples (it does not matter which training image it belongs to).

2. Codebook generation

In this step, we cluster all descriptors $F= \{f_1, \dots, f_M\}$ into $k$ clusters using k-means (or another clustering algorithm). This is sometimes known as the vector quantization (VQ) step. In fact, the idea behind VQ is very similar to clustering and sometimes VQ is used interchangeably with clustering.

So, after this step, we will have $k$ clusters, each of them associated with a centroid $C = \{ c_1, \dots, c_k\}$, where $C$ is the set of centroids (and $c_i \in \mathbb{R}^{128}$ in the case that SIFT descriptors have been used). These centroids represent the main features that are present in the whole training dataset $D$. In this context, they are often known as the codewords (which derives from the vector quantization literature) or visual words (hence the name bag-of-visual-words). The set of codewords $C$ is often called codebook or, equivalently, the visual vocabulary.

3. Feature vector generation

In this last step, given a new (test) image $u \not\in D$ (often called the query image in this context of CBIR), then we will represent $u$ as a $k$-dimensional vector (where $k$, if you remember, is the number of codewords) that will represent its feature vector. To do that, we need to follow the following steps.

  1. Extract the raw features from $u$ with e.g. SIFT (as we did for the training images). Let the descriptors of $u$ be $U = \{ u_1, \dots, u_{|U|} \}$.

  2. Create a vector $I \in \mathbb{R}^k$ of size $k$ filled with zeros, where the $i$th element of $I$ corresponds to the $i$th codeword (or cluster).

  3. For each $u_i \in U$, find the closest codeword (or centroid) in $C$. Once you found it, increment the value at the $j$th position of $I$ (i.e., initially, from zero to one), where $j$ is the found closest codeword to the descriptor $u_i$ of the query image.

    The distance between $u_i$ and any of the codewords can be computed e.g. with the Euclidean distance. Note that the descriptors of $u$ and the codewords have the same dimension because they have been computed with the same feature descriptor (e.g. SIFT).

    At the end of this process, we will have a vector $I \in \mathbb{R}^k$ that represents the frequency of the codewords in the query image $u$ (akin to the term frequency in the context of the bag-of-words model), i.e. $u$'s feature vector. Equivalently, $I$ can also be viewed as a histogram of features of the query image $u$. Here's an illustrative example of such a histogram.

    From this diagram, we can see that there are $11$ codewords (of course, this is an unrealistic scenario!). On the y-axis, we have the frequency of each of the codewords in a given image. We can see that the $7$th codeword is the most frequent in this particular query image.

    Alternatively, rather than the codeword frequency, we can use the tf-idf. In that case, each image will be represented not by a vector that contains the frequency of the codewords but it will contain the frequency of the codewords weighted by their presence in other images. See this paper for more details (where they show how to calculate tf-idf in this context; specifically, section 4.1, p. 8 of the paper).
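
Putting the three steps together, here is a rough sketch of the whole pipeline, assuming an OpenCV build that exposes SIFT and using scikit-learn's k-means; the function names and the value of $k$ are arbitrary choices:

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    def sift_descriptors(image_gray):
        # Step 1: extract SIFT descriptors (one 128-d vector per keypoint)
        sift = cv2.SIFT_create()
        _, descriptors = sift.detectAndCompute(image_gray, None)
        return descriptors  # shape (num_keypoints, 128), or None if no keypoints

    def build_codebook(training_images, k=200):
        # Step 2: cluster all training descriptors into k codewords
        descs = [sift_descriptors(img) for img in training_images]
        all_desc = np.vstack([d for d in descs if d is not None])
        return KMeans(n_clusters=k, n_init=10).fit(all_desc)

    def bof_vector(image_gray, codebook):
        # Step 3: histogram of closest codewords for a query image
        k = codebook.n_clusters
        desc = sift_descriptors(image_gray)
        if desc is None:
            return np.zeros(k)
        labels = codebook.predict(desc)          # closest codeword per descriptor
        return np.bincount(labels, minlength=k)  # the k-dimensional feature vector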

Conclusion

To conclude, BoF is a method to represent features of an image, which could then be used to train classifiers or generative models to solve different computer vision tasks (such as CBIR). More precisely, if you want to perform CBIR, you could compare your query's feature vector with the feature vector of every image in the database, e.g. using the cosine similarity.

The first two steps above are concerned with the creation of a visual vocabulary (or codebook), which is then used to create the feature vector of a new test (or query) image.

A side note

As a side note, the term bag is used because the (relative) order of the features in the image is lost during this feature extraction process, and this can actually be a disadvantage.

Further reading

For more info, I suggest that you read the following papers

  1. Video Google: A Text Retrieval Approach to Object Matching in Videos (2003) by Sivic and Zisserman
  2. A Bayesian Hierarchical Model for Learning Natural Scene Categories (2005) by Fei-Fei and Perona
  3. Introduction to the Bag of Features Paradigm for Image Classification and Retrieval (2011) by O'Hara and Draper
  4. Bag-of-Words Representation in Image Annotation: A Review (2012) by Tsai
",2444,,2444,,6/16/2020 16:55,6/16/2020 16:55,,,,0,,,,CC BY-SA 4.0 21917,1,,,6/15/2020 22:08,,1,1882,"

I ran a test using 3 strategies for multi-armed bandit: UCB, $\epsilon$-greedy, and Thompson sampling.

The results for the rewards I got are as follows:

  1. Thompson sampling had the highest average reward
  2. UCB was second
  3. $\epsilon$-greedy was third, but relatively close to UCB

Can someone explain to me why the results are like this?

",37918,,2444,,6/16/2020 11:41,6/16/2020 11:41,Why am I getting better performance with Thompson sampling than with UCB or $\epsilon$-greedy in a multi-armed bandit problem?,,1,7,,6/16/2020 10:03,,CC BY-SA 4.0 21925,1,,,6/16/2020 7:26,,1,192,"

It is known that if $\alpha$ is set too high, then the cost function of the model may not converge. However, would decaying the learning rate provide some ""tuning"" of the $\alpha$ value during training? In the sense that, if you set a high learning rate but also have some form of learning rate decay, then the $\alpha$ value would eventually fall within the ""just right"" to ""too low"" range. Is it then better to set an initial learning rate that is more ""flexible"" in the higher ranges, rather than a learning rate that is too low?

",32780,,,,,6/16/2020 8:27,Is it harmful to set the learning rate of training a model to be too high if there is some decay function for the learning rate?,,1,0,,,,CC BY-SA 4.0 21926,1,,,6/16/2020 7:39,,0,115,"

I would like to build a model based on reinforcement learning (RL) for the following scenario

Recommend the best route (of cities listed for a given country) that satisfies the required criteria (museum, beaches, food, etc) for a total budget of $2000.

Based on the recommendation, the user will provide their feedback (as a reward), so the recommendations can be fine-tuned (by reinforcement learning) the next time. I modeled the system this way:

  • States = (c,cr), where $c$ is the city and $cr$ is the criteria (history, beach, food, etc)

  • Actions = (p) is the price of visiting the city

  • Reward: acceptance of the cities selected by end user as a route (1 or 0)

The objective is to decide which set of cities together satisfies the given budget.

Is this MDP model right, and how can I implement it? Maybe the only option is using Monte Carlo methods and linear/dynamic programming. Is there any other way?

",37930,,37930,,6/17/2020 14:55,12/6/2022 1:02,How can I model this problem of delivering assets by choosing a route with reinforcement learning?,,2,3,,,,CC BY-SA 4.0 21927,1,21935,,6/16/2020 7:40,,1,61,"

I have 5 classes of pictures to classify:

0 -> ~3 200 (~800 initial number before interference and duplication)

1 -> ~9 000 (I reduced from ~90 000)

2 -> ~8 000

3 -> ~3 000

4 -> ~7 200

How to divide the data?

Right now, I have divided the data by giving 2 000 images to the test set and 2 000 to the validation set, taking a fixed number of images (400) from each class. I don't have much knowledge, so I don't know if this is a good division of the data. The attached picture shows the results on the test data after about 60 epochs of a CNN with 15 layers.

The network continues to overfit, and the results on the validation and test sets do not improve. I know that I could definitely improve my model, but I would like to divide the data in some thoughtful and reasonable way. The pictures are spectrograms and are in RGB format.

",37928,,37928,,6/16/2020 9:46,6/16/2020 9:46,How to split data into training validation and test set when the number of data in classes varies greatly?,,1,0,,,,CC BY-SA 4.0 21928,1,,,6/16/2020 8:04,,1,292,"

I want to classify my corporate chat messages into a few categories such as question, answer, and report. I used a fine-tuned BERT model, and the result wasn't bad. Now, I started thinking about ways to improve it, and a rough idea came up, but I don't know how to do it exactly.

Currently, I simply put the chat text into the model, but I don't use the speaker's information (who said the text, i.e. the speaker's ID in our DB). The idea is that, if I can use the speaker's information, the model might understand the text better and classify it better.

The question is, are there any examples or prior research similar to what I want to achieve? I googled for a few hours, but couldn't find anything useful. (Maybe the keywords weren't good.)

Any advice would be appreciated.

",37927,,,,,6/17/2020 16:36,How to use speaker's information as well as text for fine-tuning BERT?,,1,0,,,,CC BY-SA 4.0 21929,1,21952,,6/16/2020 8:18,,6,996,"

What is the cleanest, easiest way to explain someone who is a non-STEM work colleague the concept of Reinforcement Learning? What are the main ideas behind Reinforcement Learning?

",30725,,2444,,6/16/2020 12:35,6/17/2020 18:52,What is Reinforcement Learning?,,3,0,,,,CC BY-SA 4.0 21930,2,,21925,6/16/2020 8:27,,1,,"

Setting too high a learning rate will extend the time to get a good result.

In my opinion, it is better not to set too large a learning rate, but to use learning with momentum. When learning starts to become ineffective, increase the learning rate to find a better optimum. It seems to me that this gives very good results in less time than setting a large value of the learning rate from the very beginning, at least in the cases I've dealt with.

",37928,,,,,6/16/2020 8:27,,,,0,,,,CC BY-SA 4.0 21931,2,,21917,6/16/2020 8:34,,3,,"

The first thing to note here is that your results seem aligned with the results commonly found in the bandit literature.

Second thing to note would be that the performance of bandit algorithms is usually measured in terms of regret. This is the difference between (i) the amount of rewards accumulated by an oracle policy having prior knowledge about the true rewards of the bandit arms and (ii) the amount of rewards accumulated by the bandit strategy you are evaluating. In other words, this measures the loss you incur due to not knowing what the optimal arm is.
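
One common way of writing this down: if $\mu^*$ is the expected reward of the best arm and $a_t$ is the arm pulled at round $t$, the expected cumulative regret after $T$ rounds is

$$R(T) = T\mu^* - \mathbb{E}\left[\sum_{t=1}^{T} \mu_{a_t}\right],$$

where $\mu_{a_t}$ is the expected reward of the arm selected at round $t$.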

Ideally, the regret should be logarithmic. In theory, all three algorithms achieve logarithmic regret. See [1] for the regret of decaying $\epsilon$-greedy and UCB and [2] for the regret of Thompson sampling.

Despite their similar regret bounds, these algorithms may behave differently in practice. For example, according to experiments in [3]:

It appears that Thompson sampling is more robust than UCB when the delay is long. Thompson sampling alleviates the influence of delayed feedback$^*$ by randomizing over actions; on the other hand, UCB is deterministic and suffers a larger regret in case of a sub-optimal choice.

For decaying $\epsilon$-greedy, the fact that it obtained the lowest rewards may be due to the fact that the action selected for exploration is chosen uniformly at random (even if it has a bad reward estimate). In addition, it will explore actions even if they have already been selected many times, unlike UCB which considers the number of action selections to refine the confidence bound.

That said, besides these aspects, there may be other factors that could influence the results obtained by a given bandit algorithm. These may include:

  • The reward distributions you chose to model your actions, and their characteristics.
  • The number of actions you have, and how far each action’s reward is from the optimal action’s reward.

$^*$ Delayed feedback occurs when the outcome of an action selection is not revealed immediately but after some time delay.

",34010,,-1,,6/17/2020 9:57,6/16/2020 8:34,,,,0,,,,CC BY-SA 4.0 21932,2,,21876,6/16/2020 8:57,,0,,"

I would say it is the nature of the data. Generally speaking, you are trying to predict a random sequence, especially if you use the historical data as an input and try to get the future value as an output.

",28041,,,,,6/16/2020 8:57,,,,1,,,,CC BY-SA 4.0 21935,2,,21927,6/16/2020 9:08,,1,,"

I would make the distribution of the classes in the test and validation sets the same as in the training set (and as in the whole original set). In any case, all your metrics are relative, not absolute, and are designed to provide reasonable results when classes are not ideally balanced.

",28041,,,,,6/16/2020 9:08,,,,0,,,,CC BY-SA 4.0 21936,2,,21929,6/16/2020 9:47,,5,,"

Reinforcement Learning can be explained by a few equations. However I assume that this is not what you are looking at since the explanation should be for someone having a non-STEM background. Not to say non-STEM folks are not able to understand math equations, but intuition comes easier with words and examples in my opinion.

Reinforcement Learning is about learning an optimal behavior by repeatedly executing actions, observing the feedback from the environment and adapting future actions based on that feedback.

Let's break down the last sentence by the concrete example of learning how to play chess:
Imagine you sit in front of a chess board, not knowing how to play. The optimal behavior you'd like to learn is which moves to execute in order to win the game. So you start learning the game by playing a few moves (actions) with some pieces, observing what is happening on the board (environment), and identifying which moves bring you closer to victory or give you a better position on the board (feedback). Therefore, in future games you will prefer moves which gave you a positive outcome in previous games.
Admittedly, this is a very slow process of learning if you don't have a teacher who helps you in the beginning, and you would have to play a lot of games until your first victory. But this is essentially how computers (and sometimes humans, in some sense) learn to do certain things by reinforcement learning. Behaviors which lead to positive experiences are collected, memorized and thus reinforced.

",37120,,2444,,6/17/2020 18:52,6/17/2020 18:52,,,,0,,,,CC BY-SA 4.0 21937,1,,,6/16/2020 10:27,,1,60,"

There is a question already about applying RL to ""large scale problems"", where large scale refers to the problem of a relatively small number of actions (that could be from a continous space) resulting in a very large number of states.

A good example of this kind of large-scale problem is modeling the motion of a boat as a point on a plane, with an action being a displacement vector $\mathbf{\delta}_b = (\delta_x, \delta_y)$, and there are infinitely many states, because the next state is given by the next position of the boat, in a circle surrounding the boat $\mathcal{B}(\mathbf{x}_b, \mathbf{\delta}_{b,max})$, where $\mathbf{x_b}$ is the boat's current position, and $\mathbf{\delta}_{b,max}$ the maximal possible displacement. So here, the displacement as an action (move the boat) is from an infinite space because it is a 2D vector ($\mathbf{\delta}_b \in \mathbb{R}^2$), and so is the state space $\mathcal{B}$. Still, I just have two actions to apply to the boat in the end: move in the x-direction this much, and in the y-direction that much.

What I mean is something even larger. Considering the example of a boat, is it possible to apply reinforcement learning to a system that has 100 000 such boats, and what would be the methods to look into to accomplish this? I do not mean having 100 000 agents. The agent in this scenario observes 100 000 boats; they are its environment. Let's say the agent is distributing them in a current on the sea in such a way that they have the least amount of resistance in the water (the wake of one ship influences the resistance of its downstream neighbors).

From this answer and from what I have read so far, I believe an approximation will be necessary for the displacements in $2D$ space $\mathbf{\delta}(x,y)$ as well as for the states and rewards, because there are so many of them. However, before digging into this, I would like to know if there are some references out there where something like this has already been tried, or if this is simply something where RL cannot be applied.

",37627,,37627,,6/17/2020 7:50,6/17/2020 7:50,Can reinforcement learning algorithms be applied on problems involving a very large number of possible actions?,,0,2,,,,CC BY-SA 4.0 21938,1,21944,,6/16/2020 12:47,,2,116,"

I would love to know in detail how exactly GPUs help, in technical terms, in training deep learning models.

To my understanding, GPUs help by performing independent tasks simultaneously to improve the speed. For example, in the calculation of the output of a CNN, all the additions are done simultaneously, which improves the speed.

But what exactly happens in a basic neural network, or in a more complex LSTM-type model, with regard to the GPU?

",37180,,37180,,6/16/2020 13:57,1/3/2022 18:20,How do GPUs faciliate the training of a Deep Learning Architecture?,,1,0,,,,CC BY-SA 4.0 21944,2,,21938,6/16/2020 13:46,,3,,"

GPUs are able to execute a huge amount of similar and simple instructions (floating point operations like addition and multiplication) in parallel, in contrast to a CPU, which is able to execute a few complex tasks sequentially very quickly. Therefore GPUs are very good at doing vector & matrix operations.

If you look at the operations performed inside a single basic NN layer you will see that most of the operations are matrix-vector multiplications:

$$x_{i+1} = \sigma(W_ix_i + b_i)$$

where $x_i$ is the input vector, $W_i$ the matrix of weights, $b_i$ the bias vector in the $i^{th}$ layer, $x_{i+1}$ the output vector and $\sigma(\cdot)$ the elementwise non-linear activation. The computational complexity here is governed by the matrix-vector multiplication. If you look at the architecture of an LSTM cell you will notice that inside of it are multiple such operations.

Being able to execute these matrix-vector operations quickly and efficiently in parallel reduces execution time, and this is where GPUs outperform CPUs.
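
A quick way to see this in practice, assuming PyTorch is installed and (optionally) a CUDA GPU is available; the matrix size and number of repetitions are arbitrary:

    import time
    import torch

    n = 4096
    W = torch.randn(n, n)   # weight matrix
    x = torch.randn(n, 1)   # input vector

    # Time repeated matrix-vector products on the CPU
    t0 = time.time()
    for _ in range(100):
        y = W @ x
    cpu_time = time.time() - t0

    if torch.cuda.is_available():
        W_gpu, x_gpu = W.cuda(), x.cuda()
        torch.cuda.synchronize()
        t0 = time.time()
        for _ in range(100):
            y_gpu = W_gpu @ x_gpu
        torch.cuda.synchronize()   # wait for all GPU work before stopping the timer
        gpu_time = time.time() - t0
        print(f'CPU: {cpu_time:.4f}s  GPU: {gpu_time:.4f}s')
    else:
        print(f'CPU: {cpu_time:.4f}s (no CUDA device found)')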

",37120,,37120,,6/16/2020 14:04,6/16/2020 14:04,,,,5,,,,CC BY-SA 4.0 21945,2,,21883,6/16/2020 13:49,,0,,"

Based on the citations in the ConvLSTM paper I have come to the conclusion that they mean the Peephole LSTM when they say fully connected LSTM. In the paper that they have taken the encoder-decoder-predictor model from, where they refer to a fully connected LSTM, a Peephole LSTM is used. Also they take their fully connected LSTM definition from this paper, which again uses the Peephole LSTM.

With that, the difference would be the added ""peephole connections"", which let the gate layers look at the cell state and access the constant error carousel.

",37883,,,,,6/16/2020 13:49,,,,0,,,,CC BY-SA 4.0 21946,1,21971,,6/16/2020 14:15,,0,2613,"

I am applying spaCy lemmatization on my dataset, but 20-30 minutes have already passed and the code is still running.

Is there any way to make it faster? Is there any option to do this process using a GPU?

My dataset has 20k rows & 3 columns.

",27505,,27505,,6/17/2020 6:12,6/17/2020 15:37,How to make spacy lemmatization process fast?,,1,1,,12/30/2021 11:21,,CC BY-SA 4.0 21947,1,21948,,6/16/2020 14:39,,1,129,"

During the first episode, it's 100% exploration, because all our Q values are 0. Suppose we have 1000 time steps, and the episode terminates when a reward is reached. So, after the first episode, why can't we make it 100% exploitation? Why do we still need exploration?

",37831,,2444,,6/16/2020 16:00,6/17/2020 16:04,Why can't we fully exploit the environment after the first episode in Q-learning?,,2,0,,,,CC BY-SA 4.0 21948,2,,21947,6/16/2020 15:11,,1,,"

You can't guarantee that you have taken every action from every state, even with 1000 time steps. There would be multiple outcomes:

  1. The episode terminates, either by success or failure, before the 1000 time steps. The agent is trying to maximise reward; if this is achieved by taking fewer than 1000 steps, then it will do so. It won't just walk around until it hits an arbitrary number of time steps.

  2. If you have more states than time steps, then you will never be able to visit all states, and so you cannot guarantee that the policy you followed was optimal (and hence you would still want to explore). Even if you have #states = #time-steps, then you will almost certainly have more state-action pairs than time steps. The only time these would be equal is if from every state there is only one action, which would be a trivial problem that wouldn't need RL to solve.

",36082,,,,,6/16/2020 15:11,,,,6,,,,CC BY-SA 4.0 21950,2,,21929,6/16/2020 17:10,,5,,"

The famous book Reinforcement learning: an introduction by Sutton and Barto provides an intuitive description of reinforcement learning (that everyone is possibly able to understand).

Reinforcement learning is learning what to do — how to map situations to actions — so as to maximize a numerical reward signal. The learner is not told which actions to take, but instead must discover which actions yield the most reward by trying them.

In the most interesting and challenging cases, actions may affect not only the immediate reward but also the next situation and, through that, all subsequent rewards. These two characteristics — trial-and-error search and delayed reward — are the two most important distinguishing features of reinforcement learning.

In chapter 3, the book also introduces the agent-environment interface, which summarises the cyclic interaction between the agent (aka policy) and the environment (which represents the task/problem that you need to solve).

Every RL algorithm implements a cyclic interaction between an agent and an environment (as illustrated above), where, on each time step $t$, the agent takes an action $A_t$, the environment emits a reward $R_{t+1}$, and the agent and the environment move from state $S_t$ to the state $S_{t+1}$. This interaction continues until some termination criterion is met (for example, the agent dies). While this interaction occurs, the agent is supposed to reinforce the actions that lead to better outcomes (i.e. higher reward).

",2444,,-1,,6/17/2020 9:57,6/16/2020 17:22,,,,0,,,,CC BY-SA 4.0 21952,2,,21929,6/16/2020 18:03,,5,,"

Humans are set loose in the world and go about their days doing stuff.

Whenever they do specific things, their brain sends them good signals (endorphins, joy, etc.) or bad signals (pain, sadness, etc.). They learn through these signals which things they should be doing and which things they shouldn't be doing.

Sometimes the signal is immediate and you know exactly what you're being ""rewarded"" or ""punished"" for (e.g. touch a hot stove and it hurts). Sometimes it takes a bit longer and there could be many possible reasons for the brain signal (even a combination of reasons), but you can hopefully figure out what caused it after it happens a few times (e.g. getting a stomach ache a few hours after eating a specific food).

That's basically what Reinforcement Learning is.

",17490,,,,,6/16/2020 18:03,,,,1,,,,CC BY-SA 4.0 21955,1,21956,,6/16/2020 19:32,,1,43,"

In a classic GridWorld Environment where the possible actions of an agent are (Up, Down, Left, Right), can another potential output of Action be ""x amount of steps"" where the agent takes 2,3,.. steps in the direction (U,D,L,R) that it chooses? If so, how would one go about doing it?

",37947,,37947,,6/16/2020 21:10,6/16/2020 21:24,Additional (Potential) Action for Agent in MazeGrid Environment (Reinforcement Learning),,1,0,,,,CC BY-SA 4.0 21956,2,,21955,6/16/2020 20:43,,0,,"

You can definitely define an environment that accepts more types of action, including actions that take multiple steps in a direction.

The first thing you would need to do is implement support for that action in the environment. That is not really a reinforcement learning issue, but more like implementing the rules of a board game. You will need to decide things such as what happens if the move would be blocked - does the move succeed up to the point of being blocked, does it fail completely, is the reward lower depending on how much the agent tries to overshoot, etc.

After you do that, you will want to write an agent that can choose the new actions. You have a few choices here:

  • Simplest would be to enumerate all the choices separately and continue to use the same kind of agent as you already have. So instead of $\{U, D, L, R\}$ you might have $\{U1, U2, U3, D1, D2, D3, L1, L2, L3, R1, R2, R3\}$.

  • If you want to take advantage of generalisation between similar actions (e.g. that action $U3$ is similar to $U2$ and also to $R3$), then you can use some form of coding for the action, such as the relative x,y movement that it is attempting. So you could express $U2$ as $(0,2)$ and $L3$ as $(-3,0)$. For that to then work with Q values, you cannot easily use a table. Instead, you would need to use function approximation, for example a neural network, so you can implement $q(s,a)$ as a parametric function - combining $s,a$ into the input vector, and learn the parameters so that the neural network outputs the correct action value. This is what the Q learning variation DQN can do, as well as other similar RL algorithms that use neural networks.

Using a neural network, instead of tabular Q-learning, is not something you see often with grid world environments. It is a step up in complexity, but it is often required if state space or action space becomes large and might benefit from the generalisation possible from trainable function approximators.
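
As an illustration of the first (enumeration) option above, the extended action set can simply be a table mapping each named action to a $(dx, dy)$ displacement; everything below, including the rule of clipping the move at the boundary, is an arbitrary sketch rather than the behaviour of any particular grid-world implementation:

    # Enumerate 'direction + number of steps' as distinct discrete actions
    DIRECTIONS = {'U': (0, 1), 'D': (0, -1), 'L': (-1, 0), 'R': (1, 0)}
    MAX_STEPS = 3

    ACTIONS = {
        f'{name}{k}': (dx * k, dy * k)
        for name, (dx, dy) in DIRECTIONS.items()
        for k in range(1, MAX_STEPS + 1)
    }
    # e.g. ACTIONS['U2'] == (0, 2) and ACTIONS['L3'] == (-3, 0)

    def apply_action(pos, action, width, height):
        # One possible environment rule: clip the move at the grid boundary
        dx, dy = ACTIONS[action]
        x = min(max(pos[0] + dx, 0), width - 1)
        y = min(max(pos[1] + dy, 0), height - 1)
        return (x, y)

    print(apply_action((0, 0), 'R3', width=5, height=5))  # (3, 0)
    print(apply_action((4, 4), 'U2', width=5, height=5))  # clipped to (4, 4)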

",1847,,1847,,6/16/2020 21:24,6/16/2020 21:24,,,,7,,,,CC BY-SA 4.0 21957,1,21967,,6/17/2020 4:20,,0,250,"

Given that this question has not yet been asked on this site, although similar questions have already been asked in the past (e.g. here or here), what is essentially a convolutional neural network (CNN)? Why are they heavily used in image recognition applications of machine learning?

",30725,,2444,,6/17/2020 12:58,6/17/2020 12:58,What is a convolutional neural network?,,1,2,,,,CC BY-SA 4.0 21958,2,,11949,6/17/2020 5:03,,0,,"

Information outside of the bounding box could still be useful as context. So as not to lose it, you can put the bounding box as a pixel mask into an additional ""pseudo-color"" layer. That way you can also have many bounding boxes without changing the input architecture. You will give the network additional info without losing anything, so the result shouldn't be worse, at least.

",22745,,,,,6/17/2020 5:03,,,,0,,,,CC BY-SA 4.0 21960,1,,,6/17/2020 7:07,,1,48,"

Adam is known as an algorithm that has an adaptive learning rate for each parameter. I believe this is due to the division by the term $$v_t = \beta_2 \cdot v_{t-1} + (1-\beta_2) \cdot g_t^2 $$ Hence, each weight will get updated differently based on the accumulated squared gradients in its respective dimension, even though $\alpha$ might be constant. There are other StackOverflow posts that have said that Adam has a built-in learning rate decay. In the original paper, the authors of Adam also say that the learning rate at time step $t$ decays based on the equation $$\alpha_t = \alpha \cdot \frac{\sqrt{1-\beta_2^t}}{{1-\beta_1^t}}$$

Is the second equation the learning rate decay that has been built into the Adam algorithm?

",32780,,2444,,6/17/2020 9:28,6/17/2020 9:28,What is the equation of the learning rate decay in the Adam optimiser?,,0,0,,,,CC BY-SA 4.0 21961,1,21962,,6/17/2020 7:11,,3,430,"

This is exercise 3.18 in Sutton and Barto's book.

The task is to express $v_\pi(s)$ using $q_\pi(s,a)$.

Looking at the diagram above, the value of $q_\pi(s,a)$ at $s$, for each action $a \in A$ that we take, gives us the value at $s$ of taking the action $a$ and then following the policy $\pi$.

This is probably wrong, but if

$$v_\pi(s) = E_\pi[G_t | S_t = s]$$

and

$$q_\pi(s,a) = E_\pi[G_t | S_t = s, A_t = a]$$

isn't then $v_\pi(s)$ just the expected action value function at $s$ over all actions $a$ that are given by the policy $\pi$, namely

$$v_\pi(s) = E_{a \sim \pi}[q_\pi(s,a) | S_t = s, A_t = a] = \sum_{a \in A}\pi(a|s) q_\pi(s,a)$$?

",37627,,2444,,12/21/2021 11:18,12/21/2021 11:18,"How to express $v_\pi(s)$ in terms of $q_\pi(s,a)$?",,1,0,,,,CC BY-SA 4.0 21962,2,,21961,6/17/2020 7:37,,4,,"

isn't then $v_\pi(s)$ just the expected action value function at $s$ over all actions $a$ that are given by the policy $\pi$, namely

$v_\pi(s) = E_{a \sim \pi}[q_\pi(s,a) | S_t = s, A_t = a] = \sum_{a \in A}\pi(a|s) q_\pi(s,a)$?

Yes this is 100% correct.

There is no "trick" to this or deeper thought needed. You have correctly isolated the key part of the MDP description that controls relationship between $v_{\pi}$ and $q_{\pi}$ in that direction.

Note that for a deterministic policy $\pi: \mathcal{S} \rightarrow \mathcal{A}$, the relationship is

$$v_\pi(s) = q_\pi(s, \pi(s))$$

The related exercise in the book - expressing $q_{\pi}$ in terms of $v_{\pi}$ and the MDP characteristics - is more complex because it involves a time step.

",1847,,2444,,6/17/2020 9:29,6/17/2020 9:29,,,,1,,,,CC BY-SA 4.0 21963,1,21964,,6/17/2020 10:04,,1,260,"

When deriving the Bellman equation for $q_\pi(s,a)$, we have

$q_\pi(s,a) = E_\pi[G_t | S_t = s, A_t = a] = E_\pi[R_{t+1} + \gamma G_{t+1} | S_t = s, A_t = a]$ (1)

This is what is confusing me: at this point, for the Bellman equation for $q_\pi(s,a)$, we write $G_{t+1}$ as an expected value, conditioned on $s'$ and $a'$, of the action value function at $s'$; otherwise, there is no recursion with respect to $q_\pi(s,a)$, and therefore no Bellman equation. Namely,

$ = \sum_{a \in A} \pi(a |s) \sum_{s' \in S} \sum_{r \in R} p(s',r|s,a)(r + \gamma E_\pi[G_{t+1}|S_{t+1} = s', A_{t+1} = a'])$ (2)

which introduces the recursion of $q$,

$ = \sum_{a \in A} \pi(a |s) \sum_{s' \in S} \sum_{r \in R} p(s',r|s,a)(r + \gamma q_\pi(s',a'))$ (3)

which should be the Bellman equation for $q_\pi(s,a)$, right?

On the other hand, when connecting $q_\pi(s,a)$ with $v_\pi(s')$, in this answer, I believe this is done

$q_\pi(s,a) = \sum_{a\in A} \pi(a |s) \sum_{s' \in S}\sum_{r \in R} p(s',r|s,a)(r + \gamma E_{\pi}[G_{t+1} | S_{t+1} = s'])$ (4)

$q_\pi(s,a) = \sum_{a\in A} \pi(a |s) \sum_{s' \in S}\sum_{r \in R} p(s',r|s,a)(r + \gamma v_\pi(s'))$ (5)

Is the difference between using the expectation $E_{\pi}[G_{t+1} | S_{t+1} = s', A_{t+1} = a']$ in (3) and the expectation $E_{\pi}[G_{t+1} | S_{t+1} = s']$ in $(4)$ simply the difference in how we choose to express the expected return $G_{t+1}$ at $s'$ in the definition of $q_\pi(s,a)$?

In $3$, we express the total return at $s'$ using the action value function

leading to the recursion and the Bellman equation, and in $4$, the total return is expressed at $s'$ using the value function

leading to $q_\pi(s,a) = q_\pi(s,a,v_\pi(s'))$?

",37627,,2444,,10/4/2020 10:54,10/4/2020 10:54,"Connection between the Bellman equation for the action value function $q_\pi(s,a)$ and expressing $q_\pi(s,a) = q_\pi(s, a,v_\pi(s'))$",,1,0,,,,CC BY-SA 4.0 21964,2,,21963,6/17/2020 10:28,,3,,"

Your understanding of the Bellman equation is not quite right. The state-action value function is defined as the expected (discounted) return when taking action $a$ in state $s$. Now, in your equation (2) you have conditioned on taking action $a'$ in the inner expectation. This is not what happens in the state-action value function: you do not condition on knowing $A_{t+1}$; it is chosen according to the policy $\pi$, as per the definition of a Bellman equation.

If you want to see a 'recursion' between state action value functions, note that

$$v_\pi(s) = \sum_a \pi(a|s)q_\pi(s,a)\;,$$

Your equation (5) is incorrect -- you need to drop the outter sum over $a$ as we have conditioned on knowing $a$. I will drop the $\pi$ subscripts for ease on notation, and we can see a 'recursion' for state-action value functions as:

$$q(s,a) = \sum_{s',r}p(s',r|s,a)\left(r + \gamma \left[\sum_{a'} \pi(a'|s')q(s',a')\right]\right)\;.$$

",36821,,36821,,6/17/2020 11:43,6/17/2020 11:43,,,,0,,,,CC BY-SA 4.0 21967,2,,21957,6/17/2020 12:28,,2,,"

(Of course, similar questions have been asked in the past and there are many sites, papers, and video lessons online that explain how CNNs work, but I think it's still a good idea to have a reference answer that hopefully will give you the main ideas behind CNNs.)

A convolutional neural network (CNN) is a neural network that performs the convolution (or cross-correlation) operation typically followed by some downsampling (aka pooling) operations.

The convolution operation comes from the mathematical equivalent operation, which is an operation that takes as inputs two functions $f$ and $h$ and produces another function $g$ as output, which is often denoted as $f \circledast h = g$, where $\circledast$ is the convolution operation (or operator).

In image processing, the convolution is used to process images in multiples ways. For example, it is used to remove noise (e.g. the convolution of a noisy image with a Gaussian kernel produces a smoother image) or to compute derivatives of the image, which can then be used e.g. to detect edges or corners, which are usually the main features of an image. For example, the Harris corner detector makes use of the partial derivatives (in the $x$ and $y$ direction) of the image to find corners (or interest points) in the image.

In the context of CNNs, $f$ is the image (in the case of the first layer of the CNN) or a so-called feature map (in the case of hidden layers), $h$ is the kernel (aka filter) and $g$ is also a feature map. (In this answer, I explain these concepts, including how an image can be viewed as a function, more in detail, so I suggest that you read it).

Here's an illustrative example of how the convolution works.

where the $\color{blue}{\text{blue}}$ grid is the image, the $\color{gray}{\text{gray}}$ grid is the kernel and the $\color{green}{\text{green}}$ grid is the feature map (i.e. the output of the convolution between the image and the kernel). The white squares around the image represent the padding that is added so that the convolution operation produces a feature map of a specific dimension.

Essentially, the convolution is a series of dot products between the kernel and different patches (or parts) of the image. The convolution can even be represented as matrix multiplication, so the name convolution shouldn't scare you anymore, if you are familiar with dot products and matrix multiplications.
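
To make this concrete, here is a minimal sketch of the "dot product over patches" view (the 4x4 "image" and the 2x2 kernel below are just made-up toy values):

import numpy as np
from scipy.signal import correlate2d

image = np.array([[1., 2., 0., 1.],
                  [3., 1., 1., 0.],
                  [0., 2., 2., 1.],
                  [1., 0., 1., 3.]])
kernel = np.array([[1., 0.],
                   [0., -1.]])

# Each entry of the feature map is the dot product of the kernel with one 2x2 patch.
feature_map = correlate2d(image, kernel, mode="valid")
print(feature_map.shape)  # (3, 3): no padding, stride 1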

As opposed to many operations in image processing where the kernels are typically fixed, in the context of CNNs, the kernels are learnable parameters, i.e. they change depending on the loss function, so they are supposed to represent functions that when convolved with their respective input functions are useful to extract meaningful information (i.e. features) to solve the task the CNN is being trained to solve. For this reason, the convolution is often thought of as an operation that extracts features from images. In fact, the output of the convolution, in the context of CNNs, is often called feature map. Moreover, a CNN is typically thought of as a data-driven feature extractor for the same reason.

There are many different variations of the standard convolution operation. For example, there are transposed convolutions, dilated convolutions, etc., which are used to solve slightly different problems. For example, the dilated convolution can be used to increase the receptive field of an element of a feature map (which is typically a desirable property for several reasons). There are also upsampling operations which are particularly useful in the context of image segmentation. These different convolutions and their arithmetic are explained very well in the paper A guide to convolution arithmetic for deep learning by Vincent Dumoulin and Francesco Visin. Here's also the associated Github repository that contains all the images in this paper that show how the convolution operations work (from which I also took the gif above).

To conclude, CNNs are very useful to process images and extract features from them because they use convolution operations (and downsampling and upsampling operations). They can be used for image (or object) classification, object detection (i.e. object localization with a bounding box + object classification), image segmentation (including semantic segmentation and instance segmentation), and possibly many other tasks where you need to learn a function that takes as input images and needs to extract information from those images to get some high-level (but also low-level) output (e.g. the name of the object in the image), given some training data.

",2444,,2444,,6/17/2020 12:55,6/17/2020 12:55,,,,0,,,,CC BY-SA 4.0 21968,1,22044,,6/17/2020 14:07,,1,104,"

I think I'm misunderstanding the description of IDA* and want to clarify.

IDA* works as follows (quoting from Wiki):

At each iteration, perform a depth-first search, cutting off a branch when its total cost exceeds a given threshold. This threshold starts at the estimate of the cost at the initial state, and increases for each iteration of the algorithm. At each iteration, the threshold used for the next iteration is the minimum cost of all values that exceeded the current threshold.

Suppose that we have the following tree:

  • branching factor = 5
  • all costs are different

Say we have expanded 1000 nodes. We pick the lowest cost of the nodes that we 'touched' but didn't expand. Since all costs are unique, there is now only one more node which satisfies this new cost bound, and so we expand 1001 nodes, and 'touch' 5 new ones. We now pick the smallest of these weights, and starting from the root expand 1002 nodes, and so on and so forth, 1003, 1004...

I must be doing something wrong here, right? If not, the complexity is $n^2$, where $n$ is the number of nodes with cost smaller than the optimum, compared to $n$ for normal A*.

Someone pointing out my misunderstanding would be greatly appreciated.

",37970,,2444,,6/17/2020 14:16,6/20/2020 16:57,Doesn't the number of explored nodes with IDA* increase linearly?,,1,2,,,,CC BY-SA 4.0 21970,1,21977,,6/17/2020 15:12,,8,6999,"

Nowadays, computer vision (CV) has achieved great performance in many different areas. However, it is not clear what a CV algorithm is.

What are some examples of CV algorithms that are commonly used nowadays and have achieved state-of-the-art performance?

",30725,,30725,,6/18/2020 1:41,6/19/2020 8:03,What are the main algorithms used in computer vision?,,2,0,,,,CC BY-SA 4.0 21971,2,,21946,6/17/2020 15:37,,4,,"

https://spacy.io/api/lemmatizer just uses lookup tables and the only upstream task it relies on is POS tagging, so it should be relatively fast. For large amounts of text, SpaCy recommends using nlp.pipe, which can work in batches and has built-in support for multiprocessing (with the n_process keyword), rather than simply nlp.

Also, make sure you disable any pipeline elements that you don't plan to use, as they'll just waste processing time. If you're only doing lemmatization, you'll pass disable=["parser", "ner"] to the nlp.pipe call.

Example code that takes all of the above into account is below.

import spacy
nlp = spacy.load("en_core_web_sm")

docs = ["We've been running all day.", "Let's be better."]

for doc in nlp.pipe(docs, batch_size=32, n_process=3, disable=["parser", "ner"]):
    print([tok.lemma_ for tok in doc])

# ['-PRON-', 'have', 'be', 'run', 'all', 'day', '.']
# ['let', '-PRON-', 'be', 'well', '.']
",37972,,,,,6/17/2020 15:37,,,,0,,,,CC BY-SA 4.0 21972,1,21983,,6/17/2020 15:58,,10,7041,"

I am currently training some models using gradient accumulation since the model batches do not fit in GPU memory. Since I am using gradient accumulation, I had to tweak the training configuration a bit. There are two parameters that I tweaked: the batch size and the gradient accumulation steps. However, I am not sure about the effects of this modification, so I would like to fully understand what is the relationship between the gradient accumulation steps parameter and the batch size.

I know that when you accumulate the gradient you are just adding the gradient contributions for some steps before updating the weights. Normally, you would update the weights every time you compute the gradients (traditional approach):

$$w_{t+1} = w_t - \alpha \cdot \nabla_{w_t}loss$$

But when accumulating gradients you compute the gradients several times before updating the weights (being $N$ the number of gradient accumulation steps):

$$w_{t+1} = w_t - \alpha \cdot \sum_{i=0}^{N-1} \nabla_{w_t}loss_i$$

My question is: What is the relationship between the batch size $B$ and the gradient accumulation steps $N$?

For example: are the following configurations equivalent?

  • $B=8, N=1$: No gradient accumulation (accumulating every step), batch size of 8 since it fits in memory.
  • $B=2, N=4$: Gradient accumulation (accumulating every 4 steps), reduced batch size to 2 so it fits in memory.

My intuition is that they are but I am not sure. I am not sure either if I would have to modify the learning rate $\alpha$.

",26882,,2444,,12/12/2021 13:30,12/12/2021 13:30,What is the relationship between gradient accumulation and batch size?,,1,0,,,,CC BY-SA 4.0 21973,2,,21947,6/17/2020 16:04,,2,,"

BlueTurtle's answer is good, but I'd like to add something.

Your question realistically has nothing to do with Q-learning; in fact, you can ask the same thing about just about any RL algorithm. Even in multi-armed bandits, you can easily see why your proposed method is suboptimal (please don't interpret this as a criticism, because I think your question is a very natural one). My suggestion to you is to read up on multi-armed bandits since they're much simpler to analyze. I think even the Sutton and Barto book deals with your proposed method explicitly, and mathematically proves that other strategies are better.

",37829,,,,,6/17/2020 16:04,,,,0,,,,CC BY-SA 4.0 21974,2,,21928,6/17/2020 16:36,,2,,"

My answer assumes your fine-tuning architecture simply stacks a single fully-connected layer on top of the BERT [CLS] output, as in Figure 4b of the BERT paper.

Generally, when working with mixed data such as continuous and categorical features, the first step is to simply concatenate all the inputs into one long vector. In your case, you would concatenate a one-hot encoding of speaker ID to the BERT [CLS] output for each example. If you're using tensorflow, you'll need to use the functional API to create a multiple input model, as I outline below.

import tensorflow.keras as keras

# bert_input is the wordpiece-tokenized input text (itself a keras.Input, or a dict of
# input_ids / attention_mask tensors, depending on the BERT implementation you use).
# BERT is the BERT model, such as a tensorflow_hub.KerasLayer with a pretrained model from tfhub.dev
bert_out = BERT(bert_input)

# spkr_input is the one-hot speaker ID encoding.
spkr_input = keras.Input(shape=(num_speakers,), name='spkr_input')
dense_input = keras.layers.concatenate([bert_out, spkr_input])
scores = keras.layers.Dense(num_classes, activation='softmax')(dense_input)
model = keras.Model(inputs=[bert_input, spkr_input], outputs=[scores])

You could also try feeding the one-hot encoded speaker ID vectors through a separate fully-connected network first to obtain a continuous representation (i.e. a speaker embedding) and then concatenate that to the BERT [CLS] output and feed the result into your classification layer. Modifying my example above,

spkr_emb = keras.layers.Dense(spkr_emb_size, activation="relu")(spkr_input)
dense_input = keras.layers.concatenate([bert_out, spkr_emb])

You can find a more detailed guide to mixed input models with Keras at https://www.pyimagesearch.com/2019/02/04/keras-multiple-inputs-and-mixed-data/

",37972,,,,,6/17/2020 16:36,,,,1,,,,CC BY-SA 4.0 21975,1,22052,,6/17/2020 16:39,,0,316,"

I'm in the middle of a project in which I want to generate a TV series script (characters answering to each other, scene by scene) using SOTA models, and I need some guidance to simplify my architecture.

My current intuition is as follows: for a given character C1, I have pairs of sentences from the original scripts where C1 answers other characters, for example, C2 (C2->C1). These are used to fine-tune a data-driven chatbot. At inference time, the different chatbots simply answer each other, and, hopefully, the conversation will make some sense.

This is, however, impractical and will be kind of a mess with many characters, especially if I use heavy models.

Is there an architecture out there that could be used for conversational purposes, which could be trained only once with the whole dataset while separating the different characters?

I'm open to any ideas!

",37974,,2444,,6/20/2020 9:36,6/21/2020 7:54,How to combine several chatbots into one?,,2,0,,,,CC BY-SA 4.0 21976,1,,,6/17/2020 20:18,,1,183,"

I'm working on a project in which I'm trying to solve a problem with reinforcement learning, and I have serious issues with shaping the reward function.

The problem is designing a device with maximum efficiency. So we simulated the problem as follows. There is a 4x4 grid (we defined a 4x4 matrix) and the elements of this matrix can be either 0 or 1 (value 0 means "air" and 1 means a certain material in reality), so there are $2^{16}$ possible configurations for this matrix. Our agent starts from the top left corner of this matrix and has 5 possible actions: move up, down, left, right and flip (which means flipping a 0 to 1 or vice versa). Based on the flipping action, we get a new configuration, and each configuration has an efficiency (which is calculated by Maxwell's equations in the background).

Our goal is to find the best configuration so that the efficiency of the device is maximum.

So far we have tried many reward functions and none of them seemed to work at all! I will mention some of them:

  1.  reward = current_efficiency - previous_efficiency
      (the efficiency is calculated at each time step)

  2.  if current_efficiency > previous_efficiency:
          reward = current_efficiency
          previous_efficiency = current_efficiency

  3.  diff = current_efficiency - previous_efficiency
      if diff > 0:
          reward = 1
      else:
          reward = -2


and some other variations. Nothing is working for our problem and the agent doesn't learn at all! So far, we have used different approaches to DQN and also the A2C method, and so far we have had no positive results. We tried different definitions of states as well, but we don't think that is the problem.

So, can somebody maybe help me with this? It would be a huge help!

",37979,,2444,,10/7/2020 16:52,10/7/2020 16:52,Designing a reward function for my reinforcement learning problem,,0,4,,,,CC BY-SA 4.0 21977,2,,21970,6/17/2020 22:12,,7,,"

There are many computer vision (CV) algorithms and models that are used for different purposes. So, of course, I cannot list all of them, but I can enumerate some of them based on my experience and knowledge. Of course, this answer will only give you a flavor of the type of algorithm or model that you will find while solving CV tasks.

For example, there are algorithms that are used to extract keypoints and descriptors (which are often collectively called features, although the descriptor is the actual feature vector and the keypoint is the actual feature, and in deep learning this distinction between keypoints and descriptors does not even exist, AFAIK) from images, i.e. feature extraction algorithms, such as SIFT, BRISK, FREAK, SURF or ORB. There are also edge and corner detectors. For example, the Harris corner detector is a very famous corner detector.
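
As a quick illustration, here is a minimal OpenCV sketch of one of these classical detectors (Harris corners); the file name and the thresholds are just placeholders:

import cv2
import numpy as np

gray = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
response = cv2.cornerHarris(gray, 2, 3, 0.04)             # corner response map
corners = np.argwhere(response > 0.01 * response.max())   # (row, col) of strong corners
print(len(corners), "corners detected")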

Nowadays, convolutional neural networks (CNNs) have basically supplanted all these algorithms in many cases, especially when enough data is available. Rather than extracting the typical features from an image (such as corners), CNNs extract features that are most useful to solve the task that you want to solve by taking into account the information in the training data (which probably includes corners too!). Hence CNNs are often called data-driven feature extractors. There are different types of CNNs. For example, CNNs that were designed for semantic segmentation (which is a CV task/problem), such as the u-net, or CNNs that were designed for instance segmentation, such as mask R-CNN.

There are also algorithms that can be used to normalize features, such as the bag-of-features algorithm, which can be used to create fixed-size feature vectors. This can be particularly useful for tasks like content-based image retrieval.

There are many other algorithms that could be considered CV algorithms or are used to solve CV tasks. For example, RANSAC, which is a very general algorithm to fit models to data in the presence of outliers, can be used to fit homographies (matrices that are generally used to transform planes to other planes) that transform pixels of one image to the coordinate system of another image. This can be useful for the purpose of template matching (which is another CV task), where you want to find a template image in another target image. This is very similar to object detection.

There are also many image processing algorithms and techniques that are heavily used in computer vision. For example, all the filters (such as Gaussian, median, bilateral, non-local means, etc.) that can be used to smooth, blur or de-noise images. Nowadays, some deep learning techniques have also replaced some of these filters and image processing techniques, such as de-noising auto-encoders.

All these algorithms and models have something in common: they are used to process images and/or get low- or high-level information from images. Most of them are typically used to extract features (i.e. regions of the images that are relevant in some way) from images, so that they can later be used to train a classifier or regressor to perform some kind of task (e.g. find and distinguish the objects, such as people, cars, dogs, etc., in an image). The classifiers/regressors are typically machine learning (ML) models, such as SVMs or fully-connected neural networks, but there's a high degree of overlap between CV and ML because some ML tools are used to solve CV tasks (e.g. image classification).

",2444,,2444,,6/18/2020 9:19,6/18/2020 9:19,,,,1,,,,CC BY-SA 4.0 21978,2,,21926,6/17/2020 22:32,,0,,"

I do not see how you came to choose prices as actions. Normally, actions are something like go left, go right, jump, stay etc. Analogously, I would say that in your case the actions are visiting a certain location, whereas locations are what you referred to as states. I'd go for something like that:

locations = {location1=(c1,cr1), location2=(c1,cr2), ...}
Actions = {Visit location1, Visit location2, ...}
States = {--set of all the possible paths the model can generate until the budget is possibly exhausted--}

The reward function could then be a combination of both the acceptance (vs. rejection) of a route/path by the user and the inverse of the cost associated with the suggested path (because you want the model to favor cheap paths in order to keep your own business costs low). How you balance these two terms is something you will have to find out by experimentation.

For rapid prototyping, check out stable baselines, which offers a bunch of highly optimized RL algorithms.

",37982,,,,,6/17/2020 22:32,,,,0,,,,CC BY-SA 4.0 21979,1,,,6/17/2020 22:40,,1,67,"

I have developed an RPG in the likeness of the features showcased in the Final Fantasy series: multiple character classes which utilise unique action sets, sequential turn-based combat, front/back row modifiers, item usage, and a crisis mechanic which bears similarity to the limit break feature.

The problem is that the greater portion of my project depends on the use of some means of machine learning to, in some manner, act as an actor in the game environment. However, I do not know my options in the bare-bones environment of a command-line game; I am more familiar with the use of pixel data and a neural network for action selection on a frame-by-frame basis.

Could I use reinforcement learning to learn a policy for action selection in a custom environment, or should I apply a machine learning algorithm to character data (see the example outlined below) to determine the best action to use in a particular turn state?

+-------+--------+--------+---------+---------+------------+---------------------+------------+--------------+--------------------+--------------+--------------+
| Level | CurrHP | CurrMP | AtkStat | MagStat | StatusList | TargetsInEnemyParty | IsInCrisis | TurnNumber   | BattleParams       |ActionOutput  | DamageOutput |
+-------+--------+--------+---------+---------+------------+---------------------+------------+--------------+--------------------+--------------+--------------+
|    65 |   6500 |    320 |      47 |      56 |          0 | Behemoth            |0           | 7            | None               |ThiefAttack   |4254          |
|    92 |   8000 |    250 |      65 |      32 |          0 | Behemoth            |1           | 4            | None               |WarriorLimit  |6984          |
+-------+--------+--------+---------+---------+------------+---------------------+------------+--------------+--------------------+--------------+--------------+

I would like to prioritise the ease of implementation of an algorithm over how optimal the potential algorithm could be; I just need a baseline to work towards. Many thanks.

",37983,,,,,6/17/2020 22:40,"What would be the good choice of algorithm to use for character action selection in an RPG, implemented in Python?",,0,6,,,,CC BY-SA 4.0 21981,2,,16824,6/18/2020 0:49,,0,,"

In Convolutional Neural Networks (CNNs) you have small kernels (or filters) that you slide over an input (e.g. image). The value resulting from the convolution of the filter with a subset of the image over which the filter is currently positioned is then put into its respective cell in the output of that layer. Essentially, training CNNs boils down to training small filters, for example for detecting edges or corners etc. in input data, which most frequently happens to be images indeed. The assumption here is that features can be detected locally in the input volume, which entails that the nature of the input data shall be coherent over the entire volume of input data.

Recurrent Neural Networks (RNNs) do not work locally, but are applied to sequences of arbitrary input data, where one input node may receive sensor readings, while the next node receives the date on which the sensor reading was measured. You feed a sequence of such arbitrary data through the RNN, which keeps in memory its internal state from processing the previous instance/sample in the sequence while processing the next data point/sample. Depending on the kind of recurrent cell type that is employed to construct an RNN layer, the memory of the previous internal state then affects the computation of the next internal state and/or the output computed when working on the next data sample. So, information from past data points/samples is carried forward while iterating through a sequence.

In short, CNNs are meant to detect local features in volume data, while RNNs preserve information over their previous internal state while processing the next data sample.

Probably one of the best online resources walking you through all the related concepts step by step is the following lecture series offered by Stanford University.

",37982,,37982,,6/18/2020 0:55,6/18/2020 0:55,,,,0,,,,CC BY-SA 4.0 21982,1,22447,,6/18/2020 1:18,,3,146,"

I am trying to understand Integrated Gradients, but have difficulty in understanding the authors' claim (in section 3, page 3):

For most deep networks, it is possible to choose a baseline such that the prediction at the baseline is near zero ($F(x') \approx 0$). (For image models, the black image baseline indeed satisfies this property.)

They are talking about a function $F : R^n \rightarrow [0, 1]$ (in 2nd paragraph of section 3), and if you consider a deep learning classification model, the final layer would be a softmax layer. Then, I suspect for image models, the prediction at the baseline should be close to $1/k$, where $k$ is the number of categories. For CIFAR10 and MNIST, this would equal to $1/10$, which is not very close to $0$. I have a binary classification model on which I am interested in applying the Integrated Gradients algorithm. Can the baseline output of $0.5$ be a problem?

Another related question is, why did they choose a black image as the baseline in the first place? The parameters in image classification models (in a convolution layer) are typically initialized around $0$, and the input is also normalized. Therefore, image classification models do not really care about the sign of inputs. I mean we could multiply all the training and test inputs with $-1$, and the model would learn the task equivalently. I guess I can find other neutral images other than a black one. I suppose we could choose a white image as the baseline, or maybe the baseline should be all zero after normalization?

",37987,,37987,,6/22/2020 4:29,7/10/2020 18:48,"Why should the baseline's prediction be near zero, according to the Integrated Gradients paper?",,1,2,,,,CC BY-SA 4.0 21983,2,,21972,6/18/2020 1:48,,8,,"

There isn't any explicit relation between the batch size and the gradient accumulation steps, except for the fact that gradient accumulation helps one to fit models with relatively larger batch sizes (typically in single-GPU setups) by cleverly avoiding memory issues. The core idea of gradient accumulation is to perform multiple backward passes using the same model parameters before updating them all at once for multiple batches. This is unlike the conventional manner, where the model parameters are updated once every batch-size number of samples. Therefore, finding the correct batch size and accumulation steps is a design trade-off that has to be made based on two things: (i) how much of an increase in the batch size the GPU can handle, and (ii) whether gradient accumulation results in at least as good performance as training without it.

As for your example configurations, they are the same in theory. But there are a few important caveats that need to be addressed before proceeding with this intuition.

  1. Using Batch Normalization with gradient accumulation generally does not work well, simply because BatchNorm statistics cannot be accumulated. A better solution would be to use Group Normalization instead of BatchNorm.
  2. When performing a combined update in gradient accumulation, it must be ensured that the gradients are not zeroed out (i.e. optimizer.zero_grad() is not called) after every backward pass (i.e. loss.backward()), but only after each weight update. It is easy to include both statements in the same for loop while training, which defeats the purpose of gradient accumulation (see the sketch just below).
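
Here is a minimal PyTorch-style sketch of the correct accumulation pattern (it assumes that a model, an optimizer, a loss_fn and a dataloader yielding (x, y) batches already exist):

accum_steps = 4                                    # N: number of gradient accumulation steps

optimizer.zero_grad()                              # zero the gradients once per accumulation cycle
for step, (x, y) in enumerate(dataloader):
    loss = loss_fn(model(x), y) / accum_steps      # scale so the sum matches one big batch
    loss.backward()                                # gradients accumulate in the .grad buffers
    if (step + 1) % accum_steps == 0:
        optimizer.step()                           # weight update every N mini-batches
        optimizer.zero_grad()                      # reset the accumulated gradients only now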

Here are some interesting resources to find out more about it detail.

  1. Thomas Wolf's article on different ways of combating memory issues.
  2. Kaggle discussions on the effect on learning rate: here and here.
  3. A comprehensive answer on gradient accumulation in PyTorch.
",36971,,,,,6/18/2020 1:48,,,,2,,,,CC BY-SA 4.0 21984,1,21998,,6/18/2020 3:14,,3,3265,"

In a deep Q-learning algorithm, we perform a batch training step every train_freq steps and we update the parameters of the target network every target_update_freq steps. Are train_freq and target_update_freq necessarily related, e.g., should one always be greater than the other, or must they be independently optimized depending on the problem?

EDIT Changed the name of batch_freq to train_freq.

",37642,,37642,,6/18/2020 14:04,6/18/2020 16:25,"In Deep Q-learning, are the target update frequency and the batch training frequency related?",,1,3,,,,CC BY-SA 4.0 21988,2,,21975,6/18/2020 7:33,,0,,"

Define features that describe C1 and C2 (like their ids) and add these features to inputs of your model. Thus your only model will be able to generate the next line for any pair of (C1, C2).

",28041,,,,,6/18/2020 7:33,,,,0,,,,CC BY-SA 4.0 21989,1,,,6/18/2020 7:56,,0,79,"

I am doing a human activity recognition project. I have a total of 12 classes. The class distribution looks like this:

If you look carefully, you can see that I have no data points for class 11 and class 8. Also, the dataset is highly imbalanced. So, I took the minimum number of data points (in this case 2028) for all of the classes. Now my balanced data looks like this:

After doing this, it looks like balanced data. But still, I think it is not, because I have zero data points for class 11 and class 8. In my opinion, the classes are still imbalanced.

I am using a CNN model to solve this activity recognition problem. My model summary is as follows:

The main problem is, my model starts overfitting heavily when I train it.

Is it due to my imbalanced data (classes 8 and 11 have zero data points) or something else?

$\textbf{Hyperparameters:}$

$\textbf{features:}$ X, Y, Z of mobile accelerometer

$\textbf{frame size:}$ 80

$\textbf{optimizer:}$ Adam, $\textbf{Learning rate:}$ 0.001

$\textbf{Loss:}$ Sparse categorical cross-entropy

",28048,,,,,6/18/2020 8:23,Can imbalance data create overfitting?,,1,0,,,,CC BY-SA 4.0 21991,2,,21989,6/18/2020 8:23,,1,,"

Your zero-count classes simply do not exist for the model: there is no information about them in the training and test sets. I think the reason for overfitting is that you have a very small training set compared to the number of parameters in your model. If it is easier for a model to memorize all training entries, it will just do that. You need to prune your model to force it to make some generalizations about the data.

",28041,,,,,6/18/2020 8:23,,,,2,,,,CC BY-SA 4.0 21992,1,21996,,6/18/2020 9:29,,7,992,"

I want to know if there is any metric to use for measuring the sample efficiency of a reinforcement learning algorithm. From reading research papers, I see claims that proposed models are more sample-efficient, but how does one reach this conclusion when comparing reinforcement learning algorithms?

",30174,,,,,6/18/2020 15:25,How to measure sample efficiency of a reinforcement learning algorithm?,,1,0,,,,CC BY-SA 4.0 21993,1,,,6/18/2020 10:47,,0,34,"

How do I determine the points in a video sequence where the images are steady enough for a photo? For example, I want to take maybe 20 photos for a facial recognition dataset. Instead of asking the subject to hold still 20 times, I take a video of them and filter the frames for 20 good photos. The problem is that the subject's face has to move into different poses and facial expressions, and many times the movements blur the image.

Thanks in advance.

",13058,,13058,,6/19/2020 0:39,6/19/2020 0:39,How to determine when the image is steady enough in a video sequence to take photos?,,0,2,,,,CC BY-SA 4.0 21994,1,22013,,6/18/2020 11:46,,1,71,"

For a kernel function, we have two conditions. One is that it should be symmetric, which is easy to understand intuitively because dot products are symmetric as well, and our kernel should also follow this. The other condition is given below.

There exists a map $\varphi: \mathbb{R}^d \rightarrow H$ called kernel feature map into some high dimensional feature space $H$ such that $\forall x, x' \in \mathbb{R}^d: k(x,x') = \langle \varphi(x), \varphi(x') \rangle$

I understand that this means that there should exist a feature map that will project the data from a low dimension to some high dimension $D$, and the kernel function will compute the dot product in that space.

For example, the Euclidean distance is given as

$d(x,y)=\sum_i(x_i-y_i)^2=\langle x,x \rangle + \langle y,y \rangle - 2\langle x,y \rangle$

If I look at this in terms of the second condition, how do we know that there doesn't exist any feature map for the Euclidean distance? What exactly are we looking for in feature maps, mathematically?

",38000,,37965,,6/18/2020 14:38,7/19/2020 8:02,How to understand mapping function of kernel?,,1,1,,,,CC BY-SA 4.0 21996,2,,21992,6/18/2020 12:54,,5,,"

First of all, let's recall some definitions.

A sample in the context of (deep) RL is a tuple $(s_t, a_t, r_t, s_{t+1})$ representing the information associated with a single interaction with the environment.

As for sample efficiency, it is defined as follows [1]:

Sample efficiency refers to the amount of data required for a learning system to attain any chosen target level of performance.

So, how you measure it is closely related to the way it is defined.

For example, one way to do it would be as shown in the figure below:

  • On the y-axis you have the performance of your RL algorithm (e.g. in terms of average return over episodes as done in [2], or mean total episode reward across different environment runs as done in [3])

  • On the x-axis you have the number of samples that you took.

  • The dashed line corresponds to your performance baseline (e.g. the performance at which a certain game or any other RL environment is considered solved).

So, you can measure sample efficiency at the intersection, where you get the number of samples needed to reach the performance baseline. So, an algorithm that requires less samples would be more sample-efficient.

Another way to do it would be the other way around, i.e. the RL agent is provided with a limited budget for the number of samples it could take. As a result, you can measure sample efficiency by measuring the area under the curve, as illustrated below. So, that would be how much performance you got just by using those samples in the budget. An algorithm that achieves a higher performance than another with the same amount of samples would then be more sample-efficient.

I am not aware if there exist RL libraries that would provide you with this measure out-of-the-box. However, if you're using Python for example, I believe that using libraries like scipy or scikit-learn along with matplotlib could do the job.
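
For example, a rough NumPy sketch of the fixed-budget comparison could look like this (the sample counts and returns below are made up, and would come from your own training logs):

import numpy as np

samples = np.array([1e3, 5e3, 1e4, 5e4, 1e5])    # environment samples used
returns_a = np.array([10., 35., 60., 85., 90.])  # mean return of algorithm A
returns_b = np.array([5., 20., 45., 80., 92.])   # mean return of algorithm B

auc_a = np.trapz(returns_a, samples)             # area under the learning curve
auc_b = np.trapz(returns_b, samples)
print("More sample-efficient (by AUC):", "A" if auc_a > auc_b else "B")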

NB: Image credits go to the following presentation: DLRLSS 2019 - Sample Efficient RL - Harm Van Seijen

",34010,,34010,,6/18/2020 15:25,6/18/2020 15:25,,,,0,,,,CC BY-SA 4.0 21997,1,22004,,6/18/2020 14:46,,0,97,"

Let's say an RL trading system places trades based on pricing data.

Each episode represents 1 hour of trading, and there are 24 hours of data available. The Q table represents, for a given state, which action has the highest utility.

The state is a sequence of prices, and the action is either buy, hold, or sell.

Instead of "Loop for each episode" as per the Sarsa algorithm :

I add an additional outer loop. Now instead of just looping for each episode we have:

for i in range(N):
    # "Loop for each episode" (the inner loop of the Sarsa algorithm)

Manually set N or exit out of the loop on convergence.

Is this the correct approach? Iterating multiple times over the episodes should produce more valuable state-action pairs in the Q table, because $\epsilon$-greedy is not deterministic, and on each iteration it may exploit actions to greater reward than in earlier passes over the episodes.

",12964,,12964,,6/18/2020 15:40,6/18/2020 18:48,Looping over Sarsa algorithm for better Q values,,1,7,,,,CC BY-SA 4.0 21998,2,,21984,6/18/2020 14:56,,4,,"

It is fairly common in DQN to train on a minibatch after every observation received, once the replay memory has enough data (how much is enough is yet another parameter). This is not necessary, and it is fine to collect more data between training steps; the algorithm is still DQN. The value higher than 1 for train_freq here might be related to the use of prioritised replay memory sampling - I have no real experience with that.

The update to target network generally needs to occur less frequently than training steps, it is intended to stabilise results numerically, so that over or under estimates of value functions do not result in runaway feedback.

The parameter choices will interact with each other; most hyperparameters in machine learning do so, unfortunately, which makes searching for ideal values fiddly and time-consuming.

In this case it is safe to say that train_freq is expected to be much lower than target_update_freq, probably by at least an order of magnitude, and more usually 2 or 3 orders of magnitude. However, that's not quite the same as saying there is a strong relationship between choices for those two hyperparameters. The value of batch_size is also relevant here, as it shows the rate that memory is being used (and re-used) by the training process.

The library you are using has these defaults:

    batch_size::Int64 = 32
    train_freq::Int64 = 4
    target_update_freq::Int64 = 500

They seem like sane starting points. You are relatively free to change them as if they were independent, as there is no simple rule like "target_update_freq should be 125 times train_freq". As a very rough guide, you can expect that high values of train_freq, low values of batch_size and low values of target_update_freq are likely to cause instability in the learning process, whilst going too far in the opposite direction may slow learning down. You might be able to set train_freq to 1, but I am not completely certain about that either in combination with the prioritised replay memory sampling which seems to be the default in the library you are using.

",1847,,1847,,6/18/2020 16:25,6/18/2020 16:25,,,,0,,,,CC BY-SA 4.0 21999,1,22000,,6/18/2020 17:13,,4,1151,"

Typically, people say that convolutional neural networks (CNN) perform the convolution operation, hence their name. However, some people have also said that a CNN actually performs the cross-correlation operation rather than the convolution. How is that? Does a CNN perform the convolution or cross-correlation operation? What is the difference between the convolution and cross-correlation operations?

",2444,,2444,,6/19/2020 10:00,6/21/2020 12:26,Do convolutional neural networks perform convolution or cross-correlation?,,2,0,,,,CC BY-SA 4.0 22000,2,,21999,6/18/2020 17:13,,7,,"

Short answer

Theoretically, convolutional neural networks (CNNs) can perform either the cross-correlation or the convolution: it does not really matter which of the two they perform, because the kernels are learnable, so they can adapt to the cross-correlation or to the convolution given the data. Nevertheless, in the typical diagrams, CNNs are shown to perform the cross-correlation, because (in libraries like TensorFlow) they are typically implemented with cross-correlations (and cross-correlations are conceptually simpler than convolutions). Moreover, in general, the kernels may or may not be symmetric (although they typically won't be symmetric); in the case they are symmetric, the cross-correlation is equal to the convolution.

Long answer

To understand the answer to this question, I will provide two examples that show the similarities and differences between the convolution and cross-correlation operations. I will focus on the convolution and cross-correlation applied to 1-dimensional discrete and finite signals (which is the simplest case to which these operations can be applied) because, essentially, CNNs process finite and discrete signals (although typically higher-dimensional ones, but this answer applies to higher-dimensional signals too). Moreover, in this answer, I will assume that you are at least familiar with how the convolution (or cross-correlation) in a CNN is performed, so that I do not have to explain these operations in detail (otherwise this answer would be even longer).

What is the convolution and cross-correlation?

Both the convolution and the cross-correlation operations are defined as the dot product between a small matrix and different parts of another typically bigger matrix (in the case of CNNs, it is an image or a feature map). Here's the usual illustration (of the cross-correlation, but the idea of the convolution is the same!).

Example 1

To be more concrete, let's suppose that we have the output of a function (or signal) $f$ grouped in a matrix $$f = [2, 1, 3, 5, 4] \in \mathbb{R}^{1 \times 5},$$ and the output of a kernel function also grouped in another matrix $$h=[1, -1] \in \mathbb{R}^{1 \times 2}.$$ For simplicity, let's assume that we do not pad the input signal and we perform the convolution and cross-correlation with a stride of 1 (I assume that you are familiar with the concepts of padding and stride).

Convolution

Then the convolution of $f$ with $h$, denoted as $f \circledast h = g_1$, where $\circledast$ is the convolution operator, is computed as follows

\begin{align} f \circledast h = g_1 &=\\ [(-1)*2 + 1*1, \\ (-1)*1 + 1*3, \\ (-1)*3 + 1*5, \\ (-1)*5+1*4] &=\\ [-2 + 1, -1 + 3, -3 + 5, -5 + 4] &=\\ [-1, 2, 2, -1] \in \mathbb{R}^{1 \times 4} \end{align}

So, the convolution of $f$ with $h$ is computed as a series of element-wise multiplications between the horizontally flipped kernel $h$, i.e. $[-1, 1]$, and each $1 \times 2$ window of $f$, each of which is followed by a summation (i.e. a dot product). This follows from the definition of convolution (which I will not report here).

Cross-correlation

Similarly, the cross-correlation of $f$ with $h$, denoted as $f \otimes h = g_2$, where $\otimes$ is the cross-correlation operator, is also defined as a dot product between $h$ and different parts of $f$, but without flipping the elements of the kernel before applying the element-wise multiplications, that is

\begin{align} f \otimes h = g_2 &=\\ [1*2 + (-1)*1, \\ 1*1 + (-1)*3, \\ 1*3 + (-1)*5, \\ 1*5 + (-1)*4] &=\\ [2 - 1, 1 - 3, 3 - 5, 5 - 4] &=\\ [1, -2, -2, 1] \in \mathbb{R}^{1 \times 4} \end{align}

Notes

  1. The only difference between the convolution and cross-correlation operations is that, in the first case, the kernel is flipped (along all spatial dimensions) before being applied.

  2. In both cases, the result is a $1 \times 4$ vector. If we had convolved $f$ with a $1 \times 1$ vector, the result would have been a $1 \times 5$ vector. Recall that we assumed no padding (i.e. we don't add dummy elements to the left or right borders of $f$) and stride 1 (i.e. we shift the kernel to the right one element at a time). Similarly, if we had convolved $f$ with a $1 \times 3$, the result would have been a $1 \times 3$ vector (as you will see from the next example).

  3. The results of the convolution and cross-correlation, $g_1$ and $g_2$, are different. Specifically, one is the negated version of the other. So, the result of the convolution is generally different than the result of the cross-correlation, given the same signals and kernels (as you might have suspected).

Example 2: symmetric kernel

Now, let's convolve $f$ with a $1 \times 3$ kernel that is symmetric around the middle element, $h_2 = [-1, 2, -1]$. Let's first compute the convolution.

\begin{align} f \circledast h_2 = g_3 &=\\ [(-1)*2 + 1*2 + (-1) * 3,\\ (-1)*1 + 2*3 + (-1) * 5,\\ (-1)*3 + 2*5 + (-1) * 4] &=\\ [-2 + 2 + -3, -1 + 6 + -5, -3 + 10 + -4] &=\\ [-3, 0, 3] \in \mathbb{R}^{1 \times 3} \end{align}

Now, let's compute the cross-correlation

\begin{align} f \otimes h_2 = g_4 &=\\ [(-1)*2 + 1*2 + (-1) * 3, \\ (-1)*1 + 2*3 + (-1) * 5, \\ (-1)*3 + 2*5 + (-1) * 4] &=\\ [-3, 0, 3] \in \mathbb{R}^{1 \times 3} \end{align}

Yes, that's right! In this case, the result of the convolution and the cross-correlation is the same. This is because the kernel is symmetric around the middle element. This result applies to any convolution or cross-correlation in any dimension. For example, the convolution of the 2d Gaussian kernel (a centric-symmetric kernel) and a 2d image is equal to the cross-correlation of the same signals.

CNNs have learnable kernels

In the case of CNNs, the kernels are the learnable parameters, so we do not know beforehand whether the kernels will be symmetric or not around their middle element. They won't probably be. In any case, CNNs can perform either the cross-correlation (i.e. no flip of the filter) or convolution: it does not really matter if they perform cross-correlation or convolution because the filter is learnable and can adapt to the data and tasks that you want to solve, although, in the visualizations and diagrams, CNNs are typically shown to perform the cross-correlation (but this does not have to be the case in practice).

Do libraries implement the convolution or correlation?

In practice, certain libraries provide functions to compute both convolution and cross-correlation. For example, NumPy provides both the functions convolve and correlate to compute both the convolution and cross-correlation, respectively. If you execute the following piece of code (Python 3.7), you will get results that are consistent with my explanations above.

import numpy as np 

f = np.array([2., 1., 3., 5., 4.])

h = np.array([1., -1.])
h2 = np.array([-1., 2., -1.])  # the symmetric kernel from example 2

g1 = np.convolve(f, h, mode="valid")
g2 = np.correlate(f, h, mode="valid")

print("g1 =", g1) # g1 = [-1.  2.  2. -1.]
print("g2 =", g2) # g2 = [ 1. -2. -2.  1.]

# With the symmetric kernel h2, the convolution and the cross-correlation coincide.
g3 = np.convolve(f, h2, mode="valid")
g4 = np.correlate(f, h2, mode="valid")

print("g3 =", g3) # g3 = [-3.  0.  3.]
print("g4 =", g4) # g4 = [-3.  0.  3.]

However, NumPy is not really a library that provides out-of-the-box functionality to build CNNs.

On the other hand, TensorFlow's and PyTorch's functions to build the convolutional layers actually perform cross-correlations. As I said above, although it does not really matter whether CNNs perform the convolution or cross-correlation, this naming is misleading. Here's a proof that TensorFlow's tf.nn.conv1d actually implements the cross-correlation.

import tensorflow as tf # TensorFlow 2.2

f = tf.constant([2., 1., 3., 5., 4.], dtype=tf.float32)
h = tf.constant([1., -1.], dtype=tf.float32)

# Reshaping the inputs because conv1d accepts only certain shapes.
f = tf.reshape(f, [1, int(f.shape[0]), 1])
h = tf.reshape(h, [int(h.shape[0]), 1, 1])

g = tf.nn.conv1d(f, h, stride=1, padding="VALID")
print("g =", g) # [1, -2, -2, 1]

Further reading

After having written this answer, I found the article Convolution vs. Cross-Correlation (2019) by Rachel Draelos, which essentially says the same thing that I am saying here, but provides more details and examples.

",2444,,2444,,6/21/2020 12:26,6/21/2020 12:26,,,,0,,,,CC BY-SA 4.0 22001,1,22002,,6/18/2020 18:26,,0,693,"

While I was playing with some hyperparameters, I came across a weird situation. My dataset is the IRIS dataset, to be specific. The SVM algorithm has some hyperparameters that we can tune, such as the kernel and the C value.

(All accuracy calculations and SVM are from sklearn package to be specific)

I made a comparison between kernels and noticed that the sigmoid kernel was performing much worse in terms of accuracy: it achieves less than a third of the accuracy of the RBF, linear, and polynomial kernels. I do know that kernels are quite data-sensitive and data-specific, but I would like to know: which types of data is the sigmoid kernel good at (any example)? Or is this my fault due to a wrong C value for the sigmoid kernel?

",38008,,38008,,6/19/2020 1:46,6/22/2020 12:37,Which kind of data does sigmoid kernel performance well?,,1,0,,,,CC BY-SA 4.0 22002,2,,22001,6/18/2020 18:30,,0,,"

The sigmoid kernel is better suited to binary classification. As the IRIS dataset is for multi-class classification, its performance was not as good as that of the other kernels. You can train with only 2 types of flowers to see if the sigmoid kernel can perform well.
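
A quick way to check this yourself is to compare the kernels on a two-class subset of IRIS with scikit-learn (just a sketch: the default hyperparameters are used here, so results will change with C and gamma):

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
mask = y < 2                                     # keep only the first two classes
for kernel in ["linear", "rbf", "poly", "sigmoid"]:
    score = cross_val_score(SVC(kernel=kernel), X[mask], y[mask], cv=5).mean()
    print(kernel, round(score, 3))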

",36446,,2444,,6/22/2020 12:37,6/22/2020 12:37,,,,3,,,,CC BY-SA 4.0 22003,1,22005,,6/18/2020 18:39,,1,142,"

I am trying to learn the theory behind first-order logic (FOL) and do some practice runs of converting statements into the form of FOL.

One issue I keep running into is hesitating on whether to use an AND ($\land$) statement or an IMPLIES ($\rightarrow$) statement.

I have seen examples such as "Some boys are intelligent" turned into:

$$ \exists x \text{boys}(x) \land \text{intelligent}(x) $$

Can I make a general assumption that when I see $x$ is/are $y$, I can use an AND?

With a statement such as "All movies directed by M. Knight Shamalan have a supernatural character", I feel that the statement can be translated to either:

$$ \forall x, \exists y \; \text{directed}(\text{Shamalan}, x) \rightarrow \text{super-natural character}(y) $$

or

$$ \forall x, \exists y \; \text{directed}(\text{Shamalan}, x) \land \text{super-natural character}(y) $$

Is there a better way to distinguish between when to use one or the other?

",38009,,2444,,6/19/2020 10:42,6/19/2020 10:42,When to use AND and when to use Implies in first-order logic?,,1,2,,,,CC BY-SA 4.0 22004,2,,21997,6/18/2020 18:48,,1,,"

Given that your state space is continuous, then I would recommend using Deep Q-Learning. As you say, running several episodes will definitely be beneficial so that the agent is able to explore the space more thoroughly.

",36821,,,,,6/18/2020 18:48,,,,0,,,,CC BY-SA 4.0 22005,2,,22003,6/18/2020 23:14,,0,,"

When considering the sentence "Some boys are intelligent", it makes sense to express it by ∃x boys(x) ∧ intelligent(x). This is because the existential quantifier makes sure to express that at least some, but not necessarily all boys are intelligent. More specifically, you say that there exists something which is a boy and which is intelligent. But if such a something exists, then there are some boys which are intelligent. Your statement is satisfied.

Using a simple implication here wouldn't work semantically in that context. For example, ∃x boys(x) → intelligent(x) expresses that: For some x, if x is a boy, then x is intelligent. By rewriting, we would get ∃x ¬boys(x) ∨ intelligent(x). This statement would even be true if all x were not boys, because ¬boys(x) would be true in these cases. So, ∃x boys(x) → intelligent(x) would hold as soon as something in your model wasn't a boy. In that case, ∃x boys(x) → intelligent(x) would be true, but "Some boys are intelligent" would not really be satisfied, since there are simply no boys in your model which could be intelligent. So, you will have to resort to something like ∃x boys(x) ∧ intelligent(x), since otherwise there is no guarantee that boys (even intelligent ones) are present in the resulting model.
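
A tiny model check makes this concrete (the domain and predicates below are just made-up placeholders): in a model with no boys at all, the implication form is vacuously satisfied, while the conjunction form is not.

domain = ["alice", "rex_the_dog"]
boys = set()                          # no boys in this model
intelligent = {"alice"}

# ∃x boys(x) → intelligent(x): true here, because ¬boys(x) holds for some x
implies_form = any((x not in boys) or (x in intelligent) for x in domain)
# ∃x boys(x) ∧ intelligent(x): false here, because there are no boys at all
conj_form = any((x in boys) and (x in intelligent) for x in domain)
print(implies_form, conj_form)        # True False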

",37982,,,,,6/18/2020 23:14,,,,0,,,,CC BY-SA 4.0 22007,2,,21999,6/19/2020 4:59,,0,,"

Just as a short and quick answer to build off nbro's:

As typically taught, CNNs use a correlation on the forward pass rather than a convolution. In reality, "convolutional neural network" is a bit of a misleading name, but not entirely incorrect.

CNNs do in fact use convolutions every time they are trained and run. If a correlation is used on the forward pass, a convolution is used on the backward pass. The opposite is true if a convolution is used on the forward pass (which is equally valid as using a correlation). I couldn't seem to find this information anywhere, so I had to learn it myself the hard way.
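
Here is a small 1-D numerical sketch of that statement (the input, kernel and upstream gradient are made-up toy values): the forward pass is a cross-correlation, and the gradient with respect to the input is a full convolution of the upstream gradient with the same (unflipped) kernel.

import numpy as np

x = np.array([2., 1., 3., 5., 4.])
w = np.array([1., -1.])
upstream = np.array([1., 1., 1., 1.])            # pretend dL/dy is all ones

y = np.correlate(x, w, mode="valid")             # forward pass: cross-correlation
grad_x = np.convolve(upstream, w, mode="full")   # backward pass w.r.t. x: convolution

# Finite-difference check of dL/dx[0], with L = sum(y)
eps = 1e-6
x_pert = x.copy(); x_pert[0] += eps
numeric = (np.correlate(x_pert, w, mode="valid").sum() - y.sum()) / eps
print(grad_x[0], numeric)                        # both are approximately 1.0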

So to summarise, a typical CNN goes like this: Correlation forward, convolution backward.

",26726,,,,,6/19/2020 4:59,,,,5,,,,CC BY-SA 4.0 22008,2,,18417,6/19/2020 5:07,,1,,"
  1. The ROIs in the input space are mapped to the feature map space by dividing their coordinates by the net stride at that layer. Say, in a network, after a sequence of four 2x2 pooling layers, your image is reduced to 1/16 of the original size (a 32x32 image is reduced to 2x2). So, the bounding boxes in the original space are mapped to the feature space by dividing by the net stride, which is 16 here. But here's the catch: the ROI coordinates can become floating-point numbers when divided by 16, so they are adjusted by either flooring or ceiling them. This is why ROIPooling is quantized: there is a loss of information when you round off the coordinates of the ROI (a small numerical sketch of this mapping is given after this list). Nevertheless, each region is then pooled to a single-size feature map, and each ROI is fed one-by-one to the following layers. The Mask-RCNN paper changes the mapping to the feature space by not rounding off the coordinates and by using bilinear interpolation, so the loss of information is reduced; thus, the ROIAlign algorithm (which has been described here) performs better at object detection than the quantized ROIPool algorithm.

  2. As said, each fixed-size feature map vector is processed one-by-one.
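
Here is a small numerical sketch of the mapping step described in point 1 (the ROI coordinates and the stride are made up):

import math

stride = 16                                        # net stride after four 2x2 poolings
x1, y1, x2, y2 = 37.0, 50.0, 181.0, 204.0          # ROI in the input image

# ROIPool-style quantization: the mapped coordinates are rounded to integers
quantized = (math.floor(x1 / stride), math.floor(y1 / stride),
             math.ceil(x2 / stride), math.ceil(y2 / stride))

# ROIAlign keeps the floating-point coordinates and samples them with bilinear interpolation
aligned = (x1 / stride, y1 / stride, x2 / stride, y2 / stride)
print(quantized)   # (2, 3, 12, 13)
print(aligned)     # (2.3125, 3.125, 11.3125, 12.75)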

Reference

",36474,,,,,6/19/2020 5:07,,,,0,,,,CC BY-SA 4.0 22010,1,,,6/19/2020 6:48,,4,365,"

Currently, I'm only going through these two books

What other introductory books to reinforcement learning do you know, and how do they approach this topic?

",37627,,2444,,1/17/2021 19:31,1/17/2021 19:31,"What introductory books to reinforcement learning do you know, and how do they approach this topic?",,2,1,,,,CC BY-SA 4.0 22013,2,,21994,6/19/2020 7:36,,1,,"

A kernel function $f : \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}$ is a valid support vector kernel if it is a Mercer kernel. Mercer's condition essentially ensures that the Gram matrix of the kernel is positive semi-definite. Interestingly, this ensures that the SVM objective is convex.

The Euclidean distance function does not satisfy Mercer's condition since its Gram matrix is not necessarily positive semi-definite. Thus, it is not a valid kernel.
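
A quick numerical sanity check (a sketch, not a proof): on a few random points, the Gram matrix of the RBF kernel is positive semi-definite, while a Gram matrix built from the Euclidean distance has negative eigenvalues.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))                       # 5 random 2-d points

sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
dist_gram = np.sqrt(sq_dists)                     # "kernel" = Euclidean distance
rbf_gram = np.exp(-sq_dists)                      # RBF kernel with gamma = 1

print(np.linalg.eigvalsh(dist_gram).min())        # negative -> not PSD
print(np.linalg.eigvalsh(rbf_gram).min())         # >= 0 (up to rounding) -> PSD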

",5293,,,,,6/19/2020 7:36,,,,0,,,,CC BY-SA 4.0 22014,2,,21970,6/19/2020 8:03,,3,,"

Computer vision is a wide field, and besides the fact that deep learning dominates, there are still many, many other algorithms that see widespread use in both academia and industry.

For tasks such as image classification / object recognition, the typical paradigm is some CNN architecture such as a ResNet or VGG. There has been a lot of work to extend and improve CNNs, but the basic architecture has not really changed much over the years. Interestingly, there's been some work to encode more complex inductive biases / invariants into the deep learning modelling process, such as Spatial Transformer Networks and Group Equivariant Networks. More classical vision approaches to such problems typically include computing some form of hand-crafted feature (HOG, LBP), and training any off-the-shelf classifier.

For object detection, the de facto standard for many years was Viola-Jones, for its combination of performance and speed (even though there were more accurate systems at the time, they were slower). More recently, object detection has been dominated by deep learning, with architectures such as SSD, YOLO, all the RCNN variants, etc.

A related problem to object detection is segmentation. Deep learning again dominates in this area with algorithms such as Mask RCNN. However, many other approaches exist and see some use, such as superpixels (e.g. SLIC), watershed, and normalized cuts.

For problems such as image search, vision approaches such as Fisher vectors and VLAD (computed from image descriptors such as SIFT or SURF) are still competitive. However, CNN features have also seen use in this domain.

For video analysis, CNNs (typically, 3D CNNs) are popular. However, they often leverage other vision techniques such as optical flow. The most popular optical flow algorithms are Brox, TVL-1, KLT, and Farneback. There are more recent approaches which attempt to use deep learning to actually learn the optical flow, though.

An overarching set of techniques with many varying applications is interest point detectors, image descriptors, and feature encoding techniques. Interest point detectors attempt to localise interest points in an image or video, and popular detectors include Harris, FAST, and MSER. Image descriptors are used to describe those interest points. Example descriptors include SIFT, SURF, KAZE, and ORB. The descriptors themselves can be used to do various things such as estimate homographies using the RANSAC algorithm (for applications such as panorama stitching and camera stabilisation). However, the descriptors can also be encoded and pooled into a single fixed-length feature vector, which serves as the representation of the image. The most common approach to this encoding is bag of features / bag of visual words, which is based on K-means. However, popular extensions / variants include Fisher vectors and VLAD.
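
To make the descriptor-matching-RANSAC pipeline concrete, here is a hedged OpenCV sketch (it assumes two overlapping images img1.png and img2.png exist, and the reprojection threshold is arbitrary):

import cv2
import numpy as np

img1 = cv2.imread("img1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("img2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)       # keypoints + binary descriptors
kp2, des2 = orb.detectAndCompute(img2, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # robust to outlier matches
print(H)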

Self-supervised and semi-supervised learning is also very popular nowadays in academia, and seeks to get the most of out the abundant unlabelled data. In a computer vision context, popular techniques include MoCo and SimCLR, but new methods are released almost weekly!

Another problem domain in computer vision is the ability to generate / synthesize images. This is not unique to computer vision, but the common algorithms for this are variational autoencoders (VAEs) and generative adversarial networks (GANs).

",5293,,,,,6/19/2020 8:03,,,,0,,,,CC BY-SA 4.0 22016,1,,,6/19/2020 10:08,,0,48,"

I have run into a situation in which my input features have very large variations in magnitude.

In particular, consider that feature 1 belongs to group 1 and features 2, 3, and 4 belong to group 2, like in the picture below.

I was really worried that in this case feature 1 might dominate features 2, 3, 4 (group 2) because its values are so large (I am trying to train this data set on a neural network).

In this situation, what would be the appropriate scaling strategy? Update: I know for sure that the value of feature 1 is an integer that is uniform on the interval [22, 42], but for features 2, 3, and 4 I do not have any insight.

Thank you for your enthusiasm!

",37297,,37297,,6/19/2020 13:35,11/6/2022 17:03,Feature scaling strategy for many feature with very large variation between them?,,2,0,,,,CC BY-SA 4.0 22017,2,,22016,6/19/2020 10:23,,0,,"

You should check the distribution of each feature and scale each one accordingly, but in any case you should aim for roughly the same interval of values for every feature. For example, if f1 is roughly normally distributed and f2 is close to uniform, then you can scale f1 to N(0,1) and f2 to U(-1,1). In other words, try to have the maximum, minimum and mean values for all features as close as possible, while keeping their original distributions.
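
For example, a scikit-learn sketch of this idea (the array values and the assignment of distributions to columns are made up) could look like the following:

import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler

X = np.array([[30., 0.1, 5.2, 7.3],
              [25., 0.4, 4.8, 6.9],
              [41., 0.2, 5.0, 7.1]])       # rows = samples, columns = f1..f4

f1 = StandardScaler().fit_transform(X[:, [0]])                        # roughly N(0, 1)
rest = MinMaxScaler(feature_range=(-1, 1)).fit_transform(X[:, 1:])    # in [-1, 1]
X_scaled = np.hstack([f1, rest])
print(X_scaled)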

",28041,,,,,6/19/2020 10:23,,,,0,,,,CC BY-SA 4.0 22019,1,22020,,6/19/2020 12:00,,1,224,"

Is the VC dimension meaningful for reinforcement learning (RL), as a machine learning (ML) method? How?

",4446,,2444,,1/22/2021 15:48,1/22/2021 15:48,Is the VC Dimension meaningful in the context of Reinforcement Learning?,,1,0,,,,CC BY-SA 4.0 22020,2,,22019,6/19/2020 12:00,,2,,"

Yes, it is. This article (Approximate Planning in Large POMDPs via Reusable Trajectories) explain about it by means of the trajectory tree:

A trajectory tree is a binary tree in which each node is labeled by a state and observation pair, and has a child for each of the two actions. Additionally, each link to a child is labeled by a reward, and the tree's depth will be $H_\epsilon$, so it will have about $2^{H_\epsilon}$ nodes. The root is labeled by $s_0$ and the observation there, $o_0$.

Now a policy $\pi$ will be defined like the following base on the trajectory tree:

For any deterministic strategy $\pi$ and any trajectory tree $T$, $\pi$ defines a path through $T$: $\pi$ starts at the root, and inductively, if $\pi$ is at some internal node in $T$, then we feed to $\pi$ the observable history along the path from the root to that node, and $\pi$ selects and moves to a child of the current node. This continues until a leaf node is reached, and we define $R(\pi, T)$ to be the discounted sum of returns along the path taken. In the case that $\pi$ is stochastic, $\pi$ defines a distribution on paths in $T$, and $R(\pi, T)$ is the expected return according to this distribution. Hence, given $m$ trajectory trees $T_1 , \ldots , T_m$, a natural estimate for $V^\pi(s_0)$ is $\hat{V}^\pi(s_0) = \frac{1}{m}\sum_{i=1}^mR(\pi, T_i)$. Note that each tree can be used to evaluate any strategy, much the way a single labeled example $\langle x, f(x)\rangle$ can be used to evaluate any hypothesis $h(x)$ in supervised learning. Thus in this sense, trajectory trees are reusable.

Now similar to definition of VC theory for classification methods:

Our goal now is to establish uniform convergence results that bound the error of the estimates $V^\pi(s_0)$ as a function of the "sample size" (number of trees) $m$.

And finally, we have the following theorem:

Let $\Pi$ be any finite class of deterministic strategies for an arbitrary two-action POMDP $M$. Let $m$ trajectory trees be created using a generative model for $M$, and $\widehat{V}^\pi(s_0)$ be the resulting estimates. If $m = O((V_{\max}/\epsilon)^2(\log(|\Pi|) + \log(1/\delta)))$, then with probability $1 - \delta$, $|V^\pi(s_0) - \widehat{V}^\pi(s_0)|\leqslant \epsilon$ holds simultaneously for all $\pi \in \Pi$.

About the VC dimension of $\Pi$, if we suppose we have two actions $\{a_1, a_2\}$ (it can be generalized to more actions), we can say:

If $\Pi$ is a (possibly infinite) set of deterministic strategies, then each strategy $\pi \in \Pi$ is simply a deterministic function mapping from the set of observable histories to the set $\{a_1, a_2\}$, and is thus a boolean function on observable histories. We can, therefore, write $\mathcal{VC}(\Pi)$ to denote the familiar VC dimension of the set of binary functions $\Pi$. For example, if $\Pi$ is the set of all thresholded linear functions of the current vector of observations (a particular type of memoryless strategy), then $\mathcal{VC}(\Pi)$ simply equals the number of parameters.

and the following theorem:

Let $\Pi$ be any class of deterministic strategies for an arbitrary two-action POMDP $M$, and let $\mathcal{VC}(\Pi)$ denote its VC dimension. Let $m$ trajectory trees be created using a generative model for $M$, and $\widehat{V}^\pi(s_0)$ be the resulting estimates. If: $$ m = O((V_{\max}/\epsilon)^2(H_\epsilon\mathcal{VC}(\Pi)\log(V_{\max}/\epsilon) + \log(1/\delta))) $$ then with probability $1 - \delta$, $|V^\pi(s_0) - \widehat{V}^\pi(s_0)|\leqslant \epsilon$ holds simultaneously for all $\pi \in \Pi$.

",4446,,4446,,6/19/2020 13:26,6/19/2020 13:26,,,,0,,,,CC BY-SA 4.0 22021,2,,22016,6/19/2020 12:20,,0,,"

You should normalize every column individually; it will work just fine. Sum up each column and divide every element of that column by that sum. But, as your features 2, 3 and 4 are of a very small scale, you should also consider some transformation, such as a log transformation, as you might otherwise encounter numerical underflow.
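
A minimal sketch of these two steps (assuming a NumPy array X whose columns are your features; whether the log transform is appropriate depends on the sign and range of your data):

import numpy as np

X = np.array([[30.0, 1e-6, 2e-6, 5e-7],
              [25.0, 3e-6, 1e-6, 8e-7],
              [40.0, 2e-6, 4e-6, 6e-7]])

X_t = X.copy()
X_t[:, 1:] = np.log(X_t[:, 1:])  # log transform of the small-scale features 2, 3, 4
X_norm = X_t / X_t.sum(axis=0)   # divide every element of a column by that column's sum
print(X_norm)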

",36446,,,,,6/19/2020 12:20,,,,2,,,,CC BY-SA 4.0 22022,2,,22010,6/19/2020 13:31,,3,,"

In addition to the ones you mentioned, I would add Algorithms of Reinforcement Learning by Csaba Szepesvári. There is a number of professors who use it as a reference in their RL teaching materials (for example this one).

It generally follows the same outline as Sutton & Barto's book (except the part on bandits, it is included in the Chapter on Control). In fact, it may be considered as a condensed version of Sutton & Barto (about 100 pages). In addition, it's freely available online.

I like the author's justification as to why he wrote this book, so I'm just going to quote it:

Why did I write this book? Good question! There exist a good number of really great books on Reinforcement Learning. So why a new book? I had selfish reasons: I wanted a short book, which nevertheless contained the major ideas underlying state-of-the-art RL algorithms (back in 2010), a discussion of their relative strengths and weaknesses, with hints on what is known (and not known, but would be good to know) about these algorithms.

",34010,,,,,6/19/2020 13:31,,,,0,,,,CC BY-SA 4.0 22029,2,,18480,6/19/2020 21:45,,1,,"

Contrastive learning is a framework that learns similar/dissimilar representations from data that are organized into similar/dissimilar pairs. This can be formulated as a dictionary look-up problem.

If I conceptually compare the loss mechanisms for:

Both MoCo and SimCLR use variants of a contrastive loss function, like InfoNCE from the paper Representation Learning with Contrastive Predictive Coding \begin{eqnarray*} \mathcal{L}_{q,k^+,\{k^-\}}=-\log\frac{\exp(q\cdot k^+/\tau)}{\exp(q\cdot k^+/\tau)+\sum\limits_{k^-}\exp(q\cdot k^-/\tau)} \end{eqnarray*}

Here $q$ is a query representation, $k^+$ is a representation of the positive (similar) key sample, and $\{k^-\}$ are representations of the negative (dissimilar) key samples. $\tau$ is a temperature hyper-parameter. In the instance discrimination pretext task (used by MoCo and SimCLR), a query and a key form a positive pair if they are data-augmented versions of the same image, and otherwise form a negative pair.
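
For concreteness, here is a minimal NumPy sketch of the InfoNCE loss above for a single query (my own illustration, not the MoCo or SimCLR reference code):

import numpy as np

def info_nce(q, k_pos, k_negs, tau=0.07):
    # q: (d,), k_pos: (d,), k_negs: (N, d); all assumed L2-normalised
    logits = np.concatenate(([q @ k_pos], k_negs @ q)) / tau  # positive logit first
    logits -= logits.max()                                    # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

rng = np.random.default_rng(0)
d, N = 128, 10
q, k_pos, k_negs = rng.normal(size=d), rng.normal(size=d), rng.normal(size=(N, d))
q, k_pos = q / np.linalg.norm(q), k_pos / np.linalg.norm(k_pos)
k_negs = k_negs / np.linalg.norm(k_negs, axis=1, keepdims=True)
print(info_nce(q, k_pos, k_negs))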

The contrastive loss can be minimized by various mechanisms that differ in how the keys are maintained.

In an end-to-end mechanism (Fig. 1a), the negative keys are from the same batch and updated end-to-end by back-propagation. SimCLR, is based on this mechanism and requires a large batch to provide a large set of negatives.

In the MoCo mechanism i.e. Momentum Contrast (Fig. 1b), the negative keys are maintained in a queue, and only the queries and positive keys are encoded in each training batch.

",32249,,32249,,6/19/2020 21:51,6/19/2020 21:51,,,,0,,,,CC BY-SA 4.0 22030,1,22035,,6/19/2020 22:19,,1,57,"

In this paper Fairness Through Awareness, the notation $\mathbb{E}_{x \sim V} \mathbb{E}_{a \sim \mu_x} L(x,a)$ is being used (page 5 top line), where $V$ denotes the set of individuals (so I guess set of feature vectors?) and the meaning of the other variables can be found in the paragraph above the mentioned notation. What does the $\sim$ in the expectation stand for?

Another notation that I do not know is $\Delta (A) $, where $A$ is the set of outcomes, for instance, $A = \{ 0,1\}$. What does it stand for?

",36116,,2444,,6/19/2020 22:48,6/20/2020 0:09,"What do the notations $\sim$ and $\Delta (A) $ mean in the paper ""Fairness Through Awareness""?",,1,0,,,,CC BY-SA 4.0 22035,2,,22030,6/19/2020 23:30,,2,,"

The $\sim$ symbol means that a random variable is drawn from the given distribution, i.e. if I were to say $X$ has a Standard Normal distribution I would write $X \sim \text{Normal}(0,1)$. They write two explicit expectations here because $a$ is a random variable with distribution $\mu_x$ but $X$ is also a random variable with distribution $V$. I believe you are right that $V$ would be analogous to a set of features. So we are saying that $X$ is a random variable over the distribution over the features, or individuals in this context.

As for the $\Delta(A)$, I have never seen this notation used before -- I am not sure if it is standard notation. I Googled to see if it was something that I just hadn't seen before, but there was no such answer. However, from the context of the paper they define $M: V \rightarrow \Delta (A)$ to be mappings from individuals to probability distributions over $A$, so I guess that $\Delta (A) $ is probability distributions over $A$.

",36821,,36821,,6/20/2020 0:09,6/20/2020 0:09,,,,0,,,,CC BY-SA 4.0 22036,1,,,6/20/2020 2:44,,1,198,"

So, my company recently bought a big 4k HDR TV for our reception, where we keep showing some videos that were originally shot/created at 720p resolution. Before this, we had a relatively small HD TV, so not a problem. Because the old videos now look dated, my boss wanted to upscale them and enhance their coloring to avoid shooting or procuring new animated videos.

This sounded like a fun project, but I know little about AI, and less so about video encoding/decoding. I've started researching and found some references, such as Video Super-Resolution via Bidirectional Recurrent Convolutional Networks, so while it seems like I have homework to do, it's clearly "been done before". Would be great to find some code that works on standard formatted videos though.

What I'm struggling to find, but would need some good basis to answer in the negative, is: What about HDR? I'm not finding the research terms nor any mention on result for improving dynamic range on videos. Is there any research done on that? Though actual HDR is a format, most of the shots and pictures used for our videos were taken on cameras with small color gamut and latitude, thus everything looks "washed" and the new TV really makes this obvious by comparing against demo videos.

PS: Unlike much of the literature I'm finding, I'm not aiming at real-time super resolution, it would be great if it took less than one night to process for a 10 minute video though.

",38037,,38037,,6/26/2020 18:16,6/26/2020 18:16,Creating 4k HDR video from 720p footage,,0,0,,,,CC BY-SA 4.0 22037,1,22045,,6/20/2020 3:37,,2,101,"

I have the following situation. An agent plays a game and wants to maximize the accumulated reward as usual, but it can choose its adversary. There are $n$ adversaries.

In episode $e$, the agent must first select an adversary. Then for each step $t$ in the episode $e$, it plays the game against the chosen adversary. Every step $t$, it receives a reward following the chosen action in step $t$ (for the chosen adversary). How to maximize the expected rewards using DQN? It is clear that choosing the "wrong" (the strongest) adversary won't be a good choice for the agent. Thus, to maximize the accumulated rewards, the agent must take two actions at two different timescales.

I started solving it using two DQNs, one to decide the adversary to play against and one to play the game against the chosen adversary. I have two duplicate hyperparameters (batch_size, target_update_freq, etc), one for each DQN. Have you ever seen two DQNs like this? Should I train the DQNs simultaneously?

The results that I am getting are not that good: the accumulated reward is decreasing, and the loss isn't always decreasing...

",37642,,,,,6/21/2020 9:03,Two DQNs in two different time scales,,1,4,,,,CC BY-SA 4.0 22038,1,22053,,6/20/2020 5:38,,3,215,"

I made a DQN that controls a traffic light. The observation states are the number of vehicles in each lane of the intersection. I trained it for 500 episodes and saved the model every 50th episode. I plotted the reward curve of the model after the training and found out that, around the 460th episode, the reward curve became unstable. Does it mean that the optimized DQN model is the 450th model? If not, how do I know if my DQN is really optimized?

",37382,,,,,6/21/2020 10:19,How to know if my DQN is optimized?,,1,0,,,,CC BY-SA 4.0 22041,1,,,6/20/2020 15:03,,1,69,"

I'm going through the distributions package on PyTorch's documentation and came across the term stochastic computation graph. In layman's terms, what is it?

",33448,,2444,,6/21/2020 11:46,6/21/2020 11:46,"In layman's terms, what is stochastic computation graph?",,0,5,,,,CC BY-SA 4.0 22042,2,,21096,6/20/2020 16:28,,1,,"
  1. The bias plane is a layer everywhere equal to the constant $a/18$ where $a$ is the action. So each of the 32 frames has three frames for RGB and a fourth frame which is the bias plane making for 128 input layers. This is explained in the Network Architecture section where it mentions that the actions are "broadcast" to the planes.

  2. For this I do not have conclusive evidence, but I think tiling a vector means arranging parallel copies of it into e.g. a grid shape. In other words, the input is 6x6x18, and action 1 is represented as all ones in the first plane with all zeros in the remaining planes. One problem with the way you've depicted it is that the input is subject to convolutions, but there is no inherent reason why the fifth and eleventh actions (which are next to each other vertically) should be included in the same 3x3 filter application, but the fifth and seventh (for example) actions should not. (A minimal sketch of both encodings follows after this list.)
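
Here is a minimal NumPy sketch of the two encodings described above (my own reading of the paper, not its actual code; the action index and board size are just placeholders):

import numpy as np

a, H, W = 5, 6, 6            # hypothetical action index and board size
rgb = np.zeros((H, W, 3))    # stand-in for one RGB frame

# (1) constant bias plane a/18 appended as a fourth channel
bias_plane = np.full((H, W, 1), a / 18.0)
frame_with_bias = np.concatenate([rgb, bias_plane], axis=-1)  # shape (6, 6, 4)

# (2) one-hot action tiled into 18 full planes
one_hot = np.zeros(18)
one_hot[a] = 1.0
tiled_action = np.tile(one_hot, (H, W, 1))                    # shape (6, 6, 18)
print(frame_with_bias.shape, tiled_action.shape)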

",38039,,,,,6/20/2020 16:28,,,,3,,,,CC BY-SA 4.0 22043,1,,,6/20/2020 16:32,,0,133,"

Can I refine StyleGAN or StyleGAN2 without retraining it for many days, such that its pretrained model is trained to generate only faces similar to a (rather small) set of reference images?

I would like to avoid creating a large dataset and training for many days to weeks, but use the existing model and just bias it towards a set of images.

",25798,,,,,6/20/2020 16:32,Can StyleGAN be refined without a full training?,,0,2,,,,CC BY-SA 4.0 22044,2,,21968,6/20/2020 16:57,,0,,"

No misunderstanding. IDA* is indeed O(n^2) versus O(n) for A* in certain situations.

See https://pdfs.semanticscholar.org/388c/0a934934a9e60da1c22c050566dbcd995702.pdf for a reference.

IDA* only works in certain (not uncommon) situations. If there are many states with the same costs, and this number grows exponentially with the cost, then IDA* will be O(n). An example of such a problem is the 8 sliding piece puzzle.

IDA* will however, as noted, become exponential when all costs differ. Examples include continuous costs, or discrete values spanning a large range.

If you really wanted to apply IDA* to e.g. a map, a workable approach would likely be to round all values - or perform a log+floor operation.

",37970,,,,,6/20/2020 16:57,,,,1,,,,CC BY-SA 4.0 22045,2,,22037,6/20/2020 17:14,,3,,"

From comments, you say there is no "outer" goal for picking an adversary other than scoring highly in an individual episode.

You could potentially model the initial adversary choice as a partially separate Markov Decision Process (MDP), where choosing the opponent is a single-step episode with return equal to whatever reward the secondary MDP - which played the game - obtains. However, this "outer" MDP is not much of an MDP at all, it is more like a contextual bandit. In addition, the performance of the inner game-playing agent will vary both with the choice of opponent, and over time as it learns to play better against each opponent. This makes the outer MDP non-stationary. It also requires the inner MDP to know what opponent it is facing in order to correctly predict correct choices and/or future rewards.

That last part - the need for any "inner" agent to be aware of the opponent it is playing against - is likely to be necessary whatever structure you chooose. That choice of opponent needs to be part of the state for this inner agent, because it will have an impact on likely future rewards. A characterisation of the opponents also needs to be part of whatever predictive models you could use for the outer agent.

A more natural, and probably more useful, MDP model for your problem is to have a single MDP where the first action $a_0$ is to select the opponent. This matches the language you use to describe the problem, and resolves your issue about trying to run a hierarchy of agents. Hierarchical reinforcement learning is a real thing, and very interesting for solving problems which can be broken down into meaningful sub-goals that an agent could discover autonomously, but it does not appear to apply for your problem.

This leaves you with the practical problem of creating a model that can switch between two sets of radically different actions. The "select an opponent" action only occurs at the first state of the game, and the two sets of actions do not overlap at all. However, in terms of the theoretical MDP model this is not an issue at all. It is only a practical issue of how you fit your Q function approximator to the two radically different action types. There are a few ways around that. Here are a couple that might work for you:

One shared network

Always predict for all kinds of action choice, so the agent still makes predictions for switching opponents all the way to the end of the game. Then filter the action choices down to only those available at any time step. When $t=0$ only use the predictions for actions for selecting an opponent, for $t \ge 1$ only use predictions relating to moves in the game.
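
A minimal sketch of this filtering idea (my own illustration with hypothetical action counts; the Q-values would of course come from your network rather than from random numbers):

import numpy as np

n_opponents, n_moves = 4, 9
q_values = np.random.randn(n_opponents + n_moves)  # stand-in for the network output

def greedy_action(q_values, t):
    mask = np.full_like(q_values, -np.inf)
    if t == 0:
        mask[:n_opponents] = 0.0   # only "select an opponent" actions are legal
    else:
        mask[n_opponents:] = 0.0   # only in-game moves are legal
    return int(np.argmax(q_values + mask))

print(greedy_action(q_values, t=0), greedy_action(q_values, t=3))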

Two separate approximators

Have two function approximators in your agent, use one for predicting reward at $t=0$ that covers different opponent choices, and use the other for the rest of the game. If $n$ is small and there is no generalisation between opponents (i.e. no opponent "stats" that give some kind of clue towards the end results), then for the first approximator, you could even use a Q table.

For update steps you need to know whether any particular action value was modelled in one or other of the Q functions - and this will naturally lead you to bootstrap

$$\hat{q}_{o}(s_0, a_0, \theta_0) \leftarrow r_1 + \gamma \text{max}_{a'}\hat{q}_{p}(s_1, a', \theta_1)$$

where $\hat{q}_{o}$ is your approximate model for action values of selecting opponents (and $a_0$ must be an opponent choice) at the start of the game, and $\hat{q}_{p}$ is the model you use for the rest of it (and $a'$ must be a position play in the game). I've misused $\leftarrow$ here to stand in for whatever process is used to update the action value towards the new estimate - in a tabular method that would be a rolling average with the current estimate, in neural networks of course that is gradient descent using backpropagation.

",1847,,1847,,6/21/2020 9:03,6/21/2020 9:03,,,,4,,,,CC BY-SA 4.0 22048,2,,17608,6/21/2020 0:46,,1,,"

Especially in continuous space, convergence of the value function is mainly a theoretical property. Without seeing enough of the state space, as you suggest, there's no way to ensure that your Q function will generalize to the whole state space. Convergence results for Q learning with function approximation generally show that in the limit of infinite data, your value function will converge to the desired fixed point -- note that this is only true when your agent explores occasionally, for an infinite amount of time.

When your parameters have converged, this simply means that your Q function has fit the data you've collected. As you explore more, your agent may get "surprised" and your parameters may start to change again.

Also, convergence of the parameters in function approximation can never guarantee that an optimal value function was found in practice -- the only guarantee you can wish for is that the optimal value function that can be produced with your model has been found. For instance, the parameters of the linear Q function you posted can converge, even if the optimal Q function is not linear.

",37829,,,,,6/21/2020 0:46,,,,0,,,,CC BY-SA 4.0 22049,1,,,6/21/2020 4:21,,2,97,"

As the title says, in reinforcement learning, does the off-policy evaluation work for non-stationary policies?

For example, IS (importance sampling)-based estimators, such as weighted IS or doubly robust, are still unbiased when they are used to evaluate UCB1, which is a non-stationary policy, as it chooses an action based on the history of rewards?

",30051,,2444,,6/21/2020 11:34,6/21/2020 11:34,Does the off-policy evaluation work for non-stationary policies?,,0,1,,,,CC BY-SA 4.0 22051,1,,,6/21/2020 7:02,,2,194,"

Suppose that the transition time between two states is a random variable (for example, with an unknown exponential distribution), and that, between two arrivals, there is no reward. If $\tau$ (a real number, not an integer) denotes the time between two arrivals, should I update the Q-functions as follows:

$Q(s,a) = Q(s,a)+\alpha \left(R+\gamma^{\tau} \max_{b \in A}Q(s^{\prime},b)-Q(s,a)\right)$

And, to compare different algorithms, total rewards ($TR=R_{1}+ R_2+R_{3}+...+R_{T}$) is used.

What measure should be used in the SMDP setting? I would be thankful if someone can explain the Q-Learning algorithm for the SMDP problem with this setting.

Moreover, I am wondering when Q-functions are updated. For example, if a customer enters our website and purchases a product, we want to update the Q-functions. Suppose that the planning horizon (state $S_{0}$) starts at 10:00 am, and the first customer enters at 10:02 am, and we sell a product and gain $R_1$ and the state will be $S_1$. The next customer enters at 10:04 am, and buy a product, and gain reward $R_2$ (state $S_{2}$). In this situation, should we wait until 10:02 to update the Q-function for state $S_0$?

Is the following formula correct?

$$V(S_0)= R_1 \gamma^2+ \gamma^2V(S_1)$$

In this case, if I discretize the time horizon to 1-minute intervals, the problem will be a regular MDP problem. Should I update Q-functions when no customer enters in a time interval (reward =0)?

",10191,,2444,,6/27/2020 21:47,6/28/2020 10:54,Updating action-value functions in Semi-Markov Decision Process and Reinforcement Learning,,1,0,,,,CC BY-SA 4.0 22052,2,,21975,6/21/2020 7:54,,0,,"

If I understood you well, what you want to do is the following: you have some characters, like Hulk, Thor, Captain America, Iron Man, etc., and you want to train a response generator for each character (for Thor on his dataset, for Hulk on his dataset, etc.) and then make them hold a conversation.

  • You can fine-tune GPT-2 Small or GPT-2 Medium on your dataset for each character. Reference for Finetune

  • You can decode using simple nucleus sampling at each time step. Use greedy nucleus sampling multiple times in parallel to generate multiple responses. You can generate, say, 30 such responses and also use the last 3-5 dialogue turns as context or short-term memory. Decoder reference

  • To choose the best response out of all generated responses (30 responses, for example), you can use the cosine similarity between the generated responses and the query plus the last 3-5 dialogue turns.

    import numpy as np

    def cosine_similarity_nd(embd1, embd2):
        # row-wise cosine similarity between two (n, d) embedding matrices
        numerator = np.multiply(embd1, embd2)
        numerator = np.sum(numerator, axis=1)
        eucli_norm_1 = np.sqrt(np.sum(np.power(embd1, 2), axis=1))
        eucli_norm_2 = np.sqrt(np.sum(np.power(embd2, 2), axis=1))
        denominator = np.multiply(eucli_norm_1, eucli_norm_2)
        denominator = denominator + 1e-10  # avoid division by zero
        cosine_similarity = np.divide(numerator, denominator)
        return cosine_similarity.reshape((-1))

  • Or you can train a reverse model, either one per character or by mixing all characters' dialogues, and use this model to calculate the loss of each response given your query. Reference reverse model

  • Or you can combine the results of the cosine similarity and the reverse model to pick the best response out of all the responses.

",32861,,,,,6/21/2020 7:54,,,,0,,,,CC BY-SA 4.0 22053,2,,22038,6/21/2020 8:46,,1,,"

There is a good chance that your DQN is already optimized but you would have to take a look at its performance to really check and see whether its actions seem up to par.

Reasons it may not be optimized: If you are tracking the reward after every episode, unstable rewards are very common just due to random chance, but if you are averaging the reward over the past 50 episodes or so, then it may also be your learning rate or epsilon.

If your learning rate is too high or low you may either never be able to reach a fully optimized DQN or be stuck in a local minimum. An easy way to solve a problem like this would be to just add a simple learning rate decay, so that the learning rate would start off high as to not get stuck in local minima yet decay to a small enough number where you know that the agent has found the global minimum.

The other problem could be that your epsilon may be too high or low. A high epsilon will never allow the agent to fully optimize while a low epsilon doesn't allow the agent to explore and discover better strategies, so it may be a good idea to mess around with this as well.

The only way to really gauge the agent's performance would be to look at it making some decisions through a video or by analyzing some of its predictions. And, if it seems to be doing well, then it may very well be optimized, but, if the agent is not performing as well as it should, then it may be a good idea to try out some of the strategies above.

",38062,,2444,,6/21/2020 10:19,6/21/2020 10:19,,,,0,,,,CC BY-SA 4.0 22054,1,,,6/21/2020 10:52,,1,165,"

I'm trying to understand the DDPG algorithm shown at this page. I don't know what should the result of the gradient at step 14 be.

Is it a scalar that I have to use to update all the weights (so all weights are updated with the same value)? Or is it a list with a different values to use for updating for each weight? I'm used to working with loss functions and an $y$ target, but here I don't have them so I'm quite confused.

",37169,,2444,,6/22/2020 17:05,7/12/2022 23:04,"In Deep Deterministic Policy Gradient, are all weights of the policy network updated with the same or different value?",,1,0,,,,CC BY-SA 4.0 22055,1,,,6/21/2020 14:27,,1,61,"

Does this figure correctly represent the overall general idea about actor-critic methods for on-policy (left) and off-policy (right) case?

I am a bit confused about the off-policy case (right figure). Why does the right figure represent the off-policy actor-critic methods?

",37381,,2444,,6/22/2020 12:08,6/22/2020 12:08,Is this figure a correct representation of off-policy actor-critic methods?,,0,0,,,,CC BY-SA 4.0 22056,1,,,6/21/2020 14:41,,1,440,"

I am writing a couple of different reinforcement learning models based on Rainbow DQN or some PG models. All of them internally use an LSTM network because my project is using time series data.

I wanted to test my models using OpenAI Gym before I add too many domain specific code to the models.

The problem is that all of the Atari games seem to fall into the CNN area, which I don't use.

Is it possible to use OpenAI Gym to test any time series data driven RL models/networks?

If not, is there any good environment that I can use to examine the validity of my models?

",37615,,,,,11/18/2020 16:03,How do I test an LSTM-based reinforcement learning model using any Atari games in OpenAI gym?,,1,0,,,,CC BY-SA 4.0 22057,2,,22056,6/21/2020 15:10,,1,,"

If I understand your problem correctly, you can test on just about any environment, and just omit parts of the observations to ensure your RNN is learning. For example, you can test on cartpole, ignoring the velocity and angular velocity states. This way the MDP isn't actually Markovian and you'll need the RNN to learn.
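
A minimal sketch of this idea (assuming the classic, pre-0.26 gym API; in a full implementation you would also adjust the wrapper's observation_space):

import gym
import numpy as np

class PositionOnlyCartPole(gym.ObservationWrapper):
    def observation(self, obs):
        # CartPole obs = [position, velocity, angle, angular velocity];
        # keep only position and angle so the task becomes partially observable
        return np.array([obs[0], obs[2]], dtype=np.float32)

env = PositionOnlyCartPole(gym.make("CartPole-v1"))
obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
print(obs)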

",37829,,,,,6/21/2020 15:10,,,,4,,,,CC BY-SA 4.0 22058,2,,22054,6/21/2020 15:13,,1,,"

Each Q output is a scalar, so the sum of all those is a scalar. Thus, you're taking a gradient wrt your parameters of a scalar. The result is a vector with one entry per parameter.

",37829,,,,,6/21/2020 15:13,,,,2,,,,CC BY-SA 4.0 22059,1,,,6/21/2020 17:01,,0,53,"

In this article, there is an explanation (with an example) of how policy iteration works.

It seems that, if we replace all the probabilities of moves in the example by new probabilities where the best action is taken 100% of the time and the other moves are taken 0% of the time, then the final policy will end up being (south, south, north), as in the example provided. However, if we are certain of our moves, we could go north and then south in the example to get most of the reward.

In other words, it seems incorrect to calculate the value of a state by summing up rewards over all possible actions out of the state, because, as in the case I described above, or in a case where one action gives you a huge penalty, you are definitely going to avoid that action, so the state value is unfairly weighted.

Why care about the value of the action which I'm not gonna take?

",38069,,2444,,6/22/2020 20:06,6/22/2020 20:06,Why care about the value of the action which I'm not gonna take in policy iteration?,,0,2,,,,CC BY-SA 4.0 22062,1,,,6/21/2020 17:53,,1,58,"

I made a DQN model and plotted its reward curve. You can see intuitively that the curve has already converged, since its reward value now just oscillates. How can I show with confidence that my DQN has already reached its optimum, other than by just showing the curve? Is there any way to validate that it is already optimized?

",37382,,,,,6/21/2020 17:53,Is there a way to show convergence of DQN other than by eye observation?,,0,0,,,,CC BY-SA 4.0 22063,1,,,6/21/2020 18:52,,0,293,"

My DQN model outputs the best traffic light state in an intersection. I used different values of batch size and learning rate to find the best model. How would I know if I got the optimal hyperparameter values?

",37382,,,,,6/21/2020 21:24,How to validate that my DQN hyperparameters are the optimal?,,1,0,,,,CC BY-SA 4.0 22064,1,22197,,6/21/2020 19:56,,1,964,"

I'm currently modeling a DQN in reinforcement learning. My question is: what are the best practices related to Boltzmann exploration? My current thoughts are: (1) Let the temperature decay through training and finally stop at 0.01, at which point the method will almost always select the best action, with almost no randomness. (2) Standardize the predicted Q values before feeding them into the softmax function.
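
For reference, here is a minimal sketch of the two options I describe (the shapes, decay constant and temperature floor are just placeholders):

import numpy as np

def boltzmann_action(q_values, temperature):
    q = (q_values - q_values.mean()) / (q_values.std() + 1e-8)  # option (2): standardize the Q values
    logits = q / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return np.random.choice(len(q_values), p=probs)

temperature, t_min, decay = 1.0, 0.01, 0.999
for step in range(5):
    action = boltzmann_action(np.random.randn(4), temperature)
    temperature = max(t_min, temperature * decay)               # option (1): decay the temperature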

Currently, I'm using (2), and the reward is suffering from high variance. I'm wondering whether it has something to do with the exploration method?

",37178,,,,,6/26/2020 18:55,What's the best practice for Boltzmann Exploration temperature in RL?,,1,0,,,,CC BY-SA 4.0 22065,2,,22063,6/21/2020 21:24,,1,,"

If possible, I would try to calculate what the (theoretical) maximum throughput through the intersection is for a given time interval. If the control behavior that the DQN produces comes empirically close to the maximally possible throughput score, the model is good. Otherwise, you could measure and compare the throughput of different models and choose the best performing one.

",37982,,,,,6/21/2020 21:24,,,,6,,,,CC BY-SA 4.0 22069,1,,,6/22/2020 9:02,,1,72,"

I have a paired dataset of binary images A and B: A1 paired with B1, A2-B2, etc., with simple shapes (rectangles, squares).

The external software receives both images A and B and it returns a number that represents the error.

I need a model that, given images A and B, can modify A into A' by adding or removing squares, so that the error from the software is minimized. I don't have access to the source code of the software so I don't know how it works.

I tried to make a NN that copies the functionality of the software, and a generative NN to generate the modified image A' but I haven't got good results.

The software can only receive binary images, so I cannot use its error directly in a loss function: the last layer of my generator is a softmax and, if I apply a threshold to binarize its output, I will lose track of the gradients, so I cannot apply gradient descent.

Someone told me that when you cannot calculate the gradient of the loss with respect to the weights, reinforcement learning with policy gradients is a good solution.

I'm new to this field, so I want to be sure I'm going in the right direction.

",38083,,2444,,6/22/2020 12:03,6/22/2020 12:03,Is Reinforcement Learning what I need for this image to image translation problem?,,0,3,,,,CC BY-SA 4.0 22070,1,,,6/22/2020 11:31,,1,129,"

I am a computer science student. I learned about programming languages recently, but I don't know much about artificial intelligence.

I want to know, why don't we program something in a way that we could tell the program

Hey! Do this for me!

And then just sit down and wait for the AI to do the job?

Is this currently possible to do?

",38086,,2444,,6/22/2020 14:00,7/2/2020 20:35,Can we give a command to an AI and wait for it to do the job without explicitly telling it how to do it?,,2,0,,,,CC BY-SA 4.0 22071,2,,22070,6/22/2020 12:37,,4,,"

Normally when you write a program, you are acting like a boss that micromanages the job, telling the workers how to accomplish a task, perhaps without even letting them know what the purpose is. What you are hoping to be is a boss that gives the workers a goal and allows them to determine how to accomplish it.

In many ways, that is one of the aims of AI.

We already have small examples, such as a smart washing machine that weighs the clothes, monitors the amount of dirt in the water, and continually decides how much water to add or drain, when to agitate, when to rinse, and when to spin. All you had to tell it was "clean these", and perhaps say what kind of material they are made of.

As a much larger example, automobiles were traditionally operated by turning the steering wheel to cause it to change direction and by pressing the pedals to increase or decrease the speed, but now we are working on car controllers that can respond to "go to Cleveland" by determining the best speed and direction by itself.

But note that those two examples (the first requiring only a little "intelligence", the second a lot) were for very specific requests that could be expressed in a few words. As soon as the request becomes even slightly more complicated, creating a solution becomes much, much more difficult.

Ask your "Do this for me!" request of a human being. The first thing they'll do is ask "what does this mean?". And then you'll have to give a lot of details. And then they'll have questions about what you really want. And so on. Providing the requirements for "this" will be neither simple nor easy.

Human intelligence is still vastly superior to current AI. In particular, it is capable not only of recognizing that additional information is required, but also of intuitively knowing what that missing information must be like.

",36323,,36323,,6/24/2020 18:54,6/24/2020 18:54,,,,2,,,,CC BY-SA 4.0 22072,1,,,6/22/2020 12:40,,0,70,"

I am quite new to deep learning. I just finished the deep learning specialization by Professor Andrew NG and Deep Learning AI. Now, my professor (instructor) has advised me to look into some classic papers for aspect extraction and opinion mining from video. Could anyone suggest me some resources where I can get started? Can anyone suggest some papers I should read? Maybe a course or a book or some links to descriptive sessions. Your help would be appreciated.

",38087,,38087,,7/17/2020 1:10,12/4/2022 18:07,What are some good papers or resources for aspect extraction and opinion modelling from video or audio?,,1,0,,,,CC BY-SA 4.0 22074,1,22076,,6/22/2020 19:39,,1,185,"

The concept of experience replay is saving our experiences in our replay buffer. We select at random to break the correlation between consecutive samples, right?

What would happen if we calculate our loss using just one experience instead of a mini-batch of experiences?

",37831,,2444,,6/22/2020 19:52,6/22/2020 20:09,What would happen if we sampled only one tuple from the experience replay?,,1,0,,,,CC BY-SA 4.0 22075,1,22077,,6/22/2020 19:59,,2,325,"

If I have a convolutional neural network, does the input dimension change the number of parameters? And if yes, why? If the sizes and lengths of the filters stay the same, how can the number of parameters in the network increase?

",35615,,2444,,6/22/2020 20:13,6/23/2020 12:02,Does the number of parameters in a convolutional neuronal network increase if the input dimension increases?,,1,0,,,,CC BY-SA 4.0 22076,2,,22074,6/22/2020 20:09,,0,,"

The concept of experience replay is saving our experiences in our replay buffer. We select at random to break the correlation between consecutive samples, right?

Yes that is a major benefit of using a replay buffer.

A secondary benefit is the ability to use the same sample more than once. This can lead to better sample efficiency, although that is not guaranteed.

What would happen if we calculate our loss using just one experience instead of a mini-batch of experiences?

The algorithm is still valid, but the gradient estimate for the update step would be based on a single record of [state, action, reward, next state]. This would be a high variance update process, with many steps in wrong directions, but in expectation over many steps you should still see a correct gradient. You would probably need to compensate for the high variance per sample by reducing the learning rate.

In addition, assuming the standard approach of collecting one time step then making one update to DQN neural network, each piece of experience would only be used once on average before being discarded.

These two effects will likely combine such that the learning process would not be very sample efficient.

The size of the minibatch is one of many hyperparameters you can change in DQN. It might be the case for some problems that choosing a low minibatch size is helpful, provided other adjustments (such as a lower learning rate) are made along with it. If you are not sure, you mostly have to try and see.

In my experience on a small range of problems, a moderate size of minibatch - ranging from 10 to 100 - has worked the best in terms of end results of high scoring agents. However, I have not spent a long time trying to make low batch sizes work.

",1847,,,,,6/22/2020 20:09,,,,5,,,,CC BY-SA 4.0 22077,2,,22075,6/22/2020 20:23,,1,,"

If I have a convolutional neuronal network, does the input dimension change the number of parameters? And if yes, why?

If the convolutional neural network (CNN) only uses convolutional layers, then the number of parameters does not increase as a function of the spatial dimensions ($x$ and $y$) of the input. This is one of the advantages of CNNs!

The reason is quite simple: the parameters of the convolutional layers are the kernels (aka filters), which typically have fixed dimensions and can, nevertheless, be applied to inputs of different spatial dimensions, provided that the necessary padding is used. However, note that padding can create bigger feature maps, but feature maps are not the parameters of the neural network: they are the output of the convolutional layers. That's probably what confuses you, when you see a diagram of a CNN, because you see bigger feature maps and you might think that the number of parameters increases.

Here's a TensorFlow 2 (and Keras) program that shows that the number of parameters does not change as a function of the $x$ and $y$ dimension of the input.

import tensorflow as tf

input_shapes = [(2 * k, 2 * k, 3) for k in range(2, 6)]

for input_shape in input_shapes:
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Input(shape=input_shape))
    model.add(tf.keras.layers.Conv2D(10, kernel_size=3, use_bias=True))
    model.summary() # The total number of parameters is always 280

The parameters of a convolutional layer can increase if you increase the size of each kernel and the number of kernels, but this does not necessarily depend on the input.

The parameters of the CNN can also increase if you increase the depth of the input, but that's typically fixed (either $3$ for RGB images or $1$ for grayscale images). The reason is quite simple too: the kernels in the first convolutional layer (connected to the input layer) will have the same depth as the depth of the input.

If your CNN also has fully connected layers, then the number of parameters also depends on the dimensions of the inputs. This is because the parameters of the fully connected layers depend on the number and dimensions of the feature maps (remember the flatten layer before the fully connected layers?), which, as I said, can increase as a function of the input.
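
As a minimal illustration (in the same style as the snippet above, not part of the original answer), adding a Flatten + Dense head makes the parameter count grow with the spatial size of the input:

import tensorflow as tf

for k in (8, 16):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(k, k, 3)),
        tf.keras.layers.Conv2D(10, kernel_size=3),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(5),
    ])
    print(k, model.count_params())  # the total differs between the two input sizes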

If you don't want to use fully connected layers, you may want to try fully convolutional networks (FCNs), which do not make use of fully connected layers, but can, nonetheless, be used to solve classification (and other) tasks.

",2444,,2444,,6/23/2020 12:02,6/23/2020 12:02,,,,0,,,,CC BY-SA 4.0 22079,1,,,6/22/2020 20:51,,2,330,"

Model Description: Model based(assume known of the entire model) Markov decision process.

Time($t$): Finite horizon discrete time with discounting factor

State($x_t$): Continuous multi-dimensional state

Action($a_t$): Continuous multi-dimensional action (deterministic action)

Feasible (possible) actions ($A_t$): the set of possible actions is a continuous multi-dimensional space; there is no discretization of the action space.

Transition kernel($P$): known and have some randomness associated with the current state and next state

Reward function: known in explicit form and terminal reward is know.

The method I tried to solve the model:

  1. Discretize the state space and construct a multi-dimensional grid for it. Starting from the terminal state, I use backward induction to reconstruct the value function of the previous period. By using the Bellman equation, I need to solve an optimization problem, selecting the action that gives me the best objective value.

$$V_t(x_t) = max_{a_t \in A_t}[R_t(x_t,a_t) + E[\tilde{V}_{t+1}(x_{t+1})|x_t, a_t]]$$

where $\tilde{V}_{t+1}$ here is an approximation using interpolation method, since the discrete values are calculated from the previous time episode. In other words: $\tilde{V}_{t+1}$ is approximated by some discrete value: $$V_{t+1}(x_0),V_{t+1}(x_1),V_{t+1}(x_2)\cdots V_{t+1}(x_N)$$ where $x_0,x_1,x_2\cdots x_N$ are grid points from discretizing the state space.

In this way, for every time step $t$, I have a value for every grid point, and the value function can be approximated by using some interpolation method (probably cubic splines). But here are some of the problems: 1. What kind of interpolation is suitable for high-dimensional data? 2. Say we have five dimensions for the state and I discretize each dimension with 5 grid points; then there are $5^5 = 3125$ discrete state values I need to evaluate through the optimization (curse of dimensionality). 3. What kind of optimizer should I use? Since I do not know the shape of the objective function, I do not know whether it is smooth or concave, so I may have to use a robust optimizer, probably an evolutionary one. Eventually, I end up with this computational complexity, and the computation takes too long.

Recently, I learned about policy gradient techniques from OpenAI: https://spinningup.openai.com/en/latest/spinningup/rl_intro3.html This method avoids the backward induction and the interpolation-based approximation of the value function. Instead, it obtains an approximate policy by first guessing a functional form of the policy and then taking an approximate gradient of the policy objective using a sampling (simulation) method. Since the model is known, it can sample new trajectories every time and use them to update the policy via stochastic gradient ascent, updating the policy in this way until it reaches some sort of convergence.

I am wondering if this type of technique could potentially reduce the computational complexity significantly. Any advice helps, thanks a lot.

",38097,,38097,,6/23/2020 15:28,11/10/2022 20:08,Continuous state and continuous action Markov decision process time complexity estimate: backward induction VS policy gradient method (RL),,1,0,,,,CC BY-SA 4.0 22080,1,,,6/22/2020 21:32,,0,846,"

Let's assume that we embedded a sequence of length 49 into a matrix using 512-d embeddings. If we then multiply the matrix by its transposed version, we receive a matrix of 49 by 49, which is symmetric. Let's also assume we do not add the positional encoding and we have only one attention head in the first layer of the transformer architecture.

What would the result of the softmax on this 49 by 49 matrix look like? Is it still symmetric, or is the softmax applied separately to each row of the matrix, resulting in a non-symmetric matrix? My guess would be that the matrix should not be symmetric anymore, but I'm unsure about that.

I ask this to verify whether my implementation is wrong or not, and what the output should look like. I have seen so many sophisticated and different implementations of the transformer architecture in different frameworks that I can't answer this question for myself right now. I am still trying to understand the basic building blocks of the transformer architecture.
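
For reference, here is a minimal framework-independent NumPy version of the computation I am asking about; when I run it, the row-wise softmax output is not symmetric, which is the behaviour I want to confirm:

import numpy as np

rng = np.random.default_rng(0)
X = 0.04 * rng.normal(size=(49, 512))  # 49 tokens, 512-d embeddings (small scale so the softmax is not saturated)
scores = X @ X.T                       # 49 x 49 and symmetric

def row_softmax(m):
    e = np.exp(m - m.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

A = row_softmax(scores)
print(np.allclose(scores, scores.T))  # True: the score matrix is symmetric
print(np.allclose(A, A.T))            # False: the row-wise softmax output is not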

",38099,,2444,,8/24/2021 11:22,5/21/2022 13:00,Is the self-attention matrix softmax output (layer 1) symmetric?,,1,0,,,,CC BY-SA 4.0 22082,2,,17020,6/22/2020 22:04,,3,,"

CNNs learn convolutional filters that are trained to find local, recurring patterns in some kind of image/volume data. 1D convolution is actually a thing, but I think what would be more suitable for your case is using Recurrent Neural Nets. They are specifically designed for working on time series of heterogeneous data.

Update:

I would like to reconsider the answer I gave earlier. First of all, in case of dealing with time series data where it is uncertain over which time span a given event to be detected lasts, I'd generally consider using Recurrent Neural Networks (RNNs) rather than CNNs, since RNNs maintain so-called hidden states that can carry potentially useful information over time, i.e. past inputs may influence present outputs. So, either if you know that the event to be detected spans many time steps (which might not even be consecutive time steps) or if you are uncertain about how long exactly the time span of an event to be detected is, then I'd suggest going for RNNs. If you decide to use RNNs, adding extra statistics to your data might make the learning task easier for the network, but is not strictly required since you would expect an RNN to learn relevant time-series statistics on its own.

However, there is a case where using CNNs might actually suffice and yield good results as well. Consider the case of the figure shown below, where the white tiles are time series inputs and the blueish area is the area that a CNN filter/kernel (or multiple ones) is working on right now.

As you can see, there is a time series of data points and some statistics derived from that time series that have been appended to the input data stream. So, in the figure, the augmented time series input consists of the measurements themselves as well as mean, variance, and skewness values derived from the time series data (over consecutive time steps). Under the assumption that the type of event you are interested in is bounded, in terms of time steps, by the width of the CNN filter(s)/kernel(s) that you slide over your input data, a simple CNN might suffice to detect the events you are interested in. In other words, if the width of your filter spans at least as many time steps as an event of interest lasts, then your CNN will be able to learn to detect the event (given proper training). In that case, adding further statistics to your input data might indeed help or even be necessary to detect certain events, since it implicitly widens the time window that your CNN filter can observe at any given time step: a mean (or other statistic) may also consider past data that the filter itself might not span at the time step where you want to detect an event.
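
A minimal Keras sketch of this setup (my own illustration; the channel layout, filter width and output head are just placeholders):

import tensorflow as tf

window = 5  # an event of interest must fit within this filter width
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, 4)),          # (time, [measurement, mean, variance, skewness])
    tf.keras.layers.Conv1D(16, kernel_size=window, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # event / no event
])
model.summary()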

",37982,,37982,,2/6/2021 13:05,2/6/2021 13:05,,,,4,,,,CC BY-SA 4.0 22085,1,,,6/23/2020 2:23,,1,304,"

I have been creating sports betting algorithms for many years using Microsoft access and I am transitioning to the ML world and trying to get a grasp on determining the success of my algorithms. I have exported my algorithms as CSV files dating back to the 2013-14 NBA season and imported them into python via pandas.

The purpose of importing these CSV files is to determine the future accuracy of these algorithms using ML. Here are the algorithm records based on the Microsoft access query:

  • A System: 471-344 (58%) +92.60

  • B System: 317-239 (57%) +54.10

  • C System: 347-262 (57%) +58.80

I have a total of 8,814 records in my database, however, the above systems are based on situational stats, e.g., Team A fits an algorithm if they have better Field Goal %, Played Last Game Home/Away, More Points Per Game, etc...

Here is some of the code that I wrote using Jupyter to determine the accuracy:

from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.svm import LinearSVC
from sklearn.feature_selection import RFE
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

# X and y come from the CSV files imported via pandas (features and win/loss label)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf = LinearSVC(C=1.0, penalty="l2", dual=False)
clf.fit(X_train, y_train)
pred_clf = clf.predict(X_test)

# 10-fold cross-validated accuracy on the full data set
scores = cross_val_score(clf, X, y, cv=10)

# recursive feature elimination down to 10 features
rfe_selector = RFE(clf, n_features_to_select=10)
rfe_selector = rfe_selector.fit(X, y)
rfe_values = rfe_selector.get_support()

train = accuracy_score(y_train, clf.predict(X_train))
test = accuracy_score(y_test, pred_clf)

print("Train Accuracy:", train)
print("Test Accuracy:", test)
print(classification_report(y_test, pred_clf, zero_division=1))
print(confusion_matrix(y_test, pred_clf))
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))

Here are the results from the code above by system:

A System:

  • Train Accuracy: 0.6211656441717791
  • Test Accuracy: 0.5153374233128835
  • F1 Score: 0.52
  • CONFUSION MATRIX: [[16 50] [29 68]]
  • Accuracy: 0.55 (+/- 0.10)

B System:

  • Train Accuracy: 0.6306306306306306
  • Test Accuracy: 0.5178571428571429
  • F1 Score: 0.52
  • CONFUSION MATRIX: [[49 23] [31 9]]
  • Accuracy: 0.55 (+/- 0.08)

C System:

  • Train Accuracy: 0.675564681724846
  • Test Accuracy: 0.5409836065573771
  • F1 Score: 0.54
  • CONFUSION MATRIX: [[15 29] [27 51]]
  • Accuracy: 0.57 (+/- 0.16) ​

In order to have a profitable system, the accuracy only needs to be 52.5%. If I base my systems off of the Test Accuracy, only the C System is profitable. However, all are profitable if based on Accuracy (mean & standard deviation).

My question is, can I rely on my Accuracy (mean & standard deviation) for future games even though my Testing Accuracy is lower than 52.5%?

If not, any suggestions are greatly appreciated on how I can gauge the future results on these systems.

",38101,,2444,,6/23/2020 9:44,3/20/2021 13:01,Is my 57% sports betting accuracy correct?,,1,0,,,,CC BY-SA 4.0 22086,1,,,6/23/2020 4:08,,1,399,"

What are the main differences between a language model and a machine translation model?

",9863,,2444,,6/23/2020 9:45,6/23/2020 11:47,What are the main differences between a language model and a machine translation model?,,1,1,,,,CC BY-SA 4.0 22087,1,,,6/23/2020 5:40,,1,156,"

I'm relatively new to RNNs, and I'm trying to train generative and guessing neural networks to produce sequences of real numbers that look random. My architecture looks like this (each "circle" in the output is the adversarial network's guess for the generated circle vertically below it -- having seen only the terms before it):

Note that the adversarial network is rewarded for predicting outputs close to the true values, i.e. the loss function looks like tf.math.reduce_max((sequence - predictions) ** 2) (I have also tried reduce_mean).

I don't know if there's something obviously wrong with my architecture, but when I try to train this network (and I've added a reasonable number of layers), it doesn't really work very well.

If you look at the result of the last code block, you'll see that my generative neural network produces things like

  • [0.9907787, 0.9907827, 0.9907827, 0.9907827, 0.9907827, 0.9907827, 0.9907827, 0.9907827, 0.9907827, 0.9907827]

But it could easily improve itself by simply training to jump around more, since you'll observe that the adversarial network also predicts numbers very close to the given number (even when the sequence it is given to predict is one that jumps around a lot!).

What am I doing wrong?

",38105,,38105,,6/23/2020 12:52,11/11/2022 0:05,"Why does my ""entropy generation"" RNN do so badly?",,1,0,,,,CC BY-SA 4.0 22088,1,,,6/23/2020 6:46,,1,31,"

Are there any existing ontologies available for "engineering" data?

By "engineering" I mean pertaining to the fields of electrical, mechanical, thermal, etc., engineering.

",38107,,2444,,6/23/2020 9:52,6/23/2020 9:52,Are there any existing ontologies that model engineering data?,,0,6,,,,CC BY-SA 4.0 22090,1,,,6/23/2020 7:35,,1,41,"

In different books on reinforcement learning, policy-based methods are motivated by their ability to handle large (continuous) action spaces. Is this the only motivation for the policy-based methods? What if the action space is tiny (say, only 9 possible actions), but each action costs a huge amount of resources and there is no model for the MDP, would this also be a good application of policy-based methods?

",37627,,2444,,6/23/2020 10:45,6/23/2020 10:45,Are policy-based methods better than value-based methods only for large action spaces?,,0,0,,,,CC BY-SA 4.0 22091,2,,22085,6/23/2020 7:41,,1,,"

My question is, can I rely on my Accuracy (mean & standard deviation) for future games even though my Testing Accuracy is lower than 52.5%?

If by Accuracy you mean training accuracy, then absolutely you should not trust those values. For almost all machine learning algorithms there is a problem with overfitting to training data, which will result in reporting over-estimates for metrics against the training data. This is why you should always have a test data set of unseen data, because the performance on the training data is not what you truly care about - it is the performance on new unseen data, or how the model generalises, that matters when you want to use it to predict new values.

The test data set gives you a measure of how well your model will generalise, because it simulates the performance of the model against unseen data.

However, the test data set is not perfect for all uses. Random chance will cause your test measurements to vary. If you use your test results to select a model from a list of models based on performance, then:

  • The model you selected does have the highest likelihood of being the best one, as you want. It is not guaranteed to be the best though.

  • The test result is likely to be an over-estimate of general performance. That is because you cannot separate the random fluctuations in test results from real performance improvements, you only see the combination.

The more tests you run, the more likely you will have an inflated view of the performance of your best model.

The usual fix for this is to use cross validation. Cross validation uses yet another data set to help you with the first step of choosing the best model. After you have chosen your best performing model using cross validation, then you can use a test set that you have kept in reserve to measure the performance. Because you have not used that test set to select a model, then it will give you an unbiased measure of performance. You should still bear in mind that this measure still comes with implied error bars (and with other caveats, such as any inherent sampling bias).

When predicting future results from past data, you do also need to be concerned about population drift and non-stationary problems. This is a common issue in any data set that includes complex behaviour that can evolve over time. This is very likely to affect results from sports teams where many conditions affecting performance evolve over the same timescales that you are trying to predict. In practice this means you will want to feed in new data and re-train your models constantly, and despite this your models will tend to lag behind reality. It is unlikely you will achieve even the test result accuracy in the long term.

You can add one more thing to your testing routines to help assess the impact of non-stationarity - when reserving data for the test or cross-validation sets, don't do it at random, instead reserve all the latest results (e.g. last four weeks of data) for test only. You are likely to see a reduction in the metrics when you do this for a problem domain like sport, but that should give you a more realistic assessment of the model you are building.
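
A minimal sketch of this time-ordered split (the file and column names here are hypothetical; adapt them to your own CSV layout):

import pandas as pd
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

df = pd.read_csv("games.csv", parse_dates=["game_date"]).sort_values("game_date")
cutoff = df["game_date"].max() - pd.Timedelta(weeks=4)

train, test = df[df["game_date"] <= cutoff], df[df["game_date"] > cutoff]
features = [c for c in df.columns if c not in ("game_date", "won")]  # assumes numeric feature columns

clf = LinearSVC(C=1.0, dual=False).fit(train[features], train["won"])
print(accuracy_score(test["won"], clf.predict(test[features])))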

",1847,,,,,6/23/2020 7:41,,,,15,,,,CC BY-SA 4.0 22093,2,,16970,6/23/2020 8:02,,0,,"

This is still Q-learning; remember that Q-learning is an off-policy, value-based method. The update applies the Bellman optimality operator, $\mathcal{T}Q(s, a) = r + \gamma \max_{a'} Q(s', a')$. If you have enough exploration, repeatedly applying this operator always takes $Q$ to its optimal fixed point.
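
For concreteness, a minimal tabular sketch (toy sizes, not tied to the question's setup) of the sample-based update that applies this operator:

import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99

# one tabular Q-learning update from a sampled transition (s, a, r, s_next)
s, a, r, s_next = 0, 1, 1.0, 2
td_target = r + gamma * np.max(Q[s_next])    # sample-based Bellman optimality backup
Q[s, a] += alpha * (td_target - Q[s, a])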

",38110,,,,,6/23/2020 8:02,,,,0,,,,CC BY-SA 4.0 22094,1,22635,,6/23/2020 8:21,,3,246,"

In the proof of the policy gradient theorem in the RL book of Sutton and Barto (that I shamelessly paste here):

there is the "unrolling" step that is supposed to be immediately clear

With just elementary calculus and re-arranging of terms

Well, it's not. :) Can someone explain this step in more detail?

How exactly is $Pr(s \rightarrow x, k, \pi)$ deduced by "unrolling"?

",37627,,2444,,5/11/2022 11:21,1/8/2023 18:00,"How exactly is $Pr(s \rightarrow x, k, \pi)$ deduced by ""unrolling"", in the proof of the policy gradient theorem?",,2,0,,,,CC BY-SA 4.0 22095,2,,22094,6/23/2020 8:50,,-1,,"

It looks like "v of s prime" is just substituted with the already derived value for "v of s". You can call it a recursion of a kind. In other words, v(s) is dependent on v(s') and that implies that v(s') is dependent on v(s''). So we can combine that and get the dependency of v(s) of v(s'').

",28041,,,,,6/23/2020 8:50,,,,3,,,,CC BY-SA 4.0 22097,1,,,6/23/2020 9:41,,1,110,"

I am considering using Reinforcement Learning to do optimal control of a complex process that is controlled by two parameters

$(n_O, n_I), \quad n_I = 1,2,3,\dots, M_I, n_O = 1,2,3,\dots, M_O$

In this sense, the state of the system is represented as $S_t = (n_{O,t}, n_{I,t})$. I say "represented", because there is actually a relatively complex system, a solution of coupled Partial Differential Equations (PDEs), in the background.

Is this problem considered a partially observable Markov Decision Process (POMDP) because there is a whole mess of things behind $S_t = (n_{O,t}, n_{I,t})$?

The reward function has two parameters

$r(s) = (n_{lt}, \epsilon_\infty)$

that are results of the environment (solution of the PDEs).

In a sense, using $S_t = (n_{O,t}, n_{I,t})$ makes this problem similar to Gridworld, where the goal is to go from $S_0 = (M_O, M_I)$ to a state with smaller $(n_O, n_I)$, given reward $r$, where the reward changes from state to state and episode to episode.

Available action operations are

$inc(n) = n + 1$

$dec(n) = n - 1$

$id(n) = n$

where $n$ can be $n_I$ or $n_O$. This means there are $9$ possible actions

$A=\{(inc(n_O), inc(n_I)),(inc(n_O), dec(n_I)),(inc(n_O), id(n_I)),(dec(n_O), inc(n_I)), \dots\}$

to be taken, but there is no model for the state transition, and the state transition is extremely costly.

Intuitively, just as with solving a kinematic equation for a point in space, solving coupled PDEs from fluid dynamics should have the Markov property (strongly if the flow is laminar; for turbulence, I have no idea). I've also found a handful of papers where a fluid dynamics problem is parameterized and a policy-gradient method is simply applied.

I was thinking of using REINFORCE as a start, but the fact that $(n_O, n_I)$ does not fully describe the state, together with questions like this one on POMDPs and this one about simulations, makes me suspicious. Could REINFORCE be used for such a problem, or is there something that prevents this?

",37627,,37627,,6/23/2020 9:58,6/23/2020 9:58,How to choose an RL algorithm for a Gridworld that models a much more complex problem,,0,0,,,,CC BY-SA 4.0 22099,2,,20118,6/23/2020 9:56,,0,,"

Shouldn't this allow us to find better actions more quickly compared to policy gradient updates?

It depends on the nature of the simulation. If the simulation models a car as a solid body moving with three degrees of freedom $(x,y,\theta)$ in a plane (hopefully it doesn't hit anything and get propelled vertically), the three ordinary differential equations of rigid body motion can be solved quite quickly. Compare that to a simulation used to model the path of least resistance of a ship on a wavy sea, where fluid dynamics equations must be solved, which requires a huge amount of resources. Yes, the response time needed for a ship is much longer than for a car, but to compute it predictively one needs a huge amount of computational power.

",37627,,,,,6/23/2020 9:56,,,,0,,,,CC BY-SA 4.0 22100,1,22110,,6/23/2020 10:38,,1,200,"

Why don't we use a trigonometric function, such as $\tan(x)$, where $x$ is an element of the interval $[0, \pi/2)$, instead of the sigmoid function for the output neurons (in the case of classification)?

",38113,,2444,,6/23/2020 11:02,6/23/2020 13:45,Why don't we use trigonometric functions for the output neurons?,,2,2,,,,CC BY-SA 4.0 22101,2,,21632,6/23/2020 10:46,,0,,"

BUT there are 2 different attention layers and one of which do not use the encoder’s output at all. So, what are the keys and values now?

The first attention layer in the decoder is the "Masked Multi-Head Attention" layer and is the self-attention layer, calculating how much each word is related to each word in the same sentence. However, our aim in the decoder is to generate the next French word and so for any given output French word we can use all the English words but only the French words previously seen in the sentence. We, therefore, "mask" the words that appear later in the French sentence by setting their attention scores to a large negative value before the softmax, so that their attention weights become 0 and the attention network cannot use them.
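
A minimal numerical sketch of such a causal mask (toy sizes, not the original implementation):

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

T = 4                                         # target sentence length
scores = np.random.randn(T, T)                # raw query-key dot products
mask = np.triu(np.ones((T, T)), k=1)          # 1 above the diagonal = "future" positions
scores = np.where(mask == 1, -1e9, scores)    # block attention to later words
weights = softmax(scores, axis=-1)            # rows sum to 1, future positions get ~0 weight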

How these 3 words are converted to 4 words

The second attention block in the decoder is where the English to French word mapping happens. We have a query for every output position in the French sentence and a key/value for every English input word. We calculate relevance scores from the dot product of the query and key and then obtain output scores for each predicted word from multiplying the relevance and value. The following diagram is useful to visualise how, for each predicted word, we can have relevance scores that can predict that one English word can be translated to multiple, or no French word.

In summary, the encoder discovers interesting things about the English sentence whilst the decoder predicts the next French word in the translation. It should be noted they use "Multi-Head Attention", meaning that a number (8 in the original paper) of attention vectors are calculated to learn attention mechanisms to pay attention to different things, for example, grammar, vocabulary, tense, gender, and the output is a weighted average of these.

",9554,,2444,,11/20/2020 12:56,11/20/2020 12:56,,,,0,,,,CC BY-SA 4.0 22102,1,22111,,6/23/2020 10:59,,2,288,"

I created a virtual 2D environment where an agent aims to find a correct pose corresponding to a target image. I implemented a DQN to solve this task. When the goal is fixed, e.g. the aim is to find the pose for position (1,1), the agent is successful. I would now like to train an agent to find the correct pose while the goal pose changes after every episode. My research pointed me to the term "Multi-Objective Deep Reinforcement Learning". As far as I understood, the aim here is to train one or multiple agents to achieve a policy approximation that fits all goals. Am I on the right track or how should I deal with different goal states?

",37623,,,,,6/23/2020 13:46,How to handle changing goals in a DQN?,,1,0,,,,CC BY-SA 4.0 22103,2,,22086,6/23/2020 11:47,,3,,"

A simple language model will give you the probability of a sequence of tokens (a sentence) for that language. So let's say you have trained a model for the English language: your model can give you the probability of any random English sentence.

Consider some sentence

$X$ $=$ "the quick brown fox jumps over the lazy dog" $=$ $x_1 \ x_2 \ x_3 \ ... \ x_n$

the model will give you $P(X)$.
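
For intuition, the model typically obtains this probability via the chain rule, scoring one token at a time:

$$P(X) = \prod_{i=1}^{n} P(x_i \mid x_1, \dots, x_{i-1})$$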

Moreover, if the model has been trained properly, then in the following scenario, where

$X$ $=$ "the quick brown fox jumps over the lazy dog" $=$ $x_1 \ x_2 \ x_3 \ ... \ x_n$

$Y$ $=$ "dog brown quick fox over the jumps lazy the" $=$ $y_1 \ y_2 \ y_3 \ ... \ y_m$

the model will always give $P(X) > P(Y)$, as it has learnt the structure of the language.

On the other hand, a machine translation model gives you the conditional probability of the next token given your source sentence and a partial target sentence. So if

$X = $ "I am a student" and $Y = $ "je suis" $= y_1, \ y_2$

the model will give you $P(y_3 \mid X, y_1, y_2)$,

where $X$ is the source sentence and $Y = y_1, y_2$ is a partial target sentence. The probability of the word/token "étudiant" would be the maximum among all words of the vocabulary.

",38114,,,,,6/23/2020 11:47,,,,0,,,,CC BY-SA 4.0 22104,1,22109,,6/23/2020 12:33,,0,263,"

Currently, I am reading Rethinking Model Scaling for Convolutional Neural Networks. The authors are talking about a different way of scaling convolutional neural networks by scaling all dimensions simultaneously and relative to each dimension. I understand the scaling methods regarding the depth of a network (# layers) and the resolution (size of the input image).

What I was stumbling over is the concept of the network's width (# channels). What is meant by the width or the number of channels of a network? I don't think it is the number of color channels, or is it? The number of color channels was the only link I found regarding the terms "ConvNets" and "number of channels".

",38119,,2444,,6/23/2020 16:58,6/23/2020 16:58,What is meant by the number of channels of a network?,,1,1,,,,CC BY-SA 4.0 22105,1,,,6/23/2020 12:43,,1,936,"

I am trying to understand the genetic algorithm in terms of feature selection and these features are extracted using a machine learning algorithm.

Let's suppose I have data of heart rate for 3 minutes collected from $50$ subjects. From these 3-minute heart rate, I extracted $5$ features, like the mean, standard deviation, variance, skewness and kurtosis. Now the shape of my feature set is (50, 5).

I want to know what are gene, chromosome and population in genetic algorithm related to the above scenario.

What I understand is that each feature is a gene, the set of all features for one subject (1, 5) is a chromosome, and the whole feature set (50, 5) is the population. But I think this concept is not correct, because, in a genetic algorithm, we start from a random population, whereas according to my understanding the complete data is the population, so how is the random population selected?

Can anyone help me to understand it?

",38120,,2444,,6/23/2020 16:40,4/20/2021 9:47,"What is meant by gene, chromosome, population in genetic algorithm in terms of feature selection?",,2,2,,,,CC BY-SA 4.0 22106,1,,,6/23/2020 13:06,,2,916,"

I'm building a denoising autoencoder. I want to have the same input and output shape image.

This is my architecture:

from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from tensorflow.keras.models import Model

IMG_HEIGHT, IMG_WIDTH = 1169, 827  # input image dimensions

input_img = Input(shape=(IMG_HEIGHT, IMG_WIDTH, 1))

x = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)



x = Conv2D(32, (3, 3), activation='relu', padding='valid')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)


# decodedSize = K.int_shape(decoded)[1:]

# x_size = K.int_shape(input_img)
# decoded = Reshape(decodedSize, input_shape=decodedSize)(decoded)


autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

My input shape is: 1169x827

This is Keras output:

Model: "model_6"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_7 (InputLayer)         [(None, 1169, 827, 1)]    0         
_________________________________________________________________
conv2d_30 (Conv2D)           (None, 1169, 827, 32)     320       
_________________________________________________________________
max_pooling2d_12 (MaxPooling (None, 585, 414, 32)      0         
_________________________________________________________________
conv2d_31 (Conv2D)           (None, 585, 414, 64)      18496     
_________________________________________________________________
max_pooling2d_13 (MaxPooling (None, 293, 207, 64)      0         
_________________________________________________________________
conv2d_32 (Conv2D)           (None, 291, 205, 32)      18464     
_________________________________________________________________
up_sampling2d_12 (UpSampling (None, 582, 410, 32)      0         
_________________________________________________________________
conv2d_33 (Conv2D)           (None, 582, 410, 32)      9248      
_________________________________________________________________
up_sampling2d_13 (UpSampling (None, 1164, 820, 32)     0         
_________________________________________________________________
conv2d_34 (Conv2D)           (None, 1162, 818, 1)      289       
===============================================================

How can I have the same input and output shape?

",38093,,2444,,6/23/2020 16:33,7/24/2020 9:01,How can I have the same input and output shape in an auto-encoder?,,2,0,,,,CC BY-SA 4.0 22108,2,,22100,6/23/2020 13:20,,2,,"

The main reason why the sigmoid function is used is because it 'does not blow up', since it always stays between 0 and 1. As for the ReLU, it is used because it is computationally cheap and it even alleviates the problem of vanishing gradients (and hence it is used more often than the sigmoid).

So a function like $\tan(x)$ will blow up for certain values of $x$. This can cause the problem of exploding gradients. So, I believe that, for this reason, $\tan(x)$ cannot be a good non-linearity to use.

As for any other function, it is more because of the results that we have gotten over the years, and the sigmoid and ReLU have been promising.

",38118,,,,,6/23/2020 13:20,,,,0,,,,CC BY-SA 4.0 22109,2,,22104,6/23/2020 13:27,,1,,"

It is exactly that - the number of color channels or any other analogue to color that you use.

",28041,,,,,6/23/2020 13:27,,,,3,,,,CC BY-SA 4.0 22110,2,,22100,6/23/2020 13:45,,0,,"

Although it's true that if you use certain trigonometric functions, such as the tangent, you could have numerical problems (as suggested in this answer), that's not the only reason for not using trigonometric functions.

Trigonometric functions are periodic. In general, we may not want to convert a non-periodic function to a periodic one. To be more concrete, let's suppose we use the sine function as the activation function of the output neurons of a neural network. Assuming only one input, if the input to any of those output neurons is $2 \pi k$ (i.e. $360k$ degrees), for any integer $k$, the result will always be $0$, but that may not be desirable.

",2444,,,,,6/23/2020 13:45,,,,0,,,,CC BY-SA 4.0 22111,2,,22102,6/23/2020 13:46,,1,,"

The simplest thing you can do is to add data regarding the target pose to the state vector. This will allow any generalisations that the agent learns that apply to similar poses to be used directly.
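
A minimal sketch (with hypothetical pose values, not your actual environment) of what augmenting the state with the target pose can look like:

import numpy as np

agent_pose = np.array([0.2, -0.5, 0.1])   # e.g. (x, y, angle) of the current pose
goal_pose = np.array([1.0, 1.0, 0.0])     # target pose sampled at the start of the episode

# the DQN simply sees the concatenation; the goal part stays constant within an episode
state = np.concatenate([agent_pose, goal_pose])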

Clearly in normal use, where the target pose remains fixed during the episode, then that part of the state information will not change either during the episode. You will also need to train with a large variety of target poses - so training will take longer.

Multi-Objective Deep Reinforcement Learning is slightly different in that it attempts to resolve prioritising between multiple sub-goals. It would also be a more complex solution, whilst augmenting the state vector should allow you to continue using a very similar DQN set up as you have already.

",1847,,,,,6/23/2020 13:46,,,,2,,,,CC BY-SA 4.0 22112,2,,22079,6/23/2020 14:03,,0,,"

Discretizing state/action space will always be a very expensive strategy, in fact I don't think you can do better than exponential time in state/action dimension that way.

Now, you still haven't really explained your algorithm with the discretized states and actions. Given your known models, it sounds like you're planning on doing dynamic programming. This would definitely be an exponential time algorithm. You also haven't explained how you'd use your knowledge of the model in your policy gradient implementation. If you're just using it to sample some trajectories to have more data for updates, policy gradient methods will be much more efficient (poly-time). Otherwise, if you're planning on using your model for some sort of MCTS planning, depending on how you implement that, it could be very inefficient as well.

",37829,,,,,6/23/2020 14:03,,,,2,,,,CC BY-SA 4.0 22113,2,,21536,6/23/2020 14:38,,1,,"

This seems to be inherited from the original Google implementation, which also uses 2 outputs (https://github.com/google-research/bert/blob/master/run_pretraining.py#L293). I can see two possible reasons that the original implementation uses 2 outputs:

  1. They are using the cross entropy loss, which typically works with log probabilities. To get probabilities they use softmax activation, which requires an output for each class. It is possible, of course, to compute cross entropy from sigmoid activations (which would correspond to a 1-output architecture), but there seems to be some confusion as to whether the output of the sigmoid function should be used as a probability.

  2. Using 2 outputs can simplify the computation of the binary cross entropy loss, which, in typical Google fashion, is computed using low-level tensorflow ops rather than with tf.nn.softmax_cross_entropy_with_logits. Specifically,

-tf.reduce_sum(one_hot_labels * log_probs, axis=-1)

where one_hot_labels and log_probs are $\mathbb{R}^{N \times 2}$, is much easier to read than

-tf.reduce_sum(binary_labels * tf.math.log(probs) + (1 - binary_labels) * tf.math.log(1 - probs))

where binary_labels and probs are $\mathbb{R}^N$.

",37972,,,,,6/23/2020 14:38,,,,0,,,,CC BY-SA 4.0 22114,2,,22106,6/23/2020 15:24,,1,,"

If you look at Keras' output, there are various steps which lose pixels:

Max pooling on odd sizes will always lose one pixel. Conv2D with 3x3 kernels will also lose 2 pixels when padding='valid' is used; that is why it doesn't happen in the downsampling steps, which use padding='same'.

Intuitively, padding the original images with enough border pixels to compensate for the pixel loss due to the various layers would be the simplest solution. At the moment I can't calculate how much it should be, but I suspect rounding up to a multiple of 4 should take care of the max pooling layers. For denoising, borders could be just copied from the outermost pixels, probably with some sort of low pass filtering to avoid artefacts.

",22993,,2444,,6/23/2020 16:35,6/23/2020 16:35,,,,2,,,,CC BY-SA 4.0 22118,1,22129,,6/23/2020 16:41,,2,1074,"

I am watching DeepMind's video lecture series on reinforcement learning, and when I was watching the video of model-free RL, the instructor said the Monte Carlo methods have less bias than temporal-difference methods. I understood the reasoning behind that, but I wanted to know what one means when they refer to bias-variance tradeoff in RL.

Is bias-variance trade-off used in the same way as in machine learning or deep learning?

(I am just a beginner and have just started learning RL, so I apologize if it is a silly question.)

",37911,,2444,,6/23/2020 16:44,12/19/2021 20:41,What is the bias-variance trade-off in reinforcement learning?,,1,0,,,,CC BY-SA 4.0 22119,1,22120,,6/23/2020 16:47,,7,1948,"

I understand the two major branches of RL are Q-Learning and Policy Gradient methods.

From my understanding (correct me if I'm wrong), policy gradient methods have an inherent exploration built-in as it selects actions using a probability distribution.

On the other hand, DQN explores using the $\epsilon$-greedy policy. Either selecting the best action or a random action.

What if we use a softmax function to select the next action in DQN? Does that provide better exploration and policy convergence?

",38127,,2444,,12/4/2020 18:32,12/4/2020 18:32,What happens when you select actions using softmax instead of epsilon greedy in DQN?,,1,0,,,,CC BY-SA 4.0 22120,2,,22119,6/23/2020 16:59,,4,,"

DQN on the other hand, explores using epsilon greedy exploration. Either selecting the best action or a random action.

This is a very common choice, because it is simple to implement and quite robust. However, it is not a requirement of DQN. You can use other action choice mechanisms, provided all choices are covered with a non-zero probability of being selected.

What if we use a softmax function to select the next action in DQN? Does that provide better exploration and policy convergence?

It might in some circumstances. A key benefit is that it will tend to focus on action choices that are close to its current best guess at optimal. One problem is that if there is a large enough error in Q value estimates, it can get stuck as the exploration could heavily favour a current best value estimate. For instance, if one estimate is accurate and relatively high, but another estimate is much lower but in reality would be a good action choice, then the softmax probabilities to resample the bad estimate will be very low and it could take a very long time to fix.

A more major problem is that the Q values are not independent logits that define preferences (whilst they would be in a Policy Gradient approach). The Q values have an inherent meaning and scale based on summed rewards. This means that differences between optimal and non-optimal Q value estimates could be at any scale, maybe just 0.1 difference in value, or maybe 100 or more. This makes plain softmax a poor choice - it might suggest a near random exploration policy in one problem, and a near deterministic policy in another, irrespective of what exploration might be useful at the current stage of learning.

A fix for this is to use Gibbs/Boltzmann action selection, which modifies softmax by adding a scaling factor - often called temperature and noted as $T$ - to adjust the relative scale between action choices:

$$\pi(a|s) = \frac{e^{q(s,a)/T}}{\sum_{x \in \mathcal{A}} e^{q(s,x)/T}}$$

This can work nicely to focus later exploration towards refining differences between actions that are likely to be good whilst only rarely making obvious mistakes. However it comes at a cost - you have to decide starting $T$, the rate to decay $T$ and an end value of $T$. A rough idea of min/max action value that the agent is likely to estimate can help.
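
A minimal sketch (hypothetical Q values and temperature, not from any particular implementation) of Gibbs/Boltzmann action selection:

import numpy as np

def boltzmann_action(q_values, temperature):
    # subtract the max for numerical stability before exponentiating
    prefs = (q_values - np.max(q_values)) / temperature
    probs = np.exp(prefs) / np.sum(np.exp(prefs))
    return np.random.choice(len(q_values), p=probs)

q_values = np.array([1.0, 1.5, 0.2])   # current Q estimates for 3 actions in some state
T = 1.0                                 # start high, decay towards a small end value
action = boltzmann_action(q_values, T)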

",1847,,1847,,6/23/2020 17:06,6/23/2020 17:06,,,,2,,,,CC BY-SA 4.0 22121,2,,22105,6/23/2020 17:11,,0,,"

Genetic algorithms, also known as evolutionary search, provide a general technique to optimize an objective function. We also say that we are trying to maximize fitness. This means that we are trying to find an individual with the highest possible fitness. We start with a population, say 100 individuals, and using mutation and crossover we generate offspring among whom we hope to find fitter individuals and as the generations progress, we get better and better.

One way to start this all is to think about the "fitness" or objective function. What is it that we want in the best individuals? Can we model that? How do we model that?

In your case, does a specific measurement (those 5 numbers you mention) say how fit an individual is? And that fitness can be one number, say from 1 to 100, or it could be unbounded (as in real life where things get better and better with temporary regressions).

So the challenge is how to map the features to a number. That's a math function to design.

Genes are what change from individual to individual and they mutate from parent to offspring and they are shared in crossover. Given a set of genes, what is the fitness? If you can answer that question (meaning a mathematical function to map the genes of each individual to a number), then you have a genetic algorithm to run and it will find the fittest individuals according to your (math) function.
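
Purely as an illustration (the mapping below is made up and has no medical meaning), such a fitness function over the 5 features could be as simple as:

import numpy as np

def fitness(individual):
    # individual = [mean, std, variance, skewness, kurtosis] of the heart-rate signal
    mean, std, var, skew, kurt = individual
    # made-up scoring: prefer a target mean heart rate and low variability
    return -abs(mean - 70.0) - 0.5 * std - 0.1 * abs(skew)

population = np.random.rand(20, 5) * 100      # 20 random candidate individuals, 5 "genes" each
scores = [fitness(ind) for ind in population]
best = population[int(np.argmax(scores))]     # fittest individual of this generation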

",38131,,,,,6/23/2020 17:11,,,,0,,,,CC BY-SA 4.0 22122,2,,22087,6/23/2020 17:38,,0,,"

I think there are two problems with your network. The first one, always having very similar outputs, is the rather simple one. As it seems, your network suffers from the so-called, very common Mode Collapse problem. The attached link provides both an explanation and some potential remedy to that problem.

The second problem is more fundamental. You say that you want your network to produce random numbers, or numbers that at least appear as such. However, once training is finished, your model is going to be a static function which will not change any further. Given the same input x, it will always produce the same output y. Consequently, unless the inputs to your network contain some randomness already or are, at least, always slightly dissimilar, you will not end up having a random number generator. So, whether that is going to be useful will depend on your use case. But if you make sure that a true random variable (like date & time) serves as input to the RNN and the RNN just translates this into some different format, that might work again. Just keep in mind that randomness can never arise out of a trained model.

",37982,,,,,6/23/2020 17:38,,,,3,,,,CC BY-SA 4.0 22123,1,,,6/23/2020 18:01,,1,50,"

I have a tree that represents a hierarchical ontology of computer science topics (such as AI, data mining, IR, etc). Each node is a topic, and its child nodes are its sub-topics. Leaf nodes are weighted based on their occurrence in a given document.

Is there a well-known algorithm or function to calculate the weight of inner nodes based on the weights of leaf nodes? Or is it totally based on the application to decide mathematical calculation of the weights?

In my application, the node's weights should be some sort of accumulation of its child nodes weights. Is there a better mathematical formula or function to do that than just summing up weights of child nodes?

I am not asking about traversal, but rather about weighting function.

",38132,,2444,,6/23/2020 22:14,6/23/2020 22:14,Is there an algorithm to calculate the weights of an ontology tree's inner nodes?,,0,0,,,,CC BY-SA 4.0 22124,1,,,6/23/2020 19:22,,4,204,"

I've read that ANNs are based on how the human brain works. Now, I am reading about dropout. Is some kind of dropout used in the human brain? Can we say that the ability to forget is some kind of dropout?

",36900,,2444,,6/23/2020 22:05,6/24/2020 0:48,Is some kind of dropout used in the human brain?,,1,1,,,,CC BY-SA 4.0 22126,2,,9954,6/23/2020 20:40,,1,,"

An algorithm's bias and variance can be thought of as its properties; these can be tweaked with hyperparameters, but every algorithm makes its own set of assumptions which, if fulfilled, let the algorithm perform better.

Some algorithms, such as Logistic Regression and linear SVMs (not kernel SVMs, because those can be used for non-linear problems as well), are linear models and work well if the data is linearly separable. If the data cannot be separated by a linear plane, then no matter how much you tweak and fine-tune them, they won't work, and that is the bias everyone talks about for these kinds of algorithms.

On the other hand, Decision Trees can split the whole space into several hypercubes and, based on which hypercube a datapoint is in, they classify that datapoint. KNNs, in turn, use the neighbours of a datapoint and their types/properties to make predictions. Thus, a change in the positions of those datapoints will largely affect the decision boundaries of both of these algorithms, and that is why they can be very easily overfitted and have a high variance.

Hope this helps.

",37911,,37911,,6/24/2020 14:36,6/24/2020 14:36,,,,0,,,,CC BY-SA 4.0 22127,1,22305,,6/24/2020 0:38,,1,848,"

The following is a statement and I am trying to figure out if it's true or false and why.

Given a non-admissible heuristic function, A* will always give a solution if one exists, but there is no guarantee it will be optimal.

I know that a non-admissible function is $h(n) > h^*(n)$ (where $h^*(n)$ is the real cost to the goal), but I do not know if there is a guarantee.

Which heuristics guarantee the optimality of A*? Is the admissibility of the heuristic always a necessary condition for A* to produce an optimal solution?

",38009,,2444,,11/7/2020 14:38,11/7/2020 14:40,Which heuristics guarantee the optimality of A*?,,1,0,,,,CC BY-SA 4.0 22128,2,,22124,6/24/2020 0:48,,5,,"

The human brain works by having neurons constantly fire at different rates. So, if the firing rate increases, the neuron is transmitting overly exciting or calming information to further neurons connected to it. How other neurons connected to the former neuron respond to the messages sent by it depends on the strength of the connection between the connected neurons. However, having true dropout in the brain would be like having a neuron stop firing entirely for some time. As far as I know, this is not going to happen unless a neuron dies. So, no, there doesn't seem to be any real biological equivalent to dropout in the brain. If you forget something, that is just caused by a weakening of some connections in the brain. But such a weakening (or, alternatively, strengthening) is a gradual process and is a function of the degree of firing synchrony between any two connected nodes.

It is true that the introductory texts to Neural Nets tell you about those nets' neuro-scientific inspiration, but this similarity holds only in the most abstract way. Yes, you have neurons in both biological and artificial neural networks. But communication in the brain works in an asynchronous, continuous bio-chemical way, while communication in an artificial neural net works by taking the numeric state of one node, scaling it by the weight of the connection between two nodes, and then adding the weighted contribution of the previous node to the activation of the receiving node. So, already the fundamentals are very different between the two approaches (with the biological variant being much more complex and elaborate).

If you want to find something that possesses more biologically inspired properties, look at how Convolutional Neural Networks are theoretically inspired by the working of the (human) visual system. Besides that, dropout is a purely mechanical strategy to regularize a model well, in order to enhance the model's generalization ability, i.e. making the model work well also on unseen data. It's just part of an algorithmic optimization procedure.

",37982,,,,,6/24/2020 0:48,,,,2,,,,CC BY-SA 4.0 22129,2,,22118,6/24/2020 1:00,,3,,"

The bias-variance trade-off that you're referring to has to do with the return estimator. Any RL algorithm you choose needs some estimate of the cumulative return, which is a random variable with many sources of randomness, such as stochastic transitions or rewards.

Monte Carlo RL algorithms estimate returns by running full trajectories and literally averaging the return achieved for each state. This imposes very few assumptions on the system (in fact, you don't even need the Markovian property for these methods), so bias is low. However, variance is high since each estimate depends on the literal trajectories that you observe. As such, you'll need many, many trajectories to get a good estimate of the value function.

On the other hand, with TD methods, you estimate returns as $R_t + \gamma V(S_{t+1})$, where $V$ is your estimate of the value function. Using $V$ imposes some bias (for instance, the initialization of the value function at the beginning of training affects your next value function estimates), with the benefit of reducing variance. In TD learning, you don't need full environment rollouts to make a return estimate, you just need one transition. This also lets you make much better use of what you've learned about the value function, because you're learning how to infer value "piecewise" rather than just via literal trajectories that you happened to witness.
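
A minimal sketch (made-up rewards and value table, just to contrast the two targets):

import numpy as np

gamma = 0.99
rewards = [1.0, 0.0, 2.0, 1.0]           # rewards observed along one full episode

# Monte Carlo target for the first state: the literal discounted return of the trajectory
mc_target = sum(gamma**k * r for k, r in enumerate(rewards))

# TD(0) target for the first state: one real reward plus the current value estimate
V = np.zeros(10)                          # current value estimates for the states
s_next = 3
td_target = rewards[0] + gamma * V[s_next]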

",37829,,2444,,12/19/2021 20:41,12/19/2021 20:41,,,,1,,,,CC BY-SA 4.0 22130,1,22132,,6/24/2020 2:59,,3,284,"

I'm using the DQN algorithm to train my agent to play a turn-based game. The memory replay buffer stores tuples of experiences $(s, a, r, s')$, where $s$ and $s'$ are consecutive states. At the last turn, the game ends, and the non-zero reward is given to the agent. There are no more observations to be made and there is no next state $s'$ to store in the experience tuple. How should the final states be handled?

",38076,,2444,,6/24/2020 11:50,6/24/2020 11:50,How to handle the final state in experience replay?,,1,1,,,,CC BY-SA 4.0 22132,2,,22130,6/24/2020 7:04,,2,,"

You do not store a terminal state as $s$ in the replay table because by definition its value is always $0$, and there is no action, reward or next state. There is literally nothing to learn.

However, you may find it useful to store information that $s'$ is actually a terminal state, in case this is not obvious. That is typically achieved by storing an additional done boolean component. This is useful, because it allows you to branch when calculating the TD target g:

s, a, r, next_s, done = replay_memory_sample()
if done:
  g = r                              # terminal transition: nothing to bootstrap from
else:
  g = r + gamma * max(q(next_s))     # bootstrap from the best action value in the next state
",1847,,2444,,6/24/2020 11:50,6/24/2020 11:50,,,,0,,,,CC BY-SA 4.0 22134,1,,,6/24/2020 8:00,,1,30,"

For CNN image recognition tasks, like object recognition/face recognition/object segmentation/posture recognition, are there experiment results about how much will the performance be degraded with monochrome images?

The imaginary experiment is like:

  1. Take the existing frameworks, reduce the channel number in the framework to fit monochrome images

  2. Transform the existing training data and testing data to monochrome images

  3. Train the model with monochrome training data.

  4. Use the model to test the monochrome testing data.

  5. Compare the result with the original result.

",25322,,2444,,6/24/2020 11:57,6/24/2020 11:57,How is the performance of a CNN trained with monochrome images on image recognition tasks degraded?,,0,2,,,,CC BY-SA 4.0 22135,2,,22106,6/24/2020 8:16,,0,,"

I don't know if this is the right way of doing it but I solved the problem.

Following the code from above I've added:

img_size = K.int_shape(input_img)[1:]

resized_image_tensor = tf.image.resize(decoded, list(img_size[:2]))


autoencoder = Model(input_img, resized_image_tensor)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

I used tf.image.resize to synchronize the shape of reconstructed image and input image.

Hope it helps.

",38093,,,,,6/24/2020 8:16,,,,0,,,,CC BY-SA 4.0 22136,1,22143,,6/24/2020 9:48,,1,52,"

Neural network training problems are oftentimes formulated as probability estimation problems (such as autoregressive models).

How does one intuitively understand this idea?

",37099,,2444,,6/24/2020 12:01,6/24/2020 17:03,"Intuitively, why can the training of a neural network be formulated as a probability estimation problem?",,1,0,0,,,CC BY-SA 4.0 22138,2,,20939,6/24/2020 12:37,,1,,"

I will answer my own question to try and provide some insights.

My research supervisor suggested that I should use the SSIM metric or some other well-known image processing metric (see the book "Modern Image Quality Assessment" by Wang and Bovik) for assessing the visual similarity of images.

Another way I evaluate the performance of an autoencoder is by simply visually comparing the input and output images taken from the test set. This is by no means very scientific, but it gives a good idea whether an autoencoder is able to reconstruct the input images. One thing I would add here is that even if an autoencoder can reconstruct images perfectly, it doesn't mean that the encoding it learned is useful. For example, when I wanted similar images to be mapped to similar encodings, the autoencoder that was able to do that better was outputting more blurred reconstructed images in comparison to the autoencoder that wasn't achieving this similarity preservation (but was outputting better reconstructions).

",36769,,,,,6/24/2020 12:37,,,,0,,,,CC BY-SA 4.0 22139,1,,,6/24/2020 13:20,,1,42,"

Is there a guide for how much data you need to build a successful denoising model using autoencoders?

Or is the rule simply: the more data, the better?

I tried with a small dataset of 350 samples, to see what I would get as an output. And I failed. :D

",38093,,2444,,12/30/2021 10:45,12/30/2021 10:45,How much data do we need for making a successful de-noising auto-encoder?,,0,0,,,,CC BY-SA 4.0 22140,1,22141,,6/24/2020 13:25,,1,61,"

Suppose we have a small space state and that, after about 2000 episodes, we've accurately explored the environment and known the accurate $Q$ values. In that case, why do we still leave a small probability for exploration?

My guess is in the case of a dynamic environment where a bigger reward might pop up in another state. Is my assumption correct?

",37831,,2444,,6/24/2020 14:17,6/24/2020 14:18,Why do we explore after we have an accurate estimate of the value function?,,2,0,,,,CC BY-SA 4.0 22141,2,,22140,6/24/2020 13:44,,0,,"

Suppose we have a small space state and that, after about 2000 episodes, we've accurately explored the environment and known the accurate $Q$ values. In that case, why do we still leave a small probability for exploration?

It will depend on the goal of the work:

  • If the learning algorithm is off-policy (e.g. Q learning), it is normal to continue to explore at a moderate-to-low rate because it can accurately estimate an optimal deterministic target policy from a close-to-optimal stochastic behaviour policy.

  • Perhaps it is engineered with a low tolerance and will keep going even when you don't need it to.

  • Perhaps the code is for education and run so long that convergence is easily visible. Or for comparison with other methods which really do take that long to converge, and you would like data on the same axis.

  • For comparison with other methods for sample efficiency whilst learning and measuring regret (i.e. how much the exploration is costing you).

  • When environment is dynamic and could change, then continuous exploration is potentially useful to discover the changes, as you suggest in the question.

If you do really have an ideal agent, then of course you could just stop and say "job done". In practice for more interesting problems, you won't usually get small state spaces and perfect solutions inside 2000 episodes (or ever) - as a result if you are reading tutorials in reinforcement learning, they may just skip this point.

",1847,,2444,,6/24/2020 14:18,6/24/2020 14:18,,,,5,,,,CC BY-SA 4.0 22142,2,,22140,6/24/2020 13:44,,0,,"

When you are training a system using stochastic gradient descent, your system will converge towards some local minimum. If the local minimum was a good one, we would be fine with it. However, we cannot know how good a found solution is in comparison to other solutions of which we do not know their quality because they have been insufficiently explored. So, continuing to explore is a good way to escape comparatively bad local minima even if training has progressed already for quite a bit.

Besides that, maybe even more importantly, towards the end of training one also wants the system to perform well, i.e. robustly, in the presence of noise and not just under ideal circumstances. So, introducing some randomness, i.e. noise, into the network's policy can also lead to more robust policies being learned, since the agent gets trained on how to best recover from failures or unforeseen transitions into unexpected states.

",37982,,,,,6/24/2020 13:44,,,,7,,,,CC BY-SA 4.0 22143,2,,22136,6/24/2020 15:06,,1,,"

Consider the case of binary classification, i.e. you want to classify each input $x$ into one of two classes: $y_1$ or $y_2$. For example, in the context of object classification, $y_1$ could be "cat" and $y_2$ could be "dog", and $x$ is an image that contains one main object.

In certain cases, $x$ cannot be easily classified. For example, in object classification, if $x$ is a blurred image where there's some uncertainty about the object in the image, what should the output of the neural network be? Should it be $y_1$, $y_2$, or maybe it should be an uncertainty value (i.e. a probability) that lies between $y_1$ and $y_2$? The last option is probably the most reasonable, but also the most general one (in the sense that it can also be used in the case there's little or no uncertainty about what the object is).

That's the reason why we can model or formulate this (or other) supervised learning problem(s) as the estimation of a probability value (or probability distribution).

To be more concrete, you can formulate this binary classification problem as the estimation of the following probability

\begin{align} P(y_1 \mid x, \theta_i) \in [0, 1] \label{1}\tag{1} \end{align}

where $y_1$ is the first class (or label), $(x, y) \in \mathcal{D}$ is a labeled training example, where $y$ is the ground-truth label for the input $x$, $\theta_i$ are the parameters of the neural network at iteration $i$, so, intuitively, $P(y_1 \mid x, \theta_i) $ is a probability that represents how likely the neural network thinks that $x$ belongs to the class $y_1$ given the current estimate of the parameters. The probability that $x$ belongs to the other class is just $1 - P(y_1 \mid x, \theta_i) = P(y_2 \mid x, \theta_i)$. In this specific case, I have added a subscript to $\theta$ to indicate that this probability depends on the $i$th estimate of the parameters of the neural network.

Once you have $P(y_1 \mid x, \theta_i)$, if you want to perform classification, you will actually need to choose a threshold value $t$, such that, if $P(y_1 \mid x, \theta_i) > t$, then $x$ is classified as $y_1$, else it is classified as $y_2$. This threshold value $t$ can be $0.5$, but it can also not be.

Note that, in the case above, $P(y_1 \mid x, \theta_i)$ is a number and not a probability distribution. However, in certain cases, you can also formulate your supervised learning problem so that the output is a probability distribution (rather just a probability). There are also other problems where you don't estimate a conditional probability but maybe a joint probability, but the case above is probably the simplest one that should give you the intuition behind the idea of formulating machine learning problems as the estimation of probabilities or probability distributions.
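
A tiny sketch (made-up numbers, just to make the thresholding step concrete):

# p holds the network's estimates of P(y_1 | x, theta_i) for three inputs
p = [0.92, 0.40, 0.55]
t = 0.5                                               # decision threshold; need not be 0.5
labels = ["y_1" if p_x > t else "y_2" for p_x in p]   # ["y_1", "y_2", "y_1"]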

",2444,,2444,,6/24/2020 17:03,6/24/2020 17:03,,,,1,,,,CC BY-SA 4.0 22146,1,,,6/24/2020 18:48,,2,61,"

Is there a good website where I can learn about Deep Deterministic Policy Gradient?

",37947,,2444,,6/24/2020 20:24,11/21/2020 22:07,Is there a good website where I can learn about Deep Deterministic Policy Gradient?,,1,0,,,,CC BY-SA 4.0 22147,2,,22146,6/24/2020 18:57,,2,,"

Spinning Up by OpenAI.

Be sure to read up Part 3 (Intro to Policy Optimisation) before you move on to : https://spinningup.openai.com/en/latest/algorithms/ddpg.html

",33448,,,,,6/24/2020 18:57,,,,1,,,,CC BY-SA 4.0 22148,1,22151,,6/24/2020 20:36,,0,48,"

What is the advantage of having a stochastic/probabilistic classification procedure?

The classifiers I have encountered so far are as follows. Suppose we have two outcomes $A = \{0,1\}$. Given a feature vector $x$, we have calculated a probability for each outcome and return the outcome $a \in A$ for which the probability is highest.

Now, I encountered a classification procedure as follows: first, map each $x$ to a probability distribution on $A$ by a mapping $H$. To classify $x$, choose an outcome $a$ according to the distribution $H(x)$.

Why not use the deterministic classification? Suppose $H(x)$ is 1 with probability $0.75$. Then, the obvious choice for an outcome would be $1$ and not $0$.

",36116,,2444,,6/24/2020 23:52,6/24/2020 23:53,What is the advantage of having a stochastic classification procedure?,,1,1,,,,CC BY-SA 4.0 22149,2,,22080,6/24/2020 21:08,,1,,"

I compared my results visually to a second implementation known to be working - "The annotated transformer". I compared the pytorch calculation results of the attention-method to my implementation results.

The answer is - the softmax is applied row by row. Therefore the resulting matrix p-attn is not equal to its transposed version.

",38099,,,,,6/24/2020 21:08,,,,0,,,,CC BY-SA 4.0 22150,1,,,6/24/2020 21:40,,3,83,"

Geb is an alife simulation that, as far as I know, passes all of the tests we have tried to come up with in defining open endedness. However, when you actually run the code, the behavioral complexity certainly increases, but the physical bodies of the creatures never change (and cannot change), and Geb bodies are the only thing present in the world.

Tool development, or at least developing new physical capabilities and new “actions” seems to be a crucial part of what makes evolution open ended. I think the problem with Geb is that the evolution and progress all takes place in the networks and network production rules, which are systems outside the physical world. They are external systems that take in data from the world and output actions for the agents. So while this rich complexity and innovation is occurring, it’s not integrated with the agents actions and physical bodies.

This leads to a simple question: is there an alife system that passes all the same tests Geb does, but is “fully embedded” in its world? In the sense that any mechanism agents use to make actions must be part of the physical world of those agents, and subject to the same rules as the bodies of the agents themselves.

What I’m saying here is loose, you could come up with plenty of weird edge cases that meet what I ask exactly without meeting my intent. And perhaps being fully embodied isn’t necessary, just being more embodied would be enough. What my intent is is to ask if we have any systems that pass the open endedness tests Geb have passed, but have the innovation occurring in a way that leads to emergent growth of “actions” and emergent growth of “bodies” because to evolve and do better those aspects must be improved as well.

",6378,,1847,,6/25/2020 9:23,6/25/2020 9:23,Artificial life simulator that is fully embodied and passes open endedness tests,,0,3,,,,CC BY-SA 4.0 22151,2,,22148,6/24/2020 22:34,,1,,"

There are multiple potential reasons for having stochastic predictions (instead of categorical/binary).

First, it often simplifies training and improves the training outcome when training a classifier on producing probabilities per class. For example, it allows for the usage of many nice loss functions like the Mean Squared Error (MSE), which is compatible with the famous back-prop algorithm. Moreover, improving classification based on the loss computed from stochastic probabilities per class allows for improving the classifier even further when the counting loss computed on predicted class labels has reached 0 already. So, if you only have binary decisions and your classifier gets all classes correct, training can stop immediately since the loss becomes 0. However, when predicting probabilities per class and iteratively driving the probability of the correct class towards 1, training can continue for much longer, even after all classes have been classified correctly already. So, the classifier trained on making probabilistic predictions can in the end perform much better since training can progress for much longer, allowing for increasingly pronounced discrimination between classes with every update. A nice introduction is given in this Stanford lecture recording.

Second, it is often nice for a user to know how much certainty versus uncertainty there is in a classification. If one class has 80% probability, this is a pretty clear decision in favor of the 80%-probability-class. However, if you only get categorical class labels returned by your classifier, you cannot possibly know whether the evidence in favor of the winning class was only marginally higher than that of the other class (as measured by the classifier) or whether there was a clear difference.

",37982,,2444,,6/24/2020 23:53,6/24/2020 23:53,,,,1,,,,CC BY-SA 4.0 22154,1,22830,,6/24/2020 23:55,,2,148,"

Everybody knows how successful transformers have been in NLP. Is there known work on other domains (e.g that also have a sequential natural way of occurring, such as stock price prediction or other problems)?

",36341,,2444,,6/25/2020 0:04,8/2/2020 16:11,Do transformers have success in other domains different than NLP?,,1,6,,,,CC BY-SA 4.0 22158,2,,10644,6/25/2020 8:57,,0,,"

I believe that mathematical theorems are social constructions which are formalised by virtue of rigorous proofs facilitated by an academic peer review process; in other words, I am not a mathematical Platonist. You ask: “Can we define the AI singularity mathematically?” I personally see no reason why the so-called AI singularity cannot be defined in mathematical terms. Let us pretend that a gifted mathematician is able to provide a convincing and logically consistent set of rigorous proofs for a mathematical conjecture pertaining to the AI singularity. The aforementioned mathematician then submits the paper to a highly prestigious mathematics journal, such as Journal of the American Mathematical Society. If this paper passes the stringent academic peer review process for publication, then this would constitute peer acceptance that the AI singularity can indeed be defined in mathematical terms.

",38152,,,,,6/25/2020 8:57,,,,1,,,,CC BY-SA 4.0 22166,1,22168,,6/25/2020 11:39,,0,219,"

Why is non-linearity desirable in a neural network?

I couldn't find satisfactory answers to this question on the web. I typically get answers like "real-world problems require non-linear solutions, which are not trivial. So, we use non-linear activation functions for non-linearity".

",38172,,2444,,6/25/2020 11:55,3/29/2021 16:03,Why is non-linearity desirable in a neural network?,,2,0,,,,CC BY-SA 4.0 22168,2,,22166,6/25/2020 13:06,,0,,"

Consider what happens if you intend to train a linear classifier on replicating something trivial as the XOR function. If you program/train the classifier (of arbitrary size) such that it outputs XOR condition is met whenever feature a or feature b are present, then the linear classifier will also (incorrectly) output XOR condition is met whenever both features together are present. That is because linear classifiers simply sum up contributions of all features and work with the total weighted inputs they receive. For our example, that means that when the weighted contribution of either feature is sufficient already to trigger the classifier to output XOR condition is met, then obviously also the summed contributions of both features are sufficient to trigger the same response.

To get a classifier that is capable of outputting XOR condition is met if and only if the summed contributions of all input features are above a lower threshold and below an upper threshold, commonly non-linearities are introduced. You could of course also try to employ a quadratic function to solve the two-feature problem, but as soon as the number of variables/features increases again, you run into the same problem again, only in higher dimensions. Therefore, the most general approach to solving this problem of learning non-linear functions, like the XOR, is by setting up large models with enough capacity to learn a given task and equipping them with non-linearities. That simplifies training since it allows for using stochastic gradient descent for training the system/classifier, preventing one from having to solve Higher-Degree polynomial equations analytically (which can get computationally quite expensive quite quickly) to solve some task.

In case you are interested, here's one paper analyzing and dealing with the XOR problem (as one concrete instance of a problem where purely linear models fail to solve some task).

EDIT:

You can consider a layer in a network as a function $y = f(x)$, where $x$ is the input to some layer $f$ and $y$ is the output of the layer. As you propagate $x$, being the network's input, through the network, you get something like $y = p(t(h(g(f(x)))))$, where $f$ is the input layer and $p$ constitutes the output layer, i.e. a set of weights, by which the input to that respective layer gets multiplied. If $h$, for example, is some non-linear activation function, like ReLU or sigmoid, then $y$, being the network's output, is a non-linear function of input $x$.
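
A small numerical sketch (toy weights, not from the linked paper) of why the non-linearity matters: without it, stacking layers collapses into a single linear map.

import numpy as np

x = np.array([1.0, 1.0])                  # XOR-style input with both features present
W1 = np.array([[1.0, 0.0], [0.0, 1.0]])   # weights of layer f
W2 = np.array([[1.0, 1.0]])               # weights of layer g

# two stacked linear layers are equivalent to one linear layer with weights W2 @ W1
assert np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x)

# inserting a ReLU (with a threshold/bias) in between makes the composition non-linear
relu_out = W2 @ np.maximum(0.0, W1 @ x - 0.5)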

",37982,,36737,,3/28/2021 17:53,3/28/2021 17:53,,,,9,,,,CC BY-SA 4.0 22169,1,,,6/25/2020 13:27,,1,31,"

I am trying to create a Graph NN that will be able to predict the forces in truss elements of a space frame.

The input for the NN will be a graph, where the nodes represent the nodes of the spaceframe. And the output should be the forces in the edges of the supplied graph/frame.

The problem that I am facing is that for the NN to be beneficial I need to encode a lot of data per node:

  • 3 floats for the position of the node,
  • 3 floats for the vector of the force applied to the node,
  • a boolean/int to determine whether a node is a support.

I am not sure how to design my Graph NN to allow for so many parameters per node.

Maybe I should try a different approach?

Any help is greatly appreciated!

",38176,,2444,,6/25/2020 16:32,6/25/2020 16:32,How to design a graph neural network to predict the forces in truss elements of a space frame?,,0,1,,,,CC BY-SA 4.0 22170,1,,,6/25/2020 13:30,,1,61,"

Let's say we have a dynamic environment: a new state gets added after 2000 episodes have been done. So, we leave room for exploration, so that it can discover the new state.

When it gets to that new state, it has no idea of the Q values, and, since we're past 2000 episodes, our exploration rate is very low. What happens if we try to exploit when all Q values are 0?

",37831,,2444,,6/25/2020 15:56,11/13/2022 0:07,How to deal with the addition of a new state to the environment during training?,,1,0,,,,CC BY-SA 4.0 22171,2,,22170,6/25/2020 14:12,,0,,"

There are several ways to tackle this, although exploration is definitely not a solved problem yet ;)

In general, I believe the right thing to do here is to measure the uncertainty of your policy or Q-value estimates and use that to construct some sort of exploration bonus. An intuitive example is given in Exploration by Random Network Distillation. They make two randomly initialized neural nets, one of which is never updated. At every transition, they feed transition data through both neural nets and use the difference in output between them as an estimate of uncertainty, and this quantity is added to the reward. Then they update the modifiable neural net towards the other one. This way, on a completely novel transition, the two neural nets will likely have very different outputs so the reward will be augmented a lot. Of course, this will hopefully encourage the agent to explore.

",37829,,,,,6/25/2020 14:12,,,,6,,,,CC BY-SA 4.0 22172,1,22174,,6/25/2020 14:40,,1,75,"

The size of the feature representation decreases as we go deeper into the CNN; I mean to say that the horizontal and vertical dimensions decrease while the depth (number of channels) increases. So, how will the input be preserved, since there won't be any data left at the end of the network, where we connect to, say, Multi Layer Perceptrons?

",38060,,38060,,7/6/2020 7:46,7/6/2020 7:46,"How will the input be preserved as we go deeper in CNN, where dimensions decrease drastically?",,1,0,,,,CC BY-SA 4.0 22174,2,,22172,6/25/2020 16:13,,2,,"

You can also think of a convolutional neural network (CNN) as an encoder, i.e. a neural network that learns a smaller representation of the input, which then acts as the feature vector (input) to a fully connected network (or another neural network). In fact, there are CNNs that can be thought of as auto-encoders (i.e. an encoder followed by a decoder): for example, the u-net can indeed be thought of as an auto-encoder.

Although it is (almost) never the case that you transform the input to an extremely small feature vector (e.g. a number), even a single float-pointing number can encode a lot of information. For example, if you want to classify the object in the image into one of two classes (assuming there is only one main object in the image), then a floating-point is more than sufficient (in fact, you just need one bit to encode that information).

This smaller representation (the feature vector) that is then fed to a fully connected network is learned based on the information in your given training data. In fact, CNNs are known as data-driven feature extractors.

I am not aware of any theoretical guarantee that ensures that the learned representation is the best suited for your task (probably you need to look into learning theory to know more about this). In practice, the quality of the learned feature vector will mainly depend on your available data and the inductive bias (i.e. the assumptions that you make, which are also affected by the specific neural network architecture that you choose).

",2444,,2444,,6/25/2020 16:24,6/25/2020 16:24,,,,2,,,,CC BY-SA 4.0 22177,1,22179,,6/25/2020 17:51,,1,199,"

I am looking at a lecture on POMDPs, and the context is that, when the quadcopter can't see the landmarks, it has to use dead reckoning. The lecturer then mentions that the transition model is not deterministic, hence the uncertainty grows.

Can transition models in MDP be deterministic?

",36047,,2444,,6/25/2020 23:36,6/26/2020 22:24,"Does ""transition model"" alone in an MDP imply it's non-deterministic?",,2,0,,,,CC BY-SA 4.0 22179,2,,22177,6/25/2020 18:34,,0,,"

Yes. In that case, if the problem is not a POMDP and has a finite number of states, it becomes more like a classical search problem. Alternatively, you can keep using the same framework (as for POMDPs) with a constrained (deterministic) transition matrix when modelling model-based systems.

Also note that, after you train any model/agent with an MDP formulation, the optimal strategy at test time is generally deterministic, i.e., given a feature/state you will take a particular action, even if the transition matrix has more than one non-zero element in each row.

",38188,,,,,6/25/2020 18:34,,,,6,,,,CC BY-SA 4.0 22180,2,,11672,6/25/2020 20:22,,0,,"

Differentiability of activation functions is desirable but not strictly necessary, since you can define the derivative at the non-differentiable points (e.g. via a subgradient), as in the case of ReLU.

Properties Needed:

  • To take advantage of the Universal Approximation Theorem, and the modeling capacity it promises, the activation function needs to be continuous or Borel measurable (if this term is confusing, just think of it as covering most common functions), discriminatory (roughly, a function that does not integrate to 0 against every other function), and non-polynomial (nonlinear).
  • Also, according to recent research, it is beneficial for the activation function to be monotonically increasing.

Recent theoretical analysis of activation functions gives the edge to ReLU, with some preliminary theoretical guarantees such as "Convergence Analysis of Two-layer Neural Networks with ReLU Activation" (https://arxiv.org/abs/1705.09886).

",38188,,,,,6/25/2020 20:22,,,,0,,,,CC BY-SA 4.0 22183,2,,16148,6/26/2020 9:55,,0,,"

I have been reading a lot about Natural Gradient and its use to find a descent direction. I found that this post was the most clear.

Consider a model $p$ parameterized by some parameters $\theta$, and suppose we want to maximize the likelihood of observing our data $x$ under this model: $p(x|\theta)$. To optimise this likelihood we can take steps in the distribution space. When updating the parameters $\theta$ we need to measure how our likelihood changes, and this is measured using the KL divergence.

Even though the KL divergence is not a "proper" distance metric as it is not symmetric, it is still quite informative about the similarity between distributions. It's practical because it can capture differences between distributions that the Euclidean metric (parameter-dependent) could not (see the same post for a simple example).

So answering your question is essentially answering which is best between Natural Gradient Descent and "normal" Gradient Descent in the Euclidean space, where your loss is measured with an L2 norm. You can train the same model using both methods; you will just find different descent directions.

Hopefully, both will converge, but in my opinion Natural Gradient Descent should be superior in nature. It is just very expensive to compute: to find the descent direction in distribution space you need to compute (or approximate) the inverse Fisher matrix $F^{-1}$, which is costly since $F$ is of size $n\times n$, where $n$ is the number of parameters in $\theta$, typically very large in neural networks.
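
As a toy sketch of the difference (plain NumPy, with a made-up gradient and Fisher matrix, and a small damping term to keep the inverse well-behaved):

import numpy as np

g = np.array([0.5, -1.0])               # ordinary gradient of the loss w.r.t. theta
F = np.array([[2.0, 0.3], [0.3, 0.5]])  # (approximate) Fisher information matrix
lr = 0.1
damping = 1e-3

theta = np.zeros(2)
theta_gd = theta - lr * g                                               # "normal" gradient descent step
theta_ngd = theta - lr * np.linalg.solve(F + damping * np.eye(2), g)    # natural gradient step

The two updates point in different directions whenever $F$ is not a multiple of the identity, which is the whole point of preconditioning with $F^{-1}$.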

",38197,,,,,6/26/2020 9:55,,,,0,,,,CC BY-SA 4.0 22184,1,,,6/26/2020 10:12,,3,251,"

Self-supervised learning algorithms provide labels automatically. But, it is not clear what else is required for an algorithm to fall under the category "self-supervised":

Some say that self-supervised learning algorithms learn on a set of auxiliary tasks [1], also named pretext tasks [2, 3], instead of the task we are interested in. Further examples are autoencoders [4] or word2vec [5]. Here it is sometimes mentioned that the goal is to "expose the inner structure of the data".

Others do not mention that, implying that algorithms can be called "self-supervised learning algorithms" even if they directly learn the task we are interested in [6, 7].

Is the "auxiliary tasks" a requirement for a training setup to be called "self-supervised learning" or is it just optional?


Research articles mentioning the auxiliary / pretext task:

  1. Revisiting Self-Supervised Visual Representation Learning, 2019, mentioned by [3]:

The self-supervised learning framework requires only unlabeled data in order to formulate a pretext learning task such as predicting context or image rotation, for which a target objective can be computed without supervision.

  2. Unsupervised Representation Learning by Predicting Image Rotations, ICLR, 2018, mentioned by [2]:

a prominent paradigm is the so-called self-supervised learning that defines an annotation-free pretext task, using only the visual information present on the images or videos, in order to provide a surrogate supervision signal for feature learning.

  3. Unsupervised Visual Representation Learning by Context Prediction, 2016, mentioned by [2]:

This converts an apparently unsupervised problem (finding a good similarity metric between words) into a "self-supervised" one: learning a function from a given word to the words surrounding it. Here the context prediction task is just a "pretext" to force the model to learn a good word embedding, which, in turn, has been shown to be useful in a number of real tasks, such as semantic word similarity.

  4. Scaling and Benchmarking Self-Supervised Visual Representation Learning, 2019:

In discriminative self-supervised learning, which is the main focus of this work, a model is trained on an auxiliary or ‘pretext’ task for which ground-truth is available for free. In most cases, the pretext task involves predicting some hidden portion of the data (for example, predicting color for gray-scale images).

",38174,,2444,,11/20/2020 17:22,12/12/2022 2:04,Does self-supervised learning require auxiliary tasks?,,1,1,,,,CC BY-SA 4.0 22186,1,,,6/26/2020 11:52,,1,478,"

Weak supervision is supervised learning, with uncertainty in the labeling, e.g. due to automatic labeling or because non-experts labelled the data [1].

Distant supervision [2, 3] is a type of weak supervision that uses an auxiliary automatic mechanism to produce weak labels / reference output (in contrast to non-expert human labelers).

According to this answer

Self-supervised learning (or self-supervision) is a supervised learning technique where the training data is automatically labelled.

In the examples of self-supervised learning I have seen so far, the labels were extracted from the input data.

What is the difference between distant supervision and self-supervision?


(Setup mentioned in discussion:

",38174,,38174,,7/4/2020 15:07,7/4/2020 15:07,What is the difference between distant supervision and self-supervision?,,1,3,,,,CC BY-SA 4.0 22188,1,22191,,6/26/2020 12:28,,1,161,"

I know we keep the target network constant during training to improve stability, but why do we update the weights of the target network at all? In particular, if we've already reached convergence, why keep updating them?

",37831,,2444,,6/26/2020 12:59,6/26/2020 14:08,Why do we update the weights of the target network in deep Q learning?,,1,0,,,,CC BY-SA 4.0 22191,2,,22188,6/26/2020 14:08,,2,,"

If you are certain that you have reached convergence, then there is no point in continuing to train your agent, and therefore also no point in discussing why the target network is updated after convergence is reached. You should simply stop training once you have converged. During training, we obviously need to keep updating the target network to improve the correctness of the Q-value estimates.

",20339,,,,,6/26/2020 14:08,,,,0,,,,CC BY-SA 4.0 22193,2,,4748,6/26/2020 15:49,,0,,"

You could vary an error coefficient during training. For example, if the expected output was negative and the network produced a positive value, you can train on C * ERROR (with C > 1), and conversely, if the expected output was positive and the network produced a negative value, you can train on just the error. That way, false positives have more impact on the model than false negatives.

Varying learning rates could help as well; however, increasing the learning rate and increasing the error have different effects: scaling the error changes the direction of the overall gradient, whereas changing the learning rate only changes the magnitude of the gradient's effect on the network — two slightly different things.

(For the learning-rate approach, split the data into two sets, positive and negative, then train on them separately, with a larger learning rate for the negative cases than for the positive ones.)
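
As a minimal sketch of the first idea (NumPy, with a hypothetical coefficient C and 0/1 targets):

import numpy as np

def weighted_squared_error(y_true, y_pred, C=5.0):
    # C > 1 makes errors on negative examples (potential false positives) count more
    err = (y_pred - y_true) ** 2
    weights = np.where(y_true == 0, C, 1.0)
    return np.mean(weights * err)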

",38204,,,,,6/26/2020 15:49,,,,0,,,,CC BY-SA 4.0 22194,1,,,6/26/2020 16:58,,3,180,"

During the learning phase, why don't we have a 100% exploration rate, to allow our agent to fully explore our environment and update the Q values, then during testing we bring in exploitation? Does that make more sense than decaying the exploration rate?

",37831,,2444,,6/26/2020 23:23,6/27/2020 15:27,Why is it not advisable to have a 100 percent exploration rate?,,2,0,,6/29/2020 19:47,,CC BY-SA 4.0 22195,2,,22194,6/26/2020 17:03,,2,,"

No - imagine if you were playing an Atari game and took completely random actions. Your games would not last very long and you would never get to experience all of the state space because the game would end too soon. This is why you need to combine exploration and exploitation to fully explore the state space.

",36821,,36821,,6/26/2020 18:11,6/26/2020 18:11,,,,12,,,,CC BY-SA 4.0 22196,1,,,6/26/2020 18:27,,1,45,"

Let's say I have two databases, $(\mathbf{x_i}, \mathbf{\hat{p_i}})$ and $(\mathbf{x_j}, \mathbf{\hat{q_j}})$. A neural network with weights $\theta$ can receive an input $\mathbf{x}$ and produce an output $\mathbf{y}$. Mathematically, $\mathbf{y} = f_{NN}(\mathbf{x},\theta)$. To compare the output of my neural network and the database, I need two wrappers, $\mathbf{p}=g(\mathbf{y})$ and $\mathbf{q}=h(\mathbf{y})$.

The problem is: only $g(\cdot)$ is differentiable while writing $h(\cdot)$ in a differentiable manner would take a huge effort.

Is there any efficient way to train my neural network to minimize the following loss function? $$ \mathcal{L}(\theta) = \sum_i \left\{g\left[f_{NN}(\mathbf{x_i}, \theta)\right] - \mathbf{\hat{p_i}}\right\}^2 + \sum_j \left\{h\left[f_{NN}(\mathbf{x_j}, \theta)\right] - \mathbf{\hat{q_j}}\right\}^2 $$

My thinking

If I use a gradient descent-type algorithm, I can only optimize the first part of the loss function while ignoring the second part. If I use an evolutionary-type algorithm, I can optimize both parts, but it will take a long time and I don't make full use of the differentiability of $g(\cdot)$.

",38206,,,,,6/26/2020 18:27,Algorithm to train a neural network against differentiable and non-differentiable databases?,,0,0,,,,CC BY-SA 4.0 22197,2,,22064,6/26/2020 18:55,,1,,"

Do you strictly have to use Boltzmann exploration? There is a modification of Boltzmann exploration called Mellow-max. Basically, it provides an adaptive temperature for Boltzmann exploration.

Here is the link for the paper for tuning mellow-max with deep reinforcement learning (DQN is often mentioned): http://cs.brown.edu/people/gdk/pubs/tuning_mellowmax_drlw.pdf

Here is the link for mellow-max implemented with SARSA (I recommend reading this first, to get an understanding of mellow-max): https://arxiv.org/pdf/1612.05628.pdf

",30174,,,,,6/26/2020 18:55,,,,0,,,,CC BY-SA 4.0 22199,1,22203,,6/26/2020 22:07,,2,77,"

Why isn't it wise for us to completely erase our old Q value and replace it with the calculated Q value? Why can't we forget the learning rate and temporal difference?

Here's the update formula.

",37831,,2444,,6/26/2020 23:43,6/27/2020 4:42,Why isn't it wise for us to completely erase our old Q value and replace it with the calculated Q value?,,1,0,,,,CC BY-SA 4.0 22200,2,,22177,6/26/2020 22:24,,0,,"

Yes, a transition model shows that our environment is stochastic in nature, and with that model we know the probability of entering a state when an action is taken.

",37831,,,,,6/26/2020 22:24,,,,0,,,,CC BY-SA 4.0 22201,1,22273,,6/26/2020 22:37,,3,288,"

q learning is defined as:

Here is my implementation of q learning of the tic tac toe problem:

from datetime import datetime
import random

import matplotlib.pyplot as plt
import numpy as np

today = datetime.today()
model_execution_start_time = str(today.year)+"-"+str(today.month)+"-"+str(today.day)+" "+str(today.hour)+":"+str(today.minute)+":"+str(today.second)

epsilon = .1
discount = .1
step_size = .1
number_episodes = 30000

def epsilon_greedy(epsilon, state, q_table) : 
    
    def get_valid_index(state):
        i = 0
        valid_index = []
        for a in state :          
            if a == '-' :
                valid_index.append(i)
            i = i + 1
        return valid_index
    
    def get_arg_max_sub(values , indices) : 
        return max(list(zip(np.array(values)[indices],indices)),key=lambda item:item[0])[1]
    
    if np.random.rand() < epsilon:
        return random.choice(get_valid_index(state))
    else :
        if state not in q_table : 
            q_table[state] = np.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
        q_row = q_table[state]
        return get_arg_max_sub(q_row , get_valid_index(state))
    
def make_move(current_player, current_state , action):
    if current_player == 'X':
        return current_state[:action] + 'X' + current_state[action+1:]
    else : 
        return current_state[:action] + 'O' + current_state[action+1:]

q_table = {}
max_steps = 9

def get_other_player(p):
    if p == 'X':
        return 'O'
    else : 
        return 'X'
    
def win_by_diagonal(mark , board):
    return (board[0] == mark and board[4] == mark and board[8] == mark) or (board[2] == mark and board[4] == mark and board[6] == mark)
    
def win_by_vertical(mark , board):
    return (board[0] == mark and board[3] == mark and board[6] == mark) or (board[1] == mark and board[4] == mark and board[7] == mark) or (board[2] == mark and board[5] == mark and board[8]== mark)

def win_by_horizontal(mark , board):
    return (board[0] == mark and board[1] == mark and board[2] == mark) or (board[3] == mark and board[4] == mark and board[5] == mark) or (board[6] == mark and board[7] == mark and board[8] == mark)

def win(mark , board):
    return win_by_diagonal(mark, board) or win_by_vertical(mark, board) or win_by_horizontal(mark, board)

def draw(board):
    return win('X' , list(board)) == False and win('O' , list(board)) == False and (list(board).count('-') == 0)

s = []
rewards = []
def get_reward(state):
    reward = 0
    if win('X' ,list(state)):
        reward = 1
        rewards.append(reward)
    elif draw(state) :
        reward = -1
        rewards.append(reward)
    else :
        reward = 0
        rewards.append(reward)
        
    return reward

def get_done(state):
    return win('X' ,list(state)) or win('O' , list(state)) or draw(list(state)) or (state.count('-') == 0)
    
reward_per_episode = []
            
reward = []
def q_learning():
    for episode in range(0 , number_episodes) :
        t = 0
        state = '---------'

        player = 'X'
        random_player = 'O'


        if episode % 1000 == 0:
            print('in episode:',episode)

        done = False
        episode_reward = 0
            
        while t < max_steps:

            t = t + 1

            action = epsilon_greedy(epsilon , state , q_table)

            done = get_done(state)

            if done == True : 
                break

            if state not in q_table : 
                q_table[state] = np.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])

            next_state = make_move(player , state , action)
            reward = get_reward(next_state)
            episode_reward = episode_reward + reward
            
            done = get_done(next_state)

            if done == True :
                q_table[state][action] = q_table[state][action] + (step_size * (reward - q_table[state][action]))
                break

            next_action = epsilon_greedy(epsilon , next_state , q_table)
            if next_state not in q_table : 
                q_table[next_state] = np.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])

            q_table[state][action] = q_table[state][action] + (step_size * (reward + (discount * np.max(q_table[next_state]) - q_table[state][action])))

            state = next_state

            player = get_other_player(player)
            
        reward_per_episode.append(episode_reward)

q_learning()

The algorithm's player is assigned 'X' while the other player is 'O':

    player = 'X'
    random_player = 'O'

The reward per episode:

plt.grid()
plt.plot([sum(i) for i in np.array_split(reward_per_episode, 15)])

renders:

Playing the model against an opponent making random moves:

## Computer opponent that makes random moves against trained RL computer opponent
# Random takes move for player marking O position
# RL agent takes move for player marking X position

def draw(board):
    return win('X' , list(board)) == False and win('O' , list(board)) == False and (list(board).count('-') == 0)

x_win = []
o_win = []
draw_games = []
number_games = 50000

c = []
o = []

for ii in range (0 , number_games):
    
    if ii % 10000 == 0 and ii > 0:
        print('In game ',ii)
        print('The number of X game wins' , sum(x_win))
        print('The number of O game wins' , sum(o_win))
        print('The number of drawn games' , sum(draw_games))

    available_moves = [0,1,2,3,4,5,6,7,8]
    current_game_state = '---------'
    
    computer = ''
    random_player = ''
    
    computer = 'X'
    random_player = 'O'

    def draw(board):
        return win('X' , list(board)) == False and win('O' , list(board)) == False and (list(board).count('-') == 0)
        
    number_moves = 0
    
    for i in range(0 , 5):

        randomer_move = random.choice(available_moves)
        number_moves = number_moves + 1
        current_game_state = current_game_state[:randomer_move] + random_player + current_game_state[randomer_move+1:]
        available_moves.remove(randomer_move)

        if number_moves == 9 : 
            draw_games.append(1)
            break
        if win('O' , list(current_game_state)) == True:
            o_win.append(1)
            break
        elif win('X' , list(current_game_state)) == True:
            x_win.append(1)
            break
        elif draw(current_game_state) == True:
            draw_games.append(1)
            break
            
        computer_move_pos = epsilon_greedy(-1, current_game_state, q_table)
        number_moves = number_moves + 1
        current_game_state = current_game_state[:computer_move_pos] + computer + current_game_state[computer_move_pos+1:]
        available_moves.remove(computer_move_pos)
     
        if number_moves == 9 : 
            draw_games.append(1)
#             print(current_game_state)
            break
            
        if win('O' , list(current_game_state)) == True:
            o_win.append(1)
            break
        elif win('X' , list(current_game_state)) == True:
            x_win.append(1)
            break
        elif draw(current_game_state) == True:
            draw_games.append(1)
            break

outputs:

In game  10000
The number of X game wins 4429
The number of O game wins 3006
The number of drawn games 2565
In game  20000
The number of X game wins 8862
The number of O game wins 5974
The number of drawn games 5164
In game  30000
The number of X game wins 13268
The number of O game wins 8984
The number of drawn games 7748
In game  40000
The number of X game wins 17681
The number of O game wins 12000
The number of drawn games 10319

The reward-per-episode graph suggests the algorithm has converged. If the model has converged, shouldn't the number of O game wins be zero?

",12964,,,,,6/30/2020 18:57,q learning appears to converge but does not always win against random tic tac toe player,,1,0,,,,CC BY-SA 4.0 22202,2,,13460,6/26/2020 23:01,,2,,"

In deep learning (and, in general, machine learning), tensors are multi-dimensional arrays. You can perform some operations on these multi-dimensional arrays (depending also on the specific implementations and libraries). These operations are similar to the operations you can apply to vectors or matrices, which are just specific examples of multi-dimensional arrays. Examples of these operations are

  • indexing and slicing (if you are familiar with Python, these terms should not scare you)
  • algebraic operations (such as multiplication of a tensor with another tensor, which includes numbers, vectors or matrices), which typically support broadcasting
  • reshaping (i.e. change the shape of the tensor)
  • conversion to or from another format (e.g. a string)

TensorFlow provides an article that discusses these tensors, so I suggest that you read it.
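
For a concrete feel, here is a tiny sketch of the operations listed above, written in NumPy just for brevity (the deep learning libraries expose very similar calls):

import numpy as np

t = np.arange(24).reshape(2, 3, 4)   # a 3-dimensional tensor of shape 2x3x4
t[0, 1, :]                           # indexing and slicing
t * 2 + np.ones((1, 3, 4))           # algebraic operations with broadcasting
t.reshape(6, 4)                      # reshaping
t.astype(str)                        # conversion to another format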

In mathematics, tensors are not just multi-dimensional arrays. They are multi-dimensional arrays that need to satisfy certain properties (in the same way that matrices need to satisfy certain properties to be called matrices) and are equipped with certain operations.

The paper A Survey on Tensor Techniques and Applications in Machine Learning (2019), by Yuwang Ji et al., published by IEEE, provides a comprehensive overview of tensors in mathematics. It is full of diagrams that illustrate the concepts, and the explanations are concise. Some of the explanations in this paper may not be very useful for developing deep learning applications, but some of the illustrations and explanations (especially in the first pages, which are the only ones that I read) will give you some intuition behind the tensors (or multi-dimensional arrays) used in deep learning.

So, tensors in deep learning (DL) may not be exactly equivalent to the tensor objects in mathematics, because they may not satisfy all the required properties or some of the operations in the specific libraries may not be implemented, but it is fine to call them tensors because a tensor in mathematics is also a multi-dimensional array, to which you can apply operations (some of them are implemented in the DL libraries).

",2444,,2444,,6/26/2020 23:14,6/26/2020 23:14,,,,0,,,,CC BY-SA 4.0 22203,2,,22199,6/27/2020 4:42,,3,,"

Removing the learning rate will likely yield poor convergence to the optimal policy and optimal Q-values. Note that the current policy is completely dependent on the Q-values, as we take the action with highest Q-value in a given state (with a few other considerations such as exploration, etc.). If we were to remove the learning rate, then we are making a relatively large change to our Q-values and possibly to our policy as well after only a single update. For example, if the sample rewards have great variance (e.g. in stochastic environments), then drastic updates to a single Q-value may occur simply by chance when a learning rate is not used. Due to the recursive definition of Q-values, a few poor updates can undo the work of many previous updates. If this phenomenon were to occur frequently, then the policy may take a long time to converge to the optimal policy, if at all.

Underlying the temporal-difference update and many other reinforcement learning updates is the notion of policy iteration in which the estimated value function is updated to match the true value function of the current policy and the current policy is updated to be greedy with respect to the estimated value function. This process proceeds iteratively and gradually until convergence to the optimal policy and optimal value function is achieved. Gradual changes such as setting a small learning rate (e.g. $\alpha = 0.1$) aim to speed up convergence by lessening the frequency of the phenomenon in the above paragraph. Sutton and Barto make comments on convergence throughout their book, with the remarks surrounding line 2.7 in Section 2.5 providing a summary.
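
As a toy illustration of the first point, consider a single hypothetical state-action pair whose sample rewards are noisy with true mean 0 (a sketch, not part of any specific algorithm):

import numpy as np

rng = np.random.default_rng(0)
rewards = rng.normal(loc=0.0, scale=1.0, size=1000)  # noisy samples, true mean 0

def run(alpha):
    q = 0.0
    for r in rewards:
        q += alpha * (r - q)  # TD-style update with no bootstrapping term
    return q

print(run(1.0))   # with no learning rate, q just equals the last (noisy) sample
print(run(0.1))   # with alpha = 0.1, q is a smoothed estimate, much closer to 0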

",37607,,,,,6/27/2020 4:42,,,,3,,,,CC BY-SA 4.0 22205,1,,,6/27/2020 6:46,,3,51,"

The average return for trajectories, $V^{\pi_e}(s)$, is often computed via the importance sampling estimate $$V^{\pi_e}(s) = \frac{1}{n}\sum_{i=1}^n\prod_{t=0}^{H}\frac{\pi_e(a_t | s_t)}{\pi_b(a_t|s_t)}G_i$$ where $G_i$ is the return observed for the $i$th trajectory. Sutton and Barto give an example whereby the variance could be infinite.

In general, however, why does this estimator suffer from high variance? Is it because $\pi_e(a_t|s_t)$ is mainly deterministic and, therefore, the importance weight is $0$ for most trajectories, rendering those sample trajectories useless?

",32780,,2444,,11/5/2020 22:10,11/5/2020 22:10,Why is it the case that off-policy evaluation using importance sampling suffers from high variance?,,0,0,,,,CC BY-SA 4.0 22207,1,,,6/27/2020 8:30,,1,29,"

I stumbled upon a paper by P. Diehl and M. Cook with the title "Unsupervised learning of digit recognition using spike-timing-dependent plasticity", and I'm trying to understand the logic behind the network connectivity they made.

The network is as follows. The inputs (the 28x28 pixels of an image) are connected in the usual all-to-all fashion, with positive weights, to an NxN layer of neurons. The inputs are encoded as Poisson spike trains in which the spike frequency of a pixel is set according to the pixel value. The NxN layer is connected one-to-one to an NxN layer of inhibitory neurons. These inhibitory neurons inhibit all other neurons except the one they are connected to; thus, they are connected all-to-all with that one exception.

According to the paper, this provides competition among neurons. I cannot understand how, with this particular connectivity, competition is provided. How can different neurons inherit different properties? To me it seems that all neurons will inherit the same properties, so no differences in weights will arise among the neurons during training. For example, if the input 5 is passed to the network, all weights of all neurons will try to adjust according to 5. Then, if input 7 is passed next, all the weights will be updated according to the new digit (7). It is expected, though, that some weights will keep the previous adjustment, i.e. that some weights will have the properties of 5 and the others the properties of 7.

",31817,,31817,,6/28/2020 6:05,6/28/2020 6:05,How does Lateral Inhibition Provide Competition among Neurons?,,0,0,,,,CC BY-SA 4.0 22208,1,,,6/27/2020 13:00,,1,71,"

The following quote is taken from the beginning of the chapter on "Approximate Solution Methods" (p. 198) in "Reinforcement Learning" by Sutton & Barto (2018):

reinforcement learning generally requires function approximation methods able to handle nonstationary target functions (target functions that change over time). In control methods based on GPI (generalized policy iteration) we often seek to learn $q_\pi$ while $\pi$ changes. Even if the policy [pi] remains the same, the target values of training examples are nonstationary if they are generated by bootstrapping methods (DP and TD learning).

Could someone explain why the same is not the case if we use non-bootstrapping methods (such as Monte Carlo that is not allowed infinite rollouts)?

",29670,,2444,,6/27/2020 13:54,6/27/2020 13:54,Why do bootstrapping methods produce nonstationary targets more than non-bootstrapping methods?,,0,0,,,,CC BY-SA 4.0 22212,2,,22194,6/27/2020 15:01,,0,,"

While theoretically you can do something like this if you're very confident you'll cover most of the state space in exploration, this is still a suboptimal strategy. Even in the case of multi-armed bandits, this strategy can be much less sample efficient than $\epsilon$-greedy, and exploration is much easier in this case.

So, even if your strategy miraculously works on a decently sized MDP, it'll be worse than combining exploration and exploitation.

",37829,,36821,,6/27/2020 15:27,6/27/2020 15:27,,,,6,,,,CC BY-SA 4.0 22213,2,,16610,6/27/2020 16:58,,-2,,"

The book sets this hypothesis up by laying out a few assumptions:

In reinforcement learning, the purpose or goal of the agent is formulated in terms of a special signal called the reward, passing from the environment to the agent. At each time step, the reward is a simple number.

We could think about what counterexamples to those assumptions might be:

  1. The reward signal originates internally, instead of originating from the environment. (e.g. meditation, or abstract introspection)
  2. The signal is not received every time step, or isn't necessarily expected to be received at all. (e.g. seeking of transcendent experiences)

What might be common for these counterexamples is that the reinforcement learning mechanism itself undergoes spontaneous change. A signal that would have been positive before the spontaneous change might now be negative. The reward landscape itself might be completely different. From the agent's perspective, it might be impossible to evaluate what changed. The agent might have a 'subconscious' secondary algorithm that introduces changes in the learning algorithm itself, in a way that's decoupled from any reward-defined behavior.

",38221,,2444,,7/1/2020 16:34,7/1/2020 16:34,,,,2,,,,CC BY-SA 4.0 22214,1,22216,,6/27/2020 19:02,,2,172,"

I'm implementing the DQN algorithm to train my agent to play a turn-based game. The action space for the game is small, but not all moves are available in all states. Therefore, when deciding which action to pick, the agent sets Q-values to 0 for all the illegal moves while normalizing the values of the rest.

During training, when the agent is calculating the loss between policy and target networks, should the illegal actions be ignored (set to 0) so that they don't affect the calculations?

",38076,,,,,6/27/2020 22:36,Should illegal moves be excluded from loss calculation in DQN algorithm?,,1,0,,,,CC BY-SA 4.0 22215,2,,2231,6/27/2020 19:21,,1,,"

Rules based on Gestalt psychology could be seen as a 'local minima' in terms of optimal image processing. Some of them could be surprisingly effective, but difficult to extend and improve upon, since they assume certain high level attributes that might not generalize well.

"If it's circular, then it's a fruit 90% of the time"

Modern methods like neural networks come at the problem from another direction, and try to build towards those high level attributes incrementally. You could say that a goal of these modern methods is to recreate a version of those 'assumptions' or 'Gestalt rules' empirically. That way, we can feel more confident that we're not using a locally optimal approach.

"This is the smallest algorithm that will accurately identify fruit 90% of the time"

Are such methods being used or worked on today?

Yes, and not just in image processing. If you look at any industry where fast processing of ambiguous domain-specific data is needed, there are usually systems that were designed by experts in that field using 'hand-crafted' heuristics. A benefit of these approaches is that the 'Gestalt rules' are understandable apart from the system that uses them, and are therefore easier to trust.

Was any progress made on this? Or was this research program dropped?

Aside from trivial cases, employing the gestalt approach in a particular domain requires expertise. "Image Expert" is an expensive and hard to fill role. Even then, you're bounded by that expert's ability to effectively generalize an extremely complex problem. Over the past decades, machine learning has become a lot easier and less expensive to employ, and has been slowly replacing all those 'hand-crafted' approaches.

However, one of the interesting things we're finding when we let the machine learning approaches loose is that they sometimes don't agree with our previous rules. Why they don't agree, and what this means about automation and our psychological biases, is one of the big open questions in AI.

You might also be interested in Saliency Maps, and similar techniques intended to visualize the general rules used by neural networks: https://en.wikipedia.org/wiki/Saliency_map

References:

http://wayback.archive-it.org/219/20121229060421/http://www.cs.indiana.edu/~jwmills/ANALOG.NOTEBOOK/klm/klm.html

https://journals.sagepub.com/doi/pdf/10.1177/002029400403701001

https://maritime.org/doc/op1140/index.htm

https://airandspace.si.edu/stories/editorial/inventing-apollo-spaceflight-biomedical-sensors

",38221,,38221,,7/1/2020 13:53,7/1/2020 13:53,,,,2,,,,CC BY-SA 4.0 22216,2,,22214,6/27/2020 22:36,,3,,"

I've implemented this exact scenario before; your approach would most likely be successful, but I think it could be simplified.

Therefore, when deciding on which action to pick, agent sets Q-values to 0 for all the illegal moves while normalizing the values of the rest.

In DQN, the Q-values are used to find the best action. To determine the best action in a given state, it suffices to look at the Q-values of all valid actions and then take the valid action with highest Q-value. Setting Q-values of invalid actions to 0 is unnecessary once you have a list of valid actions. Note that you would need that set of valid actions to set invalid Q-values to 0 in the first place, so the approach I'm suggesting is more concise without worsening the performance.

Since the relative order of the Q-values is all that is required to find the best action, there is no need for normalization. Also, the original DQN paper uses $\epsilon$-greedy exploration. Keep in mind to only sample from valid actions in a given state when exploring this way.

During training, when the agent is calculating the loss between policy and target networks, should the illegal actions be ignored (set to 0) so that they don't affect the calculations?

As noted in one of your previous questions, we train on tuples of experiences $(s, a, r, s')$. The definition of the Q-learning update is as follows (taken from line 6.8 of Sutton and Barto):

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \left[R_{t+1} + \gamma\max\limits_aQ(S_{t+1}, a) - Q(S_t, A_t)\right].$$

The update requires taking a maximum over all valid actions in $s'$. Again, setting invalid Q-values to 0 is unnecessary extra work once you know the set of valid actions. Ignoring invalid actions is equivalent to leaving those actions out of the set of valid actions.
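
A minimal sketch of both points, assuming a hypothetical q_values array (indexed by action) and a precomputed list of valid actions for each state:

def best_valid_action(q_values, valid_actions):
    # greedy action selection restricted to the valid actions only
    return max(valid_actions, key=lambda a: q_values[a])

def td_target(reward, gamma, q_values_next, valid_actions_next, done):
    if done:
        return reward
    # the max in the Q-learning update is likewise taken only over valid actions
    return reward + gamma * max(q_values_next[a] for a in valid_actions_next)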

",37607,,,,,6/27/2020 22:36,,,,0,,,,CC BY-SA 4.0 22219,2,,10812,6/28/2020 2:07,,1,,"

For anyone coming across this question who wants a very intuitive understanding of first-visit and every-visit Monte Carlo, look at the answer given in the link provided here.

https://amp.reddit.com/r/reinforcementlearning/comments/9zkdjb/d_help_need_in_understanding_monte_carlo_first/

After looking at that intuition, you can come back and look at nbro's answer provided above.

Hope this helps anyone struggling with this idea

",38229,,38229,,6/28/2020 2:29,6/28/2020 2:29,,,,1,,,,CC BY-SA 4.0 22220,1,22229,,6/28/2020 2:19,,6,180,"

Coq exists, and there are other similar projects out there. Further, Reinforcement Learning has made splashes in the domain of playing games (a la Deepmind & OpenAI and other less well-known efforts).

It seems to me that these two domains deserve to be married such that machine learning agents try to solve mathematical theorems. Does anyone know of any efforts in this area?

I'm a relative novice in both of these domains, but I'm proficient enough at both to take a stab at building a basic theorem solver myself and trying to make a simple agent have a go at solving some basic number theory problems. When I went to look for prior art in the area I was very surprised to find none. I'm coming here as an attempt to broaden my search space.

",38230,,2444,,6/28/2020 20:53,6/28/2020 20:53,Has reinforcement learning been used to prove mathematical theorems?,,1,5,,,,CC BY-SA 4.0 22221,1,22230,,6/28/2020 6:59,,2,193,"

I am a bit confused as to how exactly I should be implementing SARSA (or Q-learning too) on what is a simple 2-stage Markov Decision Task. The structure of the task is as follows:

Basically, there are three states $\{S_1,S_2,S_3\}$, where $S_1$ is the first-stage state, for which the two possible actions are the two yellow airplanes. $S_2$ and $S_3$ are the possible states for the second stage, and the feasible actions are the blue and red background pictures, respectively. There is only a reward at the end of the second-stage choice. If I call the two first-stage actions $\{a_{11},a_{12}\}$ and the four possible second-stage actions $\{a_{21},a_{22},a_{23},a_{24}\}$, from left to right, then a sample trial/episode will look like: $$S_1, a_{11}, S_2, a_{22},R \quad \text{ or }\quad S_1, a_{11}, S_3, a_{24}, R.$$

In the paper I am reading, where the figure is from, they used a complicated version of TD$(\lambda)$ in which they maintained two action-value functions $Q_1$ and $Q_2$ for each stages. On the other hand, I am trying to implement a simple SARSA update for each episode $t$: $$Q_{t+1}(s,a)= Q_t(s,a) + \alpha\left(r + \gamma\cdot Q_t(s',a') - Q_t(s,a)\right).$$

In the first-stage, there is no reward so an actual realization will look like: $$Q_{t+1}(S_1, a_{11}) = Q_t(S_1,a_{11})+\alpha\left( \gamma\cdot Q_t(S_3,a_{23}) - Q_t(S_1,a_{11})\right).$$

I guess my confusion is then: how should it look for the second stage of an episode? That is, if we continue the realization of the task above, $S_1, a_{11}, S_3, a_{23}, R$, what should fill in the $?$: $$Q_{t+1}(S_3,a_{23}) = Q_t(S_3,a_{23}) + \alpha\left(R +\gamma\cdot Q_t(\cdot,\cdot)-Q_t(s_3,a_{23}) \right)$$

On one hand, it seems to me that, since this is the end of an episode, we should assign $0$ to $Q_t(\cdot,\cdot)$. On the other hand, since the nature of this task is that the same two-stage episode is repeated for a total of $T$ (a large number of) times, we may instead need $Q_t(\cdot,\cdot) = Q_t(S_1,\cdot)$, with the additional action selection in the first stage there.

I will greatly appreciate if someone can tell me what is the right way to go here.

The link to paper

",38234,,2444,,6/28/2020 20:49,6/28/2020 20:49,Implementing SARSA for a 2-stage Markov Decision Process,,1,3,,,,CC BY-SA 4.0 22222,2,,22051,6/28/2020 10:54,,1,,"

Personally, I find the best way to think of SMDPs intuitively by just imagining that you just discretise the time into such small steps (infinitesimally small steps if necessary) that you can treat it as a normal MDP again, but with some extra domain knowledge that you can exploit primarily for computational efficiency:

  1. Only at time steps that really correspond to "events" in your SMDP can you observe non-zero rewards; at all other time steps you just get rewards equal to $0$.
  2. Only at time steps that really correspond to "events" in your SMDP do you have an action space greater than $1$; in all the "fake" time steps, you have no agency, you just have a single action available (say, a "dummy" or "null" action). So all of these "fake" time steps do not contribute in any way to the "credit assignment" problem in RL, and you can kind of ignore them in your learning steps; only the time spent in them can still be important for discount factors $\gamma < 1$.

If $\tau$ (real number not an integer number) shows the time between two arrivals, should I update Q-functions as follows:

Yes, an update rule like that looks correct to me. Let's take an example situation, where $\tau = 2.0$, and instead of using the update rule you suggest, we take the "proper" approach of discretising into smaller time steps, and treating it as a regular MDP. In this simple example case, it is sufficient to discretise by taking time steps that correspond to durations of $1.0$.

In the SMDP, we'll have only a single transition $s_0 \rightarrow s_2$ (it will become clear why I use slightly strange time-indexing here soon), after which we observe a reward, and this transition takes time $\tau = 2.0$. In the corresponding MDP, we'll have two state transitions; $s_0 \rightarrow s_1$, and $s_1 \rightarrow s_2$, with two reward observations $R_1$ and $R_2$, where we know for sure that:

  1. $R_2 = 0$ (because it does not actually correspond to any event in the SMDP)
  2. We have a meaningful choice between multiple actions at $s_0$, each of which can have different transition probabilities for taking us into different "dummy" states $s_1$, and yield possibly-non-zero rewards $R_1$. In the dummy state $s_1$, we'll always only have the choice for a single dummy/null action (because this state does not correspond to any event in the SMDP), which always yields $R_2 = 0$ as mentioned above.

So, the correct update rule for $s_1$ where we picked a forced dummy action $\varnothing$ and are doomed to receive a reward $R_2 = 0$, would be:

$$Q(s_1, \varnothing) \gets Q(s_1, \varnothing) + \alpha \left( 0 + \gamma \max_{a'} Q(s_2, a') - Q(s_1, \varnothing) \right)$$

and the correct update rule for $s_0$, where we picked a meaningful action $a_0$ and may get a non-zero reward $R_1$, would be:

$$Q(s_0, a_0) \gets Q(s_0, a_0) + \alpha \left( R_1 + \gamma \max_{a'} Q(s_1, a') - Q(s_0, a_0) \right)$$

In this last update rule, we know that $s_1$ is a dummy state in which the dummy action $\varnothing$ is the only legal action. So, we can get rid of the $\max$ operator there and simplify it to:

$$Q(s_0, a_0) \gets Q(s_0, a_0) + \alpha \left( R_1 + \gamma Q(s_1, \varnothing) - Q(s_0, a_0) \right)$$

Since we know that $s_1$ is a dummy state in which we are never able to make meaningful choices anyway, it seems a bit wasteful to actually keep track of $Q(s_1, \varnothing)$ values for it. Luckily, we can easily express $Q(s_1, \varnothing)$ directly in terms of $Q(s_2, \cdot)$ -- which is exactly the next set of $Q$-values that we would be interested in keeping track of again:

$$Q(s_1, \varnothing) = \mathbb{E} \left[ 0 + \gamma \max_{a'} Q(s_2, a') \right]$$

So if we want to skip learning $Q$-values for $s_1$ (because it's kind of a waste of effort), we can just use this definition and plug it straight into the update rule for $Q(s_0, a_0)$. $Q$-learning is inherently an algorithm that just uses concrete samples of experience to estimate expectations (and this is a major reason why it typically uses learning rates $\alpha < 1.0$), so we can simply get rid of the expectation operator when doing this:

$$Q(s_0, a_0) \gets Q(s_0, a_0) + \alpha \left( R_1 + \gamma \left[ \gamma \max_{a'} Q(s_2, a') \right] - Q(s_0, a_0) \right)$$

and this is basically the update rule that you suggested. Note: here I assumed that you receive your rewards directly when you take actions in your SMDP, which is why I had $R_1$ as a possibly-non-zero reward, and always $R_2 = 0$. I suppose you could also in some cases envision SMDPs where the reward only arrives on the next SMDP time step, and that the amount of time that ends up being elapsed in between the two events is important to take into account via the discount factor $\gamma$. So you could also choose to model a problem where $R_1 = 0$ and $R_2$ may be non-zero, and this would yield a different update rule (I think one where the reward gets multiplied by $\gamma^{\tau - 1}$? not sure, would have to go through the steps again).
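
Assuming the first convention (the reward is received at the moment the action is taken, as in the derivation above), the resulting update for a transition that took $\tau$ time units could be sketched like this (Q stored as a dict of dicts; all names are illustrative only):

def smdp_q_update(Q, s, a, r, s_next, tau, alpha=0.1, gamma=0.99):
    # the discount factor is raised to the elapsed time tau between the two SMDP events
    target = r + (gamma ** tau) * max(Q[s_next].values())
    Q[s][a] += alpha * (target - Q[s][a])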


What measure should be used in the SMDP setting? I would be thankful if someone can explain the Q-Learning algorithm for the SMDP problem with this setting.

I think it would be important to involve the amount of time that you take somehow in your evaluation criterion. You could run episodes for a fixed amount of time, and then just evaluate agents based on the sum of rewards. If you don't run for a fixed amount of time (but instead for a fixed number of steps, each of which may take variable amounts of time, for example), you would probably instead want to evaluate agents based on the average rewards per unit of time. You could also include discount factors in your evaluation if you want, but probably don't have to.


Moreover, I am wondering when Q-functions are updated. For example, if a customer enters our website and purchases a product, we want to update the Q-functions. Suppose that the planning horizon (state $S_0$) starts at 10:00 am, and the first customer enters at 10:02 am, and we sell a product and gain $R_1$ and the state will be $S_1$. The next customer enters at 10:04 am, and buy a product, and gain reward $R_2$ (state $S_2$). In this situation, should we wait until 10:02 to update the Q-function for state $S_0$?

This depends on you state representation, and how you model a "state", and to what extent previous actions have influence the state you end up in. Keep in mind that the update rule for $Q(S_0)$ also requires for $S_1$ (or even $S_2$ if $S_1$ is a "dummy state" that you skip) to have been observed. So, if your state representation includes some features describing the "current customer" for which you want to pick an action (do you offer them a discount or not, for example?), then you can only update the $Q$-value for the previous customer when the next customer has arrived. This model does assume that your previous actions have some level of influence over the future states that you may end up in though. For example, you might assume that if your actions make the first customer very happy, you get a better reputation and are therefore more likely to end up in future states where other customers visit more frequently.

",1641,,,,,6/28/2020 10:54,,,,0,,,,CC BY-SA 4.0 22224,1,,,6/28/2020 14:10,,1,206,"

I am using Sutton and Barto's book for Reinforcement Learning.

In Chapter 8, I am having difficulty in understanding the Trajectory Sampling.

I have read the section on trajectory sampling (Section 8.6) twice (plus a third time partially), but I still do not get how it is different from the normal sampling update, and what its benefits are.

",36710,,2444,,6/28/2020 15:11,6/28/2020 15:11,How is trajectory sampling different than normal (importance) sampling in reinforcement learning?,,1,0,,,,CC BY-SA 4.0 22225,1,,,6/28/2020 14:12,,0,43,"

I am inspired by the paper Neural Architecture Search with Reinforcement Learning to use reinforcement learning for optimizing a child network (learner). My meta-learner (controller or parent network) is an MLP and will take a silhouette score as the reward function. Its output is a vector of real numbers between 0 and 1. These values are k different possibilities for the number of clusters (the goal is to cluster the result of the child network, which is an auto-encoder; the embedded images are the input to the meta-learner).

What I am confused about is the environment here and how to implement this network. I was reading this tutorial and the author has used gym library to set the environment.

Should I build an environment from scratch myself, or is that not always needed?

I appreciate any help, hints, or links to a source that helps me better understand RL concepts. I am new to it and easily get confused.

",37744,,2444,,6/28/2020 20:48,7/28/2020 21:02,Should I build an environment from scratch myself or it is not always needed?,,1,0,,,,CC BY-SA 4.0 22226,2,,22225,6/28/2020 14:19,,1,,"

I guess it is usually better to reuse existing environments and adapt them to your needs. Since most environment code is open-sourced anyway, you can always edit it to your liking.

If you want a custom environment, you can add an environment to gym like this.

",36074,,,,,6/28/2020 14:19,,,,0,,,,CC BY-SA 4.0 22227,2,,22224,6/28/2020 14:27,,1,,"

Here is my understanding:

In trajectory sampling, as the book describes it, we use the current policy on the simulator to get (next-state, action) pairs. The advantage is that, if some states occur more frequently than others in that environment, and if we take enough samples, the distribution among the samples will be similar to the actual on-policy distribution.

On the other hand, you can sample in a different manner if you have access to the model. Suppose you have access to the transition distribution. Then you can sample your start state uniformly and use the transition distribution to get the (next-state, action) pairs. This can be useful if you want to force your algorithm to look at all states evenly, even if that's not the actual on-policy distribution.
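
A rough sketch of the two schemes, assuming a hypothetical env.step(state, action) simulator that returns (next_state, reward), a policy function, and an actions(state) helper:

import random

def trajectory_sampling(env, policy, start_state, n):
    # states are visited with the frequency induced by following the current policy
    samples, s = [], start_state
    for _ in range(n):
        a = policy(s)
        s_next, r = env.step(s, a)
        samples.append((s, a, r, s_next))
        s = s_next
    return samples

def uniform_sampling(env, all_states, actions, n):
    # every state-action pair gets equal attention, regardless of how often it occurs on-policy
    samples = []
    for _ in range(n):
        s = random.choice(all_states)
        a = random.choice(actions(s))
        s_next, r = env.step(s, a)
        samples.append((s, a, r, s_next))
    return samples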

",36074,,,,,6/28/2020 14:27,,,,0,,,,CC BY-SA 4.0 22229,2,,22220,6/28/2020 16:15,,5,,"

Artificial Intelligence for Theorem Proving is an active research area as witnessed by the existence of the AITP conference and of many publications on the topic. Some papers are mentioned in this thread: https://coq.discourse.group/t/machine-learning-and-hammers-for-coq/303. I haven't read these papers myself, so I cannot point you to a paper using reinforcement learning specifically, but given the important activity in this domain, I would be very surprised if it hadn't been attempted.

",38247,,,,,6/28/2020 16:15,,,,3,,,,CC BY-SA 4.0 22230,2,,22221,6/28/2020 18:56,,1,,"

In this game you can view end of an episode two ways:

  • There is an implied, terminal, fourth state $s_4$ representing the end of the game.

  • You could view the process as a continuous repeating one, where no matter what the choice is made in $s_2$ or $s_3$, the following state is $s_1$.

The first, terminating, view is a simpler and entirely natural view since nothing that the agent does in one episode can influence the next. It will result in a Q table that predicts future rewards within a single episode for the current agent (as opposed to discounted view over multiple episodes).

You are over-complicating things for yourself by ignoring that a zero reward is still a reward (of $0$). There is no need to remove $R$ from your initial update rule. In many environments there are rewards collected before the end of an episode.

In addition, to complete the standard episodic view, you can note that $Q(s_4, \cdot) = 0$ always, by definition, hence $\max_{a'} Q(s_4, a') = 0$ as well. It is common here though to have a branch based on detecting a terminal state, and use a different update rule:

$$Q_{t+1}(S_3,a_{23}) = Q_t(S_3,a_{23}) + \alpha\left(R - Q_t(s_3,a_{23}) \right)$$

In brief, most implementations of TD algorithms do this:

  • Always assume a reward on each time step, which can be set to $0$

  • Special case for end of episode with a simplified update rule, to avoid needing to store, look up or calculate the $0$ value associated with terminal states

When implementing the environment, it is common to have a step function that always returns reward, next state and whether or not it is terminal e.g.

reward, next_state, done = env.step(action)

Details may vary around this. If you are working with an environment that does not have such a function (many will not have an inherent reward), then it is common to implement a similar function as a convenient wrapper to the environment so that the agent code does not have to include calculations of what the reward should be or whether the state is terminal.
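
Putting the two points together, the agent's inner loop for SARSA typically looks something like this (a sketch only, assuming the env.step interface above plus hypothetical choose_action, Q, alpha and gamma):

state = env.reset()
action = choose_action(Q, state)  # e.g. epsilon-greedy
done = False
while not done:
    reward, next_state, done = env.step(action)
    if done:
        # terminal special case: Q(terminal, .) = 0 by definition, so no bootstrap term
        Q[state][action] += alpha * (reward - Q[state][action])
    else:
        next_action = choose_action(Q, next_state)
        Q[state][action] += alpha * (reward + gamma * Q[next_state][next_action]
                                     - Q[state][action])
        state, action = next_state, next_action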

",1847,,1847,,6/28/2020 19:01,6/28/2020 19:01,,,,4,,,,CC BY-SA 4.0 22232,1,,,6/28/2020 22:51,,2,75,"

The intuition provided when introducing actor-critic algorithms is that the variance of its gradient estimates is smaller than in REINFORCE as, e.g., discussed here. This intuition makes sense for the reasons outlined in the linked lecture.

Is there a paper/lecture providing a formal proof of that claim for any type of actor-critic algorithm (e.g. the Q Actor-Critic)?

",38052,,2444,,1/13/2022 12:01,1/13/2022 12:01,What is the proof that the variance of the gradient estimate in Actor-Critic is smaller than in REINFORCE?,,0,1,,,,CC BY-SA 4.0 22234,1,,,6/29/2020 2:06,,1,72,"

Importance sampling is a common method for calculating off-policy estimates in RL. I have been reading through some of the original documentation (D.G. Horvitz and D.J. Thompson, Powell, M.J. and Swann, J) and cannot find any restrictions on the reward or value being estimated. However, it seems that there are constraints because the calculation is not what I would expect for RL environments that have negative rewards.

For example, consider for a given action-state pair ($a_i, s_i$), $\pi_e(a|s) = 0.4$ and $\pi_b(a|s) = 0.6,$ where $\pi_b$ and $\pi_e$ are the behavioral and evaluation policies respectively. Also, assume the reward range is $[-1,0]$, and this action has a reward of $r_{\pi_b}=-0.5$.

Under the IS definition, the expected reward under $\pi_e$ would be $r_{\pi_e} = \frac{\pi_b(a|s)}{\pi_e(a|s)} r_{\pi_b}$. In this example, $r_{\pi_e}=-0.75$, thus $r_{\pi_e} < r_{\pi_b}$. However, changing the scale of the reward to $[0,1]$, which results in $r_{\pi_b}=0.5$, leads to $r_{\pi_e} > r_{\pi_b}$.

All examples of IS I have seen in the references focus on positive rewards. However, I find myself wondering if this formulation applies to negative rewards too. If this formulation does allow for negative reward structures, I'm not sure how to interpret this result. I'm wondering how changing the scale of the reward could change the ordering. Is there any documentation on the requirements of the value in IS? Any insight into this would be greatly appreciated!

",38256,,2444,,11/5/2020 22:20,11/5/2020 22:20,Does importance sampling for off-policy estimation also apply to the case of negative rewards?,,0,1,,,,CC BY-SA 4.0 22235,1,22240,,6/29/2020 2:07,,2,326,"

Why can't we during the first 1000 episodes allow our agent to perform only exploration?

This will give a better chance of covering the entire state space. Then, after that number of episodes, we can decide to exploit.

",37831,,2444,,6/29/2020 12:14,6/29/2020 12:14,Why is 100% exploration bad during the learning stage in reinforcement learning?,,1,0,,,,CC BY-SA 4.0 22236,2,,18840,6/29/2020 2:10,,1,,"

I ended up using a work around.

I set up the network so that a C x C (e.g. 320 x 320) input would output a C x C mask for some constant C (in my case it was 320).

I then resized the image I wanted to pass in to C x C, and then resized the output back to the original size of the image.

",27240,,,,,6/29/2020 2:10,,,,0,,,,CC BY-SA 4.0 22238,1,,,6/29/2020 3:06,,1,69,"

In section 6.5.6 of the book Deep Learning by Ian et. al. general backpropagation algorithm is described as:

The back-propagation algorithm is very simple. To compute the gradient of some scalar z with respect to one of its ancestors x in the graph, we begin by observing that the gradient with respect to z is given by dz/dz = 1. We can then compute the gradient with respect to each parent of z in the graph by multiplying the current gradient by the Jacobian of the operation that produced z. We continue multiplying by Jacobians traveling backwards through the graph in this way until we reach x. For any node that may be reached by going backwards from z through two or more paths, we simply sum the gradients arriving from different paths at that node.

To be specific I don't get this part:

We can then compute dz the gradient with respect to each parent of z in the graph by multiplying the current gradient by the Jacobian of the operation that produced z.

Can anyone help me understand this with some illustration? Thank you.

",31749,,31749,,6/30/2020 3:04,6/30/2020 3:04,I need help understanding general back propagation algorithm,,0,2,,,,CC BY-SA 4.0 22240,2,,22235,6/29/2020 7:53,,3,,"

Why can’t we during the first 1000 episodes allow our agent perform only exploration

You can do this. It is fine to do so either to learn the value function of a simple random policy, or when performing off-policy updates. It is quite normal when learning an environment from scratch in a safe way - e.g. in simulation - to collect an initial set of data from behaving completely randomly. The amount of this random data varies, and usually the agent will not switch from fully random to fully deterministic based on calculated values as a single step, but will do so gradually.

this would give a better chance of covering the entire state space

That will depend on the nature of the problem. For really simple problems, it may explore the space sufficiently to learn from. However, for many problems it is just a starting point, and not sufficient to cover parts of the space that are of interest in optimal control.

When behaving completely randomly, the agent may take a very long time to complete an episode, and may never complete its first episode. So you could be waiting for a long time to collect data for the first 1000 such episodes. An example of this sort of environment would be a large maze - the agent will move back and forth in the maze, revisiting the same parts again and again, where in theory it could already be learning not to repeat its mistakes.

In some environments, behaving completely randomly will result in early failure, and never experiencing positive rewards that are available in the environment. An example of this might be a robot learning to balance on a tightrope and get from one end to the other. It would fall off after a few random actions, gaining very little knowledge for 1000 episodes.

The state space coverage you are looking for ideally should include the optimal path through the space - at least at some point during learning (not necessarily the start). This does not have to appear in one single perfect episode, because the update rules for value functions in reinforcement learning (RL) will eventually allocate the correct values and find this optimal path in the data. However, the collected data does need to include the information about this optimal path amongst all the alternatives, so that the methods in RL can evaluate and select it. In simple environments, acting randomly may be enough to gain this data, but this becomes highly unlikely when the environments are more complex.

then, after that number of episodes, we could decide to exploit

Again this might work for very simple environments, where you have collected enough information through acting randomly to construct a useful value function. However, if acting randomly does not find enough of the optimal path, then the best that exploitation can do is find some local optimum based on the data that was collected.

I suggest you experience this difference for yourself: Set up a toy example environment, and use it to compare different approaches for moving between pure exploration and pure exploitation. You will want to run many experiments (probably 100s for each combination, averaged) to smooth out the randomness, and you can plot the results to see how well each approach learns - e.g. how many time steps (count time steps, not episodes, if you are interested in sample efficiency) it takes for the agent to learn, and whether or not it actually finds the correct optimal behaviour. Bear in mind that the specific results will only apply in your selected environment - so you might also want to do this comparison on a small range of environments.
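
As a rough starting point, a comparison harness could look something like the sketch below (the corridor environment, episode counts and epsilon schedules are all just illustrative assumptions, not recommendations):

import numpy as np

N_STATES = 10          # a tiny corridor: positions 0..9, start at 0, goal at 9
ACTIONS = [-1, +1]     # move left / move right

def run(schedule, episodes=500, alpha=0.1, gamma=0.99, seed=0):
    rng = np.random.default_rng(seed)
    q = np.zeros((N_STATES, len(ACTIONS)))
    steps_per_episode = []
    for ep in range(episodes):
        eps = schedule(ep, episodes)
        s, steps = 0, 0
        while s != N_STATES - 1 and steps < 1000:
            a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(q[s]))
            s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
            r = 0.0 if s2 == N_STATES - 1 else -1.0
            q[s, a] += alpha * (r + gamma * np.max(q[s2]) - q[s, a])
            s, steps = s2, steps + 1
        steps_per_episode.append(steps)
    return steps_per_episode

# Schedule A: pure exploration for the first half of training, then pure exploitation.
explore_then_exploit = lambda ep, total: 1.0 if ep < total // 2 else 0.0
# Schedule B: epsilon decays gradually from 1.0 down to 0.05.
decaying = lambda ep, total: max(0.05, 1.0 - ep / (0.8 * total))

for name, sched in [("explore then exploit", explore_then_exploit), ("decaying epsilon", decaying)]:
    steps = run(sched)
    print(name, "- mean steps over the last 50 episodes:", np.mean(steps[-50:]))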

",1847,,1847,,6/29/2020 8:13,6/29/2020 8:13,,,,9,,,,CC BY-SA 4.0 22242,1,,,6/29/2020 12:42,,1,294,"

DQN for Atari takes considerable training time. For example, the 2015 paper in Nature notes that algorithms are trained for 50 million frames or equivalently around 38 days of game experience in total. One reason is that DQN for image data typically uses a CNN, which is costly to train.

However, the main purpose of a CNN is to extract the image features. Note that the policy for DQN is represented by a CNN plus an output layer with one unit per discrete action. Is it possible to use a pretrained DQN to accelerate the training process by fixing the weights of the underlying pretrained CNN, resetting the weights of the output layer, and then running another (possibly different) DQN algorithm to relearn the weights of the output layer? Both DQN algorithms would be run on the same underlying environment.
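
Concretely, I am imagining something like this rough sketch (PyTorch-style, assuming the Q-network exposes a convolutional "features" module and a linear "head"; the attribute names and sizes are purely illustrative):

import torch.nn as nn

def reuse_pretrained(q_net: nn.Module, feature_dim: int, n_actions: int) -> nn.Module:
    # Freeze the pretrained convolutional feature extractor.
    for p in q_net.features.parameters():
        p.requires_grad = False
    # Reset only the output layer, to be relearned by the second DQN run.
    q_net.head = nn.Linear(feature_dim, n_actions)
    return q_net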

",31984,,37607,,6/30/2020 22:33,6/30/2020 22:33,Atari Games: Pretrained CNN to accelerate training?,,0,5,,,,CC BY-SA 4.0 22243,1,,,6/29/2020 14:15,,1,1161,"

I know that $G_t = R_{t+1} + \gamma G_{t+1}$.

Suppose $\gamma = 0.9$ and the reward sequence is $R_1 = 2$ followed by an infinite sequence of $7$s. What is the value of $G_0$?

As the reward sequence is infinite, how can we deduce the value of $G_0$? I don't see the solution. All I can write is $G_0 = 2 + 0.9 \, G_1$, and I don't know the value of $G_1$, nor how to handle the infinite sequence $R_2, R_3, R_4, \dots$

",38264,,2444,,6/30/2020 12:06,6/30/2020 12:06,How do I calculate the return given the discount factor and a sequence of rewards?,,2,0,,,,CC BY-SA 4.0 22244,2,,22243,6/29/2020 14:42,,2,,"

You know all the rewards. They're 2, 7, 7, 7, and 7s forever. The problem now boils down to essentially a geometric series computation.

$$ G_0 = R_1 + \gamma G_1 $$

$$ G_0 = 2 + \gamma\sum_{k=0}^\infty 7\gamma^k $$

$$ G_0 = 2 + 7\gamma\sum_{k=0}^\infty\gamma^k $$

$$ G_0 = 2 + \frac{7\gamma}{1-\gamma} = \frac{2 + 5\gamma}{1-\gamma} $$
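
A quick numerical sanity check (plain Python, truncating the infinite sum after many terms):

gamma = 0.9
g0_truncated = 2 + sum(7 * gamma ** (k + 1) for k in range(1000))
g0_closed_form = (2 + 5 * gamma) / (1 - gamma)
print(g0_truncated, g0_closed_form)   # both are approximately 65.0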

",37829,,,,,6/29/2020 14:42,,,,1,,,,CC BY-SA 4.0 22245,2,,22243,6/29/2020 14:44,,1,,"

There are a few ways to resolve values of infinite sums. In this case, we can use a simple technique of self-reference to create a solvable equation.

I will show how to do it for the generic case of an MDP with the same reward $r$ on each timestep:

$$G_t = \sum_{k=0}^{\infty} \gamma^k r$$

We can "pop off" the first item:

$$G_t = r + \sum_{k=1}^{\infty} \gamma^k r$$

Then we can note that the second term is just $\gamma$ times the original term:

$$G_t = r + \gamma G_t$$

(There are situations where this won't work, such as when $\gamma \ge 1$ - essentially, we are taking advantage of the fact that the higher-order terms are arbitrarily close to zero, so they can be ignored.)

Re-arrange it again:

$$G_t = \frac{r}{1 - \gamma}$$

This means that you can get the value of a return for a discounted sum of repeating rewards, which allows you to calculate your $G_1$. I will leave that last part as an exercise for you, as you already figured the first part out.

",1847,,,,,6/29/2020 14:44,,,,1,,,,CC BY-SA 4.0 22246,1,22248,,6/29/2020 16:33,,2,79,"

I'm using the DQN algorithm to train my agent to play a turn-based game. The winner of the game can be known before the game is over. Once the winning condition is satisfied, it cannot be reverted. For example, the game might last 100 turns, but it's possible to know that one of the players won at move 80, because some winning condition was satisfied. The last 20 moves don't change the outcome of the game. If people were playing this game, they would play it to the very end, but the agent doesn't have to.

The agent will be using memory replay to learn from the experience. I wonder: is it helpful for the agent to have the experiences gathered after the winning condition was satisfied, for a more complete picture? Or is it better to terminate the game immediately, and why? How would this affect the agent's learning?

",38076,,38076,,6/29/2020 19:50,6/29/2020 20:06,Should the agent play the game until the end or until the winner is found?,,1,4,,,,CC BY-SA 4.0 22247,1,,,6/29/2020 16:59,,1,94,"

I am still new to CNNs, but I would like to check my understanding between when to use convolutional layers versus fully connected layers.

From what I have read, we can use convolutional layers with filters, rather than fully connected layers, with images, text, and audio. However, with regular tabular data, for example, the iris dataset, a convolutional layer would not perform well because of the lack of spatial structure: the columns can be swapped, yet the record or sample itself does not change. For example, we can swap the order of the Petal Length column with Petal Width and the record does not change. Whereas in an image or audio file, changing the order of the elements would result in a different image or audio file.

These convolutional layers are "better" for images and audio because not all the features need to connect to the next layer. For example, we do not need the background of a car image to know it is a car, thus we do not need all the connections and we save computational costs.

Is this the right way to think about when to use convolutional layers versus fully connected layers?

",38265,,38265,,6/30/2020 15:34,6/30/2020 15:34,When to use convolutional layers as opposed to fully connected layers?,,0,3,,,,CC BY-SA 4.0 22248,2,,22246,6/29/2020 20:06,,1,,"

You should probably grant reward at the point that the game is logically won. This will help the agent learn more efficiently, by reducing the number of timesteps over which return values need to be backed up.

Stopping the episode at that point should also be fine, and may add some efficiency too, in that there will be more focused, relevant data in the experience replay. On the surface, it seems that there is no benefit to exploring or discovering any policy after the game is won and, from the comments, there is no expectation from you as the agent's developer that the agent exhibits any particular behaviour after that point - random actions would be fine.

It is still possible that the agent could learn more from play after a winning state. It would require certain things to be true about the environment and additional work from you as developer.

For example, if the game has an end phase where a certain kind of action is more common and it gains something within the game ("victory points", "gold" or some other numbered token that is part of the game mechanics and could be measured), then additional play where this happened could be of interest. Especially if the moves that gained this measure could also be part of winning moves in the earlier game. To allow the agent to learn this though, it would have to be something that it predicted in addition to winning or losing.

One way to achieve this is to have a secondary learning system as part of the agent, that learns to predict gains (or totals) of this resource. Such a prediction could either be learned separately (but very similarly to the action value) and fed into the q function as an input, or it could be a neural network that shares early layers with the q function (or policy function) but with a different head. Adding this kind of secondary function to the neural network can also have a regularising effect on the network, because the interim features have to be good for two types of prediction.
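
For illustration only, a shared-body network with a Q head and such an auxiliary head might look roughly like this (PyTorch-style sketch; the sizes, names and the 0.1 loss weight are arbitrary assumptions):

import torch
import torch.nn as nn

class TwoHeadQNet(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.q_head = nn.Linear(hidden, n_actions)   # action values
        self.aux_head = nn.Linear(hidden, 1)         # predicted resource gain (e.g. victory points)

    def forward(self, state: torch.Tensor):
        z = self.body(state)
        return self.q_head(z), self.aux_head(z)

# During training, the total loss could be a weighted sum, e.g.
# loss = td_loss + 0.1 * mse_loss(aux_prediction, observed_resource_gain)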

You definitely do not need to consider such an addition. It could be a lot more work. However, for some games it is possible that it helps. Knowing the game, and understanding whether there is any learning experience to be had as a human player by playing on beyond winning or losing, might help you decide whether it is worth trying to replicate this additional experience for a bot. Even if it works, the effect may be minimal and not worth the difference it makes. For instance, running a more basic learning agent for more episodes may still result in a very good agent for the end game. That only costs you more run time for training, not coding effort.

",1847,,,,,6/29/2020 20:06,,,,0,,,,CC BY-SA 4.0 22249,1,,,6/29/2020 22:31,,1,129,"

I need help in understanding something basic.

In this video, Andrew Ng says, essentially, that convolutional layers are better than fully connected (FC) layers because they use fewer parameters.

But I'm having trouble seeing when FC layers would/could ever be used for what convolutional layers are used for, specifically, feature detection.

I always read that FC layers are used in the final, classification stages of a CNN, but could they ever be used for the feature detection part?

In other words, is it even possible for a "feature" to be deciphered when the filter size is the same as the entire image?

If not, it's hard for me to understand Andrew Ng's comparison---there aren't any parameter reduction "savings" if we're not going to use an FC "filter" in place of a CNN layer in the first place.

A semi-related question: Can multi-layer perceptrons (which I understand to be fully connected neural networks) be used for feature detection? If so, how do image-sized "filters" make any sense?

",38271,,2444,,6/29/2020 22:41,11/22/2021 2:02,Can fully connected layers be used for feature detection?,,1,0,,,,CC BY-SA 4.0 22257,2,,22249,6/29/2020 23:18,,1,,"

First of all, (FC) Neural Networks (NN) are universal function approximators. This means that, in theory, there must exist some NN of appropriate size that is capable of doing what a CNN can do as well. The only difference is that the standard NN would have to be considerably larger than a corresponding CNN with the same capabilities. Each input pixel would probably have to map to a few dozen to a few hundred input nodes, each trained to detect some part of a distinct feature at its respective input image location. Consecutive hidden layers then have to combine the parts extracted by the input layer for any given location, and finally act as the true feature detectors, detecting features as compositions of values in the first layer.

So, all the filters have to be learned separately for each position in the input image. This absence of weight sharing (filters are not shared across input locations, but have to be trained for each separate input location) is what makes using a standard NN for feature extraction so inefficient.
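
A rough back-of-the-envelope comparison (with illustrative layer sizes) shows how large that difference can be:

# 32 feature maps / units per location on a 28x28 grayscale image
conv_params = 32 * (3 * 3 * 1) + 32                    # 32 shared 3x3 filters plus biases -> 320
fc_params = (28 * 28) * (28 * 28 * 32) + 28 * 28 * 32  # one weight per (input pixel, output unit) -> ~19.7 million
print(conv_params, fc_params)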

However, in theory, a standard NN could also be trained to solve an image classification task, thereby learning to extract features. Actually, I would not be surprised if someone had already tested that on the MNIST dataset.

",37982,,2444,,6/29/2020 23:42,6/29/2020 23:42,,,,3,,,,CC BY-SA 4.0 22259,1,,,6/30/2020 1:38,,0,585,"

I am intrigued by the idea of Zettelkasten but unsatisfied with the current implementations. It seems to me that a machine learning and NLP approach could be productive by helpfully identifying "important" keywords on which links could be created, with learning helping to narrow the selection of keywords over time.

My problem is that it's been 30 years since my AI classes in grad school and things have moved on. I'm sure I could become an NLP expert with study, but I don't want to. So I'm looking for guidance: what are the right terms to describe identifying keywords in context, ideally with some semantic content, and how would I apply ML, with my training, to improve the keyword identification?

I'd love references, ideas, and package recommendations. Python is preferred, but not strongly; I write most common (and many uncommon - SNOBOL and COBOL, anyone?) languages, so language isn't all that much of an issue.

",38273,,,,,7/1/2020 0:59,NLP Identifying important key words in a corpus,,1,0,,,,CC BY-SA 4.0 22262,1,,,6/30/2020 8:32,,2,166,"

Is it possible to create a named entity recognition system without using POS tagging in the corpus?

",36258,,2444,,6/30/2020 12:07,7/26/2021 6:06,Is it possible to create a named entity recognition system without using POS tagging in the corpus?,,1,0,,,,CC BY-SA 4.0 22263,1,,,6/30/2020 10:30,,2,65,"

There is often interest in the results of machine learning algorithms specifically because they came from machine learning algorithms -- as opposed to interest in the results in and of themselves. It seems similar to the 'from the mouths of babes' trope, where comments made by children are sometimes regarded as containing some special wisdom, due to the honest and innocent nature of their source. Similarly, people seem to think that the dispassionate learning of a machine might extract some special insight from a data set.

(Of course, anyone with such opinions has obviously never met either a machine learning algorithm or a child.)

Has this effect been discussed or studied anywhere? Does it have a name?

",38253,,1641,,2/3/2021 17:51,2/3/2021 17:51,Studies on interest in results from ML purely due to use of ML,,0,1,,,,CC BY-SA 4.0 22267,1,,,6/30/2020 13:32,,1,62,"

Does someone know a method to estimate / measure the total energy consumption during the test phase of the well-known CNN models? So with a tool or a power meter...

MIT already has a tool to estimate the energy consumption, but it only works on AlexNet and GoogLeNet; I need something for more architectures (VGG, MobileNet, ResNet, ...). I also need a metric to evaluate the well-known architectures in terms of energy consumption. So, first, estimate or measure the energy consumption, and then evaluate the results with a good metric.

With a measuring device, I would measure the power consumption before using the CNN, repeat this experiment a few times and average the results, then do the same thing while using the CNN, and at the end compare the results. But I have three problems here:

1. How can I know that nothing else is running on the PC that also consumes energy while the CNN is being used?
2. How can I increase the accuracy of the measurements?
3. I can't find any power meter that measures the energy consumption over short periods (e.g. 1 s).

That's why I would prefer a tool to estimate the energy consumption; the accuracy will not be as good as a physical measurement, but I didn't find any other tool.

Does someone have an idea, papers, or sites that can help me?

Many thanks in advance for your reply!

",38282,,,,,6/30/2020 13:32,How to measure/estimate the energy consumption of CNN models during testing?,,0,0,,,,CC BY-SA 4.0 22268,1,22283,,6/30/2020 13:37,,3,189,"

The associative property of multidimensional discrete convolution says that:

$$Y=(x \circledast h_1) \circledast h_2=x\circledast(h_1\circledast h_2)$$

where $h_1$ and $h_2$ are the filters and $x$ is the input.

I was able to exploit this property in Keras with Conv2D: first, I convolve $h_1$ and $h_2$, then I convolve the result with $x$ (i.e. the rightmost part of the equation above).

Up to this point, I don't have any problem, and I also understand that convolution is linear.
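
For instance, here is a quick numerical check of the associativity I relied on (my own sketch using SciPy's full 2-D convolution instead of Keras layers; shapes and values are arbitrary):

import numpy as np
from scipy.signal import convolve2d

x = np.random.randn(16, 16)
h1 = np.random.randn(3, 3)
h2 = np.random.randn(3, 3)

lhs = convolve2d(convolve2d(x, h1), h2)   # (x * h1) * h2
rhs = convolve2d(x, convolve2d(h1, h2))   # x * (h1 * h2)
print(np.allclose(lhs, rhs))              # True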

The problem is when two Conv2D layers have a non-linear activation function after the convolution. For example, consider the following two operations

$$Y_1=\text{ReLU}(x \circledast h_1)$$ $$Y_2=\text{ReLU}(Y_1\circledast h_2)$$

Is it possible to apply the associative property if the first or both layers have a non-linear activation function (in the case above, ReLU, but it could be any activation function)? I don't think so. Is there any idea, related paper, or some kind of approach for this?

",38283,,2444,,7/1/2020 1:59,7/21/2022 7:05,Is it possible to apply the associative property of the convolution operation when it is followed by a non-linearity?,,1,0,,,,CC BY-SA 4.0 22269,1,,,6/30/2020 13:40,,3,292,"

I have studied linear algebra, probability, and calculus twice. However, I don't understand how I can reach the level where I can read any AI paper and understand the mathematical notation in it.

What is your strategy when you see a mathematical expression that you can't understand?

For example, in the Wasserstein GAN article, there is a lot of advanced mathematical notation. Also, some papers are written by people who have a master's degree in mathematics, and those people use advanced mathematics in their papers, but I have a CS background.

When you come across this kind of problem, what do you do?

",27315,,2444,,6/30/2020 14:07,7/1/2020 15:54,How can I read any AI paper?,,4,2,,,,CC BY-SA 4.0 22270,1,24433,,6/30/2020 14:05,,1,110,"

Is GAIL applicable if the expert's trajectories (sample data) are for the same task but are in a different environment (modified, but not completely different)?

My gut feeling is yes; otherwise, we could just adopt behavioural cloning. Furthermore, since the expert's trajectories are from a different environment, the dimension/length of the state-action pairs will most likely be different. Will those trajectories still be useful for GAIL training?

",33419,,2444,,11/5/2020 23:45,11/5/2020 23:45,Is GAIL applicable if the expert's trajectories are for the same task but are in a different environment?,,1,2,,,,CC BY-SA 4.0 22271,2,,22269,6/30/2020 15:04,,2,,"

I think the best way to make reading papers easier is to practice (as in, read lots of papers, try implementing them, etc), and to discuss them with other students/researchers.

Sometimes it's tough to avoid some obscure or really technical math, so you may just need to do extra reading. The Wasserstein metric, for example, is used a lot in ML but I kinda doubt most ML researchers have a good understanding of it. This metric comes from a branch of math called "optimal transportation theory", which is super interesting, but very real-analysis-heavy. If you're really interested in learning about the Wasserstein metric, I recommend Cedric Villani's book "Optimal Transport: Old and New". I also recommend this awesome paper. Nevertheless, learning analysis is likely gonna serve you very well for understanding a wide range of ML papers.

Finally, as a beginning grad student, I have experienced your issue as well. I made a tool to help me with this at this repo, which manages a library of papers you're interested in. It then uses a PageRank algorithm to recommend new papers to you that are commonly referred to by the papers you want to read, with the goal of helping you read up on the foundational "prerequisite" material.

",37829,,,,,6/30/2020 15:04,,,,2,,,,CC BY-SA 4.0 22273,2,,22201,6/30/2020 18:57,,5,,"

The primary issue I see is that, in the loop through time steps t in every training episode, you select actions for both players (who should have opposing goals to each other), but update a single q_table (which can only ever be correct for the "perspective" of one of your two players) on both of those actions, using a single, shared reward function for both updates.

Intuitively, I guess this means that your learning algorithm assumes that your opponent will always be helping you win, rather than assuming that your opponent plays optimally towards its own goals. You can see that this is likely indeed the case from your plot; you use $30,000$ training episodes, split up into $15$ chunks of $2,000$ episodes per chunk for your plot. In your plot, you also very quickly reach a score of about $1,950$ per chunk, which is almost the maximum possible! Now, I'm not 100% sure what the winrate of an optimal player against random would be, but I think it's likely that that should be lower than 1950 out of 2000. Random players will occasionally achieve draws in Tic-Tac-Toe, especially taking into consideration that your learning agent itself is also not playing optimally (but $\epsilon$-greedily)!


You should instead pick one of the following solutions (maybe there are more solutions, this is just what I come up with on the spot):

  1. Keep track of two different tables of $Q$-values for the two different players, and update each of them only on half of the actions (each of them pretending that actions selected by the opponent are just stochastic state transitions created by "the environment" or "the world"). See this answer for more on what this scheme would look like.
  2. Only keep track of a $Q$-value for your own agent (again only updating it on half the actions as described above -- specifically only on the actions your agent actually selected). Actions by the opposing player should then NOT be selected based on those same $Q$-values, but instead by some different approach. You could for instance have opposing actions selected by a minimax or alpha-beta pruning search algorithm. Maybe selecting them to minimise instead of maximise values from the same $Q$-table could also work (didn't think this idea fully through, not 100% sure). You probably could also just pick opponent actions randomly, but then your agent will only learn to play well against random opponents, not necessarily against strong opponents.

After looking into the above suggestions, you'll probably also want to look into making sure that your agent experiences games in which it starts as Player 1, as well as games in which it starts as Player 2, and trains for both of those possible scenarios and learns how to handle both of them. In your evaluation code (after training), I believe that you always make the Random opponent play first, and the trained agent play second? If you don't cover this scenario in your training episodes, your agent may not learn how to properly handle it.


Finally, a couple of small notes:

  • Your discount factor $\gamma = 0.1$ has an extremely small value. Common values in literature are values like $\gamma = 0.9$, $\gamma = 0.95$, or even $\gamma = 0.99$. Tic-Tac-Toe episodes tend to always be very short anyway, and we tend to not care too much about winning quickly rather than winning slowly (a win's a win), so I would tend to use a high value like $\gamma = 0.99$.
  • A small programming tip, not really AI-specific: your code contains various conditions of the form if <condition> == True :, like: if done == True :. The == True part is redundant, and these conditions can be written more simply as just if done:.
",1641,,,,,6/30/2020 18:57,,,,1,,,,CC BY-SA 4.0 22275,1,,,6/30/2020 21:24,,1,29,"

I was reading the following paper: RL-NCS: Reinforcement Learning Based Data-Driven Approach For Nonuniform Compressed Sensing, and my question is: how do they decide whether a coefficient is characterized as a region-of-interest coefficient or a non-region-of-interest coefficient?

",37947,,2444,,7/1/2020 16:40,7/1/2020 16:40,How are the coefficients of the Region of Interest being selected?,,0,0,,,,CC BY-SA 4.0 22276,1,,,6/30/2020 21:50,,1,58,"

I am reading about GANs. I understand that GANs learn implicitly the probability distribution that generated the data. However, at the input we give a random noise vector. It seems that we can sample that random noise vector from whatever distribution we want.

My question is: given that there is ONLY ONE possible distribution that could have generated the data, and that our GAN is trying to approximate that distribution, can I think of the GAN as also learning a mapping between the distribution from which we sample the random noise vector and the distribution that it needs to learn?

I am thinking about this because the random noise vector can be sampled from whatever distribution we want, so it can be different each time we start training. The GAN, however, still needs to imitate one unique distribution, so, in a way, it needs to be able to adapt to the distribution from which the noise comes.

",37919,,2444,,7/1/2020 0:50,7/1/2020 0:50,Do GANs also learn to map between the distribution from which the random noise is sampled and the true distribution of the data?,,0,0,0,,,CC BY-SA 4.0 22279,2,,17421,7/1/2020 0:31,,3,,"

Perhaps you are getting checkerboard artifacts (explained here); solutions involve choosing the kernel and stride sizes so that the kernel size is divisible by the stride. Besides that, a solution could be to apply smoothing (e.g. a Gaussian blur) to minimize the artifacts.

For example, smoothing your image in OpenCV (here with a simple averaging blur) results in:

import cv2
import matplotlib.pyplot as plt

img = cv2.imread('s.png')        # I took a screenshot to see how it would look
blur = cv2.blur(img, (8, 8))     # simple 8x8 averaging (box) blur
plt.imshow(cv2.cvtColor(blur, cv2.COLOR_BGR2RGB))  # OpenCV loads BGR, matplotlib expects RGB
plt.show()

The artifacts are gone using a kernel of size (8, 8). I hope this can help.

",38294,,,,,7/1/2020 0:31,,,,0,,,,CC BY-SA 4.0 22280,2,,22269,7/1/2020 0:38,,1,,"

When I read papers in a new domain, and when I started reading theoretical ML papers, I faced similar problems. I usually start with the introduction, then the related work, and try to understand all the concepts and cited papers that are relevant to understanding the paper.

Specifically, when it comes to difficult mathematical formulations, as @harwiltz said, the more you read about them, the easier it gets. There may be a set of papers with concepts similar to those in the paper you are reading, but better explained; I usually read those first (or, if it is an important mathematical concept, you can often find blogs describing the intuitions/basics behind it).

",38188,,,,,7/1/2020 0:38,,,,0,,,,CC BY-SA 4.0 22281,2,,22262,7/1/2020 0:49,,2,,"

Yes, they are different tasks. POS tagging helps NER systems, but it is not necessary. You can get features (say, BERT/ELMo embeddings) for each word in the sentence and train a CRF NER model. This looks like a simple example: https://www.pragnakalp.com/bert-named-entity-recognition-ner-tutorial-demo/
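
As a rough illustration (assuming the sklearn-crfsuite package; the feature set and variable names are placeholders), such a CRF NER model can be trained from surface features of each word alone, with no POS tags:

import sklearn_crfsuite

def word_features(sent, i):
    w = sent[i]
    return {
        "lower": w.lower(), "suffix3": w[-3:],
        "is_upper": w.isupper(), "is_title": w.istitle(), "is_digit": w.isdigit(),
        "prev": sent[i - 1].lower() if i > 0 else "<BOS>",
        "next": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
    }

def featurize(sentences):   # sentences: list of token lists; labels would be matching BIO tag lists
    return [[word_features(s, i) for i in range(len(s))] for s in sentences]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
# crf.fit(featurize(train_sentences), train_labels)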

",38188,,,,,7/1/2020 0:49,,,,0,,,,CC BY-SA 4.0 22282,2,,22259,7/1/2020 0:59,,1,,"

I am sure there are more complex methods to extract keywords, but the standard one, which should serve as a strong baseline, is the RAKE graph algorithm: https://pypi.org/project/rake-nltk/. It should work reasonably well in most text domains.
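
A minimal usage sketch of the rake-nltk package (the example sentence is arbitrary, and you may need to download the NLTK stopwords/punkt data first):

from rake_nltk import Rake

r = Rake()  # may require nltk.download('stopwords') and nltk.download('punkt') beforehand
r.extract_keywords_from_text("Zettelkasten notes link ideas through shared keywords and concepts.")
print(r.get_ranked_phrases()[:5])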

",38188,,,,,7/1/2020 0:59,,,,2,,,,CC BY-SA 4.0 22283,2,,22268,7/1/2020 1:13,,1,,"

Yes, when you have a non-linearity, it is not possible to combine your convolution steps.

However, you can approximate the two-layer network with a one-layer net, according to the Universal Approximation Theorem. You will probably need to use something like a knowledge distillation technique to do it. Note that the theorem doesn't say anything about the number of neurons required, or about whether the learning techniques that we usually use will work well.

Also, $\text{ReLU}(x)$ is a linear mapping when $x \geq 0$, so if your inputs, weights, and biases are all $\geq 0$, the net can be modeled exactly with a single-layer network.

",38188,,,,,7/1/2020 1:13,,,,1,,,,CC BY-SA 4.0 22284,2,,22269,7/1/2020 1:25,,1,,"

From my experience (and I've been reading many research papers for a while), it's rare to find a research paper where you fully understand everything in one go, especially if the paper was published very recently or a very long time ago (because, back then, people may have had a different writing style, used a different notation, or something like that). The exceptions are when you are an expert on the topic, which is probably not the case (unless you are doing serious research on the topic, i.e. you're doing a Ph.D. or beyond; in that case, you probably don't need to ask questions on this site: hopefully, you have a qualified advisor to whom you can ask these questions!), or when the paper is really easy and does not contain any formulas.

Of course, if a paper is published, it must contain something novel, so that something novel could be one of the things that you need to spend some time to understand, but the hardest parts of a paper could also easily be the prerequisites (i.e. the concepts that the paper builds upon), because you may not have a very solid knowledge of those topics (as you probably have already experienced).

There are at least three ways to proceed when you are stuck because you don't understand something

  1. If you can ignore what you don't understand (i.e. you don't need it for your purposes because e.g. you just need to have a high-level understanding of the topics), ignore it (really!!)
  2. If it cannot be ignored (e.g. because you really need to know all the details of the paper because e.g. you need to give a presentation at your university), try to understand what you don't understand by picking up a resource on that topic that you don't understand, then read it; spend the time that you think is opportune (i.e. do not spend 6.5 days to understand a detail of a paper if you only have 7 days to read that paper and prepare a presentation or whatever you need to do)
  3. If you can afford it, stop reading that paper and go back to the basics.

In general, learning is not an easy process and, more specifically, reading research papers is not the easiest reading (because research papers are typically concise, i.e. there's a lot of information compression), so do not expect to understand everything of a paper in one go. In fact, the paper How to Read a Paper by S. Keshav, which gives you some guidelines on how to read a paper, tells you to read a paper in three steps. For more details about these three steps, please, read the paper!

",2444,,2444,,7/1/2020 1:46,7/1/2020 1:46,,,,0,,,,CC BY-SA 4.0 22285,1,,,7/1/2020 1:57,,6,1044,"

I am new to reinforcement learning. For my application, I have found that, if my reward function contains both negative and positive values, my model does not find the optimal solution, although the solution it finds is not bad, as it still achieves a positive reward in the end.

However, if I just shift all the rewards by subtracting a constant, so that my reward function is all negative, my model reaches the optimal solution easily.

Why is this happening?

I am using DQN for my application.

I feel that this is also the reason why the Gym environment MountainCar-v0 uses $-1$ for each time step and $0.5$ at the goal, but correct me if I am wrong.

",38298,,2444,,11/1/2020 23:27,11/1/2020 23:27,Why does shifting all the rewards have a different impact on the performance of the agent?,,1,0,,,,CC BY-SA 4.0 22286,1,22288,,7/1/2020 2:34,,0,66,"

I just learned the math behind neural networks, so please bear with my ignorance. I wonder if there is a precise definition of a DNN.

Is it true that any neural network with 2 or more hidden layers can be called a DNN, and that, by training a NN with 2 hidden layers using Q-learning, we are technically doing a type of deep reinforcement learning?

PS: If it is conceptually that simple, why do people regard deep learning as something done by archmages in ivory towers?

",38299,,2444,,12/31/2021 13:10,12/31/2021 13:16,A neural network with 2 or more hidden layers is a DNN?,,1,0,,7/1/2020 12:29,,CC BY-SA 4.0 22288,2,,22286,7/1/2020 4:23,,1,,"

I don't think there is a fixed threshold that differentiates between shallow and deep learning, but I would say that a 2-layer NN should not be considered deep. However, nowadays, almost all NN architectures are studied under the umbrella of deep learning.

And yes, training a NN with 2 hidden layers using Q-learning would technically mean doing deep RL.

I guess it is conceptually simple, but making NNs perform optimally is an art. Tuning hyperparameters or debugging NNs can be tough, and one learns with experience. I guess others in the community would be much better suited to answer this question, but these were my 2 cents.

",36074,,,,,7/1/2020 4:23,,,,0,,,,CC BY-SA 4.0 22289,1,,,7/1/2020 7:24,,2,819,"

I know that there has been some discussion about this (e.g. here and here), but I can't seem to find consensus.

The crucial thing that I haven't seen mentioned in these discussions is that applying batch normalization before ReLU switches off half the activations, on average. This may not be desirable.

In other words, the effect of batch normalization before ReLU is more than just z-scaling activations.

On the other hand, applying batch normalization after ReLU may feel unnatural because the activations are necessarily non-negative, i.e. not normally distributed. Then again, there's also no guarantee that the activations are normally distributed before ReLU clipping.

I currently lean towards a preference for batch normalization after ReLU (which is also based on some empirical results).

What do you all think? Am I missing something here?

",37751,,2444,,7/1/2020 16:43,7/1/2020 16:43,Should batch normalisation be applied before or after ReLU?,,0,1,,,,CC BY-SA 4.0 22290,2,,22285,7/1/2020 8:13,,2,,"

You have some freedom to re-define reward schemes, whilst still describing the same goals for an agent. How this works depends to some degree on whether you are dealing with an episodic or continuing problem.

Episodic problems

An episodic problem ends, and once an agent reaches the terminal state, it is guaranteed zero rewards from that point on. The optimal behaviour can therefore depend quite critically on the balance between positive and negative rewards.

  • If an environment contains many unavoidable negative rewards, and these outweigh total positive rewards, then the agent will be motivated to complete an episode sooner.

  • If an environment contains repeatable positive rewards, and these outweigh total negative rewards, then the agent will be motivated to loop through the positive rewards and not end the episode.

Scaling all rewards by the same positive factor makes no difference to the goals of an agent in an episodic problem. Adding a positive or negative offset to all rewards can make a difference though. It is likely to be most notable when such a change moves rewards from positive to negative or vice versa. In the MountainCar example, adding +2 to all rewards would mean the agent would gain +1 for each time step. Even though reaching the goal would then score the highest possible single reward of +2.5, doing so ends the episode and cuts off the stream of +1 rewards, so reaching the goal now becomes a poor choice. The best action for the car in this modified MountainCar is to stay at the bottom of the valley collecting the +1 reward per time step forever.
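
To make that concrete, a tiny back-of-the-envelope comparison (my own illustrative numbers, assuming a +0.5 goal reward, a discount factor of 0.99 and a 10-step path to the goal):

gamma = 0.99

def finish(step_reward, goal_reward, steps):      # reach the goal after `steps` time steps
    return sum(step_reward * gamma ** t for t in range(steps)) + goal_reward * gamma ** steps

def loop_forever(step_reward):                    # never end the episode
    return step_reward / (1 - gamma)

print(finish(-1, 0.5, 10), loop_forever(-1))      # original rewards: finishing (~-9.1) beats looping (-100)
print(finish(+1, 2.5, 10), loop_forever(+1))      # +2 offset: looping (+100) now beats finishing (~+11.8)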

Continuing problems

In a continuing problem, there is no way for the agent to avoid the stream of new reward data. That means any positive scaling of all reward values or positive or negative offset, by the same amount, has no impact on what counts as the optimal policy. The calculated value of any state under the same policy, but with rewards all transformed with the same multiplier and offset, will be different, but the optimal policy in that environment will be the same.

If you scale or offset rewards differently to each other, then that can change the goals of the agent and what the optimal policy is. The balance does not really depend on whether rewards are positive or negative in a continuing environment.

There may be some exceptions to this for continuing problems when using a discount factor, and setting it relatively low (compared to the typical state "cycling" length in the problem). That can cause changes in behaviour due to offsets, similar to those seen in episodic problems. If you use an average reward setting this tends to be less relevant. Often in DQN, you will choose a high discount factor e.g. 0.99 or 0.999, and this will tend to behave close to an average reward setting provided rewards are not very sparse.

In general

In either case, if you change a reward system, and that results in an agent that consistently learns a different policy, that will usually mean one of two things:

  • The original reward system was incorrect. It described a goal that you did not intend, or had "loopholes" that the agent could exploit to gain more reward in a way that you did not intend.

  • The implementation of the agent was sensitive in some way to absolute values of total reward. That could be due to a hyperparameter choice in something like a neural network for example, or maybe a bug.

Another possibility, that you may see if you only run a few experiments, is that the agent is not learning 100% consistently, and you are accidentally correlating your changes to reward scheme with the noise/randomness in the results. A DQN-based agent will usually have some variability in how well it solves a problem. After training, DQN is usually only approximately optimal, and by chance some approximations are closer than others.

",1847,,1847,,7/1/2020 11:36,7/1/2020 11:36,,,,2,,,,CC BY-SA 4.0 22291,1,,,7/1/2020 8:34,,1,132,"

I know this is deceptively simple. Tic tac toe is a well studied game for RL.

Assume your agent is playing against a strong opponent.

I know you deal in afterstates. I know that, in Q-learning, the optimal policy should be converged on faster, as $Q(S,A)$ gets closer to the optimum at each step, while, in SARSA, the Q function will sometimes not be updated towards the optimum, as it is exploring. If epsilon is fixed, SARSA will converge to the epsilon-greedy policy.

I came across the question above and I don't know the answer. Is it that SARSA may play more conservatively, opting for more draws rather than getting into board positions where one is likely to either lose or win rather than draw?

",38303,,,,,7/1/2020 8:34,Tic-tac-toe: How would standard SARSA and Q-learning yield different results in the agent's behaviour?,,0,0,,,,CC BY-SA 4.0 22299,2,,16610,7/1/2020 15:30,,0,,"

The closest counterexamples I can think of are cases where reward shaping is required to learn a good policy but ends up having unintended consequences.

Reward shaping is usually used in cases where we want to encourage a particular behavior, when the reward is sparse, or when capturing exactly what you want is not straightforward or is infeasible. But it is not good practice to rely too much on it, as it can have unintended consequences. A simple example of this is described here: https://openai.com/blog/faulty-reward-functions/.

",38188,,,,,7/1/2020 15:30,,,,4,,,,CC BY-SA 4.0 22300,2,,22269,7/1/2020 15:54,,3,,"

I think the answer depends very much on why you are reading the paper: what are you trying to get out of it? There are plenty of papers that I "read" (or often really just quickly skim through) where I'll definitely not understand all the math. More often than not, this will be because I don't actually care to deeply understand it.

There is plenty of more "practical" research to be done in AI, which definitely doesn't always require a deep understanding of all the math. Intuition can often be enough, at least to get started, for meaningful practical contributions. If this is the sort of research that you're interested in doing, you probably don't need to understand as many of the mathematical parts of AI papers as you do if you're really trying to do research directly in that theoretical area.

Personally, when I write "math-heavy" parts in my own papers (and that will often already be restricted to a rather simple level of math in comparison to the "real theory" ML papers), I always try to make sure to include intuitive, English descriptions of what we're doing around it. Even if you don't immediately understand a full equation, just having the intuitive explanation around it to tell you what it means can be enough for a broad understanding of the paper. Then you only have to dive deep into the details of the equations if -- based on the English text -- you decide that you're actually really interested. So, if there are sufficient, intuitive explanations surrounding the equations, I'd recommend to focus heavily on that first. Not every paper does this though, sometimes there's very little text and very much math, and then this can be difficult.

Even if it turns out that you do have to understand math, you may not have to understand ALL of it right away though. The important parts that I would try to focus on understanding first are:

  • A mathematical description of the "problem". This could be an objective function, a metric to be optimised/minimised/maximised, or an existing equation from previous literature that the authors take as a starting point and inspect some detail of in greater detail.
  • Mathematical descriptions of the outcomes/results. These could be equations that they actually use in concrete algorithms (see if you can relate them to any pseudocode that may be present), or the final equations stated in theorems / at the end of proofs.

All the complex parts in between are probably less important. Just a vague idea of what the starting point is, and a vague understanding of the final outcome, can be enough to at least know what the paper is about. Then you can decide for yourself whether you really need to know more about the details in between, or if they're maybe not relevant to you / your work / your research.

",1641,,,,,7/1/2020 15:54,,,,0,,,,CC BY-SA 4.0 22302,1,22310,,7/2/2020 0:06,,0,73,"

I want to use reinforcement learning in an environment I made. The exact environment doesn't really matter, but it comes down to this: the number of different states in the environment is infinite (e.g. the number of ways you can place 4 cars at an intersection), but the number of different actions is only 3 (e.g. go forward, right, or left). The state consists of five numbers. My question is: what algorithm should I use, or at least what kind of algorithm?

",38316,,2444,,7/2/2020 18:24,7/2/2020 18:24,What reinforcement learning algorithm should I use in continuous states?,,1,0,,,,CC BY-SA 4.0 22304,1,22313,,7/2/2020 5:27,,1,149,"

In a course that I am attending, the cost function of a support vector machine is given by

$$J(\theta)=\sum_{i=1}^{m} \left[ y^{(i)} \operatorname{cost}_{1}\left(\theta^{T} x^{(i)}\right)+\left(1-y^{(i)}\right) \operatorname{cost}_{0}\left(\theta^{T} x^{(i)}\right) \right]+\frac{\lambda}{2} \sum_{j=1}^{n} \theta_{j}^{2}$$

where $\operatorname{cost}_{1}$ and $\operatorname{cost}_{0}$ look like this (in Magenta):

What are the values of the functions $\operatorname{cost}_{1}$ and $\operatorname{cost}_{0}$?

For example, if using logistic regression, the values of $\operatorname{cost}_{1}$ and $\operatorname{cost}_{0}$ would be $-\log(\operatorname{sigmoid}(z))$ and $-\log(1-\operatorname{sigmoid}(z))$, respectively, with $z = \theta^T x$.

",32636,,2444,,1/2/2022 11:33,1/2/2022 11:33,"What is the definition of the ""cost"" function in the SVM's objective function?",,1,0,,,,CC BY-SA 4.0 22305,2,,22127,7/2/2020 5:32,,0,,"

Given a non-admissible heuristic function, A* will always give a solution if one exists, but there is no guarantee it will be optimal.

I won't duplicate the proof here, but it isn't too hard to prove that any best-first search will find a solution for any measure of best, given that a path to the solution exists and infinite memory. A* is a best-first search algorithm, so it will always find a solution if one exists.

Which heuristics guarantee the optimality of A*? Is the admissibility of the heuristic always a necessary condition for A* to produce an optimal solution?

Admissibility is not a necessary condition. Take any admissible heuristic $h_1$ and make a new function $h(n) = h_1(n)+5$. This heuristic is not admissible, but if you run A* on it, it will still find optimal solutions.

But, we also have to ask what you mean by "the optimality of A*", because optimality can have two senses here. My point in the previous paragraph is in the sense of returning optimal paths. An alternate interpretation is that no algorithm performs fewer expansions that A* with the same information. This is probably not what was meant, and the answer in that context is far more complicated. But, with an inconsistent (but admissible) heuristic, A* can perform exponentially more expansions than other known algorithms, and thus is not the optimal algorithm to use.

",17493,,2444,,11/7/2020 14:40,11/7/2020 14:40,,,,2,,,,CC BY-SA 4.0 22307,1,22325,,7/2/2020 7:55,,2,1905,"

Feature scaling, in general, is an important stage in the data preprocessing pipeline.

Decision Tree and Random Forest algorithms, though, are scale-invariant - i.e. they work fine without feature scaling. Why is that?

",35585,,,,,2/15/2021 23:03,Why are decision trees and random forests scale invariant?,,2,1,,,,CC BY-SA 4.0 22308,1,22311,,7/2/2020 8:44,,4,122,"

In contrast to L2 regularization, L1 regularization usually yields sparse feature vectors and most feature weights are zero.

What's the reason for the above statement - could someone explain it mathematically, and/or provide some intuition (maybe geometric)?

",35585,,2444,,1/29/2021 22:24,1/29/2021 22:24,Why does L1 regularization yield sparse features?,,1,2,,,,CC BY-SA 4.0 22309,2,,18000,7/2/2020 9:28,,2,,"

Inverse Reinforcement Learning (IRL) is a technique that attempts to recover the reward function that the expert is implicitly maximising, based on expert demonstrations. When solving reinforcement learning problems, the agent maximises a reward function specified by the designer and, in the process of reward maximisation, accomplishes some task that it had set out to do. However, reward functions for certain tasks are sometimes difficult to specify by hand. For example, the task of driving takes into consideration many different factors, such as the distance to the car in front, the road conditions, and whether or not the person needs to get to their destination quickly. A reward function can be hand-specified based on these features. However, when there exist trade-offs between these different features, it is difficult to know how to specify the different desiderata of these trade-offs.

Instead of specifying the trade offs manually, it would be easier to recover a reward function from expert demonstrations using IRL. Such a reward function can lead to better generalisations to unseen states as long as the features of driving do not change.

In the case where reward shaping fails to learn a task (such as driving), it would be better to have someone demonstrate a task and learn a reward function from these demonstrations. Solving the MDP with the learnt reward function will thus yield a policy that should resemble the demonstrated behaviour. The reward function learnt should also generalise to unseen states and the agent acting in unseen states should be able to perform actions that an expert would take when he is placed in the same conditions, assuming that the unseen states come from the same distribution as the training states.

While Reward Shaping might be able to perform the same task as well, IRL might be able to do better, based on some performance metric that will differ from problem to problem.

",32780,,,,,7/2/2020 9:28,,,,0,,,,CC BY-SA 4.0 22310,2,,22302,7/2/2020 10:17,,0,,"

I would recommend looking at Deep Q-Learning.

",36821,,,,,7/2/2020 10:17,,,,3,,,,CC BY-SA 4.0 22311,2,,22308,7/2/2020 10:48,,3,,"

In L1 regularization, the penalty term you compute for every parameter is a function of the absolute value of a given weight (times some regularization factor). Thus, irrespective of whether a weight is positive or negative (due to the absolute value) and irrespective of how large the weight is, a penalty is incurred as long as the weight is non-zero. So, the only way a training procedure can considerably reduce the L1 regularization penalty is by driving all (unnecessary) weights towards 0, which results in a sparse representation.

Of course, the L2 regularization will also only be strictly 0 when all weights are 0. However, in L2, the contribution of a weight to the L2 penalty is proportional to the squared value of the weight. Therefore, a weight whose absolute value is smaller than 1, i.e. $|w| < 1$, will be punished much less by L2 than it would be by L1, which means that L2 puts less emphasis on driving all weights towards exactly 0. This is because squaring a value of magnitude less than 1 results in a value of lower magnitude than its absolute value: $x^2 < |x|$ for all $x$ with $0 < |x| < 1$.
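
A quick numeric illustration of that magnitude difference (plain arithmetic):

w = 0.1
print(abs(w))    # L1 contribution: 0.1
print(w ** 2)    # L2 contribution: 0.01, ten times smaller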

So, while both regularization terms end up being 0 only when all weights are 0, the L1 term penalizes small weights with $|w| < 1$ much more strongly than L2 does, thereby driving the weights more strongly towards 0 than L2 does.

",37982,,37982,,7/2/2020 17:12,7/2/2020 17:12,,,,0,,,,CC BY-SA 4.0 22313,2,,22304,7/2/2020 13:09,,2,,"

That is the hinge loss, a type of loss most notably used for SVM classification. The hinge loss is typically defined as:

$$ \ell(y)=\max (0,1-t \cdot y), $$

which, in your use case, is something like this:

$$ \operatorname{cost}\left(h_{\theta}(x), y\right)=\left\{\begin{array}{ll} \max \left(0,1-\theta^{T} x\right) & \text { if } y=1 \\ \max \left(0,1+\theta^{T} x\right) & \text { if } y=0 \end{array}\right. $$
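
For reference, a minimal sketch of these two cost functions (my own illustration, not from the course):

import numpy as np

def cost1(z):  # used when y = 1
    return np.maximum(0, 1 - z)

def cost0(z):  # used when y = 0
    return np.maximum(0, 1 + z)

print(cost1(2.0), cost1(0.5), cost0(-2.0), cost0(-0.5))  # 0.0 0.5 0.0 0.5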

Check this article on it.

",26882,,2444,,11/1/2020 22:20,11/1/2020 22:20,,,,0,,,,CC BY-SA 4.0 22314,2,,4714,7/2/2020 14:28,,1,,"

You can do that pretty easily using PoseNet or OpenPose. Train on the keypoints for squats and then count them. :)

",38330,,,,,7/2/2020 14:28,,,,0,,,,CC BY-SA 4.0 22316,1,,,7/2/2020 19:46,,1,215,"

When using an on-policy method in reinforcement learning, like advantage actor-critic, you shouldn't use old data from an experience buffer, since a new policy requires new data. Does this mean that to apply batching to an on-policy method you have to have multiple parallel environments?

As an extension of this, if only one environment is available when using on-policy methods, does that mean batching isn't possible? Doesn't that limit the power of such algorithms in certain cases?

",38335,,2444,,10/12/2020 12:53,3/7/2022 2:05,Do we need multiple parallel environments to train in batches an on-policy algorithm?,,1,1,,,,CC BY-SA 4.0 22317,2,,22070,7/2/2020 20:28,,0,,"

This is still a long way off and would require what is known as Artificial General Intelligence, which is probably the best descriptive term for AI with cognitive capacity akin to humans.

(Another term often used is "superintelligence", but it's less precise in regard to general intelligence, in that it could be understood to connote superior intelligence that is not general. The same problem exists for Searle's "Strong AI", an AI that matches or exceeds human capacity, since AI can now be "strong but narrow", exceeding human capability in a single task or a set of tasks, such as a game, or managing fully definable systems such as air-conditioning.)

Part of the problem is we still don't really understand how human cognition works, so the famous projection by an incredibly important figure in AI, Herbert A. Simon, that:

"Machines will be capable, within twenty years, of doing any work a man can do."

is still, like fusion power, potentially always 20 years in the future. (Worth noting that if a general superintelligence is developed, it could not only do anything humans could do, but things we can't even conceive of, because its intelligence would greatly exceed our own, and would likely be capable of continual self-optimization.)

",1671,,1671,,7/2/2020 20:35,7/2/2020 20:35,,,,0,,,,CC BY-SA 4.0 22318,1,22324,,7/3/2020 4:37,,0,57,"

Text classification of equal-length texts works without padding but, in practice, texts almost never have the same length.

For example, spam filtering on a blog article:

thanks for sharing    [3 tokens] --> 0 (Not spam)
this article is great [4 tokens] --> 0 (Not spam)
here's <URL>          [2 tokens] --> 1 (Spam)

Should I pad the texts on the right:

thanks for     sharing --
this   article is      great
here's URL     --      --

Or, pad on the left:

--   thanks  for    sharing
this article is     great
--   --      here's URL

What are the pros and cons of padding left versus right?

",2844,,,,,7/3/2020 7:35,"Text classification of non-equal length texts, should I pad left or right?",,1,0,,,,CC BY-SA 4.0 22319,1,,,7/3/2020 5:22,,1,101,"

I want to collect training samples from images.

That can mean different things depending on the context. I think of the simplest case, which should be most commonly required. Because it is so common, there may be a standard tool for it.

An example would be to have a collection of images of random street scenes and manually collect images of nonoccluded cars from them into separate files.

What is a common way or tool to do this:

For a large number of images, select one or more rectangles (of arbitrary size and with edges parallel to the image edges) in the image and save them to separate image files.

Of course, it can be done with any general image editing program, but in this case, most of the work time would be used for opening new images, closing old images, saving sample images and, the most time-consuming part, entering a non-conflicting file name for each individual sample image file.
For small numbers of samples per input file, this may need about an order of magnitude more time, and also more complex interaction.

I would prefer a tool running on Linux/Ubuntu.
If this does not exist, I'd be curious why.

",2317,,2317,,7/3/2020 6:13,7/14/2020 9:20,How to manually collect rectangular training data samples from images?,,1,0,0,,,CC BY-SA 4.0 22320,1,,,7/3/2020 5:28,,2,65,"

I've been reading about Fisher's Linear Discriminant Analysis lately, and I noticed that the objective function (particularly for two-class classification) to be maximized contains scatter terms instead of variance, in the denominator. Why is that?

To clarify, the scatter of a sample is just the variance multiplied by the number of data points in the sample.

Thank you!

",35585,,,,,7/14/2020 7:37,Why is 'scatter' used instead of variance in LDA?,,0,2,,,,CC BY-SA 4.0 22322,1,,,7/3/2020 5:53,,0,155,"

I wanted to understand back-propagation, so I made a basic neural network library. I used momentum, with learning rate = $0.1$, beta = $0.99$, epochs = $200$, batch size = $10$; the loss function is cross-entropy, the model structure is $784$, $64$, $64$, $10$, and all layers use sigmoid. It performed terribly at first, so I initialized all the weights and biases in the range $[10^{-9}, 10^{-8}]$ and it worked. I am quite new to deep learning, and I find TensorFlow doesn't seem as friendly to beginners who want to play around with hyper-parameters. How do you find the right hyper-parameters? I trained it on 100 digits (which took 10 minutes), tweaked the hyper-parameters, chose the best set, and trained the model using that set on the entire data set of $60,000$ images. I also found that halving the epochs and doubling the training set size gave better results. Are there foolproof heuristics to find good hyper-parameters? What is the best set of hyper-parameters (without regularization, dropout, etc.) for MNIST digits? Here is the code for those who want to take a look.

",38343,,38343,,7/3/2020 9:38,7/3/2020 9:38,96.91% accuracy on MNIST after 2 hours of training using custom made neural net library. Ways to improve?,,1,1,,,,CC BY-SA 4.0 22323,1,22346,,7/3/2020 6:09,,3,111,"

Assuming the definition of an agent to be:

An entity that perceives its environment, processes the perceived information, and acts on the environment such that some goal is fulfilled.

Are there any agents that are based on quantum processing/computing (i.e. implemented by a network of quantum gates)?

Is there any work done towards this end? If so, could someone provide references?

",34403,,2444,,7/5/2020 0:02,7/6/2020 1:24,Are there any agents that are based on quantum computing?,,1,0,,,,CC BY-SA 4.0 22324,2,,22318,7/3/2020 7:35,,1,,"

For any model that does not take a time series approach like an RNN does, the padding shouldn't make a difference.

I prefer padding on the right simply because there might also be text you need to cut off. Then padding is more intuitive, as you either cut off a text if it's too long or pad a text when it's too short.

Either way, when a model is trained a certain way, it shouldn't make a difference, as long as the test data is padded the same way it was presented in training.
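
As a concrete illustration, here is a minimal sketch using Keras' pad_sequences utility (this assumes the texts have already been converted to lists of integer token IDs; the IDs below are made up for the example). Switching between left and right padding is just a matter of the padding argument:

from tensorflow.keras.preprocessing.sequence import pad_sequences

# Hypothetical token IDs for the three example texts
sequences = [[5, 12, 7],      # "thanks for sharing"
             [3, 9, 14, 21],  # "this article is great"
             [8, 2]]          # "here's <URL>"

# Pad (and, if needed, cut off) on the right ...
right_padded = pad_sequences(sequences, maxlen=4, padding='post', truncating='post')

# ... or on the left
left_padded = pad_sequences(sequences, maxlen=4, padding='pre', truncating='pre')

print(right_padded)
print(left_padded)

Whichever side you choose, the important part is applying the same convention at training and test time.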

",38328,,,,,7/3/2020 7:35,,,,1,,,,CC BY-SA 4.0 22325,2,,22307,7/3/2020 7:39,,3,,"

Scaling only makes sense when there is something that reacts to that scale. Decision trees, though, just make a cut at a certain number.

Imagine a feature that goes from 0 to 100, where a cut at 50 may improve performance. Scaling it down to 0 to 1, so that the cut happens at 0.5, doesn't change a thing.

Neural networks, on the other hand, have some kind of activation function (leaving ReLU aside) that reacts differently to inputs above 1. Here normalization, i.e. putting every feature between 0 and 1, makes sense.
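
As a small illustration (a sketch assuming scikit-learn and a purely numeric feature matrix X), normalizing features to the range [0, 1] before feeding them to a neural network could look like this, while a decision tree could simply be trained on the raw values:

import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Toy feature matrix: two features on very different scales
X = np.array([[10.0, 200.0],
              [50.0, 800.0],
              [90.0, 400.0]])

scaler = MinMaxScaler()              # maps each feature to [0, 1]
X_scaled = scaler.fit_transform(X)
print(X_scaled)                      # the ordering of values (and thus any tree split) is unchanged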

",38328,,,,,7/3/2020 7:39,,,,0,,,,CC BY-SA 4.0 22326,2,,22322,7/3/2020 8:05,,1,,"

There is no single best set of hyperparameters. What's more, there is no general-purpose search algorithm for hyperparameters. You can do a grid search, but this will obviously take some time. Most people either do that or try to hand-pick their parameters.
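
As a sketch of what a grid search could look like in practice (here with scikit-learn's GridSearchCV and its small digits dataset as a stand-in for MNIST; the same idea applies to a custom library: loop over combinations, train, keep the best validation score):

from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)   # small 8x8 digit images, a stand-in for MNIST

param_grid = {
    "learning_rate_init": [0.3, 0.1, 0.03, 0.01],
    "batch_size": [10, 32, 64],
}
# 3-fold cross-validated grid search over the two hyperparameters above
search = GridSearchCV(MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=50),
                      param_grid, cv=3)
search.fit(X, y)
print(search.best_params_, search.best_score_)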

A few other things to note: initializing your weights in the range $[10^{-9}, 10^{-8}]$ doesn't seem right to me. They should initially be centered around and close to zero. You should also take another look at how to set up train/test/validation splits.

",38328,,,,,7/3/2020 8:05,,,,2,,,,CC BY-SA 4.0 22327,1,,,7/3/2020 9:11,,1,138,"

I'm using U-Net for image segmentation.

The model was trained with images that could contain up to 4 different classes. The training classes never overlap.

The output of the UNet is a heatmap (with float values between 0 and 1) for each of these 4 classes.

Now, I have 2 problems:

  • For a certain class, how do I segment (draw contours) in the original image only at the points where the heatmap has significant values? (In the image below is an example: the values in the centre are significant, while the values on the left aren't. If I draw the segmentation of the entire image without any additional operation, both are considered.)

  • Following on from the first point, how do I avoid drawing the contours of two superimposed classes in the original image? (Maybe by drawing only the one that has higher values in the corresponding heatmap.)
",16136,,,,,7/3/2020 9:11,Suppress heatmap non-maxima in segmentation with UNet,,0,1,,,,CC BY-SA 4.0 22328,1,22338,,7/3/2020 10:07,,2,124,"

I was reading this paper where they are stating the following:

We also use the T-Test to test the significance of GMAN in 1 hour ahead prediction compared to Graph WaveNet. The p-value is less than 0.01, which demonstrates that GMAN statistically outperforms Graph WaveNet.

What does "Model A statistically outperforms B" mean in this context? And how should the p-value threshold be selected?

",20430,,2444,,7/3/2020 22:55,7/4/2020 7:11,"What does it mean when a model ""statistically outperforms"" another?",,1,1,,,,CC BY-SA 4.0 22329,1,24787,,7/3/2020 10:21,,1,306,"

When describing the model architecture for a deep recurrent Q-network, the authors of the paper Learning to Communicate with Deep Multi-Agent Reinforcement Learning write:

each agent consists of a recurrent neural network (RNN), unrolled for $T$ time-steps, that maintains an internal state $h$, an input network for producing a task embedding $z$, and an output network for the Q-values and the messages $m$. The input for agent $a$ is defined as a tuple of $\left(o_{t}^{a}, m_{t-1}^{a^{\prime}}, u_{t-1}^{a}, a\right)$.

Can someone explain what the purpose of the embedding layer is in this specific context?

Implementation can be found here.

",33448,,2444,,11/21/2020 23:03,11/22/2020 14:58,What is the role of embeddings in a deep recurrent Q network?,,1,0,,,,CC BY-SA 4.0 22330,1,22332,,7/3/2020 12:12,,2,102,"

In the course of a scientific work, I will discuss the different types of reinforcement learning. However, I am having difficulty finding these different types.

So, into which subcategories can reinforcement learning be divided? For example, the following subdivisions seem useful:

  • Model-free and Model-based
  • Dynamic Programming, Monte Carlo and Temporal Difference

Any others?

",38349,,2444,,7/3/2020 13:40,7/3/2020 13:40,Into which subcategories can reinforcement learning be divided?,,1,0,,,,CC BY-SA 4.0 22331,1,,,7/3/2020 12:17,,0,86,"

I am looking at an algorithm in the paper Learning to Communicate with Deep Multi-Agent Reinforcement Learning.

Here is the full algorithm:

What does the notation "for t = T to 1, −1 do:" refer to, in terms of time steps?

The network structure is a deep recurrent q network.

Secondly, why do the gradients need to be reset to zero?

",33448,,2444,,7/3/2020 13:37,12/20/2021 18:46,"What does the notation ""for t=T to 1,−1 do"" in terms of time steps, in deep recurrent q network?",,1,2,,,,CC BY-SA 4.0 22332,2,,22330,7/3/2020 12:49,,4,,"

Your two suggestions are not mutually exclusive. If you go by this process, you'll have to do a "Cartesian product" of a bunch of different RL categorizations which would get out of hand. I recommend, if you can, to describe some sort of "RL taxonomy" instead. By this I mean describing different RL characterizations without assuming they're mutually exclusive.

To add to your list :

  • On-policy or off-policy
  • Value based or policy gradient
",37829,,,,,7/3/2020 12:49,,,,2,,,,CC BY-SA 4.0 22333,1,22334,,7/3/2020 14:41,,3,158,"

I want to train ResNet50 model using resistor images like below:

I tried collecting data from Google Images, but there were quite few images, so accuracy was very low (around 10%). I wonder if this is due to the lack of images, or whether it is really possible to classify these images at all, because, as you can see, the object to be classified is very small and its value is color-coded. I thought maybe this is not a good idea. I searched on Google but could not find anybody who has tried to do this before. I have also tried data augmentation and switching to other models, but the accuracy was still quite low.

P.S.: I have also tried changing the number of epochs, the optimizers, and all other parameters. So I want to make sure whether it is due to too little data or whether it is just a very hard task for a computer vision model.

Also, would it be reasonable to crop the image using a mask before classifying it, so that the color bands are larger and more easily readable by the model?

",38344,,38344,,7/3/2020 14:47,7/3/2020 19:34,Is it possible to classify resistors using ResNet50?,,1,0,,,,CC BY-SA 4.0 22334,2,,22333,7/3/2020 19:34,,3,,"

Yes, it should be possible. You may have a bug in your code or the wrong hyperparameters. Training ResNet-50 will take a long time. Try training on other sets of images and see what accuracy you get, to check whether your approach is correct. Or try loading a pretrained model and training from that.

",26838,,,,,7/3/2020 19:34,,,,5,,,,CC BY-SA 4.0 22335,1,,,7/3/2020 21:31,,1,222,"

Suppose I have a private set of images containing some objects.

How do I:

  1. Make it very hard for neural networks (e.g. classifiers trained on ImageNet) to recognize these objects, while allowing humans to recognize them at the same time?

  2. Suppose I label these private images - a picture of a cat with a label "cat" - how do I make it hard for the attacker to train his neural network on my labels? Is it possible to somehow fool a neural network so that they couldn't easily train it to recognize it?

I am thinking of things like random transforms, etc., so that they couldn't use a neural network to recognize these objects, or even train one on my dataset if they had the labels.

",,user38359,145,,8/11/2022 15:53,8/11/2022 15:53,How to prevent image recognition of my dataset with neural networks and make it hard to train them?,,1,1,,,,CC BY-SA 4.0 22336,2,,22335,7/4/2020 2:13,,2,,"

If the model is trained and held constant, then there are so-called adversarial attacks to modify images such that the model classifies them incorrectly (see Attacking Machine Learning with Adversarial Examples).

However, if you want to make images that are untrainable, you are probably out of luck. Deep neural networks can learn to recognize even random images with random labels (see Understanding deep learning requires rethinking generalization), though if there's no rhyme or reason to the randomness, they won't generalize in meaningful ways.

",11155,,,,,7/4/2020 2:13,,,,0,,,,CC BY-SA 4.0 22337,1,,,7/4/2020 4:50,,3,3451,"

How does best-first search differ from hill-climbing?

",38362,,2444,,7/4/2020 21:44,10/13/2021 12:32,How does best-first search differ from hill-climbing?,,1,2,,,,CC BY-SA 4.0 22338,2,,22328,7/4/2020 7:01,,4,,"

Most model-fitting is stochastic, so you get different parameters every time you train, and you usually can't say that one algorithm will always give you a better-performing model.

However, since you can retrain many times to get a distribution of models, you can use a statistical test like the T-Test to say "algorithm A usually produces a better model than algorithm B," which is what they mean by "statistically outperforms."
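
As a sketch of how such a comparison could be carried out (assuming you have re-trained each algorithm several times and collected one test score per run; the numbers below are purely illustrative), SciPy's independent-samples t-test gives you the p-value directly:

from scipy import stats

# Illustrative test scores from 10 independent training runs of each algorithm
scores_a = [0.913, 0.918, 0.921, 0.915, 0.919, 0.917, 0.920, 0.914, 0.916, 0.922]
scores_b = [0.905, 0.909, 0.903, 0.911, 0.907, 0.904, 0.908, 0.906, 0.910, 0.902]

t_stat, p_value = stats.ttest_ind(scores_a, scores_b)
print(p_value)   # if p_value < 0.01, "A statistically outperforms B" at that threshold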

p-value is usually set by consensus in the field. The higher the p-value, the less confidence you have that there's a statistical difference between the distribution of values being compared. 0.1 might be normal in a field where data is very expensive to collect (like risky, long-term studies of humans), but in machine learning, it's usually easy enough to retrain a model that 0.01 is common, and demonstrates very high confidence. To know more about selecting and interpreting the values, I recommend Wikipedia's page on statistical significance.

",11155,,11155,,7/4/2020 7:11,7/4/2020 7:11,,,,0,,,,CC BY-SA 4.0 22340,1,22344,,7/4/2020 8:26,,3,161,"

Let's say we have a captcha system that consists of a greyscale picture (of a part of a street or something akin to re-captcha), divided into 9 blocks, with 2 missing pieces.

You need to choose the appropriate missing pieces from over 15 possibilities to complete the picture.

The puzzle pieces have their edges processed with a glitch treatment, and they also have additional morphs such as heavy JPEG compression, a random affine transform, and blurred edges.

Every challenge picture is unique - pulled from a dataset of over 3 million images.

Is it possible for a neural network to reliably (above 50% of the time) predict the missing pieces? Sometimes these are taken out of context and require human logic to estimate the correct piece.

The chance of selecting the two answers in the correct order is $1/15 \times 1/14$.

",,user38359,145,,8/11/2022 15:53,8/11/2022 15:53,Is such a captcha AI-resistant?,,2,2,,,,CC BY-SA 4.0 22341,2,,22186,7/4/2020 11:36,,1,,"

The main difference between distant supervision (as described in the link you provided) and self-supervision lies on the task the network is trained on.

Distant supervision focuses on generating weak labels for the very same task that would be tackled with supervised labels, and the final result could be directly used for that matter.

Self-supervision is a means for learning a data representation. It does so by learning a surrogate task, which is defined by inputs and labels derived exclusively from the original input data.

I can imagine cases in which an implementation of self-supervision could be considered distant supervision (if the surrogate task happens to match the target task). On the other hand, if external data sources were employed for training on a surrogate task, that would be a case of representation learning (that could incorporate self-supervision too).

",27444,,,,,7/4/2020 11:36,,,,7,,,,CC BY-SA 4.0 22342,1,,,7/4/2020 13:28,,2,26,"

I'm reading this paper: https://arxiv.org/pdf/1602.07576.pdf. I'll quote the relevant bits:

Deep neural networks produce a sequence of progressively more abstract representations by mapping the input through a series of parameterized functions. In the current generation of neural networks, the representation spaces are usually endowed with very minimal internal structure, such as that of a linear space $\mathbb{R}^n$.

In this paper we construct representations that have the structure of a linear $G$-space, for some chosen group $G$. This means that each vector in the representation space has a pose associated with it, which can be transformed by the elements of some group of transformations $G$. This additional structure allows us to model data more efficiently: A filter in a $G$-CNN detects co-occurrences of features that have the preferred relative pose [...]

A representation space can obtain its structure from other representation spaces to which it is connected. For this to work, the network or layer $\phi$ that maps one representation to another should be structure preserving. For $G$-spaces this means that $\phi$ has to be equivariant: $$\phi(T_gx)=T'_g\phi(x)$$That is, transforming an input $x$ by a transformation $g$ (forming $T_gx$) and then passing it through the learned map $\phi$ should give the same result as first mapping $x$ through $\phi$ and then transforming the representation.

Equivariance can be realized in many ways, and in particular the operators $T$ and $T'$ need not be the same. The only requirement for $T$ and $T'$ is that for any two transformations $g$ and $h$, we have $T(gh) = T (g)T (h)$ (i.e. $T$ is a linear representation of $G$).

I didn't understand the paragraph in bold. A structure preserving map is something that preserves some operation between elements in the underlying set. A simple example: if $f:\mathbb{R}^3\to\mathbb{R}$ such that $(x,y,z)^T\mapsto x+y+z$, then $$f(r+s)=f((r_1,r_2,r_3)^T+(s_1,s_2,s_3)^T)=f((r_1+s_1,r_2+s_2,r_3+s_3)) \\=r_1+s_1+r_2+s_2+r_3+s_3=f(r)+f(s)$$ where the addition in the far left term is in $\mathbb{R}^3$ and addition in the far right is in $\mathbb{R}$. So the map $f$ preserves the additional structure of addition.

In the quoted paragraph, $\phi$ is the structure preserving map, but what's the structure being preserved exactly? And why is the operator on the right different from the one on the left, i.e. $T'$ on the RHS instead of $T$?

",38372,,,,,7/4/2020 13:28,Structure-preserving layer in a network with respect to a transformation,,0,0,,,,CC BY-SA 4.0 22344,2,,22340,7/4/2020 17:51,,0,,"

Well to give you a short answer, I would say that YES, it would be MORE resistant than a more standard captcha approach...

This being said, I would still go as far as to predict something like a 75-80% successful prediction rate for a custom model that is designed specifically for defeating a mechanism such as the one you describe. The reason why I am fairly confident in such an appraisal is primarily the following:

  1. New techniques which researchers have begun to explore, intended to be "structure-preserving convolutions", which utilize a higher-dimensional filter to store the extra correlation data.

  2. I think that the obfuscation efforts you mention will definitely help to some degree, although they can be easily defeated by training the model on a dataset from which you pull out some portion of the samples during pre-processing and inject the same sort of noise and glitch treatments, etc.

    • An idea that would be worth exploring would be to process your dataset with an adversarial model which you could then use to generate Adversarial Noise that could then be fed into a pre-process step for your images and replace (or extend) the obfuscation efforts!

TL;DR: If you can't beat 'em, then join 'em! Just train a model to defeat your captcha implementation, then use the model to generate adversarial examples, and then apply obfuscations to your dataset accordingly!

For more information on what I am talking about in my suggestion for further obfuscation efforts explore some of the papers you can find on Google Scholar - Ensemble Adversarial Training Examples

",30631,,,,,7/4/2020 17:51,,,,9,,,,CC BY-SA 4.0 22345,2,,22337,7/4/2020 21:16,,1,,"

Best-first search

BFS is a search approach and not just a single algorithm, so there are many best-first (BFS) algorithms, such as greedy BFS, A* and B*. BFS algorithms are informed search algorithms, as opposed to uninformed search algorithms (such as breadth-first search, depth-first search, etc.), i.e. BFS algorithms make use of domain knowledge that can be encoded into a so-called heuristic function (that's why they are informed!).

Every BFS algorithm defines a so-called evaluation function $f$, commonly of the form

$$f(n) = g(n) + h(n)$$

for all nodes $n \in V$ (where $V$ is the set of nodes or states of the search space), where

  • $g(n)$ is the cost from the start node (of the search) to the node $n$, and

  • $h(n)$ (the so-called heuristic, which is the way of including domain knowledge to solve the search problem) is an estimate of the cost of the cheapest path from $n$ to the goal node.

Depending on how you define $f$ and, in particular, the heuristic function $h$, you get different BFS algorithms. For instance, A* is a BFS algorithm where $h$ is an admissible heuristic (i.e. it never overestimates the cost to the goal node). Because of this admissibility property, A* is guaranteed to find the globally optimal solution (i.e. the cheapest path from a start node to the goal node among all paths), but you can ignore this detail.

To apply a BFS algorithm, you need to define the evaluation function and the search space, i.e. the states (or nodes) and their connections. For example, if you want to find the cheapest path from Paris to Madrid, you need to define that Paris is the start node, Madrid the goal node and then you need to define all intermediate nodes between Paris and Madrid, but you also need to define $g$ and $h$.
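
To make this concrete, here is a minimal Python sketch of a generic best-first search over an explicit toy graph (the frontier is a priority queue ordered by $f(n) = g(n) + h(n)$; plugging in $h(n) = 0$ gives uniform-cost search, while an admissible and consistent $h$ gives A*):

import heapq

def best_first_search(start, goal, neighbors, h):
    # neighbors(n) yields (successor, step_cost) pairs; h(n) estimates the cost to the goal
    frontier = [(h(start), 0, start, [start])]   # entries are (f, g, node, path)
    visited = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in visited:
            continue
        visited.add(node)
        for succ, cost in neighbors(node):
            heapq.heappush(frontier, (g + cost + h(succ), g + cost, succ, path + [succ]))
    return None, float("inf")

# Toy graph with edge costs, searched with a zero heuristic (uniform-cost search)
graph = {"Paris": [("Bordeaux", 6), ("Lyon", 5)],
         "Bordeaux": [("Madrid", 7)],
         "Lyon": [("Madrid", 11)],
         "Madrid": []}
path, cost = best_first_search("Paris", "Madrid", lambda n: graph[n], lambda n: 0)
print(path, cost)   # ['Paris', 'Bordeaux', 'Madrid'] 13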

Hill climbing

Hill climbing (HC) is a general search strategy (so it's also not just an algorithm!). HC algorithms are greedy local search algorithms, i.e. they typically only find local optima (as opposed to global optima) and they do that greedily (i.e. they do not look ahead). The idea behind HC algorithms is that of moving (or climbing) in the direction of increasing value. HC algorithms can be used to solve optimization problems and not just well-defined search problems, i.e. you start from some solution and you move to the best neighboring solution, and then loop.

Best-first search vs hill climbing

  • BFS algorithms are informed search algorithms (as opposed to uninformed)
  • BFS algorithms need to define the search space and the evaluation function
  • Some BFS algorithms (such as A*) are guaranteed to find the best global solution
  • HC algorithms are general (i.e. widely applicable) but local and greedy search and optimization algorithms; consequently, they are not generally guaranteed to find the global optimum (but, in practice, they may work well, depending also on the problem).
  • HC algorithms do not need to define the search space explicitly (i.e. you do not need to define the start and goal nodes, and so on), but you just need a way of determining the best neighboring solution

Further reading

The book Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig provides more details about these two search approaches. (You can find free copies of this book online).

",2444,,2444,,10/13/2021 12:32,10/13/2021 12:32,,,,0,,,,CC BY-SA 4.0 22346,2,,22323,7/5/2020 0:33,,3,,"

I think you are looking for quantum machine learning (QML), which is a relatively new field that sits at the intersection of quantum computing and machine learning.

If you are not familiar with quantum computing (QC) and you are interested in QML, I suggest that you follow this course by prof. Umesh Vazirani and read the book Quantum Computing for Computer Scientists (2008) by Yanofsky and Mannucci. If you have a solid knowledge of linear algebra, you should not encounter big problems while learning QC. Be prepared to deal with the weirdness and beauty of qubits, quantum entanglement, and so on.

If you want to dive directly into QML (although, since I am not familiar with the details of QML, I suggest that you first get familiar with the basics of QC, which I am familiar with), there are already several courses, papers, overviews, and libraries (such as TensorFlow Quantum) on quantum machine learning.

If you are interested in quantum reinforcement learning, maybe have a look at the paper Quantum Reinforcement Learning (2008) by Daoyi Dong et al.

",2444,,2444,,7/6/2020 1:24,7/6/2020 1:24,,,,0,,,,CC BY-SA 4.0 22347,1,22360,,7/5/2020 5:39,,2,58,"

I have trouble understanding how nested cross-validation works - I understand the need for two loops (one for selecting the model, and another for training the selected model), but why are they nested?

From what I understood, we need to select the model before training it, which points toward non-nested loops.

Could someone please explain what's wrong (or right?) with my line of reasoning, and also explain nested cross-validation in greater detail? A representative example would be great.

",35585,,2444,,7/6/2020 12:22,7/6/2020 12:22,How exactly does nested cross-validation work?,,1,0,,,,CC BY-SA 4.0 22348,1,22349,,7/5/2020 9:25,,2,114,"

I've found online some DQN algorithms that (in a problem with a continuous state space and few actions, let's say 2 or 3), at each time step, compute and store (in the memory used for updating) all the possible actions (so all the possible rewards). For example, on page 5 of the paper Deep Q-trading, they say

This means that we don't need a random exploration to sample an action as in many reinforcement learning tasks; instead we can emulate all the three actions to update the Q-network.

How can this be compatible with the exploration-exploitation dilemma, which states that you have to balance the time steps of exploring with the ones of exploiting?

",37169,,2444,,7/5/2020 23:44,7/5/2020 23:44,Why do some DQN implementations not require random exploration but instead emulate all actions?,,1,0,,,,CC BY-SA 4.0 22349,2,,22348,7/5/2020 12:55,,1,,"

The example that you linked is using a model (emulation) in order to look ahead at all possible actions from any state. It essentially explores off-policy and offline using that model. This is not an option that is available in all environments, but, if possible, it resolves the exploration/exploitation dilemma for a single time step nicely by investigating all options.

Longer term the agent proposed by the link does not sufficiently explore for general use in my opinion. It appears to always choose a single action deterministically based on maximising action value. In other words it always attempts to exploit the training data so far, even though it augments the training data with short-term knowledge about exploration. However, this appears to be sufficient in the problem domain that it is used in. I suspect this is for a couple of reasons:

  • The environment is non-stationary, making long-term state predictions unreliable in any case. An agent that learns to exploit in the short term (i.e. over only a few time steps into the future) is likely to be approximately optimal already.

  • State transitions may be highly stochastic, meaning that state space will still be adequately explored even using a deterministic policy. This feature of the environment is also used by other well-known Q learning approaches with deterministic behaviour policies, such as TD Gammon

I think you have correctly identified a weakness of the approach used in the linked paper that means it may not make a strong general algorithm. The algorithm avoids addressing the exploration/exploitation balance in full, and instead relies on features of the environment to work well despite this. If you find yourself working in similar environments for your own projects, then it may well be worth trying the same approach. However, if you find yourself working in a more deterministic environment with more stationary behaviour and sparse rewards, the lack of state space exploration would be a serious limitation.

",1847,,1847,,7/5/2020 13:02,7/5/2020 13:02,,,,0,,,,CC BY-SA 4.0 22350,2,,22340,7/5/2020 15:18,,0,,"

This is not resistant at all. A simple comparison of the similarity of edge pixels across block borders should be sufficient to break this method completely.

We can do a very simple calculation. Assume the picture is 8-bit greyscale, with each block being 50x50 pixels. Also assume the pixel distribution is continuously uniform between 0-255 (it should probably be normally distributed, but whatever). You have a total of 200 pixels that border each other between blocks. Assume that the naturally generated image is continuous in brightness with respect to position in at least 10% of the image, and that +/- 10 units of brightness is acceptable. Thus we have 20 pixels to work with.

In the case where the piece is incorrect, we assume the pixel brightness to be i.i.d. in [0-255], thus giving roughly an 8% (21/256) chance for each pixel around the border to be of acceptable similarity. This gives about a 10^-22 chance of this algorithm being fooled. You might disagree with my assumed parameters, but, to be frank, I am probably being too generous in estimating a lower bound.

There are two lessons here: 1. Just because you and others can't think of a way to break your secure system doesn't mean it's actually secure. 2. Modern ML techniques are not strictly stronger than handcrafted algorithms, though I would also imagine that a simple NN would be able to solve this problem easily.

",6779,,6779,,7/5/2020 15:56,7/5/2020 15:56,,,,9,,,,CC BY-SA 4.0 22353,1,22363,,7/5/2020 21:07,,0,174,"

Suppose that we want to train a car to drive in the real world and decide to use Reinforcement Learning (specifically, DQN) for that. I am a bit confused about how training generally works.

Is it that we are exploring the environment at the same time that we are training the Q network? If so, is there not a way to train the Q network before actually going out into the real world? And then, aren't there millions of possible states in the real world? So, how does RL (or, I guess, the neural network) generalize so that it can function during rush hour, on empty roads, etc.?

",37947,,2444,,7/5/2020 23:38,7/6/2020 15:14,How does training for DQN work if messing up in the environment in costly?,,1,3,,,,CC BY-SA 4.0 22354,1,,,7/6/2020 3:30,,1,38,"

Say we have a simple gray scale image. If we use a filter which is just the 3x3 identity matrix (or more pointedly the identity matrix but with -1 instead of the 0 entries), it is fairly easy to see how applying this filter with stride length 1 and padding of 1 would produce an image of the same size that represents the presence of north-west diagonals in the input image.

As I am reading more about generative networks in GAN paradigms, I am learning that 'transposed convolutions' are used to turn Gaussian noise into meaningful images, like a human face. However, when I try to look at sources for transposed convolutions, most articles address the upscaling use of these convolutions, rather than their 'generative' properties. Also, it is not clear to me that upscaling is even necessary in these applications, since we could start with noise that has the same resolution as our desired output.

I am asking for an example, article, or paper that can provide me with more understanding of the feature-generation aspect of transposed convolutions. I have found this interesting article that relates the word 'transpose' to the transpose of a matrix. I have a good background in linear algebra, and I understand how the transpose would swap the dimensions of the input/output. This has an obvious relation to upscaling/downscaling, but this effect would happen if we replaced the m x n matrix with any other n x m matrix, not specifically just the transpose. Essentially, I'm not sure how the actual transpose operator can go from detecting a given feature associated with a convolutional filter to producing that same feature.

EDIT: I've done some thinking and it is clear to me now how the transpose matrix will produce an 'input image' that has the features specified by a given feature map. That is, if $M$ is the matrix given by the convolution operation, and $F$ is a feature map, then

$$M^T F$$ will produce an image with the corresponding features. It's obviously not a perfect inverse operation, but it works. However, I still don't see how to interpret this transposed matrix as a convolution in its own right.

",38407,,38407,,7/6/2020 4:34,7/6/2020 4:34,Concrete example of how transposed convolutions are able to *add* features to an image,,0,0,,,,CC BY-SA 4.0 22355,1,22555,,7/6/2020 4:04,,0,589,"

Theoretically, the number of units for an LSTM layer is the number of hidden states or, as I have used it in practice, the max length of the sequences.

For example, in Keras:

Lstm1 = LSTM(units=MAX_SEQ_LEN, return_sequences=False)

However, with lots of sequences to train on, should I add more LSTM layers? Increasing MAX_SEQ_LEN is not the way to go, as it doesn't help make the network better, since the extra hidden states aren't useful any more.

I'm considering increasing the number of LSTM layers, but how many are enough?

For example, 3 of them:

Lstm1 = LSTM(units=MAX_SEQ_LEN, return_sequences=True)
Lstm2 = LSTM(units=MAX_SEQ_LEN, return_sequences=True)
Lstm3 = LSTM(units=MAX_SEQ_LEN, return_sequences=False)
",2844,,,,,7/18/2020 1:19,Number of LSTM layers needed to learn a certain number of sequences,,1,0,,,,CC BY-SA 4.0 22358,1,,,7/6/2020 9:53,,4,3301,"

I wish to train two domain-specific models:

  • Domain 1: Constitution and related Legal Documents
  • Domain 2: Technical and related documents.

For Domain 1, I've access to a text-corpus with texts from the constitution and no question-context-answer tuples. For Domain 2, I've access to Question-Answer pairs.

Is it possible to fine-tune a light-weight BERT model for Question-Answering using just the data mentioned above?

If yes, what are the resources to achieve this task?

Some examples, from the huggingface/models library would be mrm8488/bert-tiny-5-finetuned-squadv2, sshleifer/tiny-distilbert-base-cased-distilled-squad, /twmkn9/albert-base-v2-squad2.

",38411,,2444,,1/26/2021 15:34,1/26/2021 15:34,How to fine tune BERT for question answering?,,1,0,,,,CC BY-SA 4.0 22360,2,,22347,7/6/2020 10:17,,1,,"

"Selecting the model" in this case refers to selecting the hyperparameters of the model. The reason to use a nested CV is simply to avoid overfitting training data.

Consider the example in the link. First, you would like to select the best hyperparameters of your SVM model with GridSearchCV(). This is done by 4-fold CV. Now clf.best_score_ will be the mean cross-validated score of the best estimator (the model with the best hyperparameters). However, you have now used the same data for training and for reporting the performance, even though you used CV. Keep in mind that the folds are not independent. Therefore, the hyperparameters might be too data-specific, i.e. your generalization error estimates are too optimistic. Therefore, we would like to evaluate our final model performance outside of, and independently of, the hyperparameter selection process (the cross_val_score() call).
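
A minimal sketch of this nested structure with scikit-learn (a toy SVM example along the lines of the linked documentation page): the inner loop (GridSearchCV) picks the hyperparameters, while the outer loop (cross_val_score) estimates generalization on folds that the inner loop never saw.

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
param_grid = {"C": [1, 10, 100], "gamma": [0.01, 0.1]}

# Inner loop: 4-fold CV to select the hyperparameters
clf = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=4)

# Outer loop: 4-fold CV to estimate the performance of the whole selection procedure
nested_scores = cross_val_score(clf, X, y, cv=4)
print(nested_scores.mean())   # usually lower (less optimistic) than the score reported by GridSearchCV itself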

In the provided plot, you can clearly see that the reported performance by GridSearchCV() is most of the time better than the performance reported by cross_val_score().

",37120,,,,,7/6/2020 10:17,,,,0,,,,CC BY-SA 4.0 22361,1,,,7/6/2020 11:27,,2,125,"

I am learning computer vision. When I was going through implementations of various computer vision projects, some OCR problems used GRU or LSTM, while some did not. I understand that RNNs are used only in problems where input data is a sequence, like audio or text.

So, among the MNIST kernels on Kaggle, almost no kernel uses RNNs, whereas almost every GitHub repository for OCR on the IAM dataset uses GRUs or LSTMs. Intuitively, written text in an image is a sequence, so RNNs were used. But so is the written text in the MNIST data. So, when exactly do RNNs (or GRUs or LSTMs) need to be used in computer vision, and when not?

",38060,,38060,,7/6/2020 12:34,7/7/2020 12:57,Why are RNNs used in some computer vision problems?,,1,0,,,,CC BY-SA 4.0 22362,2,,22358,7/6/2020 12:10,,1,,"

The answer is yes but 'lightweight' will require a 'lightweight' model.

Your application for 'domain one' is called open domain question answering (ODQA). Here is a demonstration of ODQA using BERT: https://www.pragnakalp.com/demos/BERT-NLP-QnA-Demo/

Your application for 'domain two' is a little different. It is about learning sequences from sequences. More specifically, these are called sequence-to-sequence models. Here is an example using a pre-trained BERT model fine-tuned on the Stanford Question Answering (SQuAD) dataset.

Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
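
As a quick sketch of what inference with such a model looks like (using the Hugging Face transformers pipeline with a distilled SQuAD checkpoint; swap in one of the lightweight models listed in the question if you prefer):

from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = ("The constitution establishes the structure of the government "
           "and defines the rights of citizens.")
result = qa(question="What does the constitution define?", context=context)
print(result["answer"], result["score"])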

In both applications, the resources required are going to rely on the performance you require. There are many sizes of BERT models. Generally, the larger the model, the larger the GPU memory requirements, and the higher the performance (i.e. accuracy, precision, recall, F1-score, etc.). For example, I can run BERT Base on a particular data set on a GTX 1080Ti and a RTX 2080Ti but not BERT Large.

This article, NVIDIA Quadro RTX 6000 BERT Large Fine-tune Benchmarks with SQuAD Dataset shows performance for BERT using TensorFlow on four NVIDIA Quadro RTX 6000 GPUs.

There is a 'mobile' version of BERT called MobileBERT for running on small devices like smartphones. Here is an article on using that with SQuAD: https://www.tensorflow.org/lite/models/bert_qa/overview

cdQA-suite is a good package. The following should help in fine-tuning on your own corpus:

",5763,,5763,,7/6/2020 14:07,7/6/2020 14:07,,,,7,,,,CC BY-SA 4.0 22363,2,,22353,7/6/2020 12:32,,1,,"

In general, you need to actively explore the environment to gather data to train your Q network. However, especially in your self-driving car example, you might be looking for Batch RL. In Batch RL you start with a given, fixed dataset of transitions (state, action, reward, next state) and you learn a policy (or Q function) based on the dataset without exploring. The Batch-Constrained Q-learning algorithm (BCQ) is a good example of this.

With regards to acquiring the data necessary for Batch RL, that's another story. I suppose one could collect data through sensors in a car, and label datapoints with rewards according to some reward function.

",37829,,37829,,7/6/2020 15:14,7/6/2020 15:14,,,,1,,,,CC BY-SA 4.0 22364,2,,21559,7/6/2020 13:28,,1,,"

Your idea is a good one.

Another idea is to upsample or aggregate your data. For example, average by week if you generally have a couple of missing days in every week.

A similar question on Stack Exchange: https://stats.stackexchange.com/questions/374935/how-to-deal-with-really-sparse-time-series-data-for-a-binary-classification-task

",5763,,,,,7/6/2020 13:28,,,,1,,,,CC BY-SA 4.0 22367,1,22406,,7/6/2020 16:32,,0,79,"

I am confused about where exactly the L2 regularization (weight decay) term is added.

In various resources I have come across, I find two equations where L2 regularization is applied.

Adding R(W) to the loss function makes sense because it tries to decrease large weights. Also, I have seen equations where we add R(W) to the weight update term (the second equation in the second line, as shown in this image):

In the above image, using the weight update rule that

$$W_{\text{final}} = W_{\text{initial}} + \alpha \cdot (\text{gradient with respect to } W),$$

I obtain a different equation as compared to the other equation which is commonly written in various resources.

Where exactly is the regularization term added? I previously thought it was only added to the loss function, but that gives me a weight update equation different from the one commonly presented in resources. (Or is my interpretation of the equation wrong?)

I presume it is also added in the weight update equation, because, while constructing models, we add a regularization term:

model.add(Conv2D(256, (5,5), padding="same", kernel_regularizer=l2(reg))) 

Would be grateful for any help.

",35616,,35616,,7/8/2020 17:01,7/8/2020 19:12,Where is L2-regularization term applied,,1,1,,,,CC BY-SA 4.0 22369,1,,,7/6/2020 23:30,,1,444,"

I am trying to build a recurrent neural network from scratch. It's a very simple model. I am trying to train it to predict two words (dogs and gods). While training, the value of the cost function increases for some time; after that, the cost starts to decrease again, as can be seen in the figure.

I am using the gradient descent method for optimization. Decreasing the step size/learning rate does not change the behavior. I have checked the code and the math again and again; I don't think there is an error (I could be wrong).

Why is the cost function not decreasing monotonically? Could there be a reason other than an error in my code/math? If there is an error, do you think that it is just a coincidence that each time the system finally converges to a very small value of error?

I am a beginner in the field of machine learning, hence, many questions I have asked may seem foolish to you. I am saving the values after every 100 iterations so the figure is actually for 15000 iterations.

About training: I am using one-hot encoding. The training data has only two samples ("gods" and "dogs"), where each letter is represented as d = [1,0,0,0], o = [0,1,0,0], g = [0,0,1,0], s = [0,0,0,1]. The recurrent neural network (RNN) goes back a maximum of 3 time units (e.g., for "dogs", the first input is 'd', then 'o', followed by 'g' and 's'). So, for the second input, the RNN goes back 1 input; for the third input, the RNN observes both previous inputs, and so on. After calculating the gradients for the word "dogs", the values of the gradients are saved and the process is repeated for the word "gods". The gradients calculated for the second input "gods" are summed with the gradients calculated for "dogs" at the end of each epoch/iteration, and then the sum is used to update all the weights. In each epoch, the inputs remain the same, i.e. "gods" and "dogs". In mini-batch training, in some epochs the RNN may encounter new inputs, hence the loss may increase. However, I do not think that what I am doing qualifies as mini-batch training, as there are only two inputs, both inputs are used in each epoch, and the sum of the calculated gradients is used to update the weights.

",38423,,38423,,7/7/2020 18:11,11/25/2022 1:01,Why the cost/loss starts to increase for some iterations during the training phase?,,2,0,,,,CC BY-SA 4.0 22370,2,,22369,7/7/2020 0:24,,0,,"

In general, there's nothing wrong with the training loss increasing from time to time during training.

This is because mini-batch gradient descent is a stochastic process and doesn't guarantee that the loss will decrease at each step.

",32621,,,,,7/7/2020 0:24,,,,2,,,,CC BY-SA 4.0 22371,1,,,7/7/2020 2:16,,0,36,"

For learning a single sequence, an LSTM alone should suffice.

However, my situation is different here. I have a list of sequences to learn:

  • The sales volumes over 12 months; these are the sequences

And each sequence above belongs to a category.

I'm trying it out by considering [category, sequence] as a sequential sample; the loss can be reduced to 1%, but it gives wrong values when inferring on real data.

The second try is considering [category,sequence] as a sample of 2 inputs:

  • X1 = sequence
  • X2 = category

I feed the sequence through LSTM layers to get H, then concatenate it with X2 and feed the pair [H, X2] through some dense layers, but the results aren't better.

Are there any popular solutions (network shape, network design) for learning this kind of data, i.e. sequential data in different categories?

",2844,,,,,7/7/2020 2:16,Network design to learn multiple sequences of multiple categories,,0,2,,,,CC BY-SA 4.0 22372,1,,,7/7/2020 8:39,,1,54,"

I'm studying RL with Sutton and Barto's book. I'd like to ask about the order of execution of a statement in the algorithm below.

Here, $W$ (importance sampling ratio) is updated at the end of the For loop.

But I think that this update should be located after calculating $G$ (the return) and before updating $C(s,a)$ (the cumulative sum of $W$). This seems right considering the second picture below, which I found at http://incompleteideas.net/book/first/ebook/node56.html.

Is Sutton and Barto's book wrong? Or are the two algorithms, the one in Sutton and Barto's book and the one in the second picture, actually the same, and am I wrong? Is there any difference between the two when implemented? If I am wrong, can you explain the reason?

",38430,,2444,,7/7/2020 12:35,7/7/2020 12:35,Should the importance sampling ratio be updated at the end of the for loop in the off-policy Monte Carlo control algorithm?,,0,0,,,,CC BY-SA 4.0 22373,1,,,7/7/2020 9:22,,1,27,"

I'm reading this paper Global-Locally Self-Attentive Dialogue State Tracker and follow through the implementation published in GLAD.

I was wondering if someone can clarify what variable or score is used to calculate the global and local self-attention scores in Figure 4 (the heatmap).

For me, it is not really clear how to derive these scores. The only score that would match the given dimension would be in the scoring module $p_{utt}=softmax(a_{utt})$. However, I do not see in their implementation that anything is done with this value.

So, what I did was the following:

q_utts = []
a_utts = []
for c_val in C_vals:
    q_utt, a_utt = attend(H_utt, c_val.unsqueeze(0).expand(len(batch), *c_val.size()), lens=utterance_len)
    q_utts.append(q_utt)
    a_utts.append(a_utt)
attention_score = torch.mean(torch.stack(a_utts, dim=1), dim=1)

But the resulting attention score differs very much from what I expect.

",35910,,2444,,7/7/2020 12:38,7/7/2020 12:38,What is the score used to visualize attention in this paper?,,0,0,,,,CC BY-SA 4.0 22375,1,,,7/7/2020 10:11,,1,228,"

I have many text documents and I want to identify concepts in these documents in an unsupervised manner. One of my problems is that the concepts can be bigrams, trigrams, or even longer.

So, for example, out of all the bigrams, how can I identify the ones that are more likely to represent a concept?

A concept could be "machine learning".

Are you aware of any standard approaches to solve this problem?

Edit: The corpus I am working with consists of papers accessed from web of science. That is, they are all in some given domain niche. I want to extract words, bigrams, trigrams... that represent common concepts/buzzwords from these papers. These could be "Automated machine learning", "natural language processing" et cetera. I need to be able to distinguish these from other common n-grams such as "New York", "Barack Obama",...

I know that I could do this using a NER approach, but this would require hand-labelling. Are you aware of any unsupervised ways to approach this problem? Or even a semi-supervised method with little labelled data?

",38434,,38434,,7/8/2020 11:38,11/25/2022 18:03,How can I identify bigrams and trigrams that represent concepts?,,1,2,,,,CC BY-SA 4.0 22376,2,,22361,7/7/2020 12:57,,3,,"

There are tasks in computer vision where recurrent neural networks (RNNs) can be useful because there's some sequential sub-task in the main task.

For instance, in the paper Long-Term Recurrent Convolutional Networks for Visual Recognition and Description, the authors investigate the use of a neural network that is both recurrent and convolutional to solve certain computer vision tasks that also have a sequential component/part, such as video recognition tasks, image to sentence generation problems, and video narration challenges.

There are other papers that investigate the combination of convolutional and recurrent layers, such as Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition, which also has a biological motivation.

",2444,,,,,7/7/2020 12:57,,,,0,,,,CC BY-SA 4.0 22377,2,,22369,7/7/2020 13:22,,0,,"

Since neural networks rely on stochasticity (i.e. randomness) to initialize their parameters, and gradient descent selects random batches of training data at each iteration, it is perfectly normal if the value of the loss function fluctuates instead of decreasing monotonically.

",38411,,,,,7/7/2020 13:22,,,,0,,,,CC BY-SA 4.0 22378,1,,,7/7/2020 14:29,,1,995,"

I am confused as to why the sequence length is the first dimension of the input tensor for an RNN, while the batch size is the first dimension for any other kind of network (linear, CNN, etc.).

This makes me think that I haven't fully grasped the concept of RNN batches. Is each independent batch a different sequence? And is the same hidden state used across batches? Is the hidden state maintained between time steps for a given sequence (for vanilla/truncated BPTT)?

",38445,,2444,,12/5/2020 3:16,1/4/2021 4:04,"In PyTorch, why does the sequence length need to be provided as the first dimension of the input tensor for an RNN?",,1,0,,,,CC BY-SA 4.0 22379,2,,20585,7/7/2020 15:06,,4,,"

Deep Q Learning is a model-free algorithm. In the case of Go (and chess for that matter) the model of the game is very simple and deterministic. It's a perfect information game, so it's trivial to predict the next state given your current state and action (this is the model). They take advantage of this with MCTS to speed up training. I suppose Deep Q Learning would also work, but it would be at a huge disadvantage.

",37829,,,,,7/7/2020 15:06,,,,0,,,,CC BY-SA 4.0 22380,1,22396,,7/7/2020 15:10,,3,527,"

How is the notion of immediate reward used in the reinforcement learning different from the notion of a label we find in the supervised learning problems?

",38447,,38447,,7/7/2020 15:38,7/8/2020 10:28,How is the reward in reinforcement learning different from the label in supervised learning problems?,,1,2,,,,CC BY-SA 4.0 22381,1,,,7/7/2020 16:25,,3,84,"

I'm trying to train some deep RL agents using policy gradient methods like AC and PPO. While training, I have a ton of different metrics being monitored.

I understand that the ultimate goal is to maximize the reward or return per episode. But there are a ton of other metrics, and I don't understand what they are used for.

In particular, how should one interpret the mean and standard deviation curves of the policy loss, value, value loss, entropy, and reward/return over time while training?

What does it mean when these values increase or decrease over time? Given these curves, how would one decide how to tune hyperparameters, see where the training is succeeding and failing, and the like?

",38448,,2444,,7/7/2020 22:47,8/7/2020 16:00,How should we interpret all the different metrics in reinforcement learning?,,1,1,,,,CC BY-SA 4.0 22382,2,,22375,7/7/2020 16:46,,0,,"

One well-known method is TF-IDF. You can define a lower bound on the TF-IDF score of a bigram or trigram to identify it as a keyword. Here, you can find more information about it.
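
A minimal sketch with scikit-learn (assuming docs is your list of document strings): restrict the vectorizer to bigrams and trigrams, then keep only the n-grams whose maximum TF-IDF score across the corpus exceeds a chosen threshold.

from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["automated machine learning for natural language processing",
        "machine learning and natural language processing in practice",
        "a survey of automated machine learning"]       # toy corpus

vectorizer = TfidfVectorizer(ngram_range=(2, 3), stop_words="english")
tfidf = vectorizer.fit_transform(docs)

threshold = 0.3                                          # tune this on your corpus
max_scores = tfidf.max(axis=0).toarray().ravel()
candidates = [term for term, score in zip(vectorizer.get_feature_names_out(), max_scores)
              if score >= threshold]
print(candidates)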

",4446,,,,,7/7/2020 16:46,,,,2,,,,CC BY-SA 4.0 22383,1,,,7/7/2020 17:52,,0,230,"

In his paper introducing SHA-RNN (https://arxiv.org/pdf/1911.11423.pdf), Stephen Merity states that neglecting one direction of research (in this case LSTMs) in favour of another (transformers), merely because the SOTA results of transformers are due to using more computing power, is not the way to go.

I agree that finding neat tricks in AI/ML is at least as important as just throwing more computing power at the problem. However, I am a little bit confused.

The main difference between his SHA-RNN and transformers (since they both use attention units) seems to be the fact that SHA-RNN uses LSTMs to "encode the position of words", whereas transformers do positional encoding using sine and cosine functions.

My confusion comes from the fact that LSTMs need to be handled sequentially, and thus they cannot use this large advantage of GPUs, namely being able to compute things in parallel, whilst transformers can. Wouldn't this mean that (assuming LSTMs and positional encoding are able to achieve the same results) training using LSTMs would take longer than training transformers and thus need more computing power, defeating the initial purpose of this paper? Or am I misinterpreting this?

Basically my question comes down to "Why would an SHA-RNN be less computationally expensive than a transformer?"

",34359,,,,,7/28/2022 3:04,What is the big fuzz about SHA-RNN versus Transformers?,,1,0,,,,CC BY-SA 4.0 22384,2,,20585,7/7/2020 20:03,,7,,"

$Q$-learning (and also its deep variant, and most of the other well-known reinforcement learning algorithms) are inherently learning approaches for single-agent environments. The entire problem setting that these algorithms are developed for (Markov decision processes, or MDPs) is always framed in terms of a single agent situated in some environment, where that agent can take actions that have some level of influence over the states that they lead to, and rewards may be observed.

If you have a problem that is, in reality, a multi-agent environment, there is a way to translate this environment to a single-agent setting; you simply have to assume that all other agents (i.e. your opponent in Go) are an inherent part of "the world" or "the environment", and that all the states in which these other agents make moves are not really states (not visible to your agent), but just intermediate steps where these part-of-the-environment-agents cause the environment to change and, as a result, create state transitions.

The primary issue with this approach is; we still need to model the decision-making of these agents in order to implement this new view of "the world", where our opponents are really a part of the world. Whatever implementation we give them, that is what our single-agent RL algorithm will learn to play against. We can just implement our opponents to be random agents, and run a single-agent RL algorithm like DQN, and then we will likely learn to play well against random agents. We'll probably still be very bad against strong opponents though. If we want to use a single-agent RL algorithm to learn to play well against strong opponents, we need to have an implementation for those strong opponents first. But if we already have that... why even bother with learning at all? We've already got the strong Go player, so we're already done and don't need to learn!

MCTS is a tree search algorithm, one that actively takes into account the fact that there is an opponent with opposing goals, and tries to model the choices that this opponent can make, and can do so better the more computation time we give it. This algorithm, and learning approaches built around it, are inherently designed to tackle the multi-agent setting (with agents having opposing goals).

",1641,,,,,7/7/2020 20:03,,,,0,,,,CC BY-SA 4.0 22385,1,,,7/7/2020 21:31,,1,267,"

I am a student who recently started learning machine learning, and one thing keeps confusing me. I tried multiple sources and failed to find a related answer.

As the following table shows (this is from some paper):

Is it possible that every class has a higher recall than precision for multi-class classification?

Recall can be higher than precision for some class or for the overall performance, which is common, but is it possible to keep recall greater than precision for every class?

The total amount of test data is fixed, so, to my understanding, if the recall is greater than the precision for one class, the recall must be smaller than the precision for some other class.

I tried to make a fake confusion matrix to simulate the result, but I failed. Can someone explain it to me?

Here is a further description:

Assume we have classified 10 data points into 3 classes, and we have a confusion matrix like this,


If we want to keep recall bigger than precision for each class (in this case 0, 1, 2), we need to keep:
x1+x2 < x3+x5
x3+x4 < x1+x6
x5+x6 < x2+x4

There is a conflict, because the sum of the left-hand sides equals the sum of the right-hand sides in these inequalities, and sum(x1...x6) = 10 - sum(a,b,c) in this case.

Hence, I think that getting recall higher than precision for all classes is not feasible, because the total number of classifications is fixed.

I don't know whether I am right or wrong; please tell me if I made a mistake.

",38455,,38455,,7/8/2020 1:44,12/26/2022 23:03,Is it possible that every class has a higher recall than precision for multi-class classification?,,2,4,,,,CC BY-SA 4.0 22386,2,,22383,7/7/2020 23:07,,1,,"

Transformers can be trained with parallelization. So you might have 100 cores training the transformer for an hour (100 total hours of CPU time), whereas the LSTM has one core training for ten hours (10 total hours of CPU time). Even though the LSTM takes longer, it is less expensive to train.

",26838,,,,,7/7/2020 23:07,,,,2,,,,CC BY-SA 4.0 22387,2,,22385,7/7/2020 23:32,,0,,"

In the case of binary classification, i.e. when you have two classes $A$ and $B$, the recall is defined as

$$\text{Recall} = \frac{tp}{tp+fn}$$

Similarly, the precision as

$$\text{Precision} = \frac{tp}{tp+fp}$$

where, if you choose and fix a class $A$ as your main class (or class of interest),

  • $tp$ is the number of true positives (i.e. the number of observations that have been classified as $A$ and actually belong to class $A$, i.e. they have been correctly classified)

  • $fn$ is the number of false negatives (i.e. the number of observations that have been incorrectly classified as $B$, so, in this case of binary classification, they actually belong to class $A$)

  • $fp$ is the number of false positives (i.e. the number of observations that have been incorrectly classified as $A$)

In this context, the confusion matrix will be a $2 \times 2$ matrix because you have two predictions (one for each of the two classes) compared against the two ground-truth labels (or classes).

In the context of multi-class classification, where you have $N > 2$ classes, you will have a $N \times N$ confusion matrix, for the same reason. In this case, the recall and precision are just an extension of the binary classification problem. More precisely, let $M$ represent your confusion matrix, so $M_{ij}$ is the entry of this confusion matrix at row $i$ and column $j$. Then the recall, for class $i$ (i.e. your main class, as $A$ above), is defined as

$$\text{Recall}_{~i} = \cfrac{M_{ii}}{\sum_j M_{ij}}$$

$$\text{Precision}_{~i} = \cfrac{M_{ii}}{\sum_j M_{ji}}$$

In other words, the recall for a certain class $i$ is computed as the fraction of the diagonal element $M_{ii}$ and the sum of all elements in that row $i$. Similarly, the precision is computed as the fraction of $M_{ii}$ and the sum of all elements in column $i$.
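
A small sketch of these per-class formulas in NumPy (rows are assumed to be the true classes and columns the predicted classes, matching the definitions above):

import numpy as np

# Toy confusion matrix: M[i, j] = number of observations of true class i predicted as class j
M = np.array([[5, 1, 0],
              [2, 6, 1],
              [0, 1, 4]])

tp = np.diag(M)
recall = tp / M.sum(axis=1)      # divide by the row sums
precision = tp / M.sum(axis=0)   # divide by the column sums
print(recall, precision)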

If the true positives (i.e. the diagonal elements $M_{ii}$) of all classes are indeed bigger compared to the denominator, the recall can indeed be bigger than the precision for all classes.

",2444,,,,,7/7/2020 23:32,,,,1,,,,CC BY-SA 4.0 22388,2,,22378,7/8/2020 0:04,,1,,"

As it says in the documentation, you can simply reverse the order of dimensions by providing the argument batch_first=True when constructing the RNN. Then, the dimensionality will be: (batch, seq, feature), i.e. batch-size times sequence length times the dimension of your input (however dimensional that may be). Then, everything is gonna work as you are used to it.
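
For instance (a minimal sketch with a plain nn.RNN; the shapes are the only point of interest here):

import torch
import torch.nn as nn

rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)

x = torch.randn(32, 10, 8)   # (batch, seq, feature) thanks to batch_first=True
output, h_n = rnn(x)

print(output.shape)          # torch.Size([32, 10, 16]): one output per time step
print(h_n.shape)             # torch.Size([1, 32, 16]): one final hidden state per sequence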

To answer the second part of your question, normally, each sequence in a batch is independent of the others (since they commonly get sampled at random). So, there is no direct dependence between any two inputs in a batch (except, of course, for the fact that they are commonly expected to stem from some underlying shared data generating process which you want to approximate by the RNN).

And a hidden state is commonly maintained per batch element, i.e. there is one hidden state per batch element (i.e. per sequence).
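For illustration, here is a minimal sketch of the batch_first=True layout with torch.nn.RNN (the sizes are arbitrary):

import torch
import torch.nn as nn

rnn = nn.RNN(input_size=10, hidden_size=20, batch_first=True)
x = torch.randn(32, 5, 10)   # (batch, seq, feature)
output, h_n = rnn(x)
print(output.shape)          # torch.Size([32, 5, 20]): one output per time step, per sequence
print(h_n.shape)             # torch.Size([1, 32, 20]): one final hidden state per sequence in the batch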

",37982,,,,,7/8/2020 0:04,,,,4,,,,CC BY-SA 4.0 22390,1,22399,,7/8/2020 0:45,,5,1561,"

I have seen two different representations of neural networks when it comes to bias. Consider a "simple" neural network, with just an input layer, a hidden layer and an output layer. To compute the value of a neuron in the hidden layer, the values of the input-layer neurons are multiplied by the weights, summed, shifted by a bias, and then passed through the activation function. To compute the values in the output layer, you may choose not to have a bias and to have an identity activation function on this layer, so that this last calculation is just "scaling".

Is it standard to have a "scaling" layer? Would it be correct to say that there is a bias associated with each neuron, except those in the input layer (and those in the output layer, when it is a scaling layer)? Although I suppose you could immediately shift any value you're given. Does the input layer have a bias?

I have seen bias represented as an extra unchanging neuron in each layer (except the last) having value 1, so that the weights associated with the connections from this neuron correspond to the biases of the neurons in the next layer. Is this the standard way of viewing bias? Or is there some other way to interpret what bias is that is more closely described by "a number that is added to the weighted sum before activation"?

",38458,,2444,,7/8/2020 12:12,3/31/2022 1:07,Does the input layer have bias and are there bias neurons?,,1,2,,,,CC BY-SA 4.0 22391,1,,,7/8/2020 1:37,,0,63,"

I am learning the expectation-maximization algorithm from the article Semi-Supervised Text Classification Using EM. The algorithm is very interesting. However, the algorithm looks like doing a circular inference here.

I don't know whether I am understanding the description correctly, but what I perceived is:

Step 1: train a NB classifier on the labeled data.
Repeat
Step 2 (E-step): use the trained NB classifier to add labels to the unlabeled data.
Step 3 (M-step): train a NB classifier using the labeled data and the unlabeled data (with the labels from step 2) to get a new classifier.
Until convergence.

Here is the question:

In step 2, the labels are assigned by the classifier trained on the labeled data, which is the only source containing knowledge about correct predictions. And step 3 (the M-step) actually updates the classifier on the labels generated in step 2. The whole process relies on the labeled data, so how can the EM classifier improve the classification? Can someone explain it to me?

",38455,,2444,,7/8/2020 13:53,7/8/2020 13:53,How can the expectation-maximization improve the classification?,,0,3,,,,CC BY-SA 4.0 22392,2,,1742,7/8/2020 2:42,,3,,"

First, in most contexts, machine learning actually refers to traditional/classical machine learning, while deep learning specifically refers to multi-layered neural networks; neural networks are one machine learning approach.

Second, machine learning, especially supervised machine learning, requires engineers to design and predefine features manually, which are used to represent the data numerically. For example, we can represent animals with three features: the number of eyes, the number of legs and the number of heads. The data [2, 4, 1] then represents an animal with 2 eyes, 4 legs and 1 head. In this scenario, the features are extracted by us, because we have knowledge about animals and we think these features can represent them. Deep learning, however, learns the features automatically instead of relying on hand-crafted features.

Third, when someone says machine learning, they are usually talking about algorithms, such as naive Bayes, decision trees, linear regression, etc. Deep learning, however, is more related to frameworks and architectures such as RNNs, CNNs, Transformers, etc.

Fourth, it is possible to start deep learning without knowing machine learning; resources from the internet, like Andrew Ng's courses, usually cover most topics you should know in deep learning. Try searching for Andrew Ng; I think he is really good!

",38455,,,,,7/8/2020 2:42,,,,1,,,,CC BY-SA 4.0 22396,2,,22380,7/8/2020 8:29,,3,,"

Reward in reinforcement learning (RL) is entirely different from a supervised learning (SL) label, but can be related to it indirectly.

In an RL control setting, you can imagine that you had a data oracle that gave you SL training example and label pairs $x_i, y_i$, where $x_i$ represents a state and $y_i$ represents the correct action to take in that state in order to maximise the expected return. For simplicity, I will use $G_t = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}$ for the return here (where $G_t$ and $R_t$ are random variables); there are other definitions, but the argument that follows doesn't change much for them.

You can use the oracle to reduce the RL training process to SL, creating a policy function $\pi(s): \mathcal{S} \rightarrow \mathcal{A}$ learned from a dataset that the oracle output. This clearly relates SL with RL, but how do $x_i, y_i$ from SL relate to $s_t, a_t$ from RL in terms of reward values?

The states can relate directly (as input):

$$x_i \equiv s_t$$

The action from the policy function is more indirect, if you want to see how reward is involved:

$$y_i \equiv \pi^*(s_t) = \text{argmax}_a \mathbb{E}_{A \sim \pi^*}[\sum_{k=0}^{\infty} \gamma^k R_{t+k+1} | S_t=s_t, A_t=a]$$

Note the oracle is represented by the optimal policy function $\pi^*(s_t)$, and the expectation is conditional both on the start conditions of state and action plus following the optimal policy from then on (which is what $A \sim \pi^*$ is representing).

In practice the optimal policy function is unknown when starting RL, so the learning process cannot be reduced to a SL problem. However, you can get close in some circumstances by creating a dataset of action choices made by an expert at the problem. In that case a similar relationship applies - the label (of which action to take) and immediate reward are different things but can be related by noting that the expert behaviour is close to the $\text{argmax}$ over actions of expected sums of future reward.

Another way to view the difference:

  • In SL, the signal from the label is an instruction - "associate these two values". Data is supplied to the learning process by some other independent process, and can be learned from directly

  • In RL, the signal from the reward is a consequence - "this is the value, in context, of what you just did", and needs to be learned from indirectly. Data is not supplied separately from the learning process, but must be actively collected by it - deciding which state, action pairs to learn from is part of the agent's learning task

",1847,,1847,,7/8/2020 10:28,7/8/2020 10:28,,,,2,,,,CC BY-SA 4.0 22398,1,,,7/8/2020 10:51,,0,31,"

Suppose I have a neural network $N$ that produces the output probabilities $[0.3, 0.8]$. Normally, I would specify a threshold of 0.5 for the argmax of the prediction; let's say that the second argument > 0.5 means that the image is attractive, and if both probabilities are less than 0.5 we don't have a very good prediction.

My question is, can we plot this threshold on a ROC curve so we can figure out the best value?

",20934,,2444,,7/8/2020 14:09,7/8/2020 14:09,Best ROC threshold for classifier?,,0,2,,,,CC BY-SA 4.0 22399,2,,22390,7/8/2020 12:37,,4,,"

The purpose of the input layer is just to conceptually represent the input and, in case it is necessary, define the dimensions of the input that the neural network expects. In fact, some neural networks, such as multi-layer perceptrons, expect a fixed-size input, but not all of them: fully convolutional networks can deal with inputs of different dimensions.

The input layer doesn't contain neurons (although in the diagrams that you will come across they are usually represented as circles, like the neurons, and that's probably why you are confused!), so it also does not contain biases, linear transformations, and non-linearities. In fact, in the context of neural networks, you could define a neuron as some unit/entity that performs a linear or non-linear transformation (to which you can add a bias). Note that the hidden and output layers can contain biases because they contain neurons that perform a linear or non-linear transformation.

However, although I have never seen it (or I don't recall having seen it), I would not exclude the existence of an input layer that transforms or augments the inputs before passing them to the next layer. For example, one could implement a neural network that first scales the input to a certain range, and the input layer could do this, although, in practice, this is typically done by some object/class that does not belong to the neural network (e.g. tf.data.Dataset).
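For example, a minimal sketch of that typical approach with tf.data (the images here are random placeholders):

import tensorflow as tf

images = tf.random.uniform((100, 28, 28), maxval=256, dtype=tf.int32)
dataset = (tf.data.Dataset.from_tensor_slices(images)
           .map(lambda x: tf.cast(x, tf.float32) / 255.0)   # scale to [0, 1] before feeding the network
           .batch(32))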

",2444,,2444,,7/8/2020 22:48,7/8/2020 22:48,,,,0,,,,CC BY-SA 4.0 22401,1,22402,,7/8/2020 14:15,,2,79,"

Recently, I followed the open course CS229 (http://cs229.stanford.edu/notes/cs229-notes1.pdf). The lecturer introduces an alternative approach to gradient descent, called the "normal equation", which is as follows:

$$\theta=\left(X^{T} X\right)^{-1} X^{T} \vec{y}$$

The normal equation can directly compute the $\theta$.

If the normal equation works, why do we need gradient descent? What is the trade-off between these two methods?

",38455,,2444,,7/8/2020 14:23,7/8/2020 14:31,"If the normal equation works, why do we need gradient descent?",,1,0,,,,CC BY-SA 4.0 22402,2,,22401,7/8/2020 14:31,,1,,"

That normal equation is sometimes called the closed-form solution.

The short answer to your question is that the closed-form solution may be impractical or unavailable in certain cases or the iterative numerical method (such as gradient descent) may be more efficient (in terms of resources).

This answer gives you more details and an example.
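To make the trade-off concrete, here is a minimal sketch on synthetic data: the normal equation is a single, relatively expensive linear-algebra solve, while gradient descent performs many cheap iterations.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=200)

# closed-form (normal equation): one O(n d^2 + d^3) solve
theta_closed = np.linalg.solve(X.T @ X, X.T @ y)

# gradient descent: many cheap O(n d) iterations
theta = np.zeros(3)
lr = 0.01
for _ in range(2000):
    grad = X.T @ (X @ theta - y) / len(y)
    theta -= lr * grad

print(theta_closed, theta)   # both should be close to [1, -2, 0.5]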

",2444,,,,,7/8/2020 14:31,,,,0,,,,CC BY-SA 4.0 22403,2,,22381,7/8/2020 15:06,,1,,"

As you said, generally the most important one is reward per episode. If this isn't increasing overall, there's a problem (of course this metric can fluctuate, I mean to say that macroscopically it should increase).

Policy loss (I assume you mean the "actor loss"?) is generally harder to interpret. You should think of this more as a source of gradients and not necessarily a good indicator of how well your agent is performing.

I'm not really sure why you'd be monitoring the value during training. Value loss, however, is basically equivalent to value loss in value based methods like Q-learning, for example. So this one should be decreasing overall. Otherwise the baselines you compute to reduce variance in the policy gradient will either be less effective, or even harmful.

Entropy is a nice quantity to measure, because it's a good indicator of how much your agent is exploring. If you see that your agent is not achieving high returns and the entropy is really low, this means that your policy has converged to a suboptimal one. If the entropy is really high, this means the agent is acting fairly randomly (so it's basically exploring a lot). Ideally the entropy should decrease over time, so your policy becomes more deterministic (less exploration) as it reaches an optimum.

",37829,,,,,7/8/2020 15:06,,,,0,,,,CC BY-SA 4.0 22405,2,,16033,7/8/2020 16:08,,1,,"

The brat annotation tool provides some useful scripts for converting annotation formats, including the standoff format, to CoNLL. Please see this source code from the brat GitHub repo for converting the .ann and .txt inputs (standoff format) to a .conll file: https://github.com/nlplab/brat/blob/master/tools/anntoconll.py

",38474,,38474,,7/15/2020 0:07,7/15/2020 0:07,,,,0,,,,CC BY-SA 4.0 22406,2,,22367,7/8/2020 19:12,,1,,"

The regularization terms are applied to the loss function by default. However, their gradients do appear in the update step, since the gradient of the loss appears in the update step.
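A minimal sketch for L2 regularisation (with made-up numbers), showing how the regulariser's gradient enters the update through the loss gradient:

import numpy as np

lam, lr = 1e-4, 1e-2
w = np.random.randn(10)
grad_data = np.random.randn(10)          # stand-in for the gradient of the data loss w.r.t. w
grad_total = grad_data + 2 * lam * w     # gradient of L_data + lam * ||w||^2
w = w - lr * grad_total                  # the regulariser's gradient appears in the update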

",32621,,,,,7/8/2020 19:12,,,,3,,,,CC BY-SA 4.0 22408,1,,,7/9/2020 1:38,,1,259,"

When I label images for semantic segmentation (using u-net, if that matters), is labeling the background (anything I am not interested in) necessary? Will it improve the network's performance?

",38295,,2444,,7/9/2020 17:18,7/9/2020 17:18,Is it necessary to label the background when generating the labelled dataset for semantic segmentation?,,0,3,,,,CC BY-SA 4.0 22409,1,,,7/9/2020 3:45,,1,63,"

For practical applications, like autonomous driving, depth perception is needed to make useful decisions.

How is this normally addressed without using a LIDAR or RADAR unit (but using a camera)?

",32390,,2444,,7/9/2020 11:48,7/9/2020 11:48,How is depth perception (e.g. in autonomous driving) addressed without using a Lidar or Radar unit?,,0,6,,,,CC BY-SA 4.0 22410,1,,,7/9/2020 5:10,,1,92,"

Given a hand-drawn shape, I'd like to generate the corresponding symmetrical polished shapes such as circle, rectangle, triangle, trapezoid, square, parallelogram, etc.

A short video demonstration

Below we can see a parallelogram, a trapezoid, a triangle and a circle. I was wondering how I can transform them into symmetrical, polished shapes?

At first, I tried a simple approach with traditional computer vision algorithms, using OpenCV (no neural networks involved), by counting the number of corners, but it failed miserably, since there are many edge cases in a user's doodles.

So, I was thinking of delving into CNNs, specifically U-Net for segmentation.

Can somebody please give me some suggestions on how to approach this kind of problem? I'd like to read some relevant articles and code about this subject for getting a better grasp of this kind of problem.

",25338,,2444,,7/9/2020 11:51,11/28/2022 6:07,How to quickly change hand-drawn shapes to symmetrical polished shapes?,,1,2,,,,CC BY-SA 4.0 22412,1,,,7/9/2020 12:51,,2,107,"

My problem has a single state and an infinite number of actions on a certain interval (0,1). After quite some time of googling, I found a few papers about an algorithm called the zooming algorithm, which can solve problems with a continuous action space. However, my implementation is bad at exploiting. Therefore, I'm thinking about adding an epsilon-greedy kind of behavior.

Is it reasonable to combine different methods?

Do you know other approaches to my problem?

",38494,,,,,7/9/2020 12:51,Solving multi-armed bandit problems with continuous action space,,0,0,,,,CC BY-SA 4.0 22413,1,,,7/9/2020 15:04,,3,79,"

For an upcoming project, I am trying to build a neural network for classifying text from scratch, without the use of libraries. This requires an embedding layer, or a way to convert words to some vector representation. I understand the gist, but I can't find any deep explanations or tutorials that don't start with importing TensorFlow. All I'm really told is that it works by context using a few surrounding words, but I don't understand exactly how.

Is it much different from a classic network, with weights and biases? How does it figure out the loss?

If someone could point me towards a guide to how these things work exactly I would be very grateful.

",38497,,2444,,7/9/2020 19:57,8/1/2022 16:18,How can I create an embedding layer to convert words to a vector space from scratch?,,1,0,,,,CC BY-SA 4.0 22414,1,,,7/9/2020 15:05,,1,43,"

Sample-based algorithms, like Monte Carlo Algorithms and TD-Learning, are often presented as useful since they do not require a transition model.

Assuming I do have access to a transition model, are there any reasons one might want to use sample-based methods instead of performing a full Bellman update?

",12201,,2444,,7/9/2020 17:23,7/9/2020 17:23,"If the transition model is available, why would we use sample-based algorithms?",,1,0,,,,CC BY-SA 4.0 22415,2,,22414,7/9/2020 15:09,,1,,"

A full Bellman update can be intractable. For instance, if your state space or action space are continuous, the full Bellman update is intractable. You can try to solve this by discretizing, but if your state space is large this will also be intractable.
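As a minimal sketch of why this blows up, here is one full Bellman backup on a small, randomly generated tabular MDP; each backup sums over every (state, action, next state) triple, which is exactly what becomes infeasible for huge or continuous state spaces:

import numpy as np

n_states, n_actions, gamma = 100, 4, 0.99
P = np.random.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = np.random.randn(n_states, n_actions)
V = np.zeros(n_states)

# one full backup costs O(|S|^2 |A|) operations
V_new = np.max(R + gamma * (P @ V), axis=1)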

",37829,,,,,7/9/2020 15:09,,,,4,,,,CC BY-SA 4.0 22416,1,22420,,7/9/2020 16:49,,3,101,"

I'm training a robot to walk to a specific $(x, y)$ point using TD3, and, for simplicity, I have something like reward = distance_x + distance_y + standing_up_straight, and then it adds this reward to the replay buffer. However, I think that it would be more efficient if it can break the reward down by category, so it can figure out "that action gave me a good distance distance_x, but I still need work on distance_y and standing_up_straight".

Are there any existing algorithms that add rewards this way? Or have these been tested and proven not to be effective?

",34441,,2444,,12/20/2020 16:40,1/13/2021 13:19,Can rewards be decomposed into components?,,2,0,,,,CC BY-SA 4.0 22417,2,,22413,7/9/2020 17:16,,1,,"

Word2vec embeddings are trained using a simple auto-encoder model that takes a word and tries to predict one word from the window of surrounding words.

You could define it like this:

from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

num_of_words = 50000
# one-hot encoded word
inputs = Input(shape=(num_of_words,))
# you could use a non-linear activation here
w2v = Dense(300, activation="linear")(inputs)
outputs = Dense(num_of_words, activation="softmax")(w2v)
model = Model(inputs, outputs)

But in practice, the model is redefined and takes two words as input and predicts the next words. It outputs a probability score for all the words it knows (the model’s “vocabulary”, which can range from a few thousand to over a million words).

It is trained in both directions: from the beginning of a sentence to its end, and in reverse. The loss used is categorical_crossentropy. A detailed explanation can be found here: http://jalammar.github.io/illustrated-word2vec/

",38475,,30426,,8/1/2022 16:18,8/1/2022 16:18,,,,2,,,,CC BY-SA 4.0 22419,1,,,7/9/2020 18:34,,0,57,"

I am a complete beginner in the area. I implemented my first neural network following the online book "Neural Networks and Deep Learning" by Michael Nielsen. It works fine for classifying handwritten digits, achieving ~9500/10000 accuracy on the test data. I am trying to train the network to determine whether $x > 5$, where $x$ is in the interval $[0,10)$, which should be a much simpler task than classifying handwritten digits. However, no learning happens and the accuracy over the test data stays exactly the same with every epoch. I tried different structures and different learning rates, but the same thing always happened. Here is the code I wrote that uses libraries in Nielsen's book:

import networkCopy
import numpy as np
# Creating training data
x = []
y = []
for n in range(1000):
    to_add = 100*np.random.rand()
    x.append(np.array([to_add]).reshape(1,1))
    y.append(np.array([float(to_add > 50)]).reshape(1,1))
training_data = zip(x, y)
# Creating test data
tx = []
ty = []
for n in range(1000):
    to_add = 100*np.random.rand()
    tx.append(np.array([to_add]).reshape(1,1))
    ty.append(np.array([float(to_add > 50)]).reshape(1,1))
test_data = zip(tx, ty)

# Creating and training the network
net = networkCopy.Network([1, 5, 1])  # [1, 5, 1] contains the number of neurons for each layer
net.SGD(training_data, 300, 100, 5.0, test_data=test_data)
# 300 is the number of epochs, 100 is the mini batch size
#5.0 is the learning rate 

The way I generated the data may not be optimal; it is an ad hoc solution to put the data in the proper form for the network. This is my first question, so I apologize for any mistakes that might be in its format.

",38499,,,,,7/9/2020 18:34,Neural Network is not learning a very simple task,,0,2,,,,CC BY-SA 4.0 22420,2,,22416,7/9/2020 18:36,,5,,"

If I understood correctly, you're looking at Multi-Objective Reinforcement Learning (MORL). Keep in mind, however, that many scientists will often follow the reward hypothesis (Sutton and Barto), which says that

All of what we mean by goals and purposes can be well thought of as the maximization of the expected value of the cumulative sum of a received scalar signal (called reward)

The argument for a scalar reward could be that, even if you define your policy using some objective vector (as in MORL), you will find a Pareto front of optimal policies, some of which favour one component of the objective over the others, leaving you (the scientist) responsible for making the ultimate decision concerning the objectives' trade-off, thus eventually collapsing the reward objective back into a scalar.

In your example, there might be two different "optimal" policies: one that results in a very high value of distance_x but relatively poor distance_y, and one that favours distance_y instead. It'll be up to you to find the sweet spot and collapse the reward function back to a scalar.
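A minimal sketch of that final collapse (with made-up numbers and weights):

import numpy as np

reward_vector = np.array([0.8, 0.3, 1.0])   # e.g. [distance_x, distance_y, standing_up_straight]
weights = np.array([0.4, 0.4, 0.2])         # the trade-off you, the scientist, have to pick
reward = float(weights @ reward_vector)     # scalar reward handed to the standard RL machinery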

",22835,,,,,7/9/2020 18:36,,,,0,,,,CC BY-SA 4.0 22421,1,,,7/9/2020 19:37,,1,43,"

Isn't it true that using max over a softmax will be much slower because there is not a smooth gradient?

Max basically zeros out the gradients of all the non-maximum values. Especially at the beginning of training, this means it is zeroing out potentially useful features simply because of random weight initialization. Wouldn't this drastically slow down the training in the beginning?

",21158,,2444,,7/9/2020 19:58,7/9/2020 19:58,Isn't it true that using max over a softmax will be much slower because there is not a smooth gradient?,,0,1,,,,CC BY-SA 4.0 22422,1,22435,,7/9/2020 20:50,,1,238,"

I am training a neural network that takes an input (H, W, 3) and has the output of size (H', W', C). Now, to augment my dataset, since I only have 45k images, I am using the following in my custom data generator

import cv2
import numpy as np
import tensorflow as tf

def Generator():
    # note: '\' must be escaped as '\\' (or use os.path.join), and '20m' was a typo for the rotation range 20
    img = cv2.imread(trainDir + '\\' + imgpath)
    img = tf.keras.preprocessing.image.random_rotation(
        img, 20, row_axis=0, col_axis=1, channel_axis=2)

    output_mask = np.load(trainDir + '\\' + maskpath)

    yield (img / 255 - .5, output_mask)

Since I am rotating my input images and the output_masks are generated from information about the input (specifically, heat maps around the joint locations) do I also need to rotate the masks as well?

",32434,,2444,,7/10/2020 11:58,7/10/2020 11:58,"Do I need to rotate the masks, if I also rotate the images and the masks are generated from the input?",,1,0,,,,CC BY-SA 4.0 22423,1,22425,,7/9/2020 22:12,,0,106,"

I've trained a robot to walk in a straight line for as long as it can (using TD3), and now I'm using that pre-trained model for two new models with separate purposes: 1. Walk to a specific point and halt (adding target position to the NN inputs); 2. Walk straight at a specified velocity (adding a target velocity to NN inputs).

Now let's say I retrain the original model again to walk properly after changing, say, the mass of the robot. How can I approach "forwarding" this update to the two transfer-learned models? The purpose of this is to minimize re-training time for all future models transfer-learned from the original.

(What strikes me as particularly challenging is the fact that the input layer of the transfer-learned models have additional features, so this may re-wire the majority of the NN, making a "forwarded update" completely incompatible...)

",34441,,,,,7/10/2020 9:49,"How to ""forward"" updated NN model to a transferred model?",,1,0,,,,CC BY-SA 4.0 22424,1,22464,,7/9/2020 22:34,,2,79,"

I have recently done some work on the application of neural networks to time series forecasting, and I treated this as a supervised learning (regression) problem. I have come across the suggestion of treating this problem as an unsupervised, semi-supervised, or reinforcement learning problem. The ones who made this suggestion didn't know how to explain this approach, and I haven't found any paper on it, so I now find myself trying to figure it out without any success. To my understanding:

Unsupervised learning problems (clustering and segmentation reduction) and semi-supervised learning problems (semi-supervised clustering and semi-supervised classification) can be used to decompose the time series, but not to forecast it.

Reinforcement learning problems (model-based and model-free, on/off-policy) are about decision-making, not forecasting.

Is it possible to treat time series forecasting with neural networks as an unsupervised, semi-supervised, or reinforcement learning problem? How is it done?

",38501,,2444,,7/11/2020 11:23,8/10/2020 22:02,Should forecasting with neural networks only be treated as a supervised learning (regression) problem?,,1,0,,,,CC BY-SA 4.0 22425,2,,22423,7/9/2020 23:14,,1,,"

I think there is no simple way to transfer knowledge changes between different models.

If you take your initial model and create a new version of it which you use to learn some other task (like "Walk to a specific location"), then the values copied from the first (original) model change in the second model. From that moment on, training the former model on another task will have different effects on its weights than continuing the training of the second model, whose parameters have already been changed.

Consider, for example, that you had changed the mass of the robot and trained the initial model on that new task already. Then, if you took all the re-trained parameters from the first model and implanted them into the second model (trained on walking to a certain location), then you would essentially overwrite the additional knowledge the second model had gained already during its initial transfer-learning-process (not even taking into consideration any additional parameters appended to the list of parameters in the second model).

So, you will have to re-train all three models (the original one and the two transfer-learning models) if you change the mass of the robot.

Edit:

There might be an option to apply the same knowledge changes to another model architecture if you refrain from pure transfer learning. This can be achieved with a more modularized model architecture.

Consider that you train your first model on walking straight head. Let's call this model $m_{walk}$.

Then you intend to recycle $m_{walk}$ for another task, like walking straight ahead to a given location. Such a model architecture could be realized in two ways:

  1. You apply real transfer learning, retraining a copy of $m_{walk}$ to walk straight ahead until it reaches a certain location
  2. You take $m_{walk}$, don't change it, but add a second model (let's call it $m_{navigator}$), which is trained on predicting $go$ vs. $stop$.

In the second case, your overall model architecture (let's call it $m_{go\_to}$) consists of both one model used for walking (i.e. $m_{walk}$) and one model architecture which is used for predicting $go$ vs. $stop$ (i.e. $m_{navigator}$). The idea then is that the robot executes the actions suggested by $m_{walk}$ until $m_{navigator}$ suggests stopping, upon which prediction the suggestions by $m_{walk}$ will be ignored.

Then, whenever you retrain your model $m_{walk}$ (e.g. because the mass of some robot changes), you can simply apply the changes to $m_{go\_to}$ by replacing $m_{walk}$ by a new version, leaving the rest of $m_{go\_to}$ intact.

If you generalize $m_{walk}$ to not only walk straight ahead, but also to take turns etc., and you generalize $m_{navigator}$ to predict going $left$, $right$, $straight\ ahead$, $back$, or $stop$ (the latter being predicted when a certain destination has been reached), you can generalize $m_{go\_to}$ to walk wherever you want it to go.
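A minimal sketch of how such a modular $m_{go\_to}$ could dispatch between the two sub-models (all names here are hypothetical):

def m_go_to(state, m_walk, m_navigator):
    command = m_navigator(state)       # e.g. "go" or "stop"
    if command == "stop":
        return "stand_still"           # ignore the walking model once the destination is reached
    return m_walk(state)               # otherwise execute the low-level walking action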

",37982,,37982,,7/10/2020 9:49,7/10/2020 9:49,,,,0,,,,CC BY-SA 4.0 22426,1,22427,,7/10/2020 0:37,,3,1233,"

Nowadays, the softmax function is widely used in deep learning and, specifically, classification with neural networks. However, the origins of this term and function are almost never mentioned anywhere. So, which paper introduced this term?

",2444,,,,,7/13/2020 22:29,"Which paper introduced the term ""softmax""?",,1,0,,,,CC BY-SA 4.0 22427,2,,22426,7/10/2020 0:37,,4,,"

The paper that appears to have introduced the term "softmax" is Training Stochastic Model Recognition Algorithms as Networks can Lead to Maximum Mutual Information Estimation of Parameters (1989, NIPS) by John S. Bridle.

As a side note, the softmax function (with base $b = e^{-\beta}$)

$$\sigma (\mathbf {z} )_{i}={\frac {e^{-\beta z_{i}}}{\sum _{j=1}^{K}e^{-\beta z_{j}}}}{\text{ for }}i=1,\dotsc ,K {\text{ and }}\mathbf {z} =(z_{1},\dotsc ,z_{K})\in \mathbb {R} ^{K}$$

is very similar to the Boltzmann (or Gibbs) distribution

$$ p_i=\frac{e^{- {\varepsilon}_i / k T}}{\sum_{j=1}^{M}{e^{- {\varepsilon}_j / k T}}} $$

which was formulated by Ludwig Boltzmann in 1868, so the idea and formulation of the softmax function is quite old.

",2444,,2444,,7/13/2020 22:29,7/13/2020 22:29,,,,0,,,,CC BY-SA 4.0 22428,2,,21761,7/10/2020 0:54,,1,,"

The threshold is typically chosen empirically, so there is no exact answer.

It's dependent on how many corners you wish to select, and how strict you want the detection, which could depend on the use case, the dataset, and the block size of the algorithm. If you’re not sure what to choose for the threshold, I would suggest using a scheme relative to your images to deduce the best threshold, based on what performs best for your use case.

For example, let the criteria for selection, or scoring function, be $R$, which in the Shi-Tomasi case is $R = min(\lambda_1, \lambda_2)$. We could choose some value $q \in (0,1)$, so that the threshold becomes

$t = q\max_p{R}$,

where the max $R$ is calculated over all points $p$, an approach similar to OpenCV Good Features to Track.
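A minimal sketch of this relative-threshold scheme with OpenCV's Shi-Tomasi detector, where qualityLevel plays the role of $q$ (the image here is synthetic):

import cv2
import numpy as np

gray = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(gray, (50, 50), (150, 150), 255, -1)   # synthetic image with four obvious corners

q = 0.01   # relative threshold t = q * max(R)
corners = cv2.goodFeaturesToTrack(gray, maxCorners=100, qualityLevel=q, minDistance=10)
print(corners.reshape(-1, 2))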

You could examine the corners detected and adjust $q$ based on how stringently you wanted to select corners. If you don't want to waste the time manually fine-tuning the threshold, you could check out some automated approaches in the literature, such as Automated thresholding for low-complexity corner detection

",38505,,,,,7/10/2020 0:54,,,,0,,,,CC BY-SA 4.0 22429,1,,,7/10/2020 1:56,,1,148,"

I implemented a MCES for 2048 (the game), with a quality function implemented as a neural net of a single layer.

The starts are created with 6 cells filled with values between 64 and 1024, two cells set to 1024, and the other 8 cells filled with 0. The game then progresses until the AI loses or wins, and another start is created.

After 10 wins, the max cell created in the start is reduced by half. Thus, after the first 10 wins, the max cell created in the start is 512.

The issue I am having is that after the first 10 wins, the AI gets stuck, it can run around 3 million steps but doesn't get any more wins.

How should I create the starts for it to actually learn?

Code for reward (complete code here):

        ArrayList<DataSet> dataSets = new ArrayList<>();
        double gain = 0;

        for(int i = rewards.size()-1; i >= 0; i--) {
            gain = gamma * gain + rewards.get(i);

            double lerpGain = reward(gain);
            INDArray correctOut = output.get(i).putScalar(actions.get(i).ordinal(), lerpGain);
            dataSets.add(new DataSet(input.get(i), correctOut));
        }

        Qnetwork.fit(DataSet.merge(dataSets));  

Code:

public class SimpleAgent {
    private static final Random random = new Random(SEED);

    private static final MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
            .seed(SEED)
            .weightInit(WeightInit.XAVIER)
            .updater(new AdaGrad(0.5))
            .activation(Activation.RELU)
            .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
            .weightDecay(0.0001)
            .list()
            .layer(new DenseLayer.Builder()
                    .nIn(16).nOut(4)
                    .build())
            .layer(new OutputLayer.Builder()
                    .nIn(4).nOut(4)
                    .lossFunction(LossFunctions.LossFunction.SQUARED_LOSS)
                    .build())
            .build();


    public SimpleAgent() {
        Qnetwork.init();
        ui();
    }

    private static final double gamma = 0.02;

    private final ArrayList<INDArray> input = new ArrayList<>();
    private final ArrayList<INDArray> output = new ArrayList<>();
    private final ArrayList<Double> rewards = new ArrayList<>();
    private final ArrayList<GameAction> actions = new ArrayList<>();

    private MultiLayerNetwork Qnetwork = new MultiLayerNetwork(conf);
    private GameEnvironment oldState;
    private GameEnvironment currentState;
    private INDArray oldQuality;
    private double epsilon = 1;

    public void setCurrentState(GameEnvironment currentState) {
        this.currentState = currentState;
    }

    public GameAction act() {
        if(oldState != null) {
            double reward = currentState.points - oldState.points;

            if (currentState.lost) {
                reward = 0;
            }

            input.add(oldState.boardState);
            output.add(oldQuality);
            rewards.add(reward);

            epsilon -= (1 - 0.01) / 1000000.;
        }

        oldState = currentState;
        oldQuality = Qnetwork.output(currentState.boardState);

        GameAction action;

        if(random.nextDouble() < 1-epsilon) {
            action = GameAction.values()[oldQuality.argMax(1).getInt()];
        } else {
            action = GameAction.values()[new Random().nextInt(GameAction.values().length)];
        }

        actions.add(action);

        return action;
    }

    private final int WINS_TO_NORMAL_GAME = 100;
    private int wonTimes = 0;

    public void setHasWon(boolean won) {
        if(won) {
            wonTimes++;
        }
    }

    public boolean playNormal() {
        return wonTimes > WINS_TO_NORMAL_GAME;
    }

    public boolean shouldRestart() {
        if (currentState.lost || input.size() == 20) {
            ArrayList<DataSet> dataSets = new ArrayList<>();
            double gain = 0;

            for(int i = rewards.size()-1; i >= 0; i--) {
                gain = gamma * gain + rewards.get(i);

                double lerpGain = reward(gain);
                INDArray correctOut = output.get(i).putScalar(actions.get(i).ordinal(), lerpGain);
                dataSets.add(new DataSet(input.get(i), correctOut));
            }

            Qnetwork.fit(DataSet.merge(dataSets));

            input.clear();
            output.clear();
            rewards.clear();
            actions.clear();

            return true;
        }

        return false;
    }

    public Game2048.Tile[] generateState() {
        double lerped = lerp(wonTimes, WINS_TO_NORMAL_GAME);
        int filledTiles = 8;

        List<Integer> values = new ArrayList<>(16);

        for (int i = 0; i < 16-filledTiles; i++) {
            values.add(0);
        }

        for (int i = 16-filledTiles; i < 14; i++) {
            values.add((int) (7-7*lerped) + random.nextInt((int) (2- 2*lerped)));
        }

        values.add((int) ceil(10-10*lerped));
        values.add((int) ceil(10-10*lerped));

        Collections.shuffle(values);

        return values
                .stream()
                .map((value) -> (value == 0? 0: 1 << value))
                .map(Game2048.Tile::new)
                .toArray(Game2048.Tile[]::new);
    }

    private static double reward(double x) {
        return x/ 2048;
    }

    private static double lerp(double x, int maxVal) {
        return x/maxVal;
    }

    private void ui() {
        UIServer uiServer = UIServer.getInstance();
        StatsStorage statsStorage = new InMemoryStatsStorage();
        uiServer.attach(statsStorage);
        Qnetwork.setListeners(new StatsListener(statsStorage));
    }
}
",14892,,,,,7/13/2020 18:13,Monte Carlo Exploring Starts broke for 2048 game AI,,0,3,,,,CC BY-SA 4.0 22430,1,,,7/10/2020 2:23,,0,63,"

I want to use a neural network to predict the refractive index of a solution. My thinking is, instead of immediately training on many samples, I will first find the 'ultimate resolution' of the network given the experimental apparatus I am using. What I mean is I will make two different solutions which have refractive indices near the middle of the range of which I am interested. Then I will train the network to classify these two solutions based on reflectance measured from the solution. If it works, say with at least 95% accuracy, then I will make two different solutions in which the difference in refractive index is smaller than before. I will repeat this until the ANN classifies, say below 95%.

Will this method of finding the 'resolution' by classification extrapolate well to regression with many more training examples?

",14811,,14811,,7/13/2020 0:24,7/13/2020 0:24,Finding the 'ultimate resolution' of an ANN,,0,4,0,,,CC BY-SA 4.0 22432,2,,11226,7/10/2020 3:47,,0,,"

As far as I understand, the concept of non-Euclidean space doesn't, by itself, bring ordinality or hierarchy among the features, compared to data formed in Euclidean space.

The difference between these two techniques is not remarkable for discriminative tasks like classification. But, for generative modeling, non-Euclidean techniques help in defining the latent manifold for the given data distribution. This can further help in traversing the manifold of that distribution (to generate similar samples from the same underlying manifold), even with $n$ degrees of freedom in the latent space. This is not possible with Euclidean techniques: one cannot fully traverse the manifold, or generate samples from (or outside) it, with minimal changes in the Euclidean space. More precisely, one can, but the result will only appear as noisy data.

",28451,,2444,,7/10/2020 11:51,7/10/2020 11:51,,,,1,,,,CC BY-SA 4.0 22433,1,22438,,7/10/2020 4:10,,1,195,"

I actually went through the Keras' batch normalization tutorial and the description there puzzled me more.

Here are some facts about batch normalization that I read recently and want a deep explanation on it.

  1. If you froze all layers of neural networks to their random initialized weights, except for batch normalization layers, you can still get 83% accuracy on CIFAR10.

  2. When setting the trainable layer of batch normalization to false, it will run in inference mode and will not update its mean and variance statistics.

",28451,,2444,,7/12/2020 13:26,8/11/2020 14:02,How does batch normalisation actually work?,,3,1,,,,CC BY-SA 4.0 22434,2,,22433,7/10/2020 6:08,,0,,"

A batch normalisation layer is like a standard FC layer, but instead of learning weights and biases, you learn means and variances and scale the whole layer's activations by those means and variances.

Fact 1:

Because it behaves just like a normal layer, and can learn, with the right structure it will learn to get a high enough accuracy.

Fact 2

Disabling learning on a batch norm layer is just like disabling learning on any other layer. It will not updated any of its parameters, and in this case the parameters are the means and variances, and so these will not be updated.

",26726,,,,,7/10/2020 6:08,,,,0,,,,CC BY-SA 4.0 22435,2,,22422,7/10/2020 8:08,,1,,"

Yes! This is crucial.

If you rotate your input images for segmentation, you need to rotate the output masks as well. Otherwise the loss of your network will not be correctly calculated and your network will not learn how to generalize to rotated input images.

If you use keras, you can use two ImageDataGenerator classes, one for the images and one for the masks, with the same random seed and augmentation parameters. It looks something like this:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

data_gen_args = dict(rotation_range=45)

image_datagen = ImageDataGenerator(**data_gen_args)
mask_datagen = ImageDataGenerator(**data_gen_args)

image_generator = image_datagen.flow(train, seed=SEED)
mask_generator = mask_datagen.flow(y_train, seed=SEED)
train_generator = zip(image_generator, mask_generator)

model.compile(...)
model.fit_generator(train_generator, ...)
",37120,,,,,7/10/2020 8:08,,,,1,,,,CC BY-SA 4.0 22436,1,22439,,7/10/2020 8:33,,1,57,"

I'm building a model, where, from a feature set A, I want to predict a target set C. I need to understand if another feature set B, together with A, can improve my model performances, instead of using only A.

Now I want to check if I can predict B directly from A, since, in my understanding, this would mean that info on B is already inside A.

If I get good predictions when testing the model A -> B, is it true then that adding B to A in predicting C is completely useless?

And furthermore, are there smarter ways to decide if/when a feature is useless?

",36504,,2444,,1/23/2022 9:09,1/23/2022 9:09,When is adding a feature useless?,,1,0,,,,CC BY-SA 4.0 22438,2,,22433,7/10/2020 8:52,,0,,"

I'm not sure how, by training just the batch normalisation layers, you can get an accuracy of 83%. The batch normalisation layer parameters $\gamma^{(k)}$ and $\beta^{(k)}$ are used to scale and shift the normalised batch outputs. These parameters are learnt during the back-propagation step. For the $k$th layer, $$y^{(k)} = \gamma^{(k)}\hat{x}^{(k)} + \beta^{(k)}$$ The scaling and shifting are done to ensure that each layer still outputs a non-linear activation. Because batch normalisation normalises outputs to roughly zero mean and unit variance, some activation functions are approximately linear within that range (e.g. $\tanh$ and sigmoid).

Regarding the second fact, however, the difference between training and inference mode is this. In training mode, the statistics of each batch norm layer, $\mu_B$ and $\sigma^2_B$, are computed. These statistics are used to scale and normalise the outputs of the batch norm layer to have 0 mean and unit variance. At the same time, the current batch statistics are also used to update the running mean and running variance of the population. $\mu_B[t]$ represents the current batch mean and $\sigma^2_B[t]$ the current batch variance, while $\mu'_B[t]$ and $\sigma'^2_B[t]$ represent the accumulated mean and variance from the previous batches. The running mean and variance of the population are then updated as $$\mu'_B[t]=\mu'_B[t] \times momentum + \mu_B[t] \times (1-momentum)$$ $$\sigma'^2_B[t]=\sigma'^2_B[t] \times momentum + \sigma^2_B[t] \times (1-momentum)$$

In inference mode, the batch normalisation uses the running mean and variance computed during training mode to scale and normalise inputs in the batch norm layer instead of the current batch mean and variance.
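A minimal NumPy sketch of both modes described above (the momentum and epsilon values are just typical defaults):

import numpy as np

def batch_norm(x, gamma, beta, running_mean, running_var,
               momentum=0.9, eps=1e-5, training=True):
    if training:
        mu, var = x.mean(axis=0), x.var(axis=0)            # current batch statistics
        running_mean = momentum * running_mean + (1 - momentum) * mu
        running_var = momentum * running_var + (1 - momentum) * var
    else:
        mu, var = running_mean, running_var                # use accumulated statistics
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta, running_mean, running_var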

",32780,,,,,7/10/2020 8:52,,,,0,,,,CC BY-SA 4.0 22439,2,,22436,7/10/2020 8:59,,1,,"

Now I want to check if I can predict B directly from A, since, in my understanding, this would mean that info on B is already inside A.

This will help inform you how much redundancy there is between A and B. However, even if you can predict B with 100% accuracy from A, you may still be better off using A+B (instead of A alone) to predict C.

If I get good predictions when testing the model A -> B, is it true then that adding B to A in predicting C is completely useless ?

It is an indicator that adding B probably won't make great improvements to your prediction of C.

The only way to be sure is to make the model that uses A+B and compare its performance against a model that uses only A. If collecting B costs time or other resources, then perform this check by limiting both models to only learn from the subset of data where you have all of A, B, C available.
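A minimal sketch of such a comparison on synthetic data (here B is deliberately made almost redundant with A, so the two scores should end up close):

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_A = rng.normal(size=(500, 5))                            # feature set A
X_B = X_A[:, :2] + rng.normal(scale=0.1, size=(500, 2))    # feature set B, mostly redundant with A
y = X_A @ rng.normal(size=5)                               # target C
X_AB = np.hstack([X_A, X_B])

model = RandomForestRegressor(n_estimators=100, random_state=0)
print("A only:", cross_val_score(model, X_A, y, cv=5).mean())
print("A + B :", cross_val_score(model, X_AB, y, cv=5).mean())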

And furthermore, are there smarter ways to decide if/when a feature is useless ?

Another thing you can do is to try and predict C from B alone. It doesn't need to score well, but may indicate that something useful is in the data if you get better than chance results. Scoring badly unfortunately doesn't rule it out for working well in combination with A.

Generally, if you cannot reason it clearly one way or another from theory, the accepted method is to build variations of your model with and without a feature and test them. You do have to be aware of the chances of spurious correlation though, so usually you need to have some basis or motivation for considering any new feature, from your domain knowledge that applies to the model.

",1847,,1847,,7/10/2020 9:21,7/10/2020 9:21,,,,0,,,,CC BY-SA 4.0 22440,1,,,7/10/2020 10:14,,2,177,"

I've been using matterport's Mask R-CNN to train on a custom dataset. However, there seem to be some parameters that I failed to define correctly, because on practically all of the images, the bottom or top of the object's mask is cut off:

As you can see, the bounding box is fine since it covers the whole blade, but the mask seems to suddenly stop in a horizontal line on the bottom.

On the other hand, there is a stair-like effect on the masks of larger and curvier objects, such as this one (in addition to the bottom and top cut-offs):

  • The original images are downscaled to IMAGE_MIN_DIM = IMAGE_MAX_DIM = 1024 using the "square" mode.
  • USE_MINI_MASK is set to true with MINI_MASK_SHAPE = (512, 512) (somehow, if I turn it off, RAM gets filled and training crashes).
  • RPN_ANCHOR_SCALES = (64, 128, 256, 512, 1024) since the objects occupy a large space of the image.

It doesn't feel like the problem comes from the amount of training. These two predictions come from 6 epochs of 7000 steps per epoch (which took around 17 hours), and the problem appears at an early stage and persists across all the epochs.

I posted the same question on Stack Overflow, and an answer pointed out that this issue is common when using Mask R-CNN. It also suggested looking at PointRend, an extension of Mask R-CNN that addresses this issue.

Nevertheless I feel like I could still optimize my model and use the full potential of mask r-cnn before looking for an alternative.

Any idea on what changes to make ?

",19155,,19155,,7/13/2020 7:40,7/13/2020 7:40,Inaccurate masks with Mask-RCNN: Stairs effect and sudden stops,,0,0,,,,CC BY-SA 4.0 22443,1,,,7/10/2020 14:18,,1,195,"

What is the idea behind double DQN?

The target in double DQN is computed as follows

$$ Y_{t}^{\text {DoubleQ }} \equiv R_{t+1}+\gamma Q\left(S_{t+1}, \underset{a}{\operatorname{argmax}} Q\left(S_{t+1}, a ; \boldsymbol{\theta}_{t}\right) ; \boldsymbol{\theta}_{t}^{\prime}\right), $$ where

  • $\boldsymbol{\theta}_{t}^{\prime}$ are the weights of the target network
  • $\boldsymbol{\theta}_{t}$ are the weights of the online value network
  • $\gamma$ is the discount factor

On the other hand, the target in DQN is computed as

$$Y_{t}^{\mathrm{DQN}} \equiv R_{t+1}+\gamma \max _{a} Q\left(S_{t+1}, a ; \boldsymbol{\theta}_{t}^{-}\right),$$ where $\boldsymbol{\theta}_{t}^{-}$ are the weights of the target network.

The target network used for evaluating the action is updated using the weights of the online network, and the value fed into the target is basically the old Q-value of the action.

Any ideas on how or why adding another network based on weights from the first network helps? Any example?

",38519,,2444,,11/4/2020 21:32,11/4/2020 21:32,Why does adding another network help in double DQN?,,1,0,,11/4/2020 21:35,,CC BY-SA 4.0 22445,1,,,7/10/2020 17:39,,0,77,"

I am trying to learn backpropagation and this is what I know so far.

To update the weights of the neural network you have to figure out the partial derivative of each of the parameters on the loss function using the chain rule. List all of these partial derivatives in a column vector and you have your gradient vector of your current parameter's on the loss function. Then by taking the negative of the gradient vector to descend the loss function and multiplying it by the learning rate (step size) and adding it to your original gradient vector, you have your new weights.

Is my understanding correct? Also, how can this be done in iterations over training examples?

",38523,,2444,,7/10/2020 19:59,7/10/2020 20:18,Is my understanding of back-propogation correct?,,1,0,,,,CC BY-SA 4.0 22447,2,,21982,7/10/2020 18:48,,2,,"

You are right that the baseline score is near zero only when there are a large number of label classes, i.e., when k is large. We should have qualified this line in the paper more carefully.

In this sense, formally, the technique explains the difference in prediction between the input score and the baseline score, as is made clear elsewhere in the paper (see Remark 1 and Proposition 1, for instance).

",38525,,,,,7/10/2020 18:48,,,,1,,,,CC BY-SA 4.0 22448,2,,22445,7/10/2020 19:58,,2,,"

Your understanding seems to be correct (although your explanation isn't completely precise), apart from "adding it to your original gradient vector". You add the gradient vector to the parameters/weights vector.

(Note that back-propagation is just the algorithm that computes the gradient vector. The update of the parameters with the gradient of the loss function is the gradient descent/ascent step, even though it's true that some people, at least in the context of deep learning, refer to the combination of gradient descent and the computation of the gradients as the back-propagation algorithm, but this is just a terminology issue, which you should not dwell too much on.)

Also, how can this be done in iterations over training examples?

If I understand this question correctly, you want to know how we would update the parameters when there's more than one training example. In that case, you compute a gradient vector for each training example. Then the average of the gradient vectors (which is also a vector) is the actual gradient vector that you use to update the parameters. See this answer for more info.
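As a minimal sketch of that averaging and update step (the per-example gradients here are random stand-ins for what back-propagation would return):

import numpy as np

lr = 0.1
params = np.random.randn(5)
per_example_grads = np.random.randn(32, 5)   # one gradient vector per training example in the batch
grad = per_example_grads.mean(axis=0)        # average gradient over the batch
params = params - lr * grad                  # gradient-descent step on the parameters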

",2444,,2444,,7/10/2020 20:18,7/10/2020 20:18,,,,0,,,,CC BY-SA 4.0 22449,1,,,7/10/2020 20:52,,2,640,"

I would like to ask a question about the relationship of accuracy with the loss function.

My experiment is a multi-label text classification problem, and I have built a Keras neural network to tackle it. My labels are something like

array([array([0, 0, 0, 0, 0, 1, 0, 1]), array([0, 1, 1, 0, 0, 0, 0, 1])])

For the final output layer, I use the 'sigmoid' activation function, and for the loss, 'binary crossentropy'; however, I am a bit confused about the metric. I am using the F1-score metric, because accuracy is not a metric to count on when there are many more negative labels than positive labels. So, since the problem is multi-label classification, shall I use the multi-label mode, like tfa.metrics.F1_score(mode="micro")? Is that correct? Or, since I use binary_crossentropy and the sigmoid activation function, should I use the standard binary F1-score, because every label/tag is independent of the others and has a different Bernoulli distribution?

I would really like to get your input on this. My humble opinion is that I should use the standard binary mode of the F1-score and not the multi-label micro approach, even though my experiment is multi-label text classification.

My current approach (using the micro F1-score, since my y_train is multi-label):

model_for_pruning.compile(optimizer='adam',
                          loss='binary_crossentropy',
                          metrics=[tfa.metrics.F1Score(y_train[0].shape[-1], average="micro")])

My alternative approach (based on the binary_crossentropy loss and the sigmoid activation function, despite having a multi-label y_train):

model_for_pruning.compile(optimizer='adam',
                          loss='binary_crossentropy',
                          metrics=[tfa.metrics.F1Score(y_train[0].shape[-1], average=None)])

The reason why I use sigmoid and not softmax as the output layer

Relevant link: Why Sigmoid and not Softmax in the final dense layer? In the final layer of the above architecture, the sigmoid function has been used instead of softmax. The advantage of using sigmoid over softmax lies in the fact that one synopsis may have many possible genres. Using the softmax function would imply that the probability of occurrence of one genre depends on the occurrence of other genres. But for this application, we need a function that would give scores for the occurrence of genres, which would be independent of the occurrences of any other movie genre.

Relevant link 2: Binary cross-entropy rather than categorical cross-entropy. This may seem counterintuitive for multi-label classification; however, the goal is to treat each output label as an independent Bernoulli distribution, and we want to penalize each output node independently.

Please check my reasoning behind this; I would be happy if you can contradict this explanation. To better explain my experiment: I want to predict movie genres, and a movie can belong to 1 or more genres, e.g. ['Action', 'Comedy', 'Children']. When I use softmax, the probabilities sum to 1, while when I use sigmoid, each single class probability lies in the range (0, 1). Thus, if the predictions are correct, the genres with the highest probabilities are those assigned to the movie. So imagine that my vector of prediction probabilities is something like [0.15, 0.12, 0.54, 0.78, 0.99] with sum() > 1, and not something like [0.12, 0.43, 0.11, 0.32, 0.01, 0.01] with sum() = 1.

",38530,,38530,,7/11/2020 13:39,11/28/2022 18:05,Binary mode or Multi-label mode is correct when using binary crossentropy and sigmoid output function on multi-label classification,,1,0,,,,CC BY-SA 4.0 22450,1,,,7/10/2020 22:03,,3,174,"

I have been working on understanding how CornerNet works, but I couldn't figure out a few parts about the architecture.

First, the authors mention that there are 3 distinct parts to be predicted as a heatmap, embedding, and offset.

Also, in the paper, it is stated that the network was trained on the COCO dataset, which has bounding box and class annotations.

As far as I am concerned, since CornerNet is based on detecting the top-left and bottom-right corners, the ground-truth labels for heatmap should be composed of top-left and bottom-right pixel locations of bounding boxes with the class score (but I might be wrong). What is the heatmap used for?

Moreover, for the embedding part, authors used the pull&push loss at the ground-truth pixel locations to find out which corner pairs belong to which object, but I don't understand how to backpropagate this loss. How do I back-propagate the embedding loss?

",38533,,2444,,7/11/2020 18:33,6/5/2021 12:07,What is a heatmap in the CornerNet paper?,,1,0,,,,CC BY-SA 4.0 22451,2,,22410,7/11/2020 0:06,,1,,"

From how it looks, the most reliable method to try out is using Hough transform.

The Hough transform can be used to detect e.g. lines and circles in images (depending on which variant you are using; in this case it would amount to a combination of variants for both lines and circles obviously).

So, given some input image, the Hough transform tells you what the line/circle parameters are that have created a line in the input image. For example, given a line, it would tell you the intersection with the $y$-axis and the $slope$ of the line detected in the image. Then, you could use these parameter information to reconstruct lines and circles detected in the image.

The last remaining problem to be solved then is to check where a detected line starts and stops (since this is not obvious from line parameters like $m$ (=$slope$) and $b$ (=intersection with $y$-axis) in the equation $y = mx+b$ describing some line).

But for that, you could "walk" along a line in the image space and check where the line is present or not. Then, you can draw line segments in the reconstruction image when the corresponding elements are also present in the original image.

The problem with (C)NNs would be that they are sensitive to rotation and scale etc. You could of course take a tremendously large number of filters to account for shapes of different rotation and scale, but that would increase the demand for labeled training data again (which could of course be automatically be generated a priori in this simple case).

Anyway, I'd suggest checking out Hough transform. To get some feeling for it, there are lots of libraries available implementing it for Python or MatLab, for example.
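For example, here is a minimal OpenCV sketch on a synthetic image (the probabilistic line variant conveniently returns segment endpoints, which also addresses the start/stop problem mentioned above; the parameters are illustrative and may need tuning):

import cv2
import numpy as np

img = np.zeros((200, 200), dtype=np.uint8)
cv2.line(img, (20, 20), (180, 60), 255, 2)
cv2.circle(img, (100, 140), 40, 255, 2)

lines = cv2.HoughLinesP(img, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=30, maxLineGap=5)
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=50, param2=20, minRadius=10, maxRadius=80)
print(lines)     # each row: (x1, y1, x2, y2) endpoints of a detected segment
print(circles)   # each row: (x, y, radius) of a detected circle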

For further information, check out Wikipedia or YouTube.

",37982,,,,,7/11/2020 0:06,,,,0,,,,CC BY-SA 4.0 22452,1,22459,,7/11/2020 8:25,,2,68,"

In the book of Barto and Sutton, there are 3 methods presented that solve an RL problem: DP, Monte Carlo, and TD. But which category does policy gradient methods (or actor-only methods) classify in? Should I classify them as the 4th method of solving a reinforcement learning problem?

",37169,,2444,,7/11/2020 11:25,7/11/2020 14:55,How can I classify policy gradient methods in RL?,,1,1,,,,CC BY-SA 4.0 22453,2,,22433,7/11/2020 9:47,,0,,"

The original paper by Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift" (https://arxiv.org/abs/1502.03167), is very good. Make sure to go through the paper slowly and make annotations to truly understand it.

",36082,,,,,7/11/2020 9:47,,,,0,,,,CC BY-SA 4.0 22454,2,,22443,7/11/2020 11:21,,1,,"

As the authors of this paper state it:

In $Q$-learning, the agent updates the value of executing an action in the current state, using the values of executing actions in a successive state. This procedure often results in an instability because the values change simultaneously on both sides of the update equation. A target network is a copy of the estimated value function that is held fixed to serve as a stable target for some number of steps.

If I remember it correctly, the main concern is that the network could end up in a positive feedback loop, making sufficient exploration of various action and state combinations less likely to occur, which could be detrimental to the learning task.

",37982,,,,,7/11/2020 11:21,,,,0,,,,CC BY-SA 4.0 22455,2,,20808,7/11/2020 11:51,,3,,"

Some basic advantages of MCTS over Minimax (and its many extensions, like Alpha-Beta pruning and all the other extensions over that) are:

  • MCTS does not need a heuristic evaluation function for states. It can make meaningful evaluations just from random playouts that reach terminal game states where you can use the loss/draw/win outcome. So if you're faced with a domain where you have absolutely no heuristic domain knowledge that you can plug in, MCTS is likely a better choice. Minimax must have a heuristic evaluation function for states (exception: if your game is so simple that you can afford to compute the complete game tree and reach all terminal game states immediately from the initial game state, you don't need heuristics). If you do have strong evaluation functions, you can still incorporate them and use them to improve MCTS too; they're just not strictly necessary for MCTS.

  • MCTS has simpler anytime behaviour; you can just keep running iterations until you run out of computing time, and then return the best move. Typically we expect the performance level of MCTS to grow with computation time / iteration count relatively smoothly (not always 100% true, but intuitively you can usually expect something like this). You can sort of achieve anytime behaviour in minimax with iterative deepening, but that's usually a bit less "smooth", a bit more "bumpy"; this is because every time you increase the search depth, you need significantly more processing time than you did for the previous depth limit. If you run out of time and have to abort your current search at your current depth limit, that last search will be completely useless; you'll have to discard it and stick to the results from the previous search with the previous depth limit.

A difference, which is not necessarily an advantage or disadvantage either way in the general case (but can be in specific cases):

  • The computation time of MCTS is generally dominated by running (semi-)random playouts. This means that functions for computing legal move lists, and applying moves to game states, typically dictate how fast or slow your MCTS runs; making these functions faster will generally make your MCTS faster. On the other hand, the computation time of Minimax is generally dominated by copying game states (or "undoing" moves, which is an operation that in most games will require additional memory usage for game states to be possible) and heuristic evaluation functions (though the latter are likely to also become important in terms of computation cost in MCTS if you choose to include them there). In some games it will be easier to provide efficient implementations for one of these, and in other games it may be different.

A basic advantage of Minimax over MCTS:

  • In settings where MCTS can only run very few iterations relative to the branching factor (or in the extreme case, fewer iterations than there are actions available in the root node), MCTS will perform extremely poorly / close to random play. We've noticed this being the case for quite a decent number of games in our general game system Ludii (where the "general game system" often implies that games are implemented less efficiently than they could be in a dedicated single-game-specific program) with low time controls (like 1 second per move). This same general game setting often makes it difficult to find super strong heuristics, but it's generally still possible to come up with some relatively simple ones (like just a simple material heuristic in chess). An alpha-beta search with just a couple of search plies and a basic, simple heuristic will often outperform a close-to-random MCTS if the MCTS can't manage to run significantly more iterations than it has legal moves in the root node.
",1641,,,,,7/11/2020 11:51,,,,2,,,,CC BY-SA 4.0 22456,1,,,7/11/2020 12:55,,0,87,"

I am trying to predict nursing activity using mobile accelerometer data. My dataset is a CSV file containing the x, y, z components of acceleration. Each frame contains 20 seconds of data. The dataset is highly imbalanced, so I perform data augmentation to balance it. In the data augmentation, I only use scaling, and my assumption is that if I scale a signal down or up, the activity remains the same. Using this assumption, I augmented the data, and my validation set contains not only the original signals but also the augmented (scaled) signals. With this process, I am getting much better accuracy than I ever expected from data augmentation alone. So, I am thinking that I made a terrible mistake somewhere. I checked the code, and everything is right. So now I think that, since my validation set has augmented data, that's the reason for this high accuracy (maybe the augmented data is really easy to classify).

",28048,,,,,7/11/2020 13:22,Can I use augmented data in the validation set?,,1,0,,,,CC BY-SA 4.0 22457,2,,22449,7/11/2020 13:21,,0,,"

Since you have a multiclass classification problem rather than a binary classification problem (i.e. a two-class problem), I recommend adjusting your architecture to use softmax instead of sigmoid as the final activation function and categorical_crossentropy instead of binary_crossentropy.

Softmax will ensure all your outputs are valid probabilities, which is crucial in calculating the loss of your model. Using sigmoid will only ensure this if you have binary classes. Further, categorical cross-entropy is used for multi-class problems; again, this makes sure you calculate the correct loss. This is just a general model characteristic:
binary -> sigmoid & binary_crossentropy
multi-class -> softmax & (sparse_)categorical_crossentropy
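
As a minimal Keras sketch of that change (assuming integer-encoded labels and that num_classes and model already exist in your script):

from tensorflow.keras.layers import Dense

model.add(Dense(num_classes, activation='softmax'))   # instead of Dense(1, activation='sigmoid')
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy', # instead of binary_crossentropy
              metrics=['accuracy'])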

To answer your original question, I think you should use average = micro, macro or weighted when calculating the F1-score. This depends on what you would like your model to put more emphasis on and how severe your class imbalances are.

",37120,,,,,7/11/2020 13:21,,,,2,,,,CC BY-SA 4.0 22458,2,,22456,7/11/2020 13:22,,2,,"

You should not use augmented data in the validation nor in the test set.

Validation and test set are purely used for hyperparameter tuning and estimating the final performance, i.e. estimating the generalization error. These two data sets should be as close as possible to other data that you could have acquired but actually have not, i.e. to your true data distribution.

It's fine to augment training data, since it mimics other samples from the true distribution by applying transformations, noise, etc. and thus helps to increase generalization performance (assuming your augmentation assumptions are right). But evaluation should always be performed on original data.
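
In practice, that simply means splitting first and augmenting only the training portion, e.g. (augment_by_scaling is a hypothetical stand-in for your scaling augmentation):

from sklearn.model_selection import train_test_split

# Split first; the validation set keeps only original signals.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2,
                                                  stratify=y, random_state=0)
X_train_aug, y_train_aug = augment_by_scaling(X_train, y_train)  # augment the training split only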

",37120,,,,,7/11/2020 13:22,,,,0,,,,CC BY-SA 4.0 22459,2,,22452,7/11/2020 14:55,,2,,"

DP, Monte Carlo, and TD are methods of estimating returns. Policy gradient describes methods of learning a policy. So policy gradients serve a different purpose than the other things you mentioned. For clarity, you can use Monte Carlo or TD methods to estimate returns to construct the loss that you get your policy gradient from.
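
For example, a minimal REINFORCE-style sketch with Monte Carlo returns (assuming rewards and log_probs were collected over one episode from the current PyTorch policy) could look like this:

import torch

G, returns = 0.0, []
for r in reversed(rewards):          # discounted Monte Carlo return for each time step
    G = r + gamma * G
    returns.insert(0, G)

loss = -(torch.stack(log_probs) * torch.tensor(returns)).sum()
loss.backward()                      # gradients w.r.t. the policy parameters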

",37829,,,,,7/11/2020 14:55,,,,0,,,,CC BY-SA 4.0 22460,1,,,7/11/2020 17:01,,0,449,"

I have developed a basic feedforward neural network from scratch to classify whether image is of cat or not cat. It works fine, but after 2500 iterations, my cost function is not reducing properly.

The loss function which I am using is

$L(\hat{y},y) = -y\log\hat{y}-(1-y)\log(1-\hat{y})$
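
For reference, in NumPy this loss can be computed as follows (a minimal sketch; the clipping is only there to guard against log(0), which is a common cause of a stalled or exploding cost):

import numpy as np

def cross_entropy(y_hat, y, eps=1e-8):
    y_hat = np.clip(y_hat, eps, 1 - eps)   # avoid log(0)
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat)).mean()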

Can you please point out where I am going wrong? The link to the notebook is https://www.kaggle.com/sidcodegladiator/catnoncat-nn

",38546,,38546,,7/12/2020 3:37,12/6/2022 10:04,Why isn't the loss of my neural network reduced after 2500 iterations?,,2,3,,,,CC BY-SA 4.0 22462,1,,,7/11/2020 18:13,,2,288,"

I'm trying to understand the DDPG algorithm using Keras

I found the site and started analyzing the code, but there are 2 things I can't understand.

The algorithm used to write the code presented on the page

In the algorithm image, updating the critic's network does not require a gradient.

But a gradient is implemented in the code. Why?

with tf.GradientTape() as tape:
    target_actions = target_actor(next_state_batch)
    y = reward_batch + gamma * target_critic([next_state_batch, target_actions])
    critic_value = critic_model([state_batch, action_batch])
    critic_loss = tf.math.reduce_mean(tf.math.square(y - critic_value))

critic_grad = tape.gradient(critic_loss, critic_model.trainable_variables)
critic_optimizer.apply_gradients(zip(critic_grad, critic_model.trainable_variables))

The second question concerns the actor update: in the photo of the algorithm, the actor's policy gradient is the product of 2 gradients, but in the code only one gradient is calculated (through the critic's network) and it's not multiplied by the second gradient. Why?

with tf.GradientTape() as tape:
    actions = actor_model(state_batch)
    critic_value = critic_model([state_batch, actions])
    # Used `-value` as we want to maximize the value given
    # by the critic for our actions
    actor_loss = -tf.math.reduce_mean(critic_value)

actor_grad = tape.gradient(actor_loss, actor_model.trainable_variables)
actor_optimizer.apply_gradients(zip(actor_grad, actor_model.trainable_variables))
",38547,,2444,,7/5/2022 20:53,7/5/2022 20:53,Why does this Keras implementation of the DDPG algorithm update the critic's network using the gradient but the pseudocode doesn't?,,1,1,,,,CC BY-SA 4.0 22463,1,,,7/11/2020 19:13,,2,55,"

One of my friends sent me a problem he has been working on lately, and I couldn't help but wonder how it could be solved using Q-learning. The statement is as follows:

Given the following datasets, the objective is to find a suitable strategy per customer contract to maximize Gain and minimize Cost according to the characteristics of the customer.

train.csv: 5000 independent rows, 33 columns.

Columns description:

  • Day (1, 2 or 3): on which day the strategy was applied.
  • 28 variables (A, B, C, ..., Z, AA, BB): characteristics of the individual;
  • Gain: the gain for this individual for the corresponding strategy;
  • Cost: the cost for this individual for the corresponding strategy;
  • Strategy (0, 1 or 2): the strategy applied on this individual;
  • Success: 1 if the strategy succeeded, 0 otherwise.
  • If Success is 1, then the net gain is Gain - Cost, and if Success is 0, consider a standardized cost of 5.

test.csv: 2,000 independent rows, 31 columns.

Columns description:

  • Index: 0 to 1999, unique for each row.
  • Day (4): on which day the strategy will be applied.
  • 28 variables (A, B, C, ..., Z, AA, BB): characteristics of the client;
  • Gain: the gain for this individual for the corresponding strategy;
  • Cost: the cost for this individual for the corresponding strategy;

From what I understood, the train.csv file is used to build a Q-Learning model, and the test one for generating a strategy and predicting a Success.

My main question is:

How can I formulate this problem as an RL problem? How should I define an episode? Since the training data is labeled, this could clearly be a classification problem (predicting the strategy), but I have no idea how to solve it using RL (Q-learning ideally). Any ideas will be helpful.

",38550,,2444,,7/12/2020 10:56,7/12/2020 10:56,How can I formulate a prediction problem (given labeled data) as an RL problem and solve it with Q-learning?,,0,0,,,,CC BY-SA 4.0 22464,2,,22424,7/11/2020 21:13,,1,,"

I think the choice of technique strongly depends on how fine-grained your forecast-predictions need to be.

When it comes to forecasting by Reinforcement Learning (RL), one prominent example is the stock-trading RL agent. The agent must decide which stock to buy or sell, thereby drawing upon predictions concerning the expected future development of some stock. Given this approach, you would not necessarily let the RL agent explicitly generate estimates of how stock prices are going to develop at any point, but instead you would only observe the predicted decision concerning whether to buy or sell etc.

But if you think hard enough, I am certain that you could come up with setups of RL agents that would allow you to explicitly generate future estimates of values to be forecast. In this case, the final buy/sell decision would have to depend on the explicit future stock price predictions to enforce accurate predictions.

Concerning unsupervised learning, you could cluster data points (training samples) with respect to how some value(s) of interest changed $t$ time steps in the future (after having observed the training sample). Then, you could associate clusters with rough forecast-estimates. After all, you would treat the forecast value as a label associated with data points. Afterwards, you could use some kind of nearest neighbor approach to determine which cluster is closest to some novel data sample. Then, you take as a prediction for the new data sample the forecast prediction (i.e. label) that is associated with the closest cluster/prototype etc. But strictly speaking, as soon as you start turning forecast values (which were previously part of some unlabeled time-series dataset) into labels, you turn the training procedure of course into a supervised technique again.
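
As a rough scikit-learn sketch of that idea (X and y are hypothetical here: X holds past feature windows and y the value observed $t$ steps later):

import numpy as np
from sklearn.cluster import KMeans

km = KMeans(n_clusters=10, random_state=0).fit(X)
# associate each cluster with the average forecast value of its members
cluster_forecast = np.array([y[km.labels_ == c].mean() for c in range(km.n_clusters)])
# prediction for a new sample = value associated with its nearest cluster
prediction = cluster_forecast[km.predict(x_new.reshape(1, -1))[0]]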

How well especially the latter training approach would work, I can't tell, since I have never heard of anyone using this method. But if training data is too scarce to employ some deep learning method, why not give it at least a try if accuracy doesn't have to be too precise?

After all, it's just a matter of creativity and testing which method works best given your specific machine learning problem at hand.

",37982,,,,,7/11/2020 21:13,,,,0,,,,CC BY-SA 4.0 22465,1,,,7/11/2020 21:19,,1,46,"

I am trying to learn more about text-independent writer identification and was hoping for some advice.

I have a folder with 100k images, each of them with a different handwritten sentence. All of the images have sentences of different lengths. They range from about 30 to 80 English characters. The file names start at 1.png and go up to 100k.png. That's it, as far as input data. 95% of the sentences are written by different writers. 5% are written by the same writers. Some writers might have written 2 sentences, while others 300+.

Does anyone know of an identification method that would be able to determine what images were written by the same writer?

I know that most methods require each writer to have provided a full page of sample writing for training, but, of course, I do not have that.

",38553,,2444,,7/13/2020 20:01,7/13/2020 20:01,Can text-independent writer identification be done without multi-sentence training datasets for each writer?,,0,2,,,,CC BY-SA 4.0 22467,1,,,7/12/2020 2:50,,1,77,"

I tried the first neural network architecture and then the second one, keeping all other variables constant, and I am getting better results with the second architecture. Why are these seemingly identical neural network architectures giving different results? Or am I making some mistake?

First one:

def __init__(self, state_size, action_size, seed, hidden_advantage=[512, 512],
             hidden_state_value=[512,512]):
    super(DuelingQNetwork, self).__init__()
    self.seed = torch.manual_seed(seed)
    hidden_layers = [state_size] + hidden_advantage
    self.adv_network = nn.Sequential(nn.Linear(hidden_layers[0], hidden_layers[1]),
                                     nn.ReLU(),
                                     nn.Linear(hidden_layers[1], hidden_layers[2]),
                                     nn.ReLU(),
                                     nn.Linear(hidden_layers[2], action_size))

    hidden_layers = [state_size] + hidden_state_value
    self.val_network = nn.Sequential(nn.Linear(hidden_layers[0], hidden_layers[1]),
                                     nn.ReLU(),
                                     nn.Linear(hidden_layers[1], hidden_layers[2]),
                                     nn.ReLU(),
                                     nn.Linear(hidden_layers[2], 1))                                                           
def forward(self, state):
    """Build a network that maps state -> action values."""
    # Perform a feed-forward pass through the networks
    advantage = self.adv_network(state)
    value = self.val_network(state)
    return advantage.sub_(advantage.mean()).add_(value)

Second one:

def __init__(self, state_size, action_size, seed, hidden_advantage=[512, 512],
             hidden_state_value=[512,512]):
    super(DuelingQNetwork, self).__init__()
    self.seed = torch.manual_seed(seed)

    hidden_layers = [state_size] + hidden_advantage
    advantage_layers = OrderedDict()
    for idx, (hl_in, hl_out) in enumerate(zip(hidden_layers[:-1],hidden_layers[1:])):
        advantage_layers['adv_fc_'+str(idx)] = nn.Linear(hl_in, hl_out)
        advantage_layers['adv_activation_'+str(idx)] = nn.ReLU()

    advantage_layers['adv_output'] = nn.Linear(hidden_layers[-1], action_size)

    self.network_advantage = nn.Sequential(advantage_layers)

    value_layers = OrderedDict()
    hidden_layers = [state_size] + hidden_state_value

    # Iterate over the parameters to create the value network
    for idx, (hl_in, hl_out) in enumerate(zip(hidden_layers[:-1],hidden_layers[1:])):
        # Add a linear layer
        value_layers['val_fc_'+str(idx)] = nn.Linear(hl_in, hl_out)
        # Add an activation function
        value_layers['val_activation_'+str(idx)] = nn.ReLU()

    # Create the output layer for the value network
    value_layers['val_output'] = nn.Linear(hidden_layers[-1], 1)

    # Create the value network
    self.network_value = nn.Sequential(value_layers)

def forward(self, state):
    """Build a network that maps state -> action values."""

    # Perform a feed-forward pass through the networks
    advantage = self.network_advantage(state)
    value = self.network_value(state)
    return advantage.sub_(advantage.mean()).add_(value)
",38540,,8448,,7/13/2020 18:02,7/13/2020 18:02,Why are these same neural network architecture giving different results?,,0,1,,,,CC BY-SA 4.0 22468,1,,,7/12/2020 3:25,,1,38,"

Forward propagation in Deep Neural Networks

In the "Forward Propagation in a Deep Network" video on Coursera, Andrew NG mentions that there's no way to avoid a for loop to loop through the different layers of the network during forward propagation.

See image showing a deep network with 4 layers, and the requirement of a forloop to compute activations for each layer during forward propagation: https://nimb.ws/CkRVLT

This makes intuitive sense since each layer's activation depends on the previous layer's output.

Warning: start of speculation

My rudimentary understanding of quantum computing is that it somehow "magically" can bypass computing intermediate states -> this is why supposedly quantum computers can break cryptography... or something like that.

I'm wondering if a quantum computer could perform vectorized forward propagation on an L layer deep neural network.

",38558,,2444,,7/12/2020 10:49,7/12/2020 10:49,Could a quantum computer perform vectorized forward propagation in deep networks?,,0,1,,,,CC BY-SA 4.0 22469,1,22476,,7/12/2020 10:00,,0,79,"

Just came across this article on GPT-3, and that led me to the question:

In order to make a certain kind of neural network architecture smarter, is it true that all one needs to do is make it bigger?

Also, if that is true, how does the importance of computing power relate to the importance of fine-tuning/algorithmic improvement?

",10135,,,,,7/13/2020 0:23,Is the size of a neural network directly linked with an increase in its inteligence?,,1,1,,,,CC BY-SA 4.0 22471,2,,20411,7/12/2020 11:11,,1,,"

There's also the journal Advances in Cognitive Systems. According to their website

Advances in Cognitive Systems (ISSN 2324-8416) publishes research articles, review papers, and essays on the computational study of human-level intelligence, integrated intelligent systems, cognitive architectures, and related topics. Research on cognitive systems is distinguished by a focus on high-level cognition, reliance on rich, structured representations, a systems-level perspective, use of heuristics to handle complexity, and incorporation of insights about human thinking. Advances in Cognitive Systems reviews submissions within approximately three months and publishes accepted papers on the journal Web site immediately upon receipt of final versions. Articles are distributed freely over the internet by the Cognitive Systems Foundation.

",2444,,,,,7/12/2020 11:11,,,,0,,,,CC BY-SA 4.0 22472,2,,20411,7/12/2020 13:40,,1,,"

The Institute of Electrical and Electronic Engineers (IEEE) recently announced a new "IEEE Transaction on Artificial Intelligence". Although the topics listed do not specifically mention AGI they do not limit it. I think this will be an interesting journal to keep an eye on as there could be some interesting AGI papers. Below is from their web page:

Welcome to the page for the IEEE Transactions on Artificial Intelligence (IEEE TAI). IEEE has established the new journal our community has been waiting for to publish our work on Artificial Intelligence! The submission site for manuscripts opens on April 1, 2020, and the inaugural issue is set to publish August 2020. Don’t miss out on this opportunity and join the community of authors who started already preparing their papers for the IEEE TAI! In one month, you will be able to submit your paper. It is time to prepare your best piece of work for IEEE TAI to increase your visibility in the international AI community and to be among the first authors to publish in IEEE TAI.

There is also the IEEE Transactions on Cognitive and Developmental Systems (TCDS) which is not strictly AGI but publishes topics that pertain to AGI.

The IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS (TCDS) focuses on advances in the study of development and cognition in natural (humans, animals) and artificial (robots, agents) systems. It welcomes contributions from multiple related disciplines including cognitive systems, cognitive robotics, developmental and epigenetic robotics, autonomous and evolutionary robotics, social structures, multi-agent and artificial life systems, computational neuroscience, and developmental psychology. Articles on theoretical, computational, application-oriented, and experimental studies as well as reviews in these areas are considered.

TCDS is co-sponsored by the Computational Intelligence Society, the Robotics and Automation Society, and the Consumer Electronics Society. TCDS is technically co-sponsored by the Computer Society.

",5763,,2444,,7/12/2020 14:44,7/12/2020 14:44,,,,4,,,,CC BY-SA 4.0 22473,1,22474,,7/12/2020 14:15,,3,140,"

From the book:

Sutton, Richard S., and Barto, Andrew G. Reinforcement Learning: An Introduction (Adaptive Computation and Machine Learning series), p. 100. The MIT Press. Kindle Edition.

following is stated:

"On-policy methods attempt to evaluate or improve the policy that is used to make decisions, whereas off-policy methods evaluate or improve a policy different from that used to generate the data."

Looking at off policy:

and on-policy:

What is meant by "generate the data"? I'm confused as to what 'data' means in this context.

Does "generate the data" translate to the actions generated by the policy ? or Does "generate the data" translate to the Q data state action mappings?

",12964,,12964,,7/12/2020 17:57,7/12/2020 17:57,"What is meant by ""generate the data"" in describing the difference between on-policy and off-policy?",,1,0,,,,CC BY-SA 4.0 22474,2,,22473,7/12/2020 17:20,,3,,"

In the book, the phrase "generate the data" refers to the data from observations about states, actions, next states and rewards, that then get used to make value estimate updates.

In both the SARSA and Q learning pseudocode from the book, there is a behaviour policy that selects the next action to take. Other than the initial start state, this policy drives the observations that the learning process must handle - every action is directly selected by it, and due to how MDPs work, every next state and reward are indirectly influenced by it. It is this behaviour policy therefore that must "generate the data".

You can see from the pseudocode that both algorithms describe this behaviour policy as "policy derived from Q (e.g. $\epsilon$-greedy)".

The difference in the two algorithms is in which target policy is being used:

  • In SARSA, the update is based on the Bellman equation for the same policy that generated the most recent set of data for S, A, R, S' and A' - all of these values will be directly or indirectly caused by the behaviour policy. So the whole equation uses the same data that was generated. Another way to put that is behaviour policy and target policy are the same.

  • In Q learning, the $\text{max}_a Q(S',a)$ from the Bellman optimality equation removes the need to use A' in the update. Effectively it is a local search for actions to find one that is potentially better than A', and it runs the update as if that one was the one that was taken. This revision of A' is what makes the off-policy learning evaluate a different target policy to the behaviour policy that generated the rest of the data used in the update.
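
As a tiny tabular sketch of the two updates (assuming a NumPy Q-table and that a_next was chosen by the $\epsilon$-greedy behaviour policy):

import numpy as np

# SARSA (on-policy): uses A', the action the behaviour policy actually took
Q[s, a] += alpha * (r + gamma * Q[s_next, a_next] - Q[s, a])

# Q-learning (off-policy): replaces A' with the greedy action over Q(S', .)
Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])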

",1847,,,,,7/12/2020 17:20,,,,0,,,,CC BY-SA 4.0 22475,1,,,7/12/2020 19:40,,1,63,"

Where can I find a machine learning library that implements loss functions measuring the Algorithmic Information Theoretic-friendly quantity "bits of information"?

To illustrate the difference between entropy, in the Shannon information sense of "bits" and the algorithmic information sense of "bits", consider the way these two measures treat a 1 million character string representing $\pi$:

Shannon entropy "bits" ($6$ for the '.'): $\lceil 1e6*\log_2(10) \rceil+6$

Algorithmic "bits": The length, in bits, of the shortest program that outputs 1e6 digits of $\pi$ .

All statistical measures of information, such as KL divergence, are based on Shannon information. By contrast, algorithmic information permits representations that are fully dynamical as in Chomsky type 0, Turing Complete, etc. languages. Since the world in which we live is dynamical, algorithmic models are at least plausibly more valid in many situations than are statistical models. (I recognize that recursive neural nets can be dynamical and that they can be trained with statistical loss functions.)

For a more authoritative and formal description of these distinctions see the Hutter Prize FAQ questions Why aren't cross-validation or train/test-set used for evaluation? and Why is Compressor Length superior to other Regularizations? For a paper-length exposition on the same see "A Review of Methods for Estimating Algorithmic Complexity: Options, Challenges, and New Directions".

From what I can see, machine learning makes it difficult to relate loss to algorithmic information. Such an AIT-friendly loss function must, by definition, measure the number of bits required to reconstruct, without loss, the original training dataset.

Let me explain with examples of what I mean by AIT-friendly loss functions, starting with the baby-step of classification loss (usually measured as cross-entropy):

Let's say your training set consists of $P$ patterns belonging to $C$ classes. You can then construct a partial AIT loss function providing the length of the corrections to the model's classifications with a $P$-length vector, each element containing a $0$ if the model was correct for that pattern, or the class if not. These elements would each have a bit-length of $\lceil \log_2(C+1) \rceil$, and be prefixed by a variable length integer storing $P$. The more $0$ elements, the more compressible this correction vector until, in the limit, a single run-length code for $P$ $0$'s is stored as the correction, prefixed by $P$ and the length of the binary for the RLE algorithm itself. The bit-length of these, taken together, would comprise this partial loss function.
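
In code, a rough version of this partial loss (ignoring the compressibility of the correction vector itself) might be sketched as:

import math

def correction_vector_bits(y_true, y_pred, num_classes):
    # 0 means "the model was correct", otherwise store the class (shifted by 1 so 0 stays free)
    corrections = [0 if t == p else t + 1 for t, p in zip(y_true, y_pred)]
    bits_per_element = math.ceil(math.log2(num_classes + 1))
    prefix_bits = max(1, math.ceil(math.log2(len(corrections) + 1)))  # crude stand-in for a variable-length prefix storing P
    return prefix_bits + len(corrections) * bits_per_element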

This is a reasonable first cut at an AIT-friendly loss function for classification error.

So now let's go one step further to outputs that are numeric, the typical approach is a summation of a function of individual error measures, such as squaring or taking their absolute value or whatever -- perhaps taking their mean. None of these are in units of bits of information. To provide the correction on the outputs to reproduce the actual training values requires, again, a vector of corrections. This time it would be deltas, the precision of which must be adequate to the original data being losslessly represented, hence requiring some sort of adaptive variable length quantity representation(s). These deltas would likely have a non-uniform distribution so they can be arithmetically encoded. That seems like a reasonable approach to another AIT-friendly loss function.

But now we get to the "model parameters" and find ourselves in the apparently well-defined but ill-founded notions like "L2 regularization", which are defined in terms of ill-defined "parameters", e.g. "parameter counts" aren't given in bits.

I'll grant that L2 regularization sounds like it is heading in the right direction by squaring the weights and summing them up, but when one looks at what is actually being done, it is:

  • applying additional functions to the sum such as mean
  • asking for a scaling factor to apply
  • applying the regularization on a per-layer basis rather than the entire model

I'm sure I missed some of the many ways L2 regularization fails to be AIT-friendly.

Finally, there is the model's pseudo-invariance, measured, not simply in terms of its hyperparameters but in terms of the length of the (compressed archive of the) actual executable binary running on the hardware. I say 'pseudo' because there is nothing that says one cannot vary, say, the number of neurons in a neural network during learning -- nor even change to another learning paradigm than neural networks during learning (in the most general case).

So that's pretty much the complete loss function down to the Universal Turing Machine iron, but I'd be happy to see just a reference to an existing TensorFlow or another library that tries to do even a partial loss function for AIT-theoretic learning.

",26053,,26053,,7/12/2020 23:11,7/12/2020 23:11,Loss Function In Units Of Bits?,,0,2,,,,CC BY-SA 4.0 22476,2,,22469,7/13/2020 0:23,,2,,"

First of all, there is no real 'intelligence' innate to artificial Neural Networks (NNs). All they do is try to approximate a mathematical function with a certain degree of generalization (hopefully without learning a given dataset by heart, i.e. hopefully without overfitting).

The more nodes (or neurons) you include into the network, the more complex a function can be that a network can learn to approximate. It's similar to high-school math: The higher the degree of some polynomial, the better the polynomial can be adjusted to fit some observation to be modeled; with the only difference being that NNs commonly include non-linearities and are trained via some kind of stochastic gradient descent.

So, yes. The more nodes a model possesses, the higher the so-called model capacity, i.e. the higher the degree of freedom a NN-model has to fit some function. After all, NN are said to be universal function approximators - given they have enough internal nodes in their hidden layer(s) to fit some given function.

In practice, however, you don't want to blow up a model architecture unnecessarily, since this commonly results in overfitting if it doesn't cause some instabilities of the training procedure instead.

Generally, the larger the model to be trained, the higher the computational cost to train the network.

A common suggestion is to reduce the number of nodes in a network at the expense of increasing a network's depth, i.e. the number of hidden layers. Often, that can help reduce the demand for excessively many nodes.

",37982,,,,,7/13/2020 0:23,,,,0,,,,CC BY-SA 4.0 22478,1,22479,,7/13/2020 3:07,,1,282,"

I am new to working with neural networks. However, I have built some linear regression models in the past. My question is: is it worth looking for features correlated with my target variable, as I would normally do for a linear regression, or is it better to feed the neural network all the data I have?

Assuming that the data I have is all related to my target variable of course. I am working with this dataset and building a neural network regressor for it.

https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0101EN/labs/data/concrete_data.csv

Here is a snippet of the data. The target variable is the concrete strength rate given a certain combination of materials for that concrete sample.

I greatly appreciate any tips and explanations. Excuse me if this is too much of a noob question, but unfortunately I did not find any info about it on Google. Thanks again!

",33342,,,,,7/13/2020 9:56,Do correlations matter when building neural networks?,,1,1,,,,CC BY-SA 4.0 22479,2,,22478,7/13/2020 9:25,,0,,"

If there is some correlation between features, that is what the network will ideally find out on its own and learn to utilize. So, in general, don't take correlated samples or features out of the training loop only because they look correlated. After all, they could convey a lot of valuable information.

When it comes to correlation between data samples during training, this correlation is commonly broken up by training a network on randomly selected mini-batches of training data samples. So, you randomly sample e.g. 16 or 32 (or so) training examples based on which you apply a single update of the weights using some Stochastic Gradient Descent variant. Since the members of a mini-batch are sampled at random, chances for finding highly correlated training samples in some mini-batch shall be sufficiently minimized in order not to negatively affect the training outcome.

Having said that, if you are concerned about overfitting of your model or weights that would overly weight just a small subset of all available input features, you could try applying regularization techniques like L1 (encouraging sparse representations) or L2 (encouraging low weights in general) regularization or dropout. In your particular case, since the main concern is an excessive contribution of only a small set of input features, L2 shall yield better results (avoiding excessively large weights that would be required to excessively much weight just a small number of features).
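
For example, in Keras this could look roughly like the snippet below (x stands for the output of the previous layer; the penalty factor and dropout rate are just placeholders):

from tensorflow.keras import layers, regularizers

x = layers.Dense(64, activation='relu',
                 kernel_regularizer=regularizers.l2(1e-4))(x)  # L2 penalty on this layer's weights
x = layers.Dropout(0.2)(x)                                     # dropout on its activations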

Besides that: Commonly, you split your training dataset into 3 parts:

  1. Data used for fitting the model (actual training data)
  2. Data used for assessing the training progress & possibly for determining when to apply early stopping (validation data)
  3. Test data used to assess the performance of the system after all training & intermediate testing is done

The final evaluation on the test dataset shall reveal then the generalization ability of your trained model to novel data.

So, with regularization in place during training and relatively low error rates on the validation and test datasets, you are pretty much safe even without checking for correlated data beforehand. Only when you really struggle to decrease the validation loss might it be worth inspecting further what exactly is going wrong in terms of correlations and such.

",37982,,37982,,7/13/2020 9:56,7/13/2020 9:56,,,,1,,,,CC BY-SA 4.0 22481,1,,,7/13/2020 9:42,,1,584,"

From the article Dangerous Feedback Loops in ML

Let’s say our model has leads from Facebook, Google, and Bing. If our first model decides that the probability of conversion is 3%, 5%, and 1% from these given sources, and we have finite amount of callbacks we can make, we will only callback the 5% probability. Now fast forward two months. The second model finds these probabilities are now: 0.5%, 8.5%, and 0%. What happened?

Because we started only calling Google leads back, we increased our chances of converting these leads, and likewise, because we stopped calling Facebook and Bing leads, these leads never converted because we never called them. This is an example of a real world feedback loop

How can we solve this problem?

",38392,,2444,,7/13/2020 11:23,4/9/2021 13:04,"How to solve the ""dangerous feedback loops"" in machine learning?",,1,1,,,,CC BY-SA 4.0 22482,2,,22481,7/13/2020 11:00,,1,,"

I read the article you linked, and what you are missing is that the given conversion probabilities are assessed pre-callback - i.e. they include an assessment of whether you will even call them back or not. So of course the probabilities change if you change your behaviour. The writer of the article has created a bit of a straw man argument by defining a model and decision process that don't go well together. They should have used a model that predicted conversion rates after callback, then it could be used as they wanted.

So the sales example is pretty easy to solve. Make the model accept the lead source as a feature, and predict the conversion rate after callback. That will give you the "action value" of choosing to callback, which is much more useful value to have if you are deciding whether or not to callback.

To cover the possibility that probabilities change over time, you have to be willing to test that hypothesis by calling back at least some leads that the model predicts have a low probability of success, in order to update the model. This is related to exploration vs exploitation trade off in bandit problems or in reinforcement learning. The callback optimisation problem looks a lot like a bandit problem in fact.

Many problems of choosing how to act optimally, based on an updating statistical model of results, can be re-framed successfully as bandit problems or reinforcement learning problems. This is one way to try and address the feedback loops issue, because these include theory around decision making. It is not a magic fix for all these problems, but contextual bandits are very strong in advertising strategies that are very similar to the example you give.

To be able to address feedback loops like this well does require that source data is unbiased, so that adding conditional actions to your model helps obtain better ground truth. So the other examples that the linked article gives - recidivism and recruitment - are much harder to address, because biased data is inserted as ground truth even if you add the conditions (e.g. for exploring and tracking results in a statistically neutral way). For some, adding these conditions may be enough to help, it will make the model accurate enough that it starts to address bias. For others, there is no really good solution other than to be aware of weakness of the model.

You can attempt to de-bias the model by using some abstract ideal. A common technique used in recruiting for instance is to remove features that might be causing bias: Literally remove race, sex and related determiners from the CV on the grounds that these should not be part of any fair model, regardless of whether or not a ML model could make use of them to predict results. This might accidentally throw out data that would improve accuracy too, but can be used to achieve a decision process that observers agree should be free of contentious sources of bias.

Which features to remove or adjust is an open and subjective question. Unfortunately ML models can often determine protected characteristics from other data, and with "black box" ones such as neural networks it is not clear whether or not they are doing so. It's still an open question, and the main defence against biased models is basically awareness that it is a problem and that feeding all your data into a computer to get a statistical model does not in any way "purify" it or achieve some higher level of objective truth.

",1847,,1847,,7/13/2020 11:07,7/13/2020 11:07,,,,0,,,,CC BY-SA 4.0 22484,1,,,7/13/2020 15:24,,1,53,"

I've seen a few mentions in papers that neural network parameters can be found using the REINFORCE algorithm. It was mentioned in the context of nondifferentiable operations involving e.g. a step function, which appears in "hard attention" or weight pruning. Unfortunately, I haven't seen how to really do this. In a Markov Decision Process we have states (S), actions (A) and rewards (R), so what corresponds to what in the case of a neural net? I don't see how we can find the parameters of a neural net if the gradient is not well defined. Any code sample or explanation?

",38583,,,,,7/14/2020 14:20,How to optimize neural network parameters with REINFORCE,,1,0,,,,CC BY-SA 4.0 22486,1,,,7/13/2020 18:52,,2,1856,"

Should the training data be the same in each epoch?

If the training data is generated on the fly, for example, is there a difference between training 1000 samples with 1 epoch or training 1000 epochs with 1 sample each?

To elaborate further, samples do not need to be saved or stay in memory if they are never used again. However, if training performs best by training over the same samples repeatedly, then data would have to be stored to be reused in each epoch.

More samples is generally considered advantageous. Is there a disadvantage to never seeing the same sample twice in training?

",10957,,10957,,7/14/2020 16:26,7/14/2020 16:26,Should the training data be the same in each epoch?,,2,3,,,,CC BY-SA 4.0 22488,2,,22486,7/13/2020 21:21,,5,,"

Let's quickly get out our copies of Deep Learning by Goodfellow et al. (2016). More specifically, I'm referring to page 276.

On this page, the authors argue for a relatively small minibatch size, since there are less than linear returns for estimating the gradient when increasing the minibatch size. Returns here refer to the reduction of the standard error of the mean (gradient per weight) computed over a minibatch.
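
To make "less than linear returns" concrete: the standard error of the minibatch gradient estimate shrinks only with the square root of the minibatch size $m$,

$$\mathrm{SE} \approx \frac{\sigma}{\sqrt{m}},$$

so multiplying the minibatch size by 100 reduces the error only by a factor of 10, while the cost of a single update grows roughly 100-fold.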

So, yes. In theory, having unlimited resources, you will get the best performance when averaging the loss over all samples in your dataset. In practice, however, the larger the minibatch size, the slower the training procedure, and consequently the fewer weight updates that can be afforded in total. Conversely, in practice, the cheaper the weight updates, the quicker the training procedure can converge to a (subjectively) satisfactory result.

Finally, Goodfellow et al. also state that rapidly computing (approximate) gradients leads to much faster convergence (in terms of total computation) for most optimization algorithms than training slowly on exact gradients.

So, to summarize: If the main concern is to get to a specific level of accuracy at all, go for rather low minibatch sizes, whereas you could go up to a few hundreds (as the Goodfellow et al. state as a reasonable upper bound on page 148) if you are interested in more accurate gradients for your weight updates.

",37982,,2444,,7/14/2020 0:21,7/14/2020 0:21,,,,1,,,,CC BY-SA 4.0 22489,2,,22486,7/13/2020 23:27,,0,,"

This would be more suitable as a comment, but I don't have enough reputation points, so here's my opinion.

Optimisation algorithms like gradient descent are iterative, so it is rarely possible for them to arrive at a minimum in 1 epoch. A single epoch means that all data points have been visited once, or that a certain number of samples have been drawn from a distribution. However, more passes might be necessary.

generated on the fly

I am assuming that the data is being generated as a part of a fixed distribution. Hence multiple epochs of multiple samples is still the ideal scenario.

1000 samples, 1 epoch: not enough training.
1 sample, 1000 epochs: overfitting, or possibly still not enough training.

",31827,,2444,,7/14/2020 0:24,7/14/2020 0:24,,,,3,,,,CC BY-SA 4.0 22490,2,,13317,7/14/2020 1:22,,0,,"

Just wanted to add that the new text Deep Learning Architectures A Mathematical Approach mentions this result, but I'm not sure if it gives a proof. It does mention an improved result by Hanin (http://arxiv.org/abs/1708.02691) for which I think it does give at least a partial proof. The original paper by Hanin seems to omit some proofs as well, but the published version (https://www.mdpi.com/2227-7390/7/10/992/htm) may be more complete.

",38596,,,,,7/14/2020 1:22,,,,2,,,,CC BY-SA 4.0 22491,1,,,7/14/2020 7:21,,2,142,"

In this blog post: http://www.argmin.net/2016/04/18/bottoming-out/

Prof Recht shows two plots:

He says one of the reasons the plot below has a lower train-test gap is because that model was trained with a lower learning rate (and he also manually drops the learning rate at epoch 120).

Why would a lower learning rate reduce overfitting?

",21158,,2444,,7/14/2020 11:59,7/14/2020 11:59,Why does learning rate reduce train-test generalization gap?,,0,2,,,,CC BY-SA 4.0 22492,1,22494,,7/14/2020 8:04,,1,113,"

...Designing such a likelihood function is typically challenging; however, we observe that features like spectrogram are effective when latent variables have limited degrees of freedom. This motivates us to infer latent variables via methods like Gibbs sampling, where we focus on approximating the conditional probability of a single variable given the others.

Above is an excerpt from a paper I've been reading, and I don't understand what the author means by degrees of freedom of latent variables. Could someone please explain with an example, or add more details?


References

Shape and Material from Sound (31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA)

",35585,,2444,,7/14/2020 12:05,7/15/2020 14:14,What is meant by degrees of freedom of latent variables?,,1,0,,,,CC BY-SA 4.0 22493,2,,22319,7/14/2020 9:20,,1,,"

Maybe LabelImg is what you are looking for?

LabelImg is a graphical image annotation tool and label object bounding boxes in images.

If not, maybe you can find other options for your problem on this summary of computer vision tools.

",37120,,,,,7/14/2020 9:20,,,,0,,,,CC BY-SA 4.0 22494,2,,22492,7/14/2020 10:12,,1,,"

A good example is the degree of freedom in Student's distribution:

‌ The degrees of freedom refers to the number of independent observations in a set of data.

For example:

When estimating a mean score or a proportion from a single sample, the number of independent observations is equal to the sample size minus one.

E.g., if we have 100 observations $X_1, \ldots, X_{100}$ and we want to estimate their mean $\bar{X} = \frac{X_1 + \cdots + X_{100}}{100}$, then once the mean is known, we only need the values of 99 of the variables $X_1, \ldots, X_{100}$ to determine the remaining one. Hence, here the degree of freedom is 99.

Your referenced paragraph is a general explanation in the paper as well. However, based on the above example, the degrees of freedom in that paragraph depend on the likelihood function and the number of observations that we have from the spectrograms.

Now, as the DoF of latent variables is not high, using Gibbs sampling we will approximate some observations, and then using them we will compute the value of the latent variables.

",4446,,4446,,7/15/2020 14:14,7/15/2020 14:14,,,,3,,,,CC BY-SA 4.0 22495,2,,22484,7/14/2020 14:20,,2,,"

The term REINFORCE actually corresponds to a method of estimating gradients, it is not particular to reinforcement learning. The paper you linked doesn't appear to deal with RL at all, so the issue they're describing is not one that you should expect to find in a policy gradient application.

If you're using REINFORCE to estimate policy gradients in RL (this is the common use case), your parameters are parameterizing a policy function. The inputs are states and the outputs are actions. The issue with estimating the gradient of the parameters is as follows. In RL the objective is to maximize expected rewards. This is an expectation with respect to the policy of the cumulative reward of a trajectory. If you take the gradient of this wrt the parameters, since the expectation is wrt the policy which depends on the parameters, you can't move the gradient inside the expectation so you can't estimate the gradient from samples. Using REINFORCE however, you use the "log-derivative" trick to rewrite the objective as an expected gradient, which can be estimated from samples.
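
Concretely, the log-derivative trick rewrites the gradient of the objective as

$$\nabla_\theta \, \mathbb{E}_{\tau \sim \pi_\theta}\left[ R(\tau) \right] = \mathbb{E}_{\tau \sim \pi_\theta}\left[ R(\tau) \, \nabla_\theta \log \pi_\theta(\tau) \right],$$

where the right-hand side is an expectation of something you can compute for each sampled trajectory, so it can be estimated by averaging over rollouts.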

",37829,,,,,7/14/2020 14:20,,,,0,,,,CC BY-SA 4.0 22497,1,22498,,7/14/2020 20:11,,4,130,"

I've been looking online for a while for a source that explains these computations, but I can't find anywhere what $|A(s)|$ means. I guess $A$ is the action set, but I'm not sure about that notation:

$$\frac{\varepsilon}{|\mathcal{A}(s)|} \sum_{a} Q^{\pi}(s, a)+(1-\varepsilon) \max _{a} Q^{\pi}(s, a)$$

Here is the source of the formula.

I also want to clarify that I understand the idea behind the $\epsilon$-greedy approach and the motivation behind the on-policy methods. I just had a problem understanding this notation (and also some other minor things). The author there omitted some stuff, so I feel like there was a continuity jump, which is why I didn't get the notation, etc. I'd be more than glad if I can be pointed towards a better source where this is detailed.

",34516,,2444,,4/3/2022 14:51,4/3/2022 14:51,What does the term $|\mathcal{A}(s)|$ mean in the $\epsilon$-greedy policy?,,1,2,,,,CC BY-SA 4.0 22498,2,,22497,7/14/2020 20:35,,6,,"

This expression: $|\mathcal{A}(s)|$ means

  • $|\quad|$ the size of

  • $\mathcal{A}(s)$ the set of actions in state $s$

or more simply the number of actions allowed in the state.

This makes sense in the given formula because $\frac{\epsilon}{|\mathcal{A}(s)|}$ is then the probability of taking each exploratory action in an $\epsilon$-greedy policy. The overall expression is the expected return when following that policy, summing expected results from the exploratory and greedy action.
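
As a small illustration (a sketch assuming a dictionary-style Q-table and a list of the legal actions in the state):

import numpy as np

def epsilon_greedy(Q, s, actions, eps):
    if np.random.rand() < eps:
        return np.random.choice(actions)              # each action has probability eps / |A(s)|
    return max(actions, key=lambda a: Q[(s, a)])      # greedy action, taken with probability 1 - eps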

",1847,,1847,,7/14/2020 20:44,7/14/2020 20:44,,,,1,,,,CC BY-SA 4.0 22499,1,22508,,7/15/2020 4:49,,1,132,"

Do the test dataset feature order and inference (real world) feature order have to be the same as the training dataset? For example, if features are in the order (a,c,b,e,d) for the training dataset, does that particular order have to match for the inference and test dataset?

",38618,,,,,7/15/2020 21:33,Do the order of the features ie channel matter for a 1d convolutional network?,,1,0,,,,CC BY-SA 4.0 22501,2,,10738,7/15/2020 12:29,,0,,"

The current version of the document Code of conduct for data-driven health and care technology provides more details about all principles, including principle number 7, which I will quote here.

Consider how the introduction of AI will change relationships in health and care provision, and the implications of these changes for responsibility and liability. Use current best practice on how to explain algorithms to those taking actions based on their outputs.

When building an algorithm, be it a stand-alone product or integrated within a system, show it clearly and be transparent of the learning methodology (if any) that the algorithm is using. Undertake ethical examination of data use specific to this use-case. Achieving transparency of algorithms that have a higher potential for harm or unintended decision-making, can ensure the rights of the data subject as set out in the Data Protection Act 2018 are met, to build trust in users and enable better adoption and uptake.

Work collaboratively with partners, specify the context for the algorithm, specify potential alternative contexts and be transparent on whether the model is based on active, supervised or unsupervised learning. Show in a clear and transparent specification:

  • the functionality of the algorithm
  • the strengths and limitations of the algorithm (as far as they are known)
  • its learning methodology
  • whether it is ready for deployment or still in training
  • how the decision has been made on the acceptable use of the algorithm in the context it is being used (for example, is there a committee, evidence or equivalent that has contributed to this decision?)
  • the potential resource implications

This specification and transparency in development will build trust in incorporating machine-led decision-making into clinical care.

",2444,,,,,7/15/2020 12:29,,,,0,,,,CC BY-SA 4.0 22502,1,,,7/15/2020 15:21,,2,77,"

While reading a paper about Q-learning in network energy consumption, I came across the section on convergence analysis. Does anyone know what convergence analysis is, and why is convergence analysis needed in reinforcement learning?

",38632,,2444,,7/15/2020 16:14,7/16/2020 1:31,"What is convergence analysis, and why is it needed in reinforcement learning?",,1,1,,,,CC BY-SA 4.0 22503,1,,,7/15/2020 16:04,,1,59,"

I'm in the process of trying to learn more about RL by shadowing a course offered collaboratively by UCL and DeepMind that has been made available to the public. I'm most of the way through the course, which for auditors consists of a Youtube playlist, copies of the Jupyter notebooks used for homework assigments (thanks to some former students making them public on Github), and reading through Sutton and Barto's wonderful book Reinforcement Learning: An Introduction (2nd edition).

I've gone a little more than half of the book and corresponding course material at this point, thankfully with the aid of public solutions for the homework assignments and textbook exercises which have allowed me to see which parts of my own work that I've done incorrectly. Unfortunately, I've been unable to find such a resource for the last homework assignment offered and so I'm hoping one of the many capable people here might be able to explain parts of the following question to me.

We are given a simple Markov reward process consisting of two states and with a reward of zero everywhere. When we are in state $s_{0}$, we always transition to $s_{1}$. If we are in state $s_{1}$, there is a probability $p$ (which is set to 0.1 by default) of terminating, after which the next episode starts in $s_{0}$ again. With a probability of $1 - p$, we transition from $s_{1}$ back to itself again. The discount is $\gamma = 1$ on non-terminal steps.

Instead of a tabular representation, consider a single feature $\phi$, which takes the values $\phi(s_0) = 1$ and $\phi(s_1) = 4$. Now consider using linear function approximation, where we learn a value $\theta$ such that $v_{\theta}(s) = \theta \times \phi(s) \approx v(s)$, where $v(s)$ is the true value of state $s$.

Suppose $\theta_{0} = 1$, and suppose we update this parameter with TD(0) with a step size of $\alpha = 0.1$. What is the expected value of $\mathbb{E}[ \theta_T ]$ if we step through the MRP until it terminates after the first episode, as a function of $p$? (Note that $T$ is random.)

My real point of confusion surrounds $\theta_{0}$ being given as 1. My understanding was that the dimensionality of the parameter vector should be equal to that of the feature vector, which I've understood as being (1, 4) and thus two-dimensional. I also don't grok the idea of evaluating $\mathbb{E}[ \theta_T ]$ should $\theta$ be a scalar (as an aside I attempted to simply brute-force simulate the first episode using a scalar parameter of 1 and, unless I made errors, found the value of $\theta$ to not depend on $p$ whatsoever). If $\theta$ is two-dimensional, would that be represented as (1, 0), (0, 1), or (1, 1)?

Neither the 1-d or 2-d options make intuitive sense to me so I hope there's something clear and obvious that someone might be able to point out. For more context or should someone just be interested in the assignment, here is a link to the Jupyter notebook: https://github.com/chandu-97/ADL_RL/blob/master/RL_cw4_questions.ipynb

",38633,,38633,,7/16/2020 12:04,7/16/2020 12:04,Correct dimensionality of parameter vector for solving an MRP with linear function approximation?,,0,0,,,,CC BY-SA 4.0 22504,1,,,7/15/2020 17:03,,5,2275,"

I already know deep RL, but to learn it deeply I want to know why we need 2 networks in deep RL. What does the target network do? I know there is a lot of mathematics behind this, but I want to understand deep Q-learning deeply, because I am about to make some changes in the deep Q-learning algorithm (i.e. invent a new one). Can you help me understand intuitively what happens during the execution of a deep Q-learning algorithm?

",36107,,2444,,7/15/2020 22:32,7/15/2020 22:33,Why do we need target network in deep Q learning?,,1,2,,7/16/2020 15:46,,CC BY-SA 4.0 22508,2,,22499,7/15/2020 21:25,,1,,"

Generally, order matters. A (trained) Neural Network (NN) is just a mathematical function trained on taking some given input and producing the corresponding output. So, if you train a certain node on producing large output if (and only if) an animal is present in a picture (for example), but later you give it the numeric evidence for a car being present in the image, it will still produce large output, indicating an animal being present in the image. This is simply because a node doesn't know what it is receiving or supposed to detect. It just follows its standard mathematical procedure.

So, if you train your network on one kind of input data, you must provide the same kind of input data during testing (or at least you cannot simply exchange inputs or outputs of certain nodes without manually correcting for that in the remainder of the network).

In the most simple case, consider whether you could simply swap the inputs to this simple function: $f(x,y) = x^2 + y$. If you swap the inputs, the output will be different. Exactly the same applies to NNs.

Addition: I think this post explains and especially illustrates the intuition nicely.

",37982,,37982,,7/15/2020 21:33,7/15/2020 21:33,,,,0,,,,CC BY-SA 4.0 22509,2,,22504,7/15/2020 21:53,,6,,"

In the DQN that was presented in the original paper, the update target for the Q-Network is $\left(r_t + \max_aQ(s_{t+1},a;\theta^-) - Q(s_t,a_t; \theta)\right)^2$, where $\theta^-$ is some old version of the parameters that gets updated every $C$ updates, and the Q-Network with these parameters is the target network.

If you didn't use this target network, i.e. if your update target was $\left(r_t + \max_aQ(s_{t+1},a;\theta) - Q(s_t,a_t; \theta)\right)^2$, then learning would become unstable because the target, $r_t + \max_aQ(s_{t+1},a;\theta)$, and the prediction, $Q(s_t,a_t; \theta)$, are not independent, as they both rely on $\theta$.

A nice analogy I saw once was that it is akin to a dog chasing its own tail - it will never catch it because the target is non-stationary; this non-stationarity is exactly what the dependence between the target and the prediction causes.
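
To make this concrete, below is a minimal PyTorch-style sketch (not taken from the paper; the network architecture, learning rate and sync period $C$ are placeholder assumptions) of how the target network is kept as a frozen copy of the online network and only synchronised every $C$ updates:

import copy
import torch
import torch.nn as nn

gamma = 0.99   # discount factor (assumed)
C = 1000       # number of updates between target-network syncs (assumed)

# Online network (theta); a small MLP just for illustration.
q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
# Target network (theta^-) starts as an exact copy and is frozen between syncs.
target_net = copy.deepcopy(q_net)

optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def dqn_update(step, s, a, r, s_next, done):
    """One gradient step on (r + gamma * max_a' Q(s', a'; theta^-) - Q(s, a; theta))^2."""
    with torch.no_grad():                 # the target receives no gradient
        target = r + (1 - done) * gamma * target_net(s_next).max(dim=1).values
    prediction = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(prediction, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % C == 0:                     # theta^- <- theta, keeping the target quasi-stationary
        target_net.load_state_dict(q_net.state_dict())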

",36821,,2444,,7/15/2020 22:33,7/15/2020 22:33,,,,1,,,,CC BY-SA 4.0 22510,1,,,7/15/2020 22:02,,1,123,"

I have an image dataset of about 400 images. 70% of these data points were used for training, 15% for validation, and 15% for testing. I am using the 70% to train a CNN-based binary classifier. I augmented the training data to around 8000 images. That makes my test set really small in comparison. Is that OK, and what is considered a decent number of images for a test set?

",38639,,2444,,7/15/2020 22:31,7/15/2020 22:31,What is the amount of test data needed to evaluate a CNN?,,0,2,,,,CC BY-SA 4.0 22511,2,,22502,7/16/2020 1:31,,3,,"

Convergence analysis is about proving that your policy and/or value function converge to some desired value, which is usually the fixed-point of an operator or an extremum. So it essentially proves that theoretically the algorithm achieves the desired function. Without convergence, we have no guarantees that the value function will be accurate or the policy will be any good, so in other words the proposed RL algorithm can completely fail at serving its purpose even in simple cases.

",37829,,,,,7/16/2020 1:31,,,,0,,,,CC BY-SA 4.0 22513,1,,,7/16/2020 2:47,,2,66,"

During a machine learning course I took, I learnt about the K-means algorithm. Is it possible to use the principles of K-means within a neural network?

",32636,,31949,,7/16/2020 7:03,7/16/2020 7:03,Would it be possible to implement the principals of the K means clustering algorithm in a Neural Network,,0,4,,,,CC BY-SA 4.0 22514,1,,,7/16/2020 7:40,,-1,307,"

Do we have to use the IOB format on labels in the NER dataset (such as B-PERSON, I-PERSON, etc.) instead of using the usual format (PERSON, ORGANIZATION, etc.)? If so, why? How will it affect the performance of the model?

",36258,,,,,1/26/2022 19:05,"Do we have to use the IOB format on labels in the NER dataset? If so, why?",,1,0,,,,CC BY-SA 4.0 22515,2,,22460,7/16/2020 8:04,,0,,"

You may try to adjust the learning rate first, as the learning rate has a great effect on how the weights and the bias values change.

See if the results have changed after adjusting the learning rate.

",38650,,,,,7/16/2020 8:04,,,,3,,,,CC BY-SA 4.0 22516,1,22519,,7/16/2020 8:54,,0,37,"

I would like to train a model that serializes a table of nutrition facts into its values. The tables can vary in form and colour, but always contain the same set of keys (e.g. carbs, fats). Examples for these tables can be found here.

The end goal is to be able to take a picture of such a table and have its values added to a database.

My initial idea was to train a model on finding subpictures of the individual key/value pairs and then using OCR to find out which value it actually is.

As I am relatively new to ML, I would love to have some ideas about how one could try to build this, so I can do further research on it.

Thanks

",38651,,,,,7/16/2020 12:30,Detect data in tables of roughly the same structure,,1,0,,,,CC BY-SA 4.0 22518,1,22520,,7/16/2020 11:33,,1,515,"

I'm looking for intuition in simple words but also some simple insights (I don't know if the latter is possible). Can anybody shed some light on the Turing test?

",30725,,2444,,7/16/2020 16:52,7/16/2020 17:02,What is the Turing test?,,2,0,,,,CC BY-SA 4.0 22519,2,,22516,7/16/2020 12:30,,0,,"

Assuming all of the tables will be oriented in similar ways (label and value running horizontally) and that all writing will be printed rather than handwritten, one solution method would be to use an image segmentation method such as edge detection to segregate these horizontal (label, value) pairs and then use a library like Tesseract for OCR.

There are many types of image segmentation methods that may all have value, but if my assumption holds regarding the neat, structured nature of the tables, then I think simple edge detection methods could be sufficient.
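
As a rough sketch of that pipeline (assuming OpenCV and pytesseract are installed; the Otsu thresholding and the row-splitting heuristic below are illustrative choices, not tuned for real nutrition labels):

import cv2
import pytesseract

def read_table_rows(image_path):
    """Split a printed table into horizontal strips and OCR each one."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Binarise so that ink is white on black.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Rows whose pixel sum is zero act as separators between (label, value) lines.
    row_sums = binary.sum(axis=1)
    rows, start = [], None
    for y, s in enumerate(row_sums):
        if s > 0 and start is None:
            start = y
        elif s == 0 and start is not None:
            rows.append((start, y))
            start = None
    if start is not None:
        rows.append((start, len(row_sums)))

    results = []
    for top, bottom in rows:
        text = pytesseract.image_to_string(gray[top:bottom, :]).strip()
        if text:
            results.append(text)
    return results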

",38633,,,,,7/16/2020 12:30,,,,2,,,,CC BY-SA 4.0 22520,2,,22518,7/16/2020 12:56,,3,,"

The Turing test is a test proposed by Alan Turing (one of the founders of computer science and artificial intelligence), described in section 1 of paper Computing Machinery and Intelligence (1950), to answer the question

Can machines think?

More precisely, the Turing test was originally framed as an interactive quiz (denoted as the imitation game by Turing) where a human interrogator $C$ asks multiple questions to two entities, $A$ (a computer) and $B$ (a human), which stay in rooms separate from the interrogator's, so the interrogator cannot see them, in order to figure out which one is $A$ (the computer) and which one is $B$ (the human). $A$ and $B$ can only communicate in written form or any form that avoids them being easily recognized by $C$. The goal of the computer is to fool the interrogator and make him/her believe that it is a human, and the goal of $B$ is to somehow help the interrogator and make him/her believe that he/she is the actual human.

If the computer is able to fool the interrogator and make him/her believe that it is a human, then that would be an indication that machines can think. However, note that even Turing called this game the imitation game, so Turing was aware of the fact that this game would only really show that a machine can imitate a human (unless he was using the term "imitation" differently than its current meaning).

Nowadays, there are different variations of the Turing test and some people use the term Turing test to refer to any test that attempts to tell humans and computers apart. For example, some people consider the CAPTCHA test a Turing test. In fact, CAPTCHA stands for "Completely Automated Public Turing Test To Tell Computers and Humans Apart".

The Turing test also has different interpretations and meanings. Some people think that the Turing test is sufficient to test that a machine can actually think and possesses consciousness, other people think that this only tests human-like intelligence (and there could be other intelligences) and some people (like me) think that this test is limited and only tests the conversational skills (and maybe other properties too) of the machine. Even Turing attempted to address these issues in the same paper (section 2), where he discusses some advantages and disadvantages of his imitation game. In any case, we can all agree that, if machines (in particular, programs like Siri, Google Home, Cortana, or Alexa) were always able to pass the Turing test, they would be a lot more useful, interesting and entertaining than they are now.

",2444,,2444,,7/16/2020 16:57,7/16/2020 16:57,,,,0,,,,CC BY-SA 4.0 22521,1,,,7/16/2020 13:41,,4,118,"

I'm doing some introductory research on classical (stochastic) MABs. However, I'm a little confused about the common notation (e.g. in the popular paper of Auer (2002) or Bubeck and Cesa-Bianchi (2012)).

As in the latter study, let us consider an MAB with a finite number of arms $i\in\{1,...,K\}$, where an agent chooses at every timestep $t=1,...,n$ an arm $I_t$, which generates a reward $X_{I_t,t}$ according to a distribution $v_{I_t}$.

In my understanding, each arm has an inherent distribution, which is unknown to the agent. Therefore, I'm wondering why the notation $v_{I_t}$ is used instead of simply using $v_{i}$? Isn't the distribution independent of the time the arm $i$ was chosen?

Furthermore, I ask myself: Why not simply use $X_i$ instead of $X_{I_t,t}$ (in terms of rewards). Is it because the chosen arm at step $t$ (namely $I_t$) is a random variable and $X$ depends on it? If I am right, why is $t$ used twice in the index (namely $I_t,t$)? Shouldn't $X_{I_t}$ be sufficient, since $X_{I_t,m}$ and $X_{I_t,n}$ are drawn from the same distribution?

",38657,,2444,,12/22/2020 14:57,12/22/2020 14:57,"Why do we use $X_{I_t,t}$ and $v_{I_t}$ to denote the reward received and the at time step $t$ and the distribution of the chosen arm $I_t$?",,2,0,,,,CC BY-SA 4.0 22522,2,,22518,7/16/2020 16:29,,1,,"

According to Wikipedia

The "standard interpretation" of the Turing test, in which player C, the interrogator, is given the task of trying to determine which player – A or B – is a computer and which is a human. The interrogator is limited to using the responses to written questions to make the determination.

",32076,,2444,,7/16/2020 17:02,7/16/2020 17:02,,,,1,,,,CC BY-SA 4.0 22524,2,,22521,7/16/2020 17:03,,2,,"

Isn't the distribution independent of the time the arm $i$ was chosen?

Yes, but you don't know which arm was chosen at time $t$, that is what $I_t$ represents. $v_i$ would represent the $i$th arm's distribution, whereas you want the distribution of the arm that was chosen at time $t$, which is $v_{I_t}$.

$X_{I_t,t}$ is used to represent the arm you chose at time $t$ and the time you chose it. Imagine if we chose the following arms at respective time steps {1,2,5,2,4}, then the reward you get at time 3 (assuming in my example time started at 1) would be $X_{5,3}$. You need this notation because every time you pull arm $i$ you get a different reward, because the reward is a random variable (unless the assumption was a deterministic reward but that would not be interesting).

It has been a while since I read the papers but I assume that the distributions of the arms are stationary, however this notation is more general and would allow for non-stationary distributions.

",36821,,36821,,7/16/2020 21:14,7/16/2020 21:14,,,,0,,,,CC BY-SA 4.0 22525,1,,,7/16/2020 17:14,,1,244,"

I'm confused as to the purpose of training a neural network (NN) for reinforcement learning (RL) tasks such as Gridworld. In RL tasks, namely q-learning, we have a q-learning update rule, which is designed to take some state and action and compute the value of that state-action pair.

Performing this process several times will eventually produce a table of states and what action will likely lead to a high reward.

In RL examples, I've seen them train a neural network to output q-values and a loss function like MSE to compute the loss between the q-learning update rule q-value and the NN's q value.

So:

(a) Q-learning update rule-> outputs target Q-values

(b) NN -> outputs Q values

MSE to compute the loss between (a) and (b)

So, given that we already know what the target Q-value is from (a), why do we need to train a NN?

",26159,,,,,7/16/2020 17:45,What is the purpose of a Neural Network in Reinforcement Learning when we have a Q-learning update rule?,,1,2,,,,CC BY-SA 4.0 22526,2,,22525,7/16/2020 17:45,,3,,"

I don't think people generally do use neural nets for grid world. As long as the state and action spaces are small enough, you should be able to store Q values in a table like you suggested. Neural nets come in handy when the state space is very large (or even continuous), so you can't afford to store a table of Q values. Also, neural nets have the ability to generalize across "similar" states -- for instance, if two states are very similar the neural net would likely produce similar values for those states, whereas a tabular implementation might not have seen enough data to accurately estimate the Q values of both.

",37829,,,,,7/16/2020 17:45,,,,3,,,,CC BY-SA 4.0 22528,2,,22521,7/16/2020 21:08,,2,,"

Isn't the distribution independent of the time the arm $i$ was chosen?

Each one of the two references you describe assumes the context of the random bandit problem proposed by Robbins (1952) where the underlying reward distributions of each bandit are fixed. Therefore, yes, the underlying distributions are independent of the current time.

Is it because the chosen arm at step $t$ (namely $I_t$) is a random variable and $X$ depends on it?

The reward is a random variable which is dependent on the chosen arm at time $t$. Since each arm has an underlying reward distribution, the index $I_t$ is a random variable that designates the specific arm we are pulling, and the index $t$ denotes the time step when we pull the arm.

Why is $t$ used twice in the index (namely $I_t,t$)?

Note that $t$ is used twice, but the observed value of $I_t$ does not encode any information about the time it was chosen. For example, if $I_m = 5$, then $X_{I_m,\ m} = X_{5,\ m}$. If we drop the second subscript, then we have no way to distinguish $X_{5,\ m}$ notationally from $X_{5,\ n}$ (where $I_{n\ \neq\ m} = 5$). Two distinct rewards $X_{5,\ m}$ and $X_{5,\ n}$ would map to the same reward $X_5$ notationally. At first glance, this introduces numerous potential notational problems, such as losing the count of the number of times each arm was pulled.

Why not simply use $X_i$ instead of $X_{I_t,\ t}$ (in terms of rewards)? Shouldn't $X_{I_t}$ be sufficient, since $X_{I_t,\ m}$ and $X_{I_t,\ n}$ are drawn from the same distribution?

Admittedly, there are probably ways to get around the extra subscript for certain algorithms. For example, maybe you are using an algorithm where past rewards from each arm are averaged to yield an estimate of each arm's expected reward (see Section 2.2 of Sutton and Barto). This may require a collection of lists that store the past rewards for each arm, or it may require the count of each arm being pulled and an associated current estimate of the expected reward (see Section 2.4 of Sutton and Barto). However, these methods introduce more parameters that would be unnecessary had we initially included a second subscript for time in our notation (e.g. the counts of each arm pulled, the current estimate of expected reward for each arm, the labels of each reward list corresponding to an arm, etc.). Most of the fundamental equations regarding multi-armed bandits that I have seen are either heavily or solely dependent on the reward random variable (e.g. the definition of regret). Keeping the time index in a single random variable promotes concision and consistency among various sources by preventing the need for delegating the time index to another random variable, data structure, etc., even though specific implementations or contexts may profit from other notations.
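
For instance, here is a minimal sketch (in the spirit of Section 2.4 of Sutton and Barto; the arm count, the random arm choice and the Gaussian rewards are assumptions made just for illustration) of the count-based incremental estimate mentioned above:

import random

K = 5                   # number of arms (assumed)
counts = [0] * K        # N(i): how many times arm i has been pulled
estimates = [0.0] * K   # Q(i): current estimate of arm i's expected reward

def update(arm, reward):
    """Incremental sample average: Q <- Q + (R - Q) / N."""
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

# Toy stationary bandit: pull random arms for a while (no exploration strategy here).
true_means = [random.random() for _ in range(K)]
for t in range(1000):
    arm = random.randrange(K)                    # I_t
    reward = random.gauss(true_means[arm], 1.0)  # X_{I_t, t}
    update(arm, reward)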

The dual-subscript notation also has the benefit of generalizing to other contexts aside from the one posed by Robbins (1952). These include nonstationary reward distributions (see Section 2.5 of Sutton and Barto), time discounting, and families of alternative bandit processes, among others (see Sections 2.2-2.4 of this book for info on the last two extensions).

",37607,,,,,7/16/2020 21:08,,,,0,,,,CC BY-SA 4.0 22529,1,,,7/17/2020 1:08,,1,65,"

I know the random forest is a bagging technique. But what if my random forest overfits on a dataset, so I reduce the depth of the decision trees, and now it is underfitting? In this scenario, can I take the under-fitted random forest with little depth and try to boost it?

",28048,,2444,,7/17/2020 12:23,1/27/2023 22:03,Can I apply AdaBoost on a random forest?,,1,2,,,,CC BY-SA 4.0 22530,1,,,7/17/2020 1:17,,2,1793,"

Why is the expected return in Reinforcement Learning (RL) computed as a sum of cumulative rewards?

Would it not make more sense to compute $\mathbb{E}(R \mid s, a)$ (the expected return for taking action $a$ in the given state $s$) as the average of all rewards recorded for being in state $s$ and taking action $a$?

In many examples, I've seen the value of a state computed as the expected return computed as the cumulative sum of rewards multiplied by a discount factor:

$V^π(s)$ = $\mathbb{E}(R \mid s)$ (the value of state s, if we follow policy π is equal to the expected return given state s)

So, $V^\pi(s) = \mathbb{E}(r_{t+1}+ \gamma r_{t+2}+ \gamma^2 r_{t+3} + \dots \mid s) = \mathbb{E}(\sum_k \gamma^k r_{t+k+1}\mid s)$

as $R = r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \dots$

Would it not make more sense to compute the value of a state as the following:

$V^\pi(s) = \mathbb{E}(r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \dots \mid s)/k = \mathbb{E}(\sum_k \gamma^k r_{t+k+1}\mid s)/k$, where $k$ is the number of elements in the sum, thus giving us the average reward for being in state $s$.

Reference for cumulative sum example: https://joshgreaves.com/reinforcement-learning/understanding-rl-the-bellman-equations/

",26159,,26159,,7/17/2020 12:18,7/17/2020 16:46,Why is the expected return in Reinforcement Learning (RL) computed as a sum of cumulative rewards?,,1,6,,,,CC BY-SA 4.0 22531,1,,,7/17/2020 3:17,,1,145,"

This happened to my neural network: when I use a learning rate of < 0.2, everything works fine, but when I try something above 0.4, I start getting "nan" errors because the output of my network keeps increasing.

From what I understand, what happens is that if I choose a learning rate that is too large, I overshoot the local minimum. But still, I am getting somewhere, and from there I'm moving in the correct direction. At worst, my output should be random. I don't understand what scenario causes my output and error to approach infinity every time I run my NN with a learning rate that is too large (and it's not even that large).

How does the red line go to infinity ever? I kind of understand it could happen if we choose a crazy high learning rate, but if the NN works for 0.2 and doesn't for 0.4, I don't understand that

",38668,,,,,7/17/2020 3:17,How can a learning rate that is too large cause the output of the network (and the error) to go to infinity?,,0,1,,,,CC BY-SA 4.0 22532,1,,,7/17/2020 5:47,,1,58,"

For example, in PointNet, you see the 1D convolutions with the following channels 64 -> 128 -> 1024. Why not e.g. 64 -> 1024 -> 1024 or 1024 -> 1024 -> 1024?

",21158,,2444,,7/17/2020 12:03,7/17/2020 12:03,Why does the number of channels in the PointNet increase as we go deeper?,<1d-convolution>,0,2,,,,CC BY-SA 4.0 22533,1,22540,,7/17/2020 5:51,,0,150,"

There has been a lot of negative news about Artificial Intelligence. Most people were first exposed to the idea of artificial intelligence from Hollywood movies, long before they ever started seeing it in their day-to-day lives. This means that many people misunderstand the technology. When they think about common examples that they’ve seen in movies or television shows, they may not realize that the killer robots they’ve seen were created to sell emotional storylines and drive the entertainment industry, rather than to reflect the actual state of AI technology.

There are few questions on our SE on how AI impacts/harms humankind. For example, How could artificial intelligence harm us? and Could artificial general intelligence harm humanity?

However, now, I'm looking for the positive impacts of AI on humans. How could AI help humankind?

",30725,,2444,,7/17/2020 11:40,12/14/2020 13:02,How is AI helping humanity?,,2,1,,,,CC BY-SA 4.0 22534,2,,22533,7/17/2020 7:08,,0,,"

We've already seen significant progress in fields that we could not even come close to prior to the explosion in AI research. For example, the automated identification of cancerous tumours in lung tissues could save countless lives, as a computer never tires and has no bias. Incredible advances in speech synthesis can allow people who lost or never had a voice to have their own human-sounding voice. Personally, I also predict soon we will be able to synthesise music to an individual's tastes.

Advances in rapid image processing, thanks to the polynomial complexity of machine learning techniques, allows for advanced guidance systems and autonomous cars that are only improving. Ranking algorithms for listings allow for finely tuned results for a user's search. Unsupervised techniques can help us separate suspicious activity in banking records to catch fraudulent transactions.

There is a very, very long list here and this only scratches the surface. AI is already benefiting us enormously, to the point where very soon, if not already, I would say we will become so dependent on learning techniques that, if they were outlawed, entire systems may collapse.

",26726,,2444,,7/17/2020 11:53,7/17/2020 11:53,,,,2,,,,CC BY-SA 4.0 22535,1,,,7/17/2020 7:09,,2,59,"

I'm using Experience Replay based on the original Prioritized Experience Replay (PER) paper. In the paper, the authors show roughly an order of magnitude increase in data efficiency from prioritized sampling. There is space for further improvement, since PER remembers all experiences, regardless of their importance.

I'd like to extend PER so it remembers selectively based on some metric, which would determine whether the experience is worth remembering or not. The time of sampling and re-adjusting the importance of the experiences increases with the number of samples remembered, so being smart about remembering should at the very least speed-up the replay, and hopefully also show some increase in data efficiency.

Important design constraints for this remembering metric:

  • compatibility with Q-Learning, such as DQN
  • computation time, to speed up the process of learning and not trade off one type of computation for another
  • simplicity

My questions:

  1. What considerations would you make for designing such a metric?
  2. Do you know about any articles addressing the prioritized experience memorization for Q-Learning?
",38671,,38671,,7/17/2020 7:51,7/17/2020 7:51,Prioritised Remembering in Experience Replay (Q-Learning),,0,2,,,,CC BY-SA 4.0 22536,1,,,7/17/2020 7:28,,0,171,"

I'm trying to train the most popular models (MobileNet, VGG16, ResNet, ...) on the CIFAR-10 dataset, but the accuracy can't get above 9.9%. I want to do that with the complete model (include_top=True) and without the weights from ImageNet.

I have tried increasing/decreasing the dropout and the learning rate, and I changed the optimizers, but I always get the same accuracy.

with weights='imagenet' and include_top=False I achieve an accuracy of over 90% but I want to train the model without those parameters.

Is there any solution to this? Is it possible that the layers of those models are not set to be trainable?

from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import MobileNet
from tensorflow.keras.optimizers import SGD, Adam

train_generator = ImageDataGenerator(
                                    rotation_range=2,
                                    horizontal_flip=True,
                                    zoom_range=.1)
val_generator = ImageDataGenerator(
                                    rotation_range=2,
                                    horizontal_flip=True,
                                    zoom_range=.1)

train_generator.fit(x_train)
val_generator.fit(x_val)

model_1 = MobileNet(include_top=True, weights=None, input_shape=(32, 32, 3), classes=y_train.shape[1])

batch_size = 100
epochs = 50

learn_rate = .001

sgd = SGD(lr=learn_rate, momentum=.9, nesterov=False)
adam = Adam(lr=learn_rate, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)

model_1.compile(optimizer=adam, loss='sparse_categorical_crossentropy', metrics=['accuracy'])

model_1.fit_generator(train_generator.flow(x_train, y_train, batch_size=batch_size),
                      epochs=epochs,
                      steps_per_epoch=x_train.shape[0]//batch_size,
                      validation_data=val_generator.flow(x_val, y_val, batch_size=batch_size),
                      validation_steps=250,
                      verbose=1)

Results of MobileNet:

    Epoch 1/50
350/350 [==============================] - 17s 50ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1021
Epoch 2/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1030
Epoch 3/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1016
Epoch 4/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1014
Epoch 5/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1040
Epoch 6/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1009
Epoch 7/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1035
Epoch 8/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1013
Epoch 9/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1029
Epoch 10/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1023
Epoch 11/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1017
Epoch 12/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1020
Epoch 13/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1020
Epoch 14/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1033
Epoch 15/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1011
Epoch 16/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1016
Epoch 17/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1024
Epoch 18/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1024
Epoch 19/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1041
Epoch 20/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1010
Epoch 21/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1022
Epoch 22/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1014
Epoch 23/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1035
Epoch 24/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1032
Epoch 25/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1012
Epoch 26/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1018
Epoch 27/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1022
Epoch 28/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1031
Epoch 29/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1022
Epoch 30/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1015
Epoch 31/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1028
Epoch 32/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1015
Epoch 33/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1030
Epoch 34/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1003
Epoch 35/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1044
Epoch 36/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1012
Epoch 37/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1022
Epoch 38/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1021
Epoch 39/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1028
Epoch 40/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1012
Epoch 41/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1035
Epoch 42/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1009
Epoch 43/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1034
Epoch 44/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1024
Epoch 45/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1016
Epoch 46/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1028
Epoch 47/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1016
Epoch 48/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1033
Epoch 49/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1018
Epoch 50/50
350/350 [==============================] - 17s 49ms/step - loss: nan - accuracy: 0.0990 - val_loss: nan - val_accuracy: 0.1023

<tensorflow.python.keras.callbacks.History at 0x7fa30b188e48>
",38282,,38282,,7/17/2020 7:46,7/17/2020 7:46,"CIFAR-10 can't get above 10% Accuracy with MobileNet, VGG16 and ResNet on Keras",,0,5,,,,CC BY-SA 4.0 22537,1,,,7/17/2020 7:59,,1,49,"

What is the reason for training a Neural Network to estimate a task's success (e.g. robotic grasp planning) using a simulator that is based on analytic grasp quality metrics?

Isn't a perfectly trained NN going to essentially output the same probability of task success as the analytic grasp quality metrics that were used to train it? What benefits does this NN have with respect to just directly using said analytic grasp quality metrics to determine whether a certain grasp candidate is good or bad? Analytic metrics are by definition deterministic, so I fail to understand the reason for using them to train a NN that will ultimately output the same result.

This approach is used in high-caliber works like the Dex-Net2 from Berkeley Automation. I am rather new to the field and the only reason I can think of is computational efficiency in production?

",38673,,38673,,7/18/2020 14:15,7/18/2020 14:15,Advantages of training Neural Networks based on analytic success criteria,,0,2,,,,CC BY-SA 4.0 22540,2,,22533,7/17/2020 8:23,,1,,"

For good or bad, AI is the next step in automation. The impact which is already visible, and trends show will continue in the future, is the eradication of repetitive and body-straining labor.

Hopefully, the transformation will be gradual enough for the global labor market to re-adjust, otherwise, we'll face a problem of growing unemployment. It seems to me that we've become aware enough to foresee bad outcomes of our inventions, hence in almost every dimension affected by AI, a plausible and either positive or negative future can be presented, depending on the sentiment of the storyteller.

Regardless of what different experts and sci-fi writers tell us about the future, actually predicting it is a futile endeavour, considering that predictions made for a dynamic system, even when we have a lot of data and good models (as with the weather), become unreliable just a few weeks ahead.

",38671,,2444,,7/17/2020 11:56,7/17/2020 11:56,,,,0,,,,CC BY-SA 4.0 22541,2,,11949,7/17/2020 9:24,,0,,"

One could imagine using a segmentation network as a first step of processing. Then feeding an area corresponding to a bounding box of each segmented object to the classifier.

Potentially that could yield an increase in performance in classifying objects in an image, but not without a cost in training time, since suddenly there are two networks to train instead of just one.

",38671,,,,,7/17/2020 9:24,,,,0,,,,CC BY-SA 4.0 22544,2,,22072,7/17/2020 12:14,,0,,"

The paper A Multi-modal Approach to Fine-grained Opinion Mining on Video Reviews could be useful for your purposes, although I have read only the abstract

Despite the recent advances in opinion mining for written reviews, few works have tackled the problem on other sources of reviews. In light of this issue, we propose a multimodal approach for mining fine-grained opinions from video reviews that is able to determine the aspects of the item under review that are being discussed and the sentiment orientation towards them. Our approach works at the sentence level without the need for time annotations and uses features derived from the audio, video and language transcriptions of its contents. We evaluate our approach on two datasets and show that leveraging the video and audio modalities consistently provides increased performance over text-only baselines, providing evidence these extra modalities are key in better understanding video reviews.

and looked at some of the diagrams and figures in the paper.

",2444,,,,,7/17/2020 12:14,,,,0,,,,CC BY-SA 4.0 22546,2,,22530,7/17/2020 12:50,,2,,"

Why is the expected return in Reinforcement Learning (RL) computed as a sum of cumulative rewards?

That is the definition of return.

In fact when applying a discount factor this should formally be called discounted return, and not simply "return". Usually the same symbol is used for both ($R$ in your case, $G$ in e.g. Sutton & Barto).

There are also other variations, such as truncated return (sum up to a given time horizon). They all share the feature that a return is a sum of reward values. You cannot really change that and keep the formal term "return", that's how it has been defined.

You can however define the value function to be something other than the expected return. Rather than looking for alternative definitions of return as your title suggests, you could be looking for alternative metrics to use as value functions.

You do go on to ask about computing "the value of a state" without mentioning the word "return", but it is not 100% clear whether you are aware that the way to resolve this is to not use return, but something else.

Would it not make more sense to compute the value of a state as the following: $V^\pi(s) = \mathbb{E}(r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \dots \mid s)/k = \mathbb{E}(\sum_k \gamma^k r_{t+k+1}\mid s)/k$, where $k$ is the number of elements in the sum, thus giving us the average reward for being in state $s$.

Your example would nearly always result in zero for long-running or non-episodic problems, as you are summing a decreasing geometric series possibly up to very large $k$, then dividing by the maximum $k$. Notation-wise, you are also using $k$ both as an iterator and as the maximum value of the same iterator; that would need fixing.

However, this is very close to a real value metric used in reinforcement learning, called the average reward setting.

The expected average reward value function for a non-episodic problem is typically given by

$$V^\pi(s) = \mathbb{E}[\lim_{h \to \infty}\frac{1}{h}\sum_{k=0}^{h}r_{t+k+1}|s_t = s]$$

Note there is no discount factor, it is not usually possible to combine a discount factor with the average reward setting.

Sutton & Barto point out in Reinforcement Learning: An Introduction chapter 10, section 10.4, that when using function approximation on continuing tasks, then a discount factor is not a useful part of the setting. Instead average reward is a more natural approach. It is also not so different, and quite easy to modify the Bellman equations and update rules. However, many DQN implementations still use discounted return to solve continuing tasks. That is because with high enough discount factor $\gamma$, e.g. $0.99$ or $0.999$, then the end result is likely to be the same optimal solution - the discount factor has moved from being part of the problem formulation to being a solution hyperparameter.

",1847,,1847,,7/17/2020 16:46,7/17/2020 16:46,,,,0,,,,CC BY-SA 4.0 22547,1,,,7/17/2020 14:02,,2,225,"

I'm trying to find out what kind of policy improvement and policy evaluation AlphaGo, AlphaGo Zero, and AlphaZero are using. By looking into their respective papers and SI, I can conclude that it is a kind of policy gradient actor-critic approach, where the policy is evaluated by a critic and improved by an actor. Yet I still can't fit it to any of the known policy gradient algorithms.

",31324,,,,,7/17/2020 14:02,"What kind of policy evaluation and policy improvement AlphaGo, AlphaGo Zero and AlphaZero are using",,0,0,,,,CC BY-SA 4.0 22548,1,22549,,7/17/2020 14:55,,3,1732,"

I am working on OpenAI's "MountainCar-v0" environment. In this environment, each step that an agent takes returns (among other values) the variable named done of type boolean. The variable gets a True value when the episode ends. However, I am not sure how each episode ends. My initial understanding was that an episode should end when the car reaches the flagpost. However, that is not the case.

What are the states/actions under which the episode terminates in this environment?

",36710,,36710,,7/18/2020 17:09,7/18/2020 17:09,"How does an episode end in OpenAI Gym's ""MountainCar-v0"" environment?",,2,0,,3/16/2021 13:42,,CC BY-SA 4.0 22549,2,,22548,7/17/2020 15:08,,3,,"

The episode ends when either the car reaches the goal, or a maximum number of timesteps has passed. By default the episode will terminate after 200 steps. You can customize this with the _max_episode_steps attribute of the environment.
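
For example (a small sketch; 500 is just an arbitrary choice, and the attribute lives on the TimeLimit wrapper that gym.make returns):

import gym

env = gym.make("MountainCar-v0")
print(env._max_episode_steps)   # 200 by default
env._max_episode_steps = 500    # done will now only be forced after 500 steps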

",37829,,,,,7/17/2020 15:08,,,,0,,,,CC BY-SA 4.0 22550,2,,22548,7/17/2020 15:45,,3,,"

To answer your question, the specifics of some of the OpenAI Gym environments can be found on their wiki:

The episode ends when you reach 0.5 position, or if 200 iterations are reached.

There is a deeper question in what you asked, though:

My initial understanding was that an episode should end when the Car reaches the flagpost.

The environment certainly could be set up that way. Limiting the number of steps per episode has the immediate benefit of forcing the agent to reach the goal state in a fixed amount of time, which often results in a speedier trajectory by the agent (MountainCar-v0 further penalizes long trajectories through the reward signal). Also, the underlying learning algorithm may only perform policy updates after completion of the episode. If the agent will never reach the goal state under its current policy (i.e. the policy is very bad, lacks much randomness, etc.), then terminating the episode after a fixed amount of time will ensure that the agent is able to perform a policy update and try a new policy on the next episode (alternatively, the learning algorithm could perform a policy update during the episode). There is a branch of tasks called continuing tasks which never terminate (see Section 3.3 of Sutton and Barto), so the choice to limit the number of steps per episode is heavily dependent on the task at hand and the choice of learning algorithm.

",37607,,,,,7/17/2020 15:45,,,,0,,,,CC BY-SA 4.0 22551,1,22552,,7/17/2020 16:36,,1,112,"

I'm working on recognizing the numbers 3 and 7 using the MNIST data set. I'm using cnn_learner() function from fastai library.

When I plotted the learning rate, the curve started going backward after a certain value on the X-axis. Can someone please explain what it signifies?

",31799,,2444,,7/18/2020 12:16,8/17/2020 15:03,Why would the learning rate curve go backwards?,,1,0,,,,CC BY-SA 4.0 22552,2,,22551,7/17/2020 17:44,,1,,"

I have not used fastai library but this also happens on tensorboard when you have more than one training being recorded on the same plot.

Looking at the picture, I think this is a very special type of graph, because for a single LR value you have 2 associated loss values. Put in other words, you have the same LR value for different loss values. My guess is that some time-dependent issue is messing things up here.

Another intuition that may help to solve the issue is representing data differently. If you rotate the graph 90º left-wise you could see how the LR is evolving with different loss values. LR should be decreasing along with loss value, but in this case it is not like that either. So review how you are setting the LR too!

My list of to-check would be:

  • Check how you update LR according to the loss value
  • Check the LR recorder instance is only being used by one training at a time
  • Try plotting the value of LR (y) against your loss values (x)
  • Try plotting the value of LR (y) against the current epoch number (x)

Hope it helps

",26882,,,,,7/17/2020 17:44,,,,1,,,,CC BY-SA 4.0 22554,1,22568,,7/18/2020 1:13,,2,63,"

For Keras on TensorFlow, a layer class constructor comes with these:

  • kernel_regularizer=...
  • bias_regularizer=...
  • activity_regularizer=...

For example, Dense layer:
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense#arguments_1

The first one, kernel_regularizer, is easy to understand: it regularises the weights, keeping them small to avoid overfitting to the training data.

Is kernel_regularizer enough? When should I use bias_regularizer and activity_regularizer too?

",2844,,2444,,7/18/2020 12:18,7/19/2020 0:50,When would bias regularisation and activation regularisation be necessary?,,1,0,,,,CC BY-SA 4.0 22555,2,,22355,7/18/2020 1:19,,1,,"

From my personal experience, the units hyperparameter in an LSTM does not need to be the same as the max sequence length. Add more units to have the loss curve dive faster.

As for the number of LSTM layers, trying out a single LSTM layer is a good starting point; the model trains better with more LSTM layers.

For example, with MAX_SEQ_LEN=10, in Keras (the layers have to be applied to an input tensor):

Inputs = Input(shape=(10, num_features))       # MAX_SEQ_LEN = 10; num_features depends on your data
Lstm1 = LSTM(units=30, return_sequences=True)  # Time sequence, to feed to next layer
Lstm2 = LSTM(units=20, return_sequences=True)  # Time sequence, to feed to next layer
Lstm3 = LSTM(units=10, return_sequences=False)

Output = Lstm3(Lstm2(Lstm1(Inputs)))
",2844,,,,,7/18/2020 1:19,,,,0,,,,CC BY-SA 4.0 22556,2,,22460,7/18/2020 7:45,,0,,"

Since the cost function is not decreasing after a number of iterations, this may be diagnosed as a vanishing gradient problem. A solution to this is the use of a residual neural network.

Another solution is to carefully initialise your weights, as the gradient may exponentially explode or exponentially vanish as it propagates through your neural network.

Watch this video on how to initialise weights truly randomly: https://www.youtube.com/watch?v=s2coXdufOzE

Edit:

Another possible cause of your issue is that your algorithm has a high bias problem. This is indicated by your algorithm not performing well on the training set. In your case, one of the best solutions would be to make your network deeper, so that it is able to represent more complex functions and thus perform better on your training set.

",31949,,31949,,7/19/2020 5:06,7/19/2020 5:06,,,,5,,,,CC BY-SA 4.0 22558,1,,,7/18/2020 9:52,,2,1588,"

I understand that, in tree search, an admissible heuristic implies that $A*$ is optimal. The intuitive way I think about this is as follows:

Let $P$ and $Q$ be two costs from any respective nodes $p$ and $q$ to the goal. Assume $P<Q$. Let $P'$ be an estimation of $P$. $P'\le P \Rightarrow P'<Q$. It follows from uniform-cost-search that the path through $p$ must be explored.

What I don't understand is why the idea of an admissible heuristic does not apply as well to "graph-search". If a heuristic is admissible but inconsistent, would that imply that $A*$ is not optimal? Could you please provide an example of an admissible heuristic that results in a non-optimal solution?

",25671,,2444,,7/18/2020 19:54,11/8/2020 16:43,Is A* with an admissible but inconsistent heuristic optimal?,,1,0,,,,CC BY-SA 4.0 22561,1,,,7/18/2020 13:27,,4,841,"

Is there an upper limit to the maximum cumulative reward in a deep reinforcement learning problem?

For example, you want to train a DQN agent in an environment, and you want to know what the highest possible value you can get from the cumulative reward is, so you can compare this with your agents performance.

",,user38696,2444,,4/8/2022 19:31,4/8/2022 19:31,Is there an upper limit to the maximum cumulative reward in a deep reinforcement learning problem?,,3,0,,,,CC BY-SA 4.0 22563,1,,,7/18/2020 16:42,,1,309,"

I have a question about implementing policy gradient methods for problems with continuous action spaces.

Assume that actions are sampled from a diagonal Gaussian distribution with mean vector $\mu$ and standard deviation vector $\sigma$. As far as I understand, we can define a neural network that takes the current state as the input and returns a $\mu$ as its output. According to OpenAI Spinning Up, the standard deviation $\sigma$ can be represented in two different ways:

I don't completely understand the first method. Does it mean that we must set the log standard deviations to fixed numbers? Then how do we choose these numbers?

",38699,,38700,,7/19/2020 1:00,7/19/2020 1:00,"In continuous action spaces, how is the standard deviation, associated with Gaussian distribution from which actions are sampled, represented?",,0,1,,,,CC BY-SA 4.0 22564,1,,,7/18/2020 19:32,,0,583,"

I am learning with the OpenAI gym's cart pole environment.

I want to make the observation states discrete (with a small step size), and for that purpose, I need to change two of the observations from $[-\infty, \infty]$ to some finite upper and lower limits. (By the way, these states are the cart velocity and the pole velocity at the tip.)

How can I change these limits in the actual gym's environment? Any other suggestions are also welcome.

",36710,,,,,7/19/2020 10:22,How can I change observation states' values in OpenAI gym's cartpole environment?,,1,0,,,,CC BY-SA 4.0 22565,2,,22561,7/18/2020 19:39,,2,,"

In any reinforcement learning problem, not just deep RL, there is an upper bound for the cumulative reward, provided that the problem is episodic and not continuing.

If the problem is episodic and the rewards are designed such that the problem has a natural ending, i.e. the episode will end regardless of how well the agent does in the environment, then you could work it out by calculating the max possible reward in each step of the episode; however, this is potentially non-trivial depending on your environment.

For an example in a trivial setting, however, imagine the problem of cartpole -- I could define the MDP to have a reward of +1 for every time step that the agent is able to balance the pole upright, and 0 when the pole falls. If I also defined that the problem terminates after 200 time steps then the upper bound on cumulative rewards for this problem would be 200.

In general, if the problem is continuing then in theory the problem goes on infinitely and so there is no upper bound, as the episode never ends -- this is partly why we use the discount factor, to ensure that $\sum_{k=0}^\infty \gamma^k R_{t+k}$ converges.

",36821,,36821,,7/19/2020 9:47,7/19/2020 9:47,,,,1,,,,CC BY-SA 4.0 22566,2,,22561,7/18/2020 20:48,,2,,"

My answer to: Is there an upper limit to the maximum cumulative reward in a deep reinforcement learning problem?

Yes, but it depends on the environment, even when dealing with a theoretical environment where there is an infinite number of time steps.

Calculating the upper bound

In reinforcement learning (deep RL included), we want to maximize the discounted cumulative reward, i.e. find the upper bound of $\sum_{k=0}^\infty \gamma^k R_{t+k+1}$, where $\gamma \in [0, 1)$.

Before we find the upper bound of the series above, we need to find out if the upper bound exists i.e. whether it converges according to the environment specifications such as the reward function.

I will provide one example environment where the series converges. It is an environment with simple rules that goes on for an infinite number of time steps. Its reward function is defined as follows:

-> A reward of +2 for every favorable action.

-> A reward of 0 for every unfavorable action.

So, our path through the MDP that gives us the upper bound is where we only get 2's.

Let's say $\gamma$ is a constant, for example $\gamma = 0.5$; note that $\gamma \in [0, 1)$.

Now, we have a geometric series which converges:

$\sum_{k=0}^\infty \gamma^kR_{t+k+1}$ = $\sum_{k=1}^\infty (1)(2\gamma^{k-1})$ = $\sum_{k=1}^\infty 2\gamma^{k-1}$ = $\frac{2}{1 - 0.5}$ = $4$

Thus the upper bound is 4.

For environments that go on for a finite number of time steps, the upper bound does exist, but for certain environments, just as for the infinite-time-step environments, it may be a bit difficult to calculate, though not necessarily impossible. The environments I speak of are ones with complicated reward functions and dynamics, i.e. the environments are stochastic or the reward function's possible values depend on the state. They always do, but we can loosely say that a reward function is independent of the state when all possible reward values of an environment can be obtained in any state, obviously with regard to the actions taken though.

",30174,,30174,,7/18/2020 21:07,7/18/2020 21:07,,,,0,,,,CC BY-SA 4.0 22567,2,,22561,7/18/2020 21:24,,2,,"

Let's assume $\sup_{s,a} r(s,a)<b$. Then for continuing problems the upper bound can be obtained by \begin{align} \sum_{t=0}^{\infty} \gamma^{t}r(s_t,a_t) &\le \sum_{t=0}^{\infty} \gamma^{t} \sup_{s,a}r(s,a) \nonumber \\ &=\sum_{t=0}^{\infty} \gamma^{t} b = \frac{b}{1-\gamma}. \end{align}

We can use the same bound for episodic tasks with discounted return. For episodic tasks without discounting ($\gamma=1$) the above sum goes to infinity. However, if we know the episode length $T$, we can use $Tb$ as an upper bound.

",38700,,,,,7/18/2020 21:24,,,,2,,,,CC BY-SA 4.0 22568,2,,22554,7/19/2020 0:50,,2,,"

Regularizer's are used as a means to combat over fitting.They essentially create a cost function penalty which tries to prevent quantities from becoming to large. I have primarily used kernel regularizers. First I try to control over fitting using dropout layers. If that does not do the job or leads to poor training accuracy I try the Kernel regularizer. I usually stop at that point. I think activity regularization would be my next option to prevent outputs from becoming to large. I suspect weight regularization effectively can pretty much achieve the same result.
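
For reference, here is a small sketch (the regularization coefficients are arbitrary placeholders, not recommendations) of how the three options are attached to a Keras Dense layer:

from tensorflow.keras import layers, regularizers

dense = layers.Dense(
    64,
    activation="relu",
    kernel_regularizer=regularizers.l2(1e-4),    # penalises large weights
    bias_regularizer=regularizers.l2(1e-4),      # penalises large biases (rarely needed)
    activity_regularizer=regularizers.l1(1e-5),  # penalises large layer outputs
)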

",33976,,,,,7/19/2020 0:50,,,,0,,,,CC BY-SA 4.0 22569,1,,,7/18/2020 21:21,,1,314,"

I have recently been exposed to the concept of decentralized applications. I know that neural networks require a lot of parallel computing infrastructure for training.

What are the technical difficulties one may face for training neural networks in a p2p manner?

",,ram bharadwaj,2444,,7/19/2020 12:21,10/23/2022 13:05,Why can't we train neural networks in a peer-to-peer manner?,,1,2,,,,CC BY-SA 4.0 22571,2,,22564,7/19/2020 10:22,,1,,"

I don't recommend changing the rules of the environment.

What you could do:

Perform a method called bucketing, i.e. take a value from the continuous state space, see which discrete bucket it should go into, and then let your agent use the bucket number as the observation.

E.g. say I have a continuous state space with one variable in the range $[-\infty,\infty]$.

The buckets can be as follows:

0). x < -1000

1). -1000 $\le$ x $<$ -500

2). -500 $\le$ x $<$ -100

3). -100 $\le$ x $<$ -50

4). -50 $\le$ x $<$ 0

5). 0 $\le$ x $<$ 50

6). 50 $\le$ x $<$ 100

7). 100 $\le$ x $<$ 500

8). 500 $\le$ x $<$ 1000

9). x $\ge$ 1000

Therefore, in this example scenario, there are 10 buckets. Hence, the observations lie in the discrete range [0, 9].
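
A minimal sketch of such a bucketing function for the edges above (the function name and the asserts are just illustrative):

def bucketize(x):
    """Map a continuous observation to one of the 10 discrete buckets above."""
    edges = [-1000, -500, -100, -50, 0, 50, 100, 500, 1000]
    for i, edge in enumerate(edges):
        if x < edge:
            return i
    return len(edges)  # x >= 1000

assert bucketize(-2000) == 0
assert bucketize(75) == 6
assert bucketize(10**6) == 9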

",30174,,,,,7/19/2020 10:22,,,,0,,,,CC BY-SA 4.0 22572,1,,,7/19/2020 10:43,,1,62,"

Is there a way to get landmark features automatically learned by a neural network without having to manually pre-label them in the images that are being fed into the network?

",38711,,2444,,7/19/2020 12:29,8/19/2020 11:03,Is there a way to get landmark features automatically learned by a neural network?,,1,0,,,,CC BY-SA 4.0 22573,2,,7182,7/19/2020 10:46,,0,,"

You could say that the class of the data (e.g. spam vs not spam) is a hidden quality that can be inferred through the observable features (e.g. message subject contains "bitcoin"),

$$P(C \; | \; F)$$

which says that the probability of the class $C$ is conditioned on the visibility of the feature $F$.

Using Bayes' theorem we can write

$$P(C \; | \; F) = \frac{P(F \; | \; C) \, P(C)}{P(F)}$$

Your problem is that you are not taking into account the evidence, $P(F)$, denominator in this expression.

You can have multiple features in your likelihood function

$$P(F \; | \; C) = P(F_1,F_2,F_3,\ldots \; | \; C)$$

and these can be independent

$$\prod_k P(F_k \; | \; C)$$

and then you have your prior

$$P(C) \prod_k P(F_k \; | \; C)$$

but what you are missing is the evidence factor in the denominator

$$P(C \; | \; F) = \frac{P(C) \prod_k P(F_k \; | \; C)}{\sum_C \scriptsize{P(C) \prod_k P(F_k \; | \; C)}}$$

This is the general view of the problem. Specifically, to fix your issue, evaluate the probabilities for all your classes, add them together, then divide each probability by that sum.

For instance, if you have two classes and the probabilities are $p_0$ and $p_1$, then write

$$P(C=0) = \frac{p_0}{p_0 + p_1}$$

and

$$P(C=1) = \frac{p_1}{p_0 + p_1}$$

which guarantees that

$$P(C=0) + P(C=1) = 1$$
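
In code, that last normalisation step is just the following (the numbers are made up for illustration):

unnormalised = {"spam": 2.4e-7, "not_spam": 9.6e-7}  # prior * product of likelihoods
total = sum(unnormalised.values())
posteriors = {c: p / total for c, p in unnormalised.items()}
print(posteriors)  # {'spam': 0.2, 'not_spam': 0.8} -- sums to 1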

",38712,,,,,7/19/2020 10:46,,,,0,,,,CC BY-SA 4.0 22574,1,,,7/19/2020 11:02,,1,101,"

Is there a place where people can share (or buy) ready made neural networks instead of creating them themselves? Something like a Wikipedia for DNNs?

",38713,,2444,,6/26/2022 9:17,6/26/2022 9:17,Is there a place where people can share (or buy) ready made neural networks?,,1,1,,,,CC BY-SA 4.0 22575,2,,22574,7/19/2020 11:33,,2,,"

There isn't a specific website for selling or sharing neural network models, but you can easily find other people's models on GitHub! Just search for them! For example, this is a random repo I've found for cat classification.

But the problem is that everyone has different problems, so you usually can't take other people's neural network models and use them directly for your own task. That's why there's a "trick" called transfer learning or fine-tuning. That method shows you how to adapt other people's models (so-called pre-trained models), trained on a different problem, to your specific problem.

TensorFlow and PyTorch provide a lot of pre-trained models for common problems:

You can also find other pre-trained models on GitHub! This is an official PyTorch tutorial on how to adapt a pre-trained model to your specific case.
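As a minimal PyTorch sketch of this fine-tuning idea (the two-class setting and the learning rate are just assumptions for illustration):

import torch
import torchvision

# Load a pre-trained ResNet-18 and freeze its feature extractor
model = torchvision.models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a new head for your own task (e.g. 2 classes)
num_classes = 2
model.fc = torch.nn.Linear(model.fc.in_features, num_classes)

# Only the new head's parameters are trained on your own data
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)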

UPDATE

You can also check the Hugging Face Models Hub; it's like a repository for machine learning models (most of them transformer-based and for NLP). It's well organized: you can filter by your specific task or library.

",16565,,16565,,6/25/2022 16:33,6/25/2022 16:33,,,,0,,,,CC BY-SA 4.0 22577,2,,21743,7/19/2020 14:58,,1,,"

After diving deeper into the material I am able to answer my own question:

Simulated Annealing tries to optimize an energy (cost) function by stochastically searching for minima at different temperatures via a Markov Chain Monte Carlo method. The stochasticity comes from the fact that we always accept a new state $c'$ with lower energy ($\Delta E < 0$), but a new state with higher energy ($\Delta E > 0$) only with a certain probability

$$p(c \to c') = \text{min}\{1, \exp(-\frac{\Delta E}{T}) \},$$ $$\Delta E = E(c') - E(c).$$

Here we used the Gibbs distribution $p(c) = \frac{1}{Z}\text{exp}(\frac{-E(c)}{T})$ to calculate probabilities for each state, with $Z$ being the partition sum. The temperature $T$ plays the role of a scaling factor for the probability distribution. If $T \to \infty $ we have a uniform distribution and all states are equally possible. If $T \to 0$ we have a Dirac delta function around the global optimum. By starting with a high $T$, sampling states and gradually decreasing it, we can make sure to sample enough states from the state space and to accept energetically higher states in order to escape local minima on the way to the global optimum. After sampling long enough while slowly decreasing the temperature, we theoretically arrive at the global optimum.
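A minimal sketch of this acceptance rule and cooling schedule, assuming a toy one-dimensional energy function $E(x) = x^2$ (the step size and cooling rate are arbitrary choices):

import math
import random

def accept(delta_E, T):
    # Always accept downhill moves; accept uphill moves with probability exp(-delta_E / T)
    return delta_E < 0 or random.random() < math.exp(-delta_E / T)

E = lambda x: x ** 2          # toy energy function
x, T = 10.0, 5.0              # initial state and initial temperature
for step in range(10000):
    x_new = x + random.uniform(-1.0, 1.0)   # propose a neighbouring state
    if accept(E(x_new) - E(x), T):
        x = x_new
    T *= 0.999                              # slowly lower the temperature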

Deterministic Annealing, on the other hand, directly minimizes the free energy $F(T)$ of the system deterministically at each temperature, e.g. by Expectation-Maximization (EM algorithm). The intuition behind it is that we like to find an optimum at a high temperature (where it is easier to find one because there are fewer local minima), accept this as an intermediate solution, lower the temperature, thus scaling the cost function such that it is more peaked around its optima (making optimization a bit more difficult), and start deterministically looking for an optimum again. This is repeated until the temperature is low enough and we (hopefully) have found a global solution to our problem. A major drawback is that there is no guarantee of arriving at a global optimum, in contrast to simulated annealing. The whole idea of scaling the energy function is based on the concept of homotopy: "Two continuous functions [...] can be "continuously deformed" into each other."

",37120,,,,,7/19/2020 14:58,,,,0,,,,CC BY-SA 4.0 22580,1,,,7/19/2020 18:46,,3,200,"

I'm currently implementing the NEAT algorithm. But problems occur when testing it with problems that don't have a linear solution (for example, XOR). My XOR network only produces 3 correct outputs at a time:

1, 0 -> 0.99
0, 0 -> 0
1, 1 -> 0
0, 1 -> 0

My genome class works fine, so I guess that the problem occurs during breeding or that my config is wrong.

Config

const size_t population_size = 150;
const size_t inputs = 3; // 2 inputs + bias
const size_t outputs = 1;
double compatibility_threshold = 3;
double steps = 0.01;
double perturb_weight = 0.9;
double mutate_connection = 0.05;
double mutate_node = 0.03;
double mutate_weights = 0.8;
double mutate_disable = 0.1;
double mutate_enable = 0.2;
double c_excess = 1;
double c_disjoint = 1;
double c_weight = 0.4;
double crossover_chance = 0.75;

Does anyone have an idea what the problem might be? I proofread my code multiple times, but wasn't able to figure it out.

Here is the GitHub link to my code (not documented): click

",38720,,38720,,7/20/2020 5:25,7/20/2020 5:25,NEAT can't solve XOR completely,,0,0,,,,CC BY-SA 4.0 22581,1,,,7/19/2020 19:23,,4,446,"

I've been hearing a lot about GPT-3 by OpenAI, and that it's a simple-to-use API with text in, text out, backed by a big neural network of 175B parameters.

But how did they achieve this huge number of parameters, and why is it being predicted as one of the greatest innovations?

",38719,,2444,,9/14/2020 13:34,9/14/2020 13:34,Why is GPT-3 such a game changer?,,1,1,,,,CC BY-SA 4.0 22585,2,,1987,7/20/2020 3:31,,1,,"

You can increase the number of hidden layers. The following is an example (but not very efficient).
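A minimal Keras sketch of a network with several hidden layers, as one generic illustration (the layer sizes and input shape are arbitrary assumptions, not the original figure):

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(10,)),  # hidden layer 1
    keras.layers.Dense(64, activation="relu"),                     # hidden layer 2
    keras.layers.Dense(32, activation="relu"),                     # hidden layer 3
    keras.layers.Dense(1, activation="sigmoid"),                   # output layer
])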

",38728,,,,,7/20/2020 3:31,,,,0,,,,CC BY-SA 4.0 22586,1,,,7/20/2020 3:49,,1,84,"

I am trying to implement a paper on image tampering detection and localization. The paper is Image Manipulation Detection and Localization Based on the Dual-Domain Convolutional Neural Networks. I was able to implement the SCNN, the one surrounded by red dots, but I could not quite understand the FCNN, the one that is surrounded by blue dots.

The problem I am facing is: how does the network reduce the feature vector from (1048 x 100) to (523 x 100) through max-pooling (instead of 524 x 100), then from (523 x 100) to (260 x 100), and then from (260 x 100) to (256,)?

It appears that the given network diagram might be wrong, but, if it is wrong, how could it be published in IEEE? Please help me understand how the FCNN is constructed.

",38727,,2444,,7/20/2020 12:01,7/20/2020 12:01,How can the FCNN reduce the dimensions of the input from $1048 \times 100$ to $523 \times 100$ with max-pooling?,,0,3,,,,CC BY-SA 4.0 22587,1,,,7/20/2020 5:13,,1,41,"

I'm looking at some baseline implementations of RL agents on the Pendulum environment. My guess was to use a relatively small neural net (~100 parameters).

I'm comparing my solution with some baselines, e.g. the top entry on the Pendulum leaderboard. The models for these solutions are typically huge, i.e. ~120k parameters. What's more, they use very large replay buffers as well, like ~1M transitions. Such model sizes seem warranted for Atari-like environments, but for something as small as the Pendulum, this seems like complete overkill to me.

Are there examples of agents that use a more modest number of parameters on Pendulum (or similar environments)?

",37751,,2444,,7/20/2020 11:57,8/19/2020 22:07,Are there examples of agents that use a more modest number of parameters on Pendulum (or similar environments)?,,1,0,,,,CC BY-SA 4.0 22588,2,,22587,7/20/2020 5:20,,1,,"

Actually, I just started inspecting the entries further down in the leaderboard list, and there are in fact more modest architectures, e.g. this one, which uses 3 hidden layers with 8 units each.

",37751,,2444,,7/20/2020 21:16,7/20/2020 21:16,,,,0,,,,CC BY-SA 4.0 22589,1,22601,,7/20/2020 7:07,,5,451,"

Everybody is implementing and using DNNs with, for example, TensorFlow or PyTorch.

I thought IBM's Deep Blue was an ANN-based AI system, but this article says that IBM's Deep Blue was symbolic AI.

Are there any special features in symbolic AI that explain why it was used (instead of ANN) by IBM's Deep Blue?

",2844,,2444,,7/21/2020 18:41,7/28/2020 0:30,Why is symbolic AI not so popular as ANN but used by IBM's Deep Blue?,,3,0,,,,CC BY-SA 4.0 22590,1,,,7/20/2020 7:36,,1,42,"

Traditionally, when working with tabular data, one can be sure (or at least know) that a model works because the included features explain a target variable, say "price of a ticket", well. More features can then be engineered to explain the target variable even better.

I have heard people say that there is no need to hand-engineer features when working with CNNs, RNNs or deep neural networks, given all the advancements in AI and computation. So, my question is: how would one know, before training, why a particular architecture worked (or would work) when it did, or why it didn't when the performance is unacceptable or very bad? Also, given that not all of us have the time to try out all possible architectures, how can one know, or at least be reasonably sure, that something would work for the problem at hand? In other words, what are the things one needs to follow when designing an architecture for a problem, to ensure that the architecture will work?

",38060,,2444,,7/20/2020 11:43,7/20/2020 11:43,How can one be sure that a particular neural network architecture would work?,,0,0,,,,CC BY-SA 4.0 22591,1,,,7/20/2020 9:53,,4,577,"

I'm creating a CNN network without other frameworks such as PyTorch, Keras, Tensorflow, and so on.

During the forward pass, the Flatten layer reshapes the previous layer's activation. I know there are a lot of questions about it, but what should I do with the Flatten layer during back-propagation? Should I compute the derivative of $dA$ and reshape it for the next layer or just reshape $dA$ of the previous layer?

",38736,,2444,,7/20/2020 11:41,9/10/2020 21:38,What should I do with the flatten layer during back-propagation?,,0,0,0,12/16/2020 15:03,,CC BY-SA 4.0 22593,2,,22572,7/20/2020 10:05,,1,,"

No, you can't. In a CNN, if you want to detect landmarks, you need to prepare data with a region box (its coordinates, width and height), then the number of points that should be detected and the points' coordinates. Then your target vector should be:

This is your target vector. Optionally, you can use the YOLO algorithm.
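A minimal sketch of one possible layout of such a target vector, based on the description above (the exact ordering and the number of landmark points are assumptions):

import numpy as np

p_object = 1.0                                    # is the object present in the region?
box = [0.40, 0.30, 0.20, 0.25]                    # box x, y, width, height (normalised)
landmarks = [0.45, 0.35, 0.50, 0.40, 0.55, 0.38]  # (x, y) for 3 landmark points

y_target = np.array([p_object] + box + landmarks)
print(y_target.shape)  # (11,) for 3 landmark points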

",38736,,,,,7/20/2020 10:05,,,,0,,,,CC BY-SA 4.0 22595,1,,,7/20/2020 12:17,,0,70,"

When we are doing multi-label segmentation tasks, our y_true (the mask) will be (w, h, 3), but, in our model, at the last layer, we will be getting (w, h, number of classes) as output.

How do we make our outputs to have the same size as the true mask so that to apply the loss function, given that, currently, the shapes are not equal? Also, if we are done with applying the loss function and trained the model, how do I make results in the shape of (w, h, 3) from (w, h, number of classes)?

",38737,,2444,,7/20/2020 21:09,12/8/2022 5:00,How do we make our outputs to have the same size as the true mask?,,1,0,,,,CC BY-SA 4.0 22596,2,,22595,7/20/2020 12:56,,0,,"

You can create a mapping from classes to colors; a simple one is:

y                                           # y.shape = (W, H, n_classes)
_, y_color = y.max(dim=-1, keepdim=True)    # class index per pixel, shape (W, H, 1)
y_color = y_color.float() / n_classes       # map each class index to a grey level in [0, 1)
y_color = torch.cat([y_color] * 3, dim=-1)  # y_color.shape = (W, H, 3)

(using PyTorch-like code)

This mapping is visualizable; of course, you may get nicer visualizations if your class-to-color mapping is more sophisticated.

",38741,,,,,7/20/2020 12:56,,,,0,,,,CC BY-SA 4.0 22599,1,22611,,7/20/2020 21:11,,2,82,"

I have implemented several policy gradient algorithms (REINFORCE, A2C, and PPO) and am finding that the resultant policy's action probability distributions can be rather extreme. As a note, I have based my implementations on OpenAI's baselines. I've been using NNs as the function approximator followed by a Softmax layer. For example, with Cartpole I end up with action distributions like $[1.0,3e-17]$. I could understand this for a single action, potentially, but sequential trajectories end up having a probability of 1. I have been calculating the trajectory probability by $\prod_i \pi(a_i|s_i)$. Varying the learning rate changes how fast I arrive at this distribution, I have used learning rates of $[1e-6, 0.1]$. It seems to me that a trajectory's probability should never be 1.0 or 0.0 consistently, especially with a stochastic start. This also occurs for environments like LunarLander.

For the most part, the resulting policies are near-optimal solutions that pass the criteria for solving the environments set by OpenAI. Some random seeds are sub-optimal.

I have been trying to identify a bug in my code, but I'm not sure what bug would be across all 3 algorithms and across environments.

Is it common to have such extreme policy's probabilities? Is there a common way to handle an update so the policy's probabilities do not end up so extreme? Any insight would be greatly appreciated!

",38256,,2444,,7/22/2020 11:22,7/22/2020 11:22,Is it common to have extreme policy's probabilities?,,1,2,,,,CC BY-SA 4.0 22600,2,,22558,7/20/2020 23:03,,1,,"

It depends on what you mean by optimal.

  1. A* will always find the optimal solution (that is, the algorithm is admissible) as long as the heuristic is admissible. (Note that the definition of admissible is overloaded and means something slightly different for an algorithm and a heuristic.)

  2. If you talk about the set of nodes expanded by A*, then it expands the minimal set of nodes up to tie-breaking even with an inconsistent heuristic.

  3. If you talk about the number of expansions, then A* is not optimal. A* can perform up to $2^N$ expansions of $N$ states with an inconsistent heuristic. This result comes from Martelli, 1977.

  4. Algorithm B has worst-case $N^2$ expansions and budgeted graph search (BGS) has worst-case $N \log C$, where $C$ is the optimal solution cost.

You can see a demo that shows the worst-case performance of A* as well as the performance of BGS here:

https://www.movingai.com/SAS/INC/

",17493,,17493,,11/8/2020 16:43,11/8/2020 16:43,,,,0,,,,CC BY-SA 4.0 22601,2,,22589,7/21/2020 5:43,,2,,"

ANNs as used today need (1) a lot of data and (2) a lot of computational power. Before we had either of these, we didn't really know how to properly build ANNs, since we didn't quite have the means to train the network and thus couldn't evaluate it.

"Symbolic AI" on the other hand, is very much just a bunch of if-else/logical conditions, much like regular programming. You don't need to think too much about the whole "symbolic" part of it. The main/big breakthrough is that you had a lot of clever "search algorithms" and a lot of computation power relative to before.

The point is just that symbolic AI was the main research program at the time, and people didn't really bother with "connectionist" methods.

",6779,,,,,7/21/2020 5:43,,,,1,,,,CC BY-SA 4.0 22602,1,23179,,7/21/2020 8:49,,1,96,"

I have questions about the way AlphaGo Zero is trained. From the original AlphaGo Zero paper, I know that the AlphaGo Zero agent learns a policy and value functions from the gathered data $\{(s_t, \pi_t, z_t)\}$, where $z_t = r_T \in \{-1,1\}$.

However, the fact that the agent tries to learn a policy distribution when $z_t = -1$ seems to be counter-intuitive (at least to me).

My assertion is that the agent should not learn the policy distribution from games in which it loses (i.e., gets $z_t=-1$), since such a policy would guide it to lose.

I think I may have missed some principles, which led me to that assertion. Or is my assertion reasonable after all?

",38753,,2444,,7/21/2020 13:45,8/21/2020 19:59,How AlphaGo Zero is learning from $\pi_t$ when $z_t = -1$?,,1,0,,,,CC BY-SA 4.0 22603,1,22605,,7/21/2020 9:59,,4,1316,"

In reinforcement learning, an agent can receive a positive reward for correct actions and a negative reward for wrong actions, but does the agent also receive rewards for every other step/action?

",2844,,2444,,11/2/2020 22:00,12/20/2020 16:59,Is a reward given at every step or only given when the RL agent fails or succeeds?,,1,1,,,,CC BY-SA 4.0 22604,1,,,7/21/2020 10:29,,2,322,"

With over 100 papers published in the area of artificial intelligence, machine learning and their subfields every day (source), accounting for ~3% of all publications world wide per year (source) and dozens of annual conferences like NeurIPS, ICML, ICLR, ACL, ... I wonder how you keep up with the current state of the art and latest developments? The field is progressing very fast, models that were considered SOTA not even a decade ago are now (almost) outdated (Attention Is All You Need). A lot of this progress is driven by big tech companies (source), e.g. 12% of all papers accepted at NeurIPS 2019 have at least one author from Google and DeepMind (source).

My strategy is to read blogs and articles to maintain a general overview and not to miss any important breakthroughs. To be up to date in subfields of my own interest, I read specific papers once in a while. What are your personal strategies? Continuous education is a big keyword here. It's not about understanding every detail and being able to reproduce results, but rather maintaining a bird's eye view, having an idea about the direction of research, and knowing what's already possible.

To name a few of my preferred sources there are the research blogs of the big players: OpenAI, DeepMind, Google AI, FAIR. Further there are very good personal blogs with a more educational character, like the well known one of Christopher Olah, the recently started one of Yoshua Bengio and the one from Jay Alammar. Unfortunately finding personal blogs is hard, it often depends on luck and referrals, also the update frequency is generally lower since these people have (understandably) other important things to do in life as well.

Therefore I'm always looking for new sources, which I can bookmark and read later, if I like to avoid doing other stuff.

Can you name any other personal / corporate research blogs or news websites that publish latest advances in ML & AI?

",37120,,37120,,7/21/2020 11:41,7/21/2020 11:41,Ways to keep up with the latest developments in Machine Learning and AI?,,0,5,,,,CC BY-SA 4.0 22605,2,,22603,7/21/2020 10:40,,3,,"

In reinforcement learning (RL), an immediate reward value must be returned after each action, along with the next state. This value can be zero though, which will have no direct impact on optimality or setting goals.

Unless you are modifying the reward scheme to try and make an environment easier to learn (called reward shaping), then you should be aiming for a "natural" reward scheme. That means granting reward based directly on the goals of the agent.

Common reward schemes might include:

  • +1 for winning a game or reaching a goal state granted only at the end of an episode, whilst all other steps have a reward of zero. You might also see 0 for a draw and -1 for losing a game.

  • -1 per time step, when the goal is to solve a problem in minimum time steps.

  • a reward proportional to the amount of something that the agent produces - e.g. energy, money, chemical product, granted on any step where this product is obtained, zero otherwise. Potentially a negative reward based on something else that the agent consumes in order to produce the product, e.g. fuel.
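As a minimal sketch combining a few of these schemes into a per-step reward function (the numbers are illustrative assumptions, not from any specific environment):

def step_reward(reached_goal, product_made, fuel_used):
    reward = -1.0                   # time penalty per step
    reward += 10.0 * product_made   # proportional to product obtained this step
    reward -= 0.5 * fuel_used       # proportional to fuel consumed this step
    if reached_goal:
        reward += 100.0             # bonus granted only at the goal state
    return reward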

",1847,,2444,,12/20/2020 16:59,12/20/2020 16:59,,,,0,,,,CC BY-SA 4.0 22607,1,22610,,7/21/2020 11:17,,1,55,"

In the paper - "Action Elimination and Stopping Conditions for the Multi-Armed Bandit and Reinforcement Learning Problems", on page 1083, on the 6th line from the bottom, the authors define expectation of the empirical model as $$\hat{\mathbb{E}}_{s,s',a}[V(s')] = \sum_{s' \in S} \hat{P}^{a}_{s, s'}V(s').$$ I didn't understand the significance of this quantity since it puts $V(s')$ inside an expectation while assuming the knowledge of $V(s')$ in the definition on the right.

A clarification in this regard would be appreciated.

EDIT: The paper defines $\hat{P}^{a}_{s, s'}$ as $$\hat{P}^{a}_{s, s'} = \frac{|(s, a, s', t)|}{|(s, a, t)|},$$ where $|(s, a, t)|$ is the number of times state $s$ was visited and action $a$ was taken, and $|(s, a, s', t)|$ is the number of times, among those $|(s, a, t)|$ visits of $(s, a)$, that the next state landed in was $s'$ during model learning.

No explicit definition for $V$ is provided; however, $V^{\pi}$ is defined as the usual expected discounted return, using the same definition as Sutton and Barto or other sources.

",28384,,28384,,7/21/2020 13:39,7/21/2020 14:53,What is the expectation of an empirical model in model based RL?,,1,2,,,,CC BY-SA 4.0 22608,2,,22569,7/21/2020 11:32,,0,,"

Data management and bandwidth are key issues for interconnecting multiple GPUs. These are such big issues that it is hard to think about other challenges like neural network architecture, metrics, etc. The key to success for interconnecting multiple GPUs on a single computer is NVIDIA's NVLink:

NVLink is a wire-based communications protocol for near-range semiconductor communications developed by Nvidia that can be used for data and control code transfers in processor systems between CPUs and GPUs and solely between GPUs. NVLink specifies a point-to-point connection with data rates of 20 and 25 Gbit/s (v1.0/v2.0) per differential pair.

Compare 25 Gbit/s to a typical peer-to-peer connection over the web of 100 Mbps. NVLink provides a 250x advantage, assuming everything else is equal, which it is not. This means that, considering bandwidth only, a neural network which takes one day to train on a computer with two GPUs connected with NVLink could take 250 days over the internet using two computers with the same GPU!

",5763,,5763,,7/21/2020 11:37,7/21/2020 11:37,,,,1,,,,CC BY-SA 4.0 22609,1,22617,,7/21/2020 14:20,,2,322,"

I understand the fact that the neural network is used to take the states as inputs and it outputs the Q-value for state-action pairs. However, in order to compute this and update its weights, we need to calculate the maximum Q-value for the next state $s'$. In order to get that, in the DDQN case, we input that next state $s'$ in the target network.

What I'm not clear on is: how do we train this target network itself that will help us train the other NN? What is its cost function?

",34516,,2444,,11/5/2020 1:04,11/5/2020 1:19,How does the target network in double DQNs find the maximum Q value for each action?,,1,1,,,,CC BY-SA 4.0 22610,2,,22607,7/21/2020 14:53,,1,,"

If I understand your question correctly, the significance of this is due to the fact that $s'$ is random. In the RHS of the equation it is assumed that $V(\cdot)$ is known for each state, but the quantity is measuring the expected value of the next state given the current state and action.

",37829,,,,,7/21/2020 14:53,,,,2,,,,CC BY-SA 4.0 22611,2,,22599,7/21/2020 16:11,,2,,"

Your policy gradient algorithms appear to be working as intended. All standard MDPs have one or more deterministic optimal solutions, and those are the policies that solvers will converge to. Making any of these policies more random will often reduce their effectiveness, making them sub-optimal. So once consistently good actions are discovered, the learning process will reduce exploration naturally as a consequence of the gradients, much like a softmax classifier with a clean dataset.

There are some situations where a stochastic policy can be optimal, and you could check your implementations can find those:

  • A partially observable MDP (POMDP) where one or more key states requiring different optimal actions are indistinguishable to the agent. For example, the state could be the available exits in a corridor when trying to get to the end of a small maze, where one location secretly (i.e. without the agent having any info in the state representation that the location is different) reverses all directions, so that progressing along it is not possible for a deterministic agent, but a random agent would eventually get through.

  • In opposing guessing games where a Nash equilibrium occurs for specific random policies. For example scissor, paper, stone game where the optimal policy in self-play should be to choose each option randomly with 1/3 chance.

The first example is probably easiest to set up a toy environment to show that your implementations can find stochastic solutions when needed. A concrete example of that kind of environment is in Sutton & Barto: Reinforcement Learning, An Introduction chapter 13, example 13.1 on page 323.

Setting up opposing agents in self-play is harder, but if you can get it to work and discover the Nash equilibrium point for the policies, it would be further proof that you have got something right.

",1847,,1847,,7/21/2020 16:16,7/21/2020 16:16,,,,0,,,,CC BY-SA 4.0 22614,1,22615,,7/21/2020 18:18,,3,1355,"

While learning RL, I came across some problems where the Q-matrix that I need to build is very, very large. I am not sure if it is ever practical. Then I researched and came to the conclusion that the tabular method is not the only way; in fact, it is a much less powerful tool compared to other methods, such as deep RL methods.

Am I correct in this understanding that with the increasing complexity of problems, tabular RL methods are getting obsolete?

",36710,,2444,,7/21/2020 18:37,7/21/2020 20:07,Are tabular reinforcement learning methods obsolete (or getting obsolete)?,,1,0,,,,CC BY-SA 4.0 22615,2,,22614,7/21/2020 20:07,,3,,"

Am I correct in this understanding that with the increasing complexity of problems, tabular RL methods are getting obsolete?

Individual problems don't get any more complex, but the scope of solvable environments increases due to research and discovery of better or more apt methods.

Using deep RL methods with large neural nets can be a lot less efficient for solving simple problems. So tabular methods still have their place there.

Practically, if your state/action space (number of states times number of actions) is small enough to fit a Q table in memory, and it is possible to visit all relevant state/action pairs multiple times in a relatively short time, then tabular methods offer guarantees of convergence that approximate methods cannot. So tabular approaches are often preferred if they are appropriate.

Many interesting, cutting edge problems that are relevant to AI, such as autonomous robots acting in the real world, do not fit the tabular approach. In that sense, the approach is "obsolete" in that it no longer provides challenging research topics for practical AI (there are still unanswered theoretical questions, such as proof of convergence for Monte Carlo control).

It is still worth understanding tabular value-based methods in detail, because they form the foundations of the more complex deep learning methods. In some sense they represent ideal solutions that deep RL tries to approximate, and the design of tabular solutions can be the inspiration for changes and adjustments to neural-network methods.

",1847,,,,,7/21/2020 20:07,,,,1,,,,CC BY-SA 4.0 22616,2,,22589,7/21/2020 21:32,,3,,"

You might also ask if there's any particular reason why we would use a neural net. If we're to train a neural net to play chess, we need to be able to:

1. Feed it positions as input vectors (easy enough),

2. Decide on an output format. Perhaps a distribution over possible moves (but then, how to represent that such that the meaning of a specific output cell doesn't change drastically based on the board state?). Or perhaps instead, we let the resulting board state after a candidate move be the input, and let the output be a score that represents the desirability of that state. That'll require exponentially more forward/backprop passes, though.

3. Provide it with an error signal to whatever output vector it produces. This is the really tricky bit, since we don't know whether a given move will result in victory until the very end.

Do we play the game to the very end, storing decisions as we go, and then at the end, replay each input, feeding it an error signal if we lost? This will give the same error to the good moves as to the ones that actually lost the game. With enough games, this will work, since the good moves will get positive feedback a bit more often than negative, and vice versa for the bad ones. But it'll take a lot of games. More than a human is going to be willing to play. We can have different networks learn by playing against each other, but not on 1996 hardware.

Do we instead provide a score based on another heuristic of the board state? In that case, why not just use minimax? It's provably optimal for a given heuristic up to however many moves deep we look, and it doesn't need training.


Add to this the fact that if we don't choose a good representation at each of these steps, there's a good chance that the network will only learn the positions it's specifically been trained on, rather than generalizing to unseen states, which is the main reason for using a neural network in the first place.

It's certainly possible to use neural nets to learn chess (DeepMind's approach can be found here, for instance), but they're not a natural fit to the problem by any means. Minimax, by contrast, fits the problem very well, which is why it was one of the techniques used by Deep Blue. Neural nets are an amazing tool, but they're not always the right tool for the job.

Addendum: I didn't stress this point much, since K.C. already brought it up, but training large neural nets requires us to perform a huge number of matrix-vector multiplications, and this wasn't especially practical before GPUs got powerful and cheap.

",2212,,2212,,7/22/2020 14:19,7/22/2020 14:19,,,,2,,,,CC BY-SA 4.0 22617,2,,22609,7/22/2020 0:08,,1,,"

Both in DQN and in DDQN, the target network starts as an exact copy of the Q-network, that has the same weights, layers, input and output dimensions, etc., as the Q-network.

The main idea of the DQN agent is that the Q-network predicts the Q-values of actions from a given state and selects the maximum of them and uses the mean squared error (MSE) as its cost/loss function. That is, it performs gradient descent steps on

$$\left(Y_{t}^{\mathrm{DQN}} -Q\left(s_t, a_t;\boldsymbol{\theta}\right)\right)^2,$$

where the target $Y_{t}^{\mathrm{DQN}}$ is defined (in the case of DQN) as

$$ Y_{t}^{\mathrm{DQN}} \equiv R_{t+1}+\gamma \max _{a} Q\left(S_{t+1}, a ; \boldsymbol{\theta}_{t}^{-}\right) $$

$\boldsymbol{\theta}$ are the Q-network weights and $\boldsymbol{\theta^-}$ are the target network weights.

After a usually fixed number of timesteps, the target network updates its weights by copying the weights of the Q-network. So, basically, the target network never performs a feed-forward training phase and, thus, ignores a cost function.

In the case of DDQN, the target is defined as

$$ Y_{t}^{\text {DDQN}} \equiv R_{t+1}+\gamma Q\left(S_{t+1}, \underset{a}{\operatorname{argmax}} Q\left(S_{t+1}, a ; \boldsymbol{\theta}_{t}\right) ; \boldsymbol{\theta}_{t}^{-}\right) $$

This target is used to decouple the selection of the action (i.e. the argmax part) from its evaluation (i.e. the computation of the Q-value at the next state with this selected action), as stated in the paper that introduced the DDQN:

The max operator in standard Q-learning and DQN, in (2) and (3), uses the same values both to select and to evaluate an action. This makes it more likely to select overestimated values, resulting in overoptimistic value estimates. To prevent this, we can decouple the selection from the evaluation
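A minimal PyTorch-like sketch of computing this DDQN target for a batch (q_net and target_net are assumed to map a batch of states to per-action Q-values, and done is a 0/1 float tensor):

import torch

def ddqn_target(reward, next_state, done, q_net, target_net, gamma=0.99):
    with torch.no_grad():
        # selection: the online Q-network picks the argmax action
        best_action = q_net(next_state).argmax(dim=1, keepdim=True)
        # evaluation: the target network evaluates that selected action
        next_q = target_net(next_state).gather(1, best_action).squeeze(1)
        return reward + gamma * next_q * (1 - done)

The (1 - done) factor simply zeroes out the bootstrap term at terminal states.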

",36055,,2444,,11/5/2020 1:19,11/5/2020 1:19,,,,0,,,,CC BY-SA 4.0 22618,1,22624,,7/22/2020 1:18,,1,262,"

I am trying to implement the DDPG algorithm based on this paper.

The part that confuses me is the actor network's update. I don't understand why the policy loss is simply the mean of $-Q(s, \mu(s))$, where $Q$ is the critic network and $\mu$ is the policy network. How does one arrive at this?

",30632,,2444,,7/22/2020 11:28,7/22/2020 12:49,"Why is the policy loss the mean of $-Q(s, \mu(s))$ in the DDPG algorithm?",,1,0,,,,CC BY-SA 4.0 22619,1,22666,,7/22/2020 5:57,,1,109,"

XOR data, without labels:

[[0,0],[0,1],[1,0],[1,1]]

I'm using this network for auto-classifying XOR data:

H1  <-- Dense(units=2, activation=relu)    #any activation here
Z   <-- Dense(units=2, activation=softmax) #softmax for 2 classes of XOR result
Out <-- Dense(units=2, activation=sigmoid) #sigmoid to return 2 values in (0,1)

There's a logical problem in the network, that is, Z represents 2 classes; however, the 2 classes can't be decoded back to the 4 samples of XOR data.

How can I fix the network above to auto-classify XOR data in an unsupervised manner?

",2844,,2444,,7/25/2020 13:57,7/25/2020 16:35,What should the output of a neural network that needs to classify in an unsupervised fashion XOR data be?,,1,3,,,,CC BY-SA 4.0 22622,1,,,7/22/2020 8:00,,4,163,"

I'm trying to decide which policy improvement algorithm to use in the context of my problem. But let me immerse you in the problem first.

Problem

I want to move a set of points in a 3D space. Depending on how the points move, the environment gives a positive or negative reward. Further, the environment does not split up into episodes, so it is a continuing problem. The state space is high-dimensional (a lot of states are possible) and many states can be similar (so state aliasing can appear), also states are continuous. The problem is dense in rewards, so for every transition, there will be a negative or positive reward, depending on the previous state.

A state is represented as a vector with dimension N (initially it will be something like ~100, but in the future, I want to work with vectors up to 1000).

In the case of an action, it is described by a 3xN matrix, where N is the same as in the case of the state. The first dimension comes from the fact that the action is a 3D displacement.

What I have done so far

Since actions are continuous, I have narrowed down my search to policy gradient methods. Further, I researched methods that work with continuous state spaces. I found that Deep Deterministic Policy Gradient (DDPG) and Proximal Policy Optimization (PPO) would fit here. Theoretically, they should work, but I'm unsure, and any advice would be gold here.

Questions

Would those algorithms (PPO or DDPG) be suitable for the problem? Are there other policy improvement algorithms, or a family of policy improvement algorithms, that would work here?

",31324,,31324,,7/22/2020 8:19,7/28/2020 10:16,Choosing a policy improvement algorithm for a continuing problem with continuous action and state-space,,1,2,,,,CC BY-SA 4.0 22623,1,22628,,7/22/2020 12:02,,6,354,"

AFAIK, GANs are used for generating/synthesizing near-perfect human faces (deepfakes), gallery arts, etc., but can GANs be used to generate something other than images?

",30725,,2444,,7/22/2020 20:49,7/23/2020 0:20,Can GANs be used to generate something other than images?,,1,0,,,,CC BY-SA 4.0 22624,2,,22618,7/22/2020 12:49,,2,,"

This is not quite the loss that is stated in the paper.

For standard policy gradient methods the objective is to maximise $v_{\pi_\theta}(s_0)$ -- note that this is analogous to minimising $-v_{\pi_\theta}(s_0)$. This is for a stochastic policy. In DDPG the policy is now assumed to be deterministic.

In general, we can write $$v_\pi(s) = \mathbb{E}_{a\sim\pi}[Q(s,a)]\;;$$ to see this note that $$Q(s,a) = \mathbb{E}[G_t | S_t = s, A_t=a]\;;$$ so if we took expectation over this with respect to the distribution of $a$ we would get $$\mathbb{E}_{a\sim\pi}[\mathbb{E}[G_t|S_t=s, A_t=a]] = \mathbb{E}[G_t|S_t=s] = v_\pi(s)\;.$$

However, if our policy is deterministic then $\pi(\cdot|s)$ is a point mass (a distribution which has probability 1 for a specific point and 0 everywhere else) for a certain action, so $\mathbb{E}_{a\sim\pi}[ Q(s,a)] = Q(s,a=\pi(s)) = v_\pi(s)$. Thus the objective is still to maximise $v_\pi(s)$ it is just that now we know the policy is deterministic we say we want to maximise $Q(s,a=\pi(s))$.

The policy gradient of this term was shown to be \begin{align} \nabla_\theta Q(s,a=\pi_\theta(s)) & \approx \mathbb{E}_{s \sim \mu}[\nabla_\theta Q(s,a=\pi_\theta(s))]\;; \\ & = \mathbb{E}_{s\sim\mu}[\nabla_aQ(s,a=\pi(s)) \nabla_\theta \pi_\theta(s)]\;; \end{align}

where if we put a minus at the front of this term then we would arrive at the loss from the paper. Intuitively this makes sense: you want to know how much the action-value function changes with respect to the parameter of the policy, but this would be difficult to calculate directly, so you use the chain rule to see how much the action-value function changes with $a$ and in turn how much $a$ (i.e. our policy) changes with the parameter of the policy.
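A minimal PyTorch sketch of this objective as it is often implemented (actor and critic are assumed to be differentiable modules taking batches of states, and states and actions respectively):

import torch

def actor_loss(states, actor, critic):
    actions = actor(states)                 # a = pi(s), the deterministic policy output
    # minimise -Q(s, pi(s)); the gradient flows through the critic into the actor
    return -critic(states, actions).mean()

Taking the mean over the batch is what approximates the expectation over the state distribution.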

I realise I have changed notation from the paper you are reading so here $\pi$ is our policy as opposed to $\mu$ and here where I have used $\mu$ I take this to be the state distribution function.

",36821,,,,,7/22/2020 12:49,,,,0,,,,CC BY-SA 4.0 22627,1,24429,,7/22/2020 20:41,,4,374,"

For the past few days, I have been trying to learn graph convolutional networks. I saw some of the lectures on YouTube, but I cannot get a clear concept of how those networks are trained. I have a vague understanding of how to perform the convolution, but I cannot understand how we train them. I want a solid mathematical understanding of graph convolutional networks. So, can anyone please suggest how to start learning graph convolutional networks, from the basics to an expert level?

",28048,,2444,,7/22/2020 20:46,11/29/2021 12:36,What is the best resources to learn Graph Convolutional Neural Networks?,,2,1,,,,CC BY-SA 4.0 22628,2,,22623,7/23/2020 0:20,,2,,"

They can indeed, although generally they are kept to images because, at the moment, that is what they are best at; they are not the best in other areas that you might consider.

GANs can be used for audio generation, with many examples such as GANSynth and GAN voice generation. But each of these tasks is outperformed by other methods. For music generation, WaveNet is the best (last I checked, and it also performs very well at speech synthesis), and a more powerful model for voice generation is achieved through the use of a VAE.

This is only looking at one area that you could use GANs for, because in reality you could use them for any kind of generation if you wanted to, but at the moment the vast majority of the research into GANs is into image generation, and as such other areas do not compete with the current SOTA techniques, unless there's some big paper I've missed within the last few months.

",26726,,,,,7/23/2020 0:20,,,,0,,,,CC BY-SA 4.0 22630,1,22646,,7/23/2020 4:56,,0,107,"

AI is the emerging field and biggest business opportunity of the next decade. It's already automating manual and repetitive tasks. And in some areas, it can learn faster than humans, if not yet as deeply.

From the Forbes article

In the AI-enabled future, humans will be able to converse and interact with each other in the native language of choice, not having to worry about miscommunicating intentions.

I would like to know more about how artificial intelligence will change the future?

",30725,,2444,,12/12/2021 13:31,12/12/2021 13:31,How artificial intelligence will change the future?,,1,0,,12/12/2021 13:31,,CC BY-SA 4.0 22631,1,22663,,7/23/2020 6:14,,3,180,"

In many diagrams, as seen below, residual neural networks are only depicted with ReLU activation functions, but can residual NNs also use other activation functions, such as the sigmoid, hyperbolic tangent, etc.?

",32636,,2444,,7/25/2020 2:50,7/25/2020 3:38,Can residual neural networks use other activation functions different from ReLU?,,1,0,,,,CC BY-SA 4.0 22632,1,22688,,7/23/2020 7:10,,-1,147,"

Normalisation transforms data into a range: $$X_i = \dfrac{X_i - Min}{Max-Min}$$

Practically, I found out that the model doesn't generalise well when using normalisation of input data, instead of standardisation (another formula shown below).

Before training a neural net, data are usually standardised or normalised. Standardising seems good as it makes the model generalise better, while normalisation may make the model not work with values outside the training data range.

So I'm using standardisation for the input data (X); however, I'm confused about whether I should standardise the expected output values too.

For a column in input data: $$X_i = \dfrac{(X_i - Mean)}{Standard\ Deviation\ of\ the\ Column}$$
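A minimal NumPy sketch of applying this formula column-wise (the data values are made up for illustration):

import numpy as np

X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 250.0]])

col_mean = X.mean(axis=0)   # per-column mean
col_std = X.std(axis=0)     # per-column standard deviation
X_standardised = (X - col_mean) / col_std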

Should I apply this formula to the expected output values (labels) too?

",2844,,2844,,12/21/2021 4:53,12/21/2021 4:53,Is it necessary to standardise the expected output,,1,2,,,,CC BY-SA 4.0 22633,1,,,7/23/2020 7:52,,2,85,"

I'm wondering, has anyone seen any paper where one trains a network but biases it to produce similar outputs to a given model (such as one given from expert opinion or it being a previously trained network).

Formally, I'm looking for a paper doing the following:

Let $g:\mathbb{R}^d\rightarrow \mathbb{R}^D$ be a model (not necessarily, but possibly, a neural network) trained on some input/output data pairs $\{(x_n,y_n)\}_{n=1}^N$ and train a neural network $f_{\theta}(\cdot)$ on $$ \underset{\theta}{\operatorname{argmin}}\sum_{n=1}^N \left\| f_{\theta}(x_n) - y_n \right\| + \lambda \left\| f_{\theta}(x_n) - g(x_n) \right\|, $$ where $\theta$ represents all the trainable weight and bias parameters of the network $f_{\theta}(\cdot)$.

So put another way...$f_{\theta}(\cdot)$ is being regularized by the outputs of another model...

",31649,,,,,7/23/2020 7:52,Forcing a neural network to be close to a previous model - Regularization through given model,,0,7,,,,CC BY-SA 4.0 22634,1,,,7/23/2020 9:33,,2,728,"

I have used a different setting, but DDPG is not learning and it does not converge. I have used these codes 1, 2, and 3, and I used different optimizers, activation functions, and learning rates, but there is no improvement.

    parser.add_argument('--actor-lr', help='actor network learning rate', default=0.001)
    parser.add_argument('--critic-lr', help='critic network learning rate', default=0.0001)
    parser.add_argument('--gamma', help='discount factor for critic updates', default=0.95)

    parser.add_argument('--tau', help='soft target update parameter', default=0.001)
    parser.add_argument('--buffer-size', help='max size of the replay buffer', default=int(1e5))
    parser.add_argument('--minibatch-size', help='size of minibatch for minibatch-SGD', default=64)

    # run parameters
    # parser.add_argument('--env', help='choose the gym env- tested on {Pendulum-v0}', default='MountainCarContinuous-v0')
    parser.add_argument('--random-seed', help='random seed for repeatability', default=1234)
    parser.add_argument('--max-episodes', help='max num of episodes to do while training', default=200)
    parser.add_argument('--max-episode-len', help='max length of 1 episode', default=100)

I have trained in the same environment with A2C and it converged.

Which parameters should I change to make the DDPG converge? Can anyone help me with this?

",21181,,2444,,7/25/2020 11:57,7/29/2020 12:51,Why is DDPG not learning and it does not converge?,,0,0,,,,CC BY-SA 4.0 22635,2,,22094,7/23/2020 10:23,,3,,"

The unrolling step is due to the fact you end up with an equation that you can keep expanding indefinitely.

Note that we start with calculating $\nabla v_\pi(s)$ and arrive at $$\nabla v_\pi(s) = \sum_a\left[ \nabla \pi(a|s) q_\pi(s,a) + \pi(a|s) \sum_{s'}p(s'|s,a) \nabla v_\pi (s') \right]\;,$$ which contains a term for $\nabla v_\pi(s')$. This is a recursive relationship, similar to the bellman equation, so we can substitute in a term for $\nabla v_\pi(s')$ which will be a term similar just with $\nabla v_\pi(s'')$. As I mentioned, we can do this indefinitely which leads us to

$$\nabla v_\pi(s) = \sum_{x \in \mathcal{S}} \sum_{k=0}^\infty \mathbb{P}(s\rightarrow x, k, \pi) \sum_a \nabla \pi(a|x) q_\pi(x,a)\;.$$

We need the term $\sum_{x \in \mathcal{S}} \sum_{k=0}^\infty \mathbb{P}(s\rightarrow x, k, \pi)$ because we want to take an average over the state space, however due to unrolling there are many different $s_t$'s that we need to average over (this comes from the $s',s'',s''',...$ in the unrolling) so we also need to add the probability of transitioning from state $s$ to state $x$ in $k$ time steps, where we sum over an infinite horizon due to the repeated unrolling.

If you are wondering what happens to the terms $\pi(a|s)$ and $p(s'|s,a)$ terms and why they are not explicitly shown in this final form, it is because this is exactly what the $\mathbb{P}(s\rightarrow x, k, \pi)$ represents. The average over all possible states accounts for the $p(s'|s,a)$ and the fact that we follow policy $\pi$ in the probability statement accounts for the $\pi(a|s)$.

",36821,,36821,,1/8/2023 18:00,1/8/2023 18:00,,,,0,,,,CC BY-SA 4.0 22636,1,,,7/23/2020 10:44,,3,258,"

Suppose I have a model that was trained with a dataset that contains the features (f1, f2, f3, f4, f5, f6). However, my test dataset does not contain all features of the training dataset, but only (f1, f2, f3). How can I predict the true label of the entries of this test dataset without all features?

",38808,,2444,,7/25/2020 12:29,7/27/2020 10:13,How can I predict the true label for data with incomplete features based on the trained model with data with more features?,,2,0,,,,CC BY-SA 4.0 22637,1,,,7/23/2020 13:55,,2,235,"

In the basic variant of GCN we have the following:

Here we aggregate the information from the adjacent nodes and pass it to a neural network, then transform our own information and add them all together.

But the main question is: how can we ensure that $W_{k}\left(\sum \frac{h_k}{N(V)}\right)$ will be the same size as $B_{k}h_{v}$, and does $B_{k}$ imply another neural network?

",28048,,28048,,3/13/2021 9:31,3/13/2021 9:31,How Graph Convolutional Neural Networks forward propagate?,,1,0,,,,CC BY-SA 4.0 22638,2,,17734,7/23/2020 14:01,,2,,"

It is not so much a problem of using Reinforcement Learning to train the neural networks; it is the assumptions made about the data given to standard neural networks. They are not capable of handling strongly correlated data, which is one of the motivations for introducing Recurrent Neural Networks, as they can handle this correlated data well.

",36821,,,,,7/23/2020 14:01,,,,0,,,,CC BY-SA 4.0 22639,2,,22637,7/23/2020 14:14,,2,,"

I think the picture you're presenting is mostly for educational purposes, and that's why they are excluding the node itself from its neighbors and using two distinct networks (in most of the papers I've read, they use the same network for the neighbors and for the center node). But you are right that the two networks need to have the same input and output shapes; otherwise, the point-wise summation between the two terms is not possible.

",20430,,,,,7/23/2020 14:14,,,,2,,,,CC BY-SA 4.0 22640,2,,12311,7/23/2020 14:22,,0,,"

Get a Raspberry Pi with a camera, power it with a battery bank, attach it to the EV3, and run two programs: one on the Raspberry Pi and another on the EV3, communicating with each other via MQTT.

",38815,,,,,7/23/2020 14:22,,,,0,,,,CC BY-SA 4.0 22641,1,,,7/23/2020 17:26,,1,349,"

DQN implemented at https://github.com/PacktPublishing/PyTorch-1.x-Reinforcement-Learning-Cookbook/blob/master/Chapter07/chapter7/dqn.py uses the mean square error loss function for the neural network to learn the state -> action mapping :

self.criterion=torch.nn.MSELoss()

Could cross-entropy be used instead as the loss function? Cross entropy is typically used for classification, and mean squared error for regression.

As the actions are discrete (the example utilises the mountain car environment - https://github.com/openai/gym/wiki/MountainCar-v0) and map to [0,1,2] can cross-entropy loss be used instead of mean squared error? Why use regression as the state -> action function approximator for deep Q learning instead of classification?

Entire DQN src from https://github.com/PacktPublishing/PyTorch-1.x-Reinforcement-Learning-Cookbook/blob/master/Chapter07/chapter7/dqn.py :

'''
Source codes for PyTorch 1.0 Reinforcement Learning (Packt Publishing)
Chapter 7: Deep Q-Networks in Action
Author: Yuxi (Hayden) Liu
'''

import gym
import torch

from torch.autograd import Variable
import random


env = gym.envs.make("MountainCar-v0")



class DQN():
    def __init__(self, n_state, n_action, n_hidden=50, lr=0.05):
        self.criterion = torch.nn.MSELoss()
        self.model = torch.nn.Sequential(
                        torch.nn.Linear(n_state, n_hidden),
                        torch.nn.ReLU(),
                        torch.nn.Linear(n_hidden, n_action)
                )
        self.optimizer = torch.optim.Adam(self.model.parameters(), lr)


    def update(self, s, y):
        """
        Update the weights of the DQN given a training sample
        @param s: state
        @param y: target value
        """
        y_pred = self.model(torch.Tensor(s))
        loss = self.criterion(y_pred, Variable(torch.Tensor(y)))
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()


    def predict(self, s):
        """
        Compute the Q values of the state for all actions using the learning model
        @param s: input state
        @return: Q values of the state for all actions
        """
        with torch.no_grad():
            return self.model(torch.Tensor(s))



def gen_epsilon_greedy_policy(estimator, epsilon, n_action):
    def policy_function(state):
        if random.random() < epsilon:
            return random.randint(0, n_action - 1)
        else:
            q_values = estimator.predict(state)
            return torch.argmax(q_values).item()
    return policy_function


def q_learning(env, estimator, n_episode, gamma=1.0, epsilon=0.1, epsilon_decay=.99):
    """
    Deep Q-Learning using DQN
    @param env: Gym environment
    @param estimator: DQN object
    @param n_episode: number of episodes
    @param gamma: the discount factor
    @param epsilon: parameter for epsilon_greedy
    @param epsilon_decay: epsilon decreasing factor
    """
    for episode in range(n_episode):
        policy = gen_epsilon_greedy_policy(estimator, epsilon, n_action)
        state = env.reset()
        is_done = False

        while not is_done:
            action = policy(state)
            next_state, reward, is_done, _ = env.step(action)
            total_reward_episode[episode] += reward

            modified_reward = next_state[0] + 0.5

            if next_state[0] >= 0.5:
                modified_reward += 100
            elif next_state[0] >= 0.25:
                modified_reward += 20
            elif next_state[0] >= 0.1:
                modified_reward += 10
            elif next_state[0] >= 0:
                modified_reward += 5

            q_values = estimator.predict(state).tolist()

            if is_done:
                q_values[action] = modified_reward
                estimator.update(state, q_values)
                break

            q_values_next = estimator.predict(next_state)

            q_values[action] = modified_reward + gamma * torch.max(q_values_next).item()

            estimator.update(state, q_values)

            state = next_state


        print('Episode: {}, total reward: {}, epsilon: {}'.format(episode, total_reward_episode[episode], epsilon))

        epsilon = max(epsilon * epsilon_decay, 0.01)

n_state = env.observation_space.shape[0]
n_action = env.action_space.n
n_hidden = 50
lr = 0.001
dqn = DQN(n_state, n_action, n_hidden, lr)


n_episode = 1000

total_reward_episode = [0] * n_episode

q_learning(env, dqn, n_episode, gamma=.9, epsilon=.3)



import matplotlib.pyplot as plt
plt.plot(total_reward_episode)
plt.title('Episode reward over time')
plt.xlabel('Episode')
plt.ylabel('Total reward')
plt.show()
",12964,,,,,7/23/2020 17:26,Classification or regression for deep Q learning,,0,1,,,,CC BY-SA 4.0 22642,1,,,7/23/2020 17:32,,7,228,"

It is proved that the Bellman update is a contraction (1).

Here is the Bellman update that is used for Q-Learning:

$$Q_{t+1}(s, a) = Q_{t}(s, a) + \alpha*(r(s, a, s') + \gamma \max_{a^*} (Q_{t}(s', a^*)) - Q_t(s,a)) \tag{1} \label{1}$$

The proof of (\ref{1}) being a contraction comes from one of the facts (the relevant one for the question) that the max operation is non-expansive; that is:

$$\lvert \max_a f(a)- \max_a g(a) \rvert \leq \max_a \lvert f(a) - g(a) \rvert \tag{2}\label{2}$$

This is also proved in a lot of places and it is pretty intuitive.

Consider the following Bellman update:

$$ Q_{t+1}(s, a) = Q_{t}(s, a) + \alpha*(r(s, a, s') + \gamma SAMPLE_{a^*} (Q_{t}(s', a^*)) - Q_t(s,a)) \tag{3}\label{3}$$

where $SAMPLE_a(Q(s, a))$ samples an action with respect to the Q values (weighted by their Q values) of each action in that state.

Is this new Bellman operation still a contraction?

Is the SAMPLE operation non-expansive? It is, of course, possible to generate samples that will not satisfy equation (\ref{2}). I ask is it non-expansive in expectation?

My approach is:

$$\lvert\,\mathbb{E}_{a \sim Q}[f(a)] - \mathbb{E}_{a \sim Q}[g(a)]\, \rvert \leq \,\,\mathbb{E}_{a \sim Q}\lvert\,\,[f(a) - g(a)]\,\,\rvert \tag{4} \label{4} $$

Equivalently:

$$\lvert\,\mathbb{E}_{a \sim Q}[f(a) - g(a)] \, \rvert \leq \,\,\mathbb{E}_{a \sim Q}\lvert\,\,[f(a) - g(a)]\,\,\rvert$$

(\ref{4}) is true since:

$$\lvert\,\mathbb{E}[X] \, \rvert \leq \,\,\mathbb{E} \,\,\lvert\,\,[X]\,\,\rvert $$

But I am not sure if proving (\ref{4}) proves the theorem. Do you think that this is a legitimate proof that (\ref{3}) is a contraction?

(If so; this would mean that stochastic policy q learning theoretically converges and we can have stochastic policies with regular q learning; and this is why I am interested.)

Both intuitive answers and mathematical proofs are welcome.

",38818,,,user9947,7/25/2020 8:28,7/25/2020 8:28,Is the Bellman equation that uses sampling weighted by the Q values (instead of max) a contraction?,,0,2,,,,CC BY-SA 4.0 22643,1,22644,,7/23/2020 20:44,,3,472,"

Why does every neuron in a hidden layer of a multi-layer perceptron (MLP) typically have the same activation function as every other neuron in the same or other hidden layers (so I exclude the output layer, which typically has a different activation function) of the MLP? Is this a requirement, are there any advantages, or maybe is it just a rule of thumb?

",30885,,2444,,1/17/2021 16:24,1/17/2021 16:24,Why does every neuron in hidden layers of a multi-layer perceptron typically have the same activation function?,,1,0,,2/3/2021 11:25,,CC BY-SA 4.0 22644,2,,22643,7/23/2020 21:08,,2,,"

As you stated, it's popular to have some form of a rectified linear unit (ReLU) activation in hidden layers and the output layer is often a softmax or sigmoid (depending also on the problem: multi-class or binary classification, respectively), which provides an output that can be viewed as a probability distribution.

You could generalize this further to blocks of different activation functions within the same layer. This is something I've thought about, haven't done, but imagine has been attempted. In some sense, the idea here would be to allow for a subsection of the network to develop a representation that may not be feasible otherwise. These different representations within the same layer would then be unified by subsequent layers as we move closer to the output.
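A minimal Keras sketch of that idea, with two blocks of different activations concatenated within the same hidden layer (the layer sizes and input shape are arbitrary assumptions):

from tensorflow import keras

inputs = keras.Input(shape=(32,))
# Two "blocks" in the same hidden layer, each with a different activation
block_relu = keras.layers.Dense(16, activation="relu")(inputs)
block_tanh = keras.layers.Dense(16, activation="tanh")(inputs)
hidden = keras.layers.Concatenate()([block_relu, block_tanh])
outputs = keras.layers.Dense(1, activation="sigmoid")(hidden)
model = keras.Model(inputs, outputs)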

",5210,,2444,,1/17/2021 16:21,1/17/2021 16:21,,,,5,,,,CC BY-SA 4.0 22646,2,,22630,7/24/2020 6:41,,1,,"

Here is something I've noticed about humans: we're bad at projecting the future with all of its 2nd, 3rd, ..., N-th order effects, and we're REALLY bad at projecting and quantifying risk. So, I'm not sure that you'll get an answer that is anything more than either trivially true ("Chatbots will be commonplace") or correct but not justified ("We'll explore the stars by sending out swarms of AI-enhanced satellites that will use their computing power to improve themselves since they are always on but have copious amounts of down-time").

But if you want to limit the timeframe to the next decade and don't mind the previous caveat, here goes:

  • Chatbots will be commonplace. This will include ones that can handle conversation, like Gridspace but can also handle multiple languages and code-switching.
  • In professional use, AI will augment professionals (such as lawyers, physicians, teachers, and accountants) at the upper end (who will add value with strategy, experience, and research) and replace them at the lower end.
  • In military use, they will be used for target identification, automated assessment of damage, threat notification, and forecasting.
  • We'll explore the stars by sending out swarms of AI-enhanced satellites that will use their computing power to improve themselves since they are always on but have copious amounts of down-time.

I don't hold out a lot of hope for error-free communication between speakers of different languages, as you can have two fluent native speakers who have known each other for years speak to each other and still miscommunicate.

",30750,,,,,7/24/2020 6:41,,,,3,,,,CC BY-SA 4.0 22647,2,,22105,7/24/2020 7:02,,1,,"

A chromosome in this case could be a set of filters, each extracting a different feature (analogous to Convolutional Neural Network). Your question doesn't say what you want to do with these features, so this solution is made under the assumption that there is a fitness function which would take these features as an input and output a score. Then, each gene is a parameter for a filter, each chromosome defines a set of such filters, which makes up an individual. A population is a set of such individuals.

",38671,,,,,7/24/2020 7:02,,,,0,,,,CC BY-SA 4.0 22649,1,,,7/24/2020 8:54,,1,50,"

I want to teach a neural network to distinguish between different types of defects. For that, I generated images of fake-defects. The images of the fake-defect types are attached.

I tried many different network architectures now:

  • resnet18
  • squeezenet
  • own architectures: a narrow network with broad layers and high dropout rates.

I have to say that some of these defects have really random shapes, like the types single-dirt or multi-dirt. I imagine that the classification is not as easy as I thought before, due to the lack of repetitive features within the defects. But I always feel like the network is learning some "weird" features, which do not occur in the test set, and the results are really frustrating. I felt like training on binary images gave way better results, which IMO should not be the case.

Still, I feel like a neural network should be able to learn to distinguish them.

Which kind of network architecture would you recommend to classify the images in the attachment?

",26857,,2444,,7/25/2020 12:22,7/25/2020 12:22,Which neural network should I use to distinguish between different types of defects?,,0,0,,,,CC BY-SA 4.0 22650,1,,,7/24/2020 8:58,,1,146,"

Kipf et al. described in their paper that we can write the graph convolution operation like this:

$$H_{t+1} = AH_tW_t$$

where $A$ is the normalized adjacency matrix, $H_t$ is the embedded representation of the nodes and $W_t$ is the weight matrix.

Now, can I think of the same formula as first performing a 2D convolution with a fixed-size kernel over the whole feature space and then multiplying the result with the adjacency matrix?

If this is the case, I think I can create a graph convolution operation just by using a Conv2D layer and then performing a simple matrix multiplication with the adjacency matrix using PyTorch.
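
For concreteness, here is a minimal sketch of how I picture the plain matrix-multiplication form of the formula (my own code, assuming $A$ is already normalized and dense):

import torch
import torch.nn as nn

class SimpleGraphConv(nn.Module):
    """One layer of H_{t+1} = A H_t W_t, with A assumed already normalized."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(in_features, out_features) * 0.01)

    def forward(self, A, H):
        # H: (num_nodes, in_features), A: (num_nodes, num_nodes)
        return A @ (H @ self.weight)

# toy usage: 5 nodes, 8 input features, 16 output features
A = torch.eye(5)                      # placeholder for the normalized adjacency matrix
H = torch.randn(5, 8)
out = SimpleGraphConv(8, 16)(A, H)    # shape (5, 16)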

",28048,,2444,,11/30/2021 7:03,11/30/2021 7:03,Can I think of the graph convolution operation as a regular 2D convolution for images?,,0,1,,,,CC BY-SA 4.0 22651,1,,,7/24/2020 9:54,,2,51,"

In the paper What You Get Is What You See: A Visual Markup Decompiler, the authors have proposed a method to extract the features from the CNN and then arrange those extracted features in a grid to pass into an RNN encoder. Here's an illustration.

I can easily extract features either from an existing model, like ResNet or VGG, or by making a new CNN model as they have described in the paper.

For example, let us suppose, I do this

features = keras.applications.ResNet()(images_array) # just hypothetical

How can I convert these extracted features to the grid? I am supposed to feed the output of the grid arrangement to an LSTM encoder as:

keras.layers.LSTM()(grid) # again, hypothetical type

I just want to know what the authors mean by arranging the output in a grid format.
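
For concreteness, here is a rough sketch of what I currently imagine (this is only my assumption of what "arranging in a grid" could mean, namely flattening the final feature map into a sequence of grid cells):

import tensorflow as tf
from tensorflow.keras import layers

feature_map = layers.Input(shape=(8, 16, 512))             # hypothetical CNN output (H, W, C)
grid = layers.Reshape((8 * 16, 512))(feature_map)          # one "cell" per spatial position
encoded = layers.LSTM(256, return_sequences=True)(grid)    # encode the sequence of cells
encoder = tf.keras.Model(feature_map, encoded)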

",36062,,2444,,7/25/2020 12:18,7/25/2020 12:18,"What is meant by ""arranging the final features of CNN in a grid"" and how to do it?",,0,0,,,,CC BY-SA 4.0 22653,1,22659,,7/24/2020 11:13,,3,99,"

When training a relatively small DL model, which takes several hours to train, I typically start with some values from the literature and then use a trial-and-error or grid-search approach to fine-tune the hyper-parameters, in order to prevent overfitting and achieve sufficient performance.

However, it is not uncommon for large models to have training time measured in days or weeks [1], [2], [3].

How are hyperparameters determined in such cases?

",38829,,2444,,1/18/2021 15:15,1/18/2021 15:15,How are training hyperparameters determined for large models?,,1,1,,,,CC BY-SA 4.0 22654,1,,,7/24/2020 12:33,,1,65,"

I am working with co-reference resolution in a large text. Is there an optimal way to split the text into small parts, or is the correct procedure to use the entire text?

Just for reference, I am using the library spacy-neuralcoref in Python that is based on Deep Reinforcement Learning for Mention-Ranking Coreference Models by Kevin Clark and Christopher D. Manning, EMNLP 2016.

Why am I asking about splitting the text?

I am applying coreference to chapters of books (roughly 30 pages of text). All the examples I have seen show coreference applied to small pieces of text. I applied it to a chapter and found strange results. However, this is not a clear justification in itself, since the state of the art in coreference is about 60%. Am I right?

I didn't check all the databases that people use to test coreference, but the ones I took a look at (like the MUC 3 and MUC 4 data sets), if I understand well, were composed of collections of a small number of paragraphs.

A test Example:

TST1-MUC3-0001

GUATEMALA CITY, 4 FEB 90 (ACAN-EFE) -- [TEXT] THE GUATEMALA ARMY DENIED TODAY THAT GUERRILLAS ATTACKED THE "SANTO TOMAS" PRESIDENTIAL FARM, LOCATED ON THE PACIFIC SIDE, WHERE PRESIDENT CEREZO HAS BEEN STAYING SINCE 2 FEBRUARY.

A REPORT PUBLISHED BY THE "CERIGUA" NEWS AGENCY -- MOUTHPIECE OF THE GUATEMALAN NATIONAL REVOLUTIONARY UNITY (URNG) -- WHOSE MAIN OFFICES ARE IN MEXICO, SAYS THAT A GUERRILLA COLUMN ATTACKED THE FARM 2 DAYS AGO.

HOWEVER, ARMED FORCES SPOKESMAN COLONEL LUIS ARTURO ISAACS SAID THAT THE ATTACK, WHICH RESULTED IN THE DEATH OF A CIVILIAN WHO WAS PASSING BY AT THE TIME OF THE SKIRMISH, WAS NOT AGAINST THE FARM, AND THAT PRESIDENT CEREZO IS SAFE AND SOUND.

HE ADDED THAT ON 3 FEBRUARY PRESIDENT CEREZO MET WITH THE DIPLOMATIC CORPS ACCREDITED IN GUATEMALA.

THE GOVERNMENT ALSO ISSUED A COMMUNIQUE DESCRIBING THE REBEL REPORT AS "FALSE AND INCORRECT," AND STRESSING THAT THE PRESIDENT WAS NEVER IN DANGER.

COL ISAACS SAID THAT THE GUERRILLAS ATTACKED THE "LA EMINENCIA" FARM LOCATED NEAR THE "SANTO TOMAS" FARM, WHERE THEY BURNED THE FACILITIES AND STOLE FOOD.

A MILITARY PATROL CLASHED WITH A REBEL COLUMN AND INFLICTED THREE CASUALTIES, WHICH WERE TAKEN AWAY BY THE GUERRILLAS WHO FLED TO THE MOUNTAINS, ISAACS NOTED.

HE ALSO REPORTED THAT GUERRILLAS KILLED A PEASANT IN THE CITY OF FLORES, IN THE NORTHERN EL PETEN DEPARTMENT, AND BURNED A TANK TRUCK.

",38831,,2444,,7/25/2020 12:08,7/25/2020 12:08,Is there an optimal way to split the text into small parts when working with co-reference resolution?,,0,2,,,,CC BY-SA 4.0 22655,1,,,7/24/2020 13:51,,1,74,"

I am trying to classify images from a set as tampered or pristine. For that, I have built a network in which I divide each image into multiple overlapping patches and then classify them as pristine or fake (based on the probability outputs). Now I want to extend this to the image level. That is, I want to build some model, or some rule over the output probabilities of the patches of each image, to get the probability that the whole image is fake or pristine.

The ways I am thinking of doing this are:

  1. Build a shallow network over the patch probabilities. The problem in this case is that all images are of different shapes, so they yield probability vectors of different lengths.
  2. Apply an ML classifier (something like logistic regression) to the output probabilities, appending zeros to the generated probability vector so that every image has a probability vector of the same size as input.
  3. Generate a mask from the patches and then build a simple classification network over the masks using the original image labels.

I can't really say which among the above three is better or worse; I don't even know whether all three are feasible. (I've kind of hit a roadblock in my thinking.)

So the question is: am I thinking in the right direction, which of the ideas I am considering would be better, and why? Is there anything better than what I am thinking of? It would be helpful if you could suggest some resources.

",38727,,,,,7/24/2020 13:51,Extending patch based image classification into image classification,,0,0,,,,CC BY-SA 4.0 22657,1,,,7/24/2020 17:54,,2,115,"

Recently I was reading the paper Skeleton Based Action Recognition Using Spatio Temporal Graph Convolution. In this paper, the authors claim (below equation (\ref{9})) that we can perform graph convolution with the following formula

$$ \mathbf{f}_{o u t}=\mathbf{\Lambda}^{-\frac{1}{2}}(\mathbf{A}+\mathbf{I}) \mathbf{\Lambda}^{-\frac{1}{2}} \mathbf{f}_{i n} \mathbf{W} \label{9}\tag{9} $$

using the standard 2d convolution with kernels of shape $1 \times \Gamma$ (where $\Gamma$ is defined under equation 6 of the paper), and then multiplying it with the normalised adjacency matrix

$$\mathbf{\Lambda}^{-\frac{1}{2}}(\mathbf{A}+\mathbf{I}) \mathbf{\Lambda}^{-\frac{1}{2}}$$

For the past few days, I have been thinking about this claim, but I can't find an answer. Has anyone read this paper who can help me figure it out, please?

",28048,,2444,,7/24/2020 20:28,11/1/2020 1:29,Why can we perform graph convolution using the standard 2d convolution with $1 \times \Gamma$ kernels?,,0,25,,,,CC BY-SA 4.0 22659,2,,22653,7/24/2020 19:11,,3,,"

In general, it is definitely very computationally expensive, so an exhaustive search is not performed in practice. There are, however, some recent approaches for determining whether the architecture is "fine" without training the neural network first, for example by looking at the covariance matrix after forwarding the data, as in the recent paper Neural Architecture Search without Training. However, such an approach is very limited.

",38846,,2444,,1/18/2021 15:12,1/18/2021 15:12,,,,0,,,,CC BY-SA 4.0 22661,2,,22589,7/25/2020 1:22,,3,,"
  • I'm not sure any intelligent mechanism can be entirely free of symbolic logic.

Even where a decision is statistically based, a machine that takes actions must include some form of:

IF {some condition}
THEN {some action}

As to the popularity of newly proven statistical AI methods (ANN and genetic algorithms), this derives from the greater utility they demonstrate at ever more complex problems compared to expert systems ("good old fashioned AI") for problems that do not have a mathematical solution.

(i.e. the statistical approach for 3x3 Tic-tac-toe is overkill and unnecessary because the 3x3 form is a solved game. But for larger-order gameboards $m \times m$ or $m \times n$, or the n-dimensional game $m^n$, barring a mathematical solution that applies to every variation, an ANN is the way to go.)

The main issue with expert systems, no matter how complex, is "brittleness"—inability to adapt to changes without human programmer intervention. As conditions change, the mechanism demonstrates diminishing utility, or simply "breaks" (invalid input as an example.)

  • The amount of human effort required to create Deep Blue was monumental, which is why it took decades to achieve its goal, funded by a large corporation with a history of basic research.

Compare to a simple ANN that can be trained to achieve the same goal in an extremely short timeframe.

It's possible that future artificial general intelligences of whatever strength would involve statistical AI programming and adapting their own symbolic functions.

Finally, symbolic AI is still vastly more widely implemented than statistical AI, in that all of the basic functions of modern computing, all of the mathematical functions, all traditional software and apps, utilize symbolic logic, even if the high level function is statistically driven. This will likely always be the case.

Thus, in terms of which method is best for a given problem, it really depends on the nature/structure of the problem, its solvability or even decidability, as well as its tractability.

",1671,,1671,,7/28/2020 0:30,7/28/2020 0:30,,,,0,,,,CC BY-SA 4.0 22662,1,,,7/25/2020 3:12,,1,42,"

I came across a paper that describes its model architecture in the following way.

Our TRIL network is a two-channel network jointly trained to predict the expert’s action given state and the system’s next state transition given state and expert action. The training procedure of TRIL is similar to that of a multi-channel supervised classifier with regularization. Let $\theta_{π_0}$ be the parameters of TRIL and $L_{ce}$ be the cross entropy loss for predicting expert’s action and $L_{mse}$ be the mean squared error loss on predicting next state given current state and the expert’s action

The loss function is given in the following manner $$L(\theta_{\pi_0}) = L_{ce}(a, \pi_0(s)) + \lambda L_{mse}(T_{\pi_0}(s,a),s')$$

TRIL is a dual-channel network that shares certain hidden layers and jointly predicts expert action (a) and state transitions (s’)

I am not sure what a dual-channel network means, and what it means when it is able to jointly predict two outputs. It seems similar to multi-task learning, since there are shared hidden layers and different "task" predictions, but I am not too sure of that either.

",32780,,,,,7/25/2020 3:12,What is a multi channel supervised classifier?,,0,0,,,,CC BY-SA 4.0 22663,2,,22631,7/25/2020 3:32,,2,,"

The problem with certain activation functions, such as the sigmoid, is that they squash the input to a finite interval (i.e. they are sometimes classified as saturating activation functions). For example, the sigmoid function has codomain $[0, 1]$, as you can see from the illustration below.

This property/behaviour can lead to the vanishing gradient problem (which was one of the problems that Sepp Hochreiter, the author of the LSTM, was trying to solve in the context of recurrent neural networks, when developing the LSTM, along with his advisor, Schmidhuber).

Empirically, people have noticed that ReLU can avoid this vanishing gradient problem. See e.g. this blog post. The paper Deep Sparse Rectifier Neural Networks provides more details about the advantage of ReLUs (aka rectifiers), so you may want to read it. However, ReLUs can also suffer from another (opposite) problem, i.e. the exploding gradient problem. Nevertheless, there are several ways to combat this issue. See e.g. this blog post.
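
To make this concrete, here is a tiny numerical sketch (my own, not from the papers above) of how the sigmoid's derivative, which is at most $0.25$, shrinks a gradient signal that is propagated through many layers, while the ReLU derivative (equal to $1$ for positive inputs) does not:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = 2.0
sigmoid_grad = sigmoid(x) * (1 - sigmoid(x))   # derivative of the sigmoid at x (~0.105)
relu_grad = 1.0 if x > 0 else 0.0              # derivative of ReLU at x

n_layers = 20
print("sigmoid chain:", sigmoid_grad ** n_layers)   # ~1e-20: the signal vanishes
print("relu chain:   ", relu_grad ** n_layers)      # stays 1.0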

That being said, I am not an expert on residual networks, but I think that they used the ReLU to further avoid the vanishing gradient problem. This answer (that I gave some time ago) should give you some intuition about why residual networks can avoid the vanishing gradient problem.

",2444,,2444,,7/25/2020 3:38,7/25/2020 3:38,,,,0,,,,CC BY-SA 4.0 22664,1,,,7/25/2020 7:58,,2,118,"

Is there a type of neural network that can be fed patterns to train itself on to complete new patterns that it has not seen before?

What I'm trying to do is train a neural network to transform an image into another image. The image may be slightly different each time (denoted with different lines in the shapes) but a human would get the idea of how the new images should look. I'd like to make a network that can learn how to learn what comes next and then predict the rest of the sequence from the first part of a new sequence.

Taking the picture below as an example. The neural network would be fed the patterns in grey and learn how to predict the next ones in the sequence. Then the user would put the blue shapes into the network and hope to get the green ones out.

Is there a neural network that could perform this type of function of completing a pattern based on only a small number of examples to start the pattern based on the other patterns it has seen?

EDIT: Corrected image and added more context

",38857,,36578,,11/10/2020 14:08,11/10/2020 14:08,What kind of neural network can be trained to recognise patterns?,,1,4,,,,CC BY-SA 4.0 22665,1,,,7/25/2020 11:31,,1,67,"

Reading the paper 'Reinforcement Learning for FX trading 'at https://stanford.edu/class/msande448/2019/Final_reports/gr2.pdf it states:

While our end goal is to be able to make decisions on a universal time scale, in order to apply a reinforcement learning approach to this problem with rewards that do not occur at each step, we formulate the problem with a series of episodes. In each episode, which we designate to be one hour long, the agent will learn the make decisions to maximize the reward (return) in that episode, given the time series features we have.

This may be a question for the authors, but is it not better in RL to apply rewards at each time step instead of "rewards that do not occur at each step"? If we apply rewards at each time step, then the RL algorithm will achieve better convergence properties as a result of learning at smaller time intervals rather than waiting for "one hour". Why not apply rewards at each time step?

",12964,,,,,7/25/2020 11:31,When to apply reward for time series data?,,0,0,,,,CC BY-SA 4.0 22666,2,,22619,7/25/2020 16:35,,1,,"

How to fix the network above to auto-classify XOR data, in unsupervised manner?

This cannot be done, except accidentally.

Unsupervised learning cannot replace or emulate supervised learning.

As a thought experiment, consider why you would expect the network to discover XOR, when simply considering outputs rounded to binary, you could equally find AND, OR, NAND, NOR or any of the 16 possible mapping functions from input to output. All of the possible maps are equally valid functions, and there is no reason why a discovered function mapping should become any one of them by preference.

Unsupervised learning approaches typically find patterns that optimise some measure across the dataset without using labelled data. Clustering is a classic example, and auto-encoding is sometimes considered unsupervised because there is no separate label (although the term self-supervised is also used, because there is still technically a label used in training, it happens to equal the input).

You cannot use auto-encoding approaches here anyway, because XOR needs to map $\{0,1\} \times \{0,1\} \rightarrow \{0,1\}$

You could potentially use a loss function based on how close to a 0 or 1 any output is. That should cause the network to converge to one of the 16 possible binary functions, based on random initialisation. For example, you could use $y(1-y)$ as the loss.
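
A minimal sketch of that last idea (just an illustration with hypothetical layer sizes), where the labels are never used by the loss:

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype="float32")
dummy_y = np.zeros((4, 1), dtype="float32")    # ignored by the loss below

def push_to_binary(y_true, y_pred):
    # y * (1 - y): minimised when the outputs are close to 0 or 1
    return y_pred * (1.0 - y_pred)

model = keras.Sequential([
    layers.Dense(4, activation="tanh", input_shape=(2,)),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss=push_to_binary)
model.fit(X, dummy_y, epochs=500, verbose=0)
print(model.predict(X).round())   # converges to *some* binary function, not necessarily XOR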

",1847,,,,,7/25/2020 16:35,,,,3,,,,CC BY-SA 4.0 22667,1,,,7/25/2020 17:19,,3,373,"

In this article I am reading:

$D_{KL}$ gives us infinity when two distributions are disjoint. The value of $D_{JS}$ has sudden jump, not differentiable at $\theta=0$. Only Wasserstein metric provides a smooth measure, which is super helpful for a stable learning process using gradient descents.

Why is this important for a stable learning process? I also have the feeling that this is the reason for mode collapse in GANs, but I am not sure.

The Wasserstein GAN paper also talks about it obviously, but I think I am missing a point. Does it say JS does not provide a usable gradient? What exactly does that mean?

",38812,,2444,,1/25/2021 19:04,1/25/2021 19:04,What is the reason for mode collapse in GAN as opposed to WGAN?,,1,0,,,,CC BY-SA 4.0 22671,1,22672,,7/26/2020 4:39,,4,790,"

AIs like Siri and Alexa respond to their names being called. How does the system recognize the name by ignoring all the other words that have been said before their name? For example, "Hey Siri" would trigger Siri to start listening for commands, but if a user said "hey how are you hey Siri" the system will ignore "hey how are you" but trigger the system to "hey Siri". Is it because their listening function reloads in milliseconds or even nanoseconds, or is there a different way it works?

",38873,,-1,,7/28/2020 0:25,7/28/2020 0:25,How do AIs like Siri and Alexa respond to their names being called?,,1,1,,,,CC BY-SA 4.0 22672,2,,22671,7/26/2020 7:39,,10,,"

Is it because their listening function reloads in milliseconds or even nanoseconds

Yes, it checks whether the keyword starts at every moment in time and ignores the rest.

Overall, the algorithm is described here, you can read for details:

https://machinelearning.apple.com/research/hey-siri

",3459,,2444,,7/26/2020 13:04,7/26/2020 13:04,,,,1,,,,CC BY-SA 4.0 22673,1,22715,,7/26/2020 8:12,,16,7278,"

When I studied neural networks, parameters were learning rate, batch size etc. But even GPT3's ArXiv paper does not mention anything about what exactly the parameters are, but gives a small hint that they might just be sentences.

Even tutorial sites like this one start talking about the usual parameters, but also say "model_name: This indicates which model we are using. In our case, we are using the GPT-2 model with 345 million parameters or weights". So are the 175 billion "parameters" just neural weights? Why then are they called parameters? GPT3's paper shows that there are only 96 layers, so I'm assuming it's not a very deep network, but extremely fat. Or does it mean that each "parameter" is just a representation of the encoders or decoders?

An excerpt from this website shows tokens:

In this case, there are two additional parameters that can be passed to gpt2.generate(): truncate and include_prefix. For example, if each short text begins with a <|startoftext|> token and ends with a <|endoftext|>, then setting prefix='<|startoftext|>', truncate=<|endoftext|>', and include_prefix=False, and length is sufficient, then gpt-2-simple will automatically extract the shortform texts, even when generating in batches.

So are the parameters various kinds of tokens that are manually created by humans who try to fine-tune the models? Still, 175 billion such fine-tuning parameters is too high for humans to create, so I assume the "parameters" are auto-generated somehow.

The attention-based paper mentions the query-key-value weight matrices as the "parameters". Even if it is these weights, I'd just like to know what kind of a process generates these parameters, who chooses the parameters and specifies the relevance of words? If it's created automatically, how is it done?

",9268,,9268,,7/26/2020 9:04,7/28/2020 9:50,"What exactly are the ""parameters"" in GPT-3's 175 billion parameters and how are they chosen/generated?",,1,4,,,,CC BY-SA 4.0 22674,1,,,7/26/2020 8:38,,2,68,"

In my research, I remember having read that, in the case of an environment which can be modeled as a partially observable MDP, there are no convergence guarantees (unfortunately, I cannot find the paper anymore, and I would appreciate it if someone could post a link to the reference).

If the performance of an RL agent in a partially observable environment is "good" (i.e. the agent does pretty well in achieving its goal), is this likely only accidental or due to chance?

",37169,,2444,,7/26/2020 22:10,7/26/2020 22:10,"If the performance of an RL agent in a partially observable environment is ""good"", is this likely only accidental?",,0,0,,,,CC BY-SA 4.0 22676,1,,,7/26/2020 14:13,,7,3115,"

I have an NLP model for answer-extraction. So, basically, I have a paragraph and a question as input, and my model extracts the span of the paragraph that corresponds to the answer to the question.

I need to know how to compute the F1 score for such models. It is the standard metric (along with Exact Match) used in the literature to evaluate question-answering systems.

",23350,,2444,,1/26/2021 15:35,12/22/2021 23:11,How is the F1 score calculated in a question-answering system?,,2,0,,,,CC BY-SA 4.0 22678,1,22680,,7/26/2020 15:43,,2,205,"

I'm new to reinforcement learning. I have a problem where an action is composed of an order (rod with a required length) and an item from a warehouse (an existing rod with a certain length, which will be cut to the desired length and the remainder put back to the warehouse).

I imagine my state as two lists of a defined size: orders and warehouse, and my action as an index from the first list together with an index from the second list. However, I have only worked with environments where it was only possible to pick a single action, and I'm not sure how to deal with two indexes. I'm not sure what the DQN architecture should look like to give me such an action.

Can anyone validate my general idea and help me find a solution? Or maybe just point me to some papers where similar problems are described?

",38881,,2444,,7/26/2020 22:06,7/26/2020 22:08,Reinforcement learning with action consisting of two discrete values,,1,0,,,,CC BY-SA 4.0 22680,2,,22678,7/26/2020 19:04,,3,,"

You would still be picking a single action. Your action space is now $\mathcal{A} = \mathcal{O} \times \mathcal{I}$ where I've chosen $\mathcal{O}$ to be the set of possible orders from your problem and $\mathcal{I}$ to be the set of possible items.

Provided both of these sets are finite, then you should still be able to approach this problem with DQN. Theoretically, this should be easy to see, as any element from $\mathcal{A}$ is still a single element it just happens that this element is now a tuple.

From a programming point of view, let's consider the simple example of cartpole, where the possible actions are left and right. Your $Q$-function obviously won't know the meanings of 'left' and 'right', you just assign it to an element of a vector, i.e. your $Q$-function would output a vector in $\mathbb{R}^2$ with e.g. the first element corresponding to the score for 'left' and the second element corresponding to the score for 'right'. This is still the case in your problem formulation, you will just have a $Q$-function that outputs a vector in $\mathbb{R}^d$ where $d = |\mathcal{A}|$ - you would just have to make sure you know which element corresponds to which action.

Also, there is the possibility that this approach could leave you with a large dimensional vector output, which I imagine would probably mean you'd need more simulations to properly explore the action space.

Hope this helps.

",36821,,2444,,7/26/2020 22:08,7/26/2020 22:08,,,,1,,,,CC BY-SA 4.0 22681,1,,,7/26/2020 22:05,,1,50,"

I built a DRL model to trade stocks in the financial market, but the number of observations is relatively small and I would like to increase it by training the same model with stocks from several different companies. My problem is that I don't know what the correct way to do this is, since the price series is a time series. Could someone enlighten me? I have read articles that show that this is possible, but none that say how.

",38886,,2444,,7/26/2020 22:17,7/26/2020 22:17,How can I build a deep reinforcement learning model that can be trained with multiple time series datasets,,0,1,,,,CC BY-SA 4.0 22682,1,22683,,7/26/2020 22:41,,4,170,"

I am not asking which activation function is better. I want to know which activation functions are most used in research or deployment (e.g. ReLU, ELUs, etc.). Also, are they used in combination? I'd appreciate any statistics or insight on this.

",36341,,32410,,12/21/2021 10:41,12/21/2021 10:42,What activation functions are currently popular?,,1,0,,,,CC BY-SA 4.0 22683,2,,22682,7/27/2020 2:32,,2,,"

Currently, both ReLU and ELUs are the most popular activation functions (AF) used in neural nets (NNs). This is because they eliminate the vanishing gradient problem that causes major problems in the training process and degrades the accuracy and performance of NN models.

Also, these AFs, more specifically ReLU, are very fast-learning AFs, which makes them even more useful in research.

However, depending on the type of NN you are working on, it's always good practice to pay some attention to new studies.

",38892,,32410,,12/21/2021 10:42,12/21/2021 10:42,,,,2,,,,CC BY-SA 4.0 22684,2,,22676,7/27/2020 2:50,,0,,"

It really depends on what you are looking for your model to do. For example, do false negatives or false positives really cost your research (or your business)? Also, it's very important to consider your label (class) distribution.

If you just want to achieve the highest accuracy, and you don't have any issue with your class distribution (which I believe you probably don't in your case), then accuracy works pretty well.

F1 score might be a better option to use if you need to seek a balance between precision and recall and there is an uneven class distribution.
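
For reference, F1 is the harmonic mean of precision and recall; in span-extraction question answering, precision and recall are commonly computed over the overlapping tokens between the predicted answer and the gold answer (this is, e.g., the SQuAD-style convention):

from collections import Counter

def qa_f1(prediction, ground_truth):
    pred_tokens = prediction.split()
    gold_tokens = ground_truth.split()
    common = Counter(pred_tokens) & Counter(gold_tokens)   # overlapping tokens
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(qa_f1("in the park", "the park"))   # precision 2/3, recall 1 -> F1 = 0.8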

",38892,,32410,,12/22/2021 14:45,12/22/2021 14:45,,,,1,,,,CC BY-SA 4.0 22685,2,,22664,7/27/2020 3:20,,1,,"

The idea is simple, but it requires some time to develop.

Assumption: I am assuming in your problem the final model will have seen all possible shapes.

What your algorithm needs is a convolutional NN to understand each shape by extracting features, but you just need to be very careful with pooling.

Then what you need is a recurrent NN. In the example you showed (the image of shapes), we have bigrams (sequences of 2), which means we have the first shape as the input and the second shape as the target. In this case, a normal RNN should work.

But if you have a sequence of many shapes, for example 10 shapes, and let's say the final (10th) shape is the target, and the sequence is such that the 10th shape may depend more on the initial shapes (e.g. the 1st or 2nd shape), then what you also need to consider besides a plain RNN is a long short-term memory (LSTM) network.
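
A minimal sketch of this CNN-plus-LSTM idea (with hypothetical image size, sequence length and number of classes), where the same small CNN is applied to every shape in the sequence and the LSTM predicts the class of the next shape:

from tensorflow import keras
from tensorflow.keras import layers

seq_len, img_size, n_shape_classes = 9, 64, 10   # hypothetical values

cnn = keras.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(img_size, img_size, 1)),
    layers.MaxPooling2D(2),          # be careful not to pool away too much detail
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
])

inputs = keras.Input(shape=(seq_len, img_size, img_size, 1))
features = layers.TimeDistributed(cnn)(inputs)    # one feature vector per shape
hidden = layers.LSTM(64)(features)                # remembers the earlier shapes too
outputs = layers.Dense(n_shape_classes, activation="softmax")(hidden)
model = keras.Model(inputs, outputs)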

I cannot think of a simpler solution than this.

",38892,,,,,7/27/2020 3:20,,,,0,,,,CC BY-SA 4.0 22686,1,,,7/27/2020 3:51,,1,78,"

In deep Q-learning, we want to learn $Q^*(s,a)$, which is the optimal action-value function. That makes sense, because we assume there is only one optimal function, so the algorithm should supposedly converge.

But when it comes to the actor-critic method, we use a critic network (also called a value network) to estimate $Q_\pi(s, a)$. This is what confuses me. Since our policy $\pi$ will change over time, the target $Q_\pi(s, a)$ of the value network will also change. What happens when a network has to learn a changing function?

",38894,,,,,7/27/2020 9:47,"Why can we use a network to estimate $Q_\pi(s, a)$ in Actor-Critic Method?",,1,0,,,,CC BY-SA 4.0 22687,2,,22636,7/27/2020 4:00,,0,,"

I assume you trained your model on (f1, f2, f3, f4, f5, f6), and in your test data you sometimes have (f1, f2, f3) and sometimes have, for example, (f1, f2, f3, f4, f5, f6), right? Because if your test data always has only (f1, f2, f3), then isn't it better to just train a model on the available features?

So, if my assumption is correct, what I would do is manipulate the training set a bit: keep some training samples with (f1, f2, f3, f4, f5, f6) and, for some others, keep (f1, f2, f3) but replace the real values of their (f4, f5, f6) by, e.g., the mean of the respective feature. So all training samples still have (f1, f2, f3, f4, f5, f6), but some of them have manipulated (f4, f5, f6). Then, finally, when testing, apply the same manipulation to those test samples that have the smaller number of features.

I think this way your model learns how to predict based on (f1, f2, f3) when the other features are not available, but at the same time takes advantage of all the features if they are available.

It's probably not the best approach, but it's worth a try.

",38892,,,,,7/27/2020 4:00,,,,2,,,,CC BY-SA 4.0 22688,2,,22632,7/27/2020 4:18,,2,,"

It depends, as mentioned in the comments, on your model and labels. For example, how would you use standardization on a multi-classification problem?

Generally, standardization is more favorable for input data as its mean is around 0.
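
For clarity, the two transformations being compared (a quick numpy sketch):

import numpy as np

x = np.array([10.0, 12.0, 15.0, 20.0])

standardized = (x - x.mean()) / x.std()            # mean ~0, unit variance
normalized = (x - x.min()) / (x.max() - x.min())   # squashed into [0, 1]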

I assume you have a regression model and in that case, using standardization could be better than normalization.

",38892,,32410,,12/21/2021 4:06,12/21/2021 4:06,,,,0,,,,CC BY-SA 4.0 22689,1,22692,,7/27/2020 4:40,,5,2085,"

What is the difference between vanilla policy gradient (VPG) with a baseline as value function and advantage actor-critic (A2C)?

By vanilla policy gradient I am specifically referring to spinning up's explanation of VPG.

",38895,,2444,,1/12/2022 21:05,3/4/2022 13:56,What is the difference between vanilla policy gradient with a baseline as value function and advantage actor-critic?,,1,0,,,,CC BY-SA 4.0 22690,2,,22686,7/27/2020 9:36,,1,,"

The policy doesn't change over time. That is, the values will change, otherwise we would not be learning anything, but our rules for action selection don't. I.e., we always take actions according to the distribution given by our current estimate of the policy $\pi_\theta(a|s)$; we don't suddenly start taking $\max_a \pi_\theta(a|s)$, which would be a true change in policy and would make learning both the actor and the critic unstable.

Because NN's are able to handle noisy target distributions, this is how they can deal with the changing data. If you think of how Actor-Critic methods work, you would initially start to shift your NN to some unfeasible values (due to random initialisation and the Actor-Critic not having any information about the environment), but as you start to interact with the environment you will start to update the agent towards the 'true' policy.

An analogy in supervised learning would be to have some noisy data which is incorrect and some true data. If you trained your network on the noisy data for a small number of epochs and then never showed it to the network again and trained it solely on the correct data, it would forget it has ever seen the noisy data and only represent the new, true data.

",36821,,36821,,7/27/2020 9:47,7/27/2020 9:47,,,,4,,,,CC BY-SA 4.0 22691,1,,,7/27/2020 10:00,,1,89,"

I wanted to plot a graph to show the effect of increasing the batch size on the calculated loss (MNIST dataset). But I am not able to decide if I should show the change in loss over the training time of the neural network or over the number of updates made to the weights and biases (basically iterations and epochs, but for large differences in batch sizes, I think the number of updates made makes more sense?). I am confused about which makes more sense (or whether neither of them makes sense).

With a loss vs training time graph, I can show that, for any training time, the loss for a larger batch is higher. From what I have read on the wiki, with a loss vs number of updates graph, I can show that the change in loss is smoother for larger batches (rate of convergence). But can't the same conclusion be made when plotted against time? (Smooth convergence means fewer spikes, right?)

",33029,,33029,,7/27/2020 10:06,7/27/2020 10:06,Plotting loss vs number of updates made and plotting loss vs run time,,0,0,,,,CC BY-SA 4.0 22692,2,,22689,7/27/2020 10:07,,6,,"

The difference between Vanilla Policy Gradient (VPG) with a baseline as value function and Advantage Actor-Critic (A2C) is very similar to the difference between Monte Carlo Control and SARSA:

  • The value estimates used in updates for VPG are based on full sampled returns, calculated at the end of episodes.

  • The value estimates used in updates for A2C are based on temporal difference bootstrapped from e.g. a single step difference, and the Bellman function.

This leads to the following practical differences:

  • A2C can learn during an episode which can lead to faster refinements in policy than with VPG.

  • A2C can learn in continuing environments, whilst VPG cannot.

  • A2C relies on initially biased value estimates, so can take more tuning to find hyperparameters for the agent that allows for stable learning. Whilst VPG typically has higher variance and can require more samples to achieve the same degree of learning.
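
To make the first pair of bullet points concrete, here is a small sketch (plain Python, with placeholder names) of the two kinds of value targets:

def monte_carlo_return(rewards, gamma=0.99):
    """VPG-style target: the full sampled return, only available at the end of an episode."""
    G = 0.0
    for r in reversed(rewards):
        G = r + gamma * G
    return G

def td_target(reward, value_next_state, gamma=0.99):
    """A2C-style target: a one-step bootstrap, available immediately at each step."""
    return reward + gamma * value_next_state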

",1847,,2444,,1/12/2022 21:06,1/12/2022 21:06,,,,6,,,,CC BY-SA 4.0 22693,2,,22636,7/27/2020 10:13,,1,,"

Assuming that you have access to the training data set, you could use an autoencoder network to predict what features f4, f5, f6 'could be' for the test data set. The way to do this is to train the autoencoder on the training data set with features f1, f2, f3 as inputs, and then use f1,f2,f3,f4,f5,f6 as the output of the network. The autoencoder then effectively learns to map any input samples with (f1,f2,f3) to (f1,f2,f3,f4,f5,f6). By passing your test data through the autoencoder, you can then use the output and pass it to your model.
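
A minimal sketch of this idea (with hypothetical layer sizes), mapping the three available features to all six:

from tensorflow import keras
from tensorflow.keras import layers

# train_x3: training samples restricted to (f1, f2, f3)
# train_x6: the same samples with all of (f1, ..., f6)
inputs = keras.Input(shape=(3,))
h = layers.Dense(16, activation="relu")(inputs)
h = layers.Dense(16, activation="relu")(h)
outputs = layers.Dense(6)(h)              # reconstruct all six features
imputer = keras.Model(inputs, outputs)
imputer.compile(optimizer="adam", loss="mse")
# imputer.fit(train_x3, train_x6, epochs=50)
# test_x6 = imputer.predict(test_x3)      # then feed test_x6 to your original model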

",34180,,,,,7/27/2020 10:13,,,,1,,,,CC BY-SA 4.0 22694,1,,,7/27/2020 10:46,,2,117,"

Let's suppose that our RL agent needs to play a game with different levels. If we train our RL agent sequentially, i.e. with sequential data, our agent will first learn how to play level 1, but then, when it learns how to play level 2, it will forget how to play level 1, since now our model is fitted using only experiences from level 2.

How does an experience replay buffer change this? Can you explain this in simple terms?

",37831,,2444,,7/27/2020 13:11,11/27/2022 21:08,What is the advantage of using experience replay (as opposed to feeding it sequential data)?,,1,0,,,,CC BY-SA 4.0 22695,1,,,7/27/2020 11:18,,1,489,"

I have been reading a few papers in this area recently and I keep coming across these two terms. As far as I'm aware, Belief-MDPs are when you cast a POMDP as a regular MDP with a continuous state space where the state is a belief (distribution) with some unknown parameters.

Are they not the same thing?

",38899,,2444,,7/27/2020 13:13,2/21/2021 23:00,What is the difference between Bayes-adaptive MDP and a Belief-MDP in Reinforcement Learning?,,1,1,,,,CC BY-SA 4.0 22696,1,22699,,7/27/2020 11:51,,4,506,"

Why do DQNs tend to forget? Is it because when you feed highly correlated samples, your model (function approximation) doesn't give a general solution?

For example:

  • I use level 1 experiences, my model $p$ is fitted to learn how to play that level.

  • I go to level 2, my weights are updated and fitted to play level 2 meaning I don't know how to play level 1 again.

",37831,,2444,,7/27/2020 13:16,7/27/2020 13:16,Why do DQNs tend to forget?,,1,2,,,,CC BY-SA 4.0 22699,2,,22696,7/27/2020 12:13,,4,,"

You are referring to catastrophic forgetting which could be an issue in any neural net. More specifically for DQN refer to this article.

",38892,,,,,7/27/2020 12:13,,,,3,,,,CC BY-SA 4.0 22700,1,,,7/27/2020 13:12,,1,87,"

I have developed a multi-label classifier using BERT. I'm leveraging Hugging Face's PyTorch implementation of transformers.

I have saved the pretrained model into a file directory in the dev environment. Now, the application is ready to be moved to the production environment.

Is it good practice to save the models to the file system in prod? Can I serialize the model files and word embeddings into a DB and read them back again?

",22195,,,,,7/27/2020 13:12,Is it good practice to save NLP Transformer based pre-trained models into file system in production environment,,0,0,,,,CC BY-SA 4.0 22701,1,,,7/27/2020 13:48,,1,74,"

I'm looking to encode PDF documents for deep learning such that an image representation of the PDF refers to word embeddings instead of graphic data.

So I've indexed a relatively small vocabulary (88 words). I've generated images that replace graphic data with word-indexed (1=cat, 2=dog, etc.) data. Now I'm moving on to my NN model:

right_input = Input((width, height, 1), name='right_input')      # word-index "image"
right = Flatten()(right_input)                                    # flatten to a sequence of word indices
right = Embedding(wordspaceCount, embeddingDepth)(right)          # look up one embedding per index
right = Reshape((width, height, embeddingDepth))(right)           # back to image shape, embeddingDepth channels
right = vgg16_model((width, height, embeddingDepth))(right)       # my VGG16-style convolutional model

Image data is positive-only while the embedding outputs negative values, though, so I'm wondering if it is necessary to normalize the embedding layer's output with something like this after the Embedding layer:

right = Lambda(lambda x: (x + 1.)/2.)(right)

The word indexed image looks like this:

Also, is this a problematic concept generally?

",10957,,10957,,7/27/2020 13:53,7/27/2020 13:53,Embedding Layer into Convolution Layer,,0,1,,,,CC BY-SA 4.0 22702,1,22703,,7/27/2020 14:14,,1,125,"

I always use ReLU activation functions when I need to, and I understand the limitations of ELUs. So in what situations do I need to consider ELUs over ReLUs?

",38906,,,,,12/24/2021 0:08,In what situations ELUs should be used instead of RELUs?,,2,0,,,,CC BY-SA 4.0 22703,2,,22702,7/27/2020 14:37,,2,,"

ELU does not suffer from the dying-neuron issue, unlike ReLU. While ELU can help you to achieve better accuracy, it is slower than ReLU because of its non-linearity in its negative range.
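
For reference, the two functions side by side (a quick numpy sketch):

import numpy as np

def relu(x):
    return np.maximum(0.0, x)   # zero gradient for x < 0, hence the "dying" neurons

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))   # smooth, saturates at -alpha

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x))   # [0.   0.   0.   1.5]
print(elu(x))    # approximately [-0.86 -0.39  0.    1.5 ]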

Choosing the right activation function totally depends on the situation but you need to also consider other similar types of activation functions such as leaky ReLU.

Check this link out. It could be useful.

",38892,,32410,,12/22/2021 14:45,12/22/2021 14:45,,,,0,,,,CC BY-SA 4.0 22704,1,,,7/27/2020 15:19,,1,81,"

In some research papers, I have seen that, for training the autoencoders, instead of giving only the non-anomalous input images, they add some anomalies to the normal input images, and train the auto-encoders with these anomalous images.

And, during testing, they pass an anomalous image, get the output, take the pixel-wise difference between input and output, and, based on a threshold, detect whether it is an anomaly or not.

If we are adding noise or anomalies to the training set, are we generalizing the model's capability to recreate the original normal input?

How does it help to detect the anomaly?

My understanding is that we should train using only normal data without adding any noise, then give an anomaly image at test, take the loss as a threshold.

",38908,,2444,,7/28/2020 12:56,7/29/2020 13:53,How can a de-noising auto-encoder act as an anomaly detection model?,,0,1,,,,CC BY-SA 4.0 22706,1,22712,,7/27/2020 17:57,,2,456,"

Why is L2 loss more commonly used in neural networks than other loss functions? What is the reason for L2 being a default choice in neural networks?

",37414,,,,,7/28/2020 3:58,Why L2 loss is more commonly used in Neural Networks than other loss functions?,,1,1,,,,CC BY-SA 4.0 22707,1,,,7/27/2020 18:19,,1,86,"

Is there any recent work on combining clustering approaches (k-means, or gaussian mixture or PGM) with deep learning for computer vision?

In particular, I'm interested in whether anyone has used the first few layers of a deep learning network as feature extractors in conjunction with clustering algorithms which have been engineered to induce things like translation and rotation invariance while preserving basic object structure.

Taking the max value out of the output of each feature and fitting them to a gaussian mixture is the easiest approach but I'm interested in seeing other ways you could structure the clustering algorithm. For example I'm interested in seeing how you might learn structure between features that includes position information.

",32390,,,,,7/27/2020 18:19,Combining clustering and deep learning for computer vision,,0,4,,,,CC BY-SA 4.0 22710,1,,,7/27/2020 22:11,,1,158,"

My implementation of OpenAI's CartPole-v0 problem using basic Q-learning does not learn at all. I am a beginner and have implemented my first ever Q-learning algorithm from scratch after learning from tutorials.

Can anyone suggest what is going wrong?

I have seen through testing that the problem may be that most of the states remain unvisited even after 10,000 runs. Hence, the Q-table remains mostly unchanged at the end of all episodes. I have checked other things in the implementation and they all seem fine to me, at least. Any tips on where I should start looking?

The reward is a flat -200 for all the episodes, which suggests that there is no improvement at all!

Some relevant images are given at the end.

The q-learning part of code is given below:

env.reset()
while not done:    
    current_state = current_state_to_string(assign_obs_to_bins(obs, bins))

    if np.random.uniform() < EPSILON:
        act = env.action_space.sample()
        best_q_value = return_max_from_dict(q[current_state], action = act)
    else:
        act, best_q_value = return_max_from_dict(q[current_state])

    obs, reward, done, _  = env.step(act)
    q[current_state][act] += LEARNING_RATE * (reward + DISCOUNT_FACTOR * best_q_value - q[current_state][act])
    cnt+=1
    total_reward += reward

",36710,,,,,7/27/2020 22:11,OpenAI gym's CartPole problem system does not learn,,0,5,,,,CC BY-SA 4.0 22712,2,,22706,7/28/2020 3:58,,5,,"

I'll cover both L2-regularized loss and Mean-Squared Error (MSE):

MSE:

  1. L2 loss is continuously-differentiable across any domain, unlike L1 loss. This makes training more stable and allows for gradient-based optimization, as opposed to combinatorial optimization.
  2. Using L2 loss (without any regularization) corresponds to the Ordinary Least Squares Estimator, which, if you're able to invoke Gauss-Markov assumptions, can lead to some beneficial theoretical guarantees about your estimator/model (e.g. that it is the "Best Linear Unbiased Estimator"). Source: https://en.wikipedia.org/wiki/Gauss%E2%80%93Markov_theorem.

L2 Regularization:

  1. Using L2 regularization is equivalent to invoking a Gaussian prior (see https://stats.stackexchange.com/questions/163388/why-is-the-l2-regularization-equivalent-to-gaussian-prior) on your model/estimator. If modeling your problem as a Maximum A Posteriori Inference (MAP) problem, if your likelihood model (p(y|x)) is Gaussian, then your posterior distribution over parameters (p(x|y)) will also be Gaussian. From Wikipedia: "If the likelihood function is Gaussian, choosing a Gaussian prior over the mean will ensure that the posterior distribution is also Gaussian" (source: https://en.wikipedia.org/wiki/Conjugate_prior).

  2. As in the case above, L2 loss is continuously-differentiable across any domain, unlike L1 loss.
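
For reference, the two objectives discussed above can be written as follows (a standard formulation, with $\lambda$ the regularization strength):

$$\text{MSE}(\theta) = \frac{1}{n}\sum_{i=1}^{n}\big(y_i - \hat{y}_i(\theta)\big)^2, \qquad L_{\text{reg}}(\theta) = \text{MSE}(\theta) + \lambda \lVert \theta \rVert_2^2$$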

",36611,,,,,7/28/2020 3:58,,,,5,,,,CC BY-SA 4.0 22714,1,,,7/28/2020 6:49,,2,172,"

In building my first Q-learning algorithm for OpenAI gym's CartPole problem, many of my states remain unvisited. I believe it is the reason that my agent does not learn.

Could someone tell me what reasons I could look into for why that may happen? I have read and watched tutorials, and I know a lot has already been done for this problem. My goal here is to learn, hence this simple implementation with Q-learning.

PS. The specific question to my problem is asked here.

PPS. As an edit, I am inserting my whole code below.

import numpy as np
import gym
import matplotlib.pyplot as plt

env = gym.make('MountainCar-v0')

EPISODES = 5000
LEARNING_RATE = 0.1
SHOW_AFTER = 1500
DISCOUNT_FACTOR = 0.95
EPSILON = 0.1
NUMBER_OF_BINS = 10
OBSERVATION_SPACE = 2
MAX_STATES = 100 # e.g. 23 means obs one is 2, obs two is 3

'''This function breaks the continuous states into discrete form'''
def digitize_states():
    bins = np.zeros((OBSERVATION_SPACE,NUMBER_OF_BINS))
    bins[0] = np.linspace(-1.2, 0.6, NUMBER_OF_BINS)
    bins[1] = np.linspace(-.07, 0.07, NUMBER_OF_BINS)
    return bins

'''This function assigns the observations to discrete bins using the
   digitize function and the bins that we created using digitize_states()
'''
def assign_obs_to_bins(obs, bins):
    states = np.zeros((OBSERVATION_SPACE))
    states[0] = np.digitize(obs[0], bins[0]) 
    states[1] = np.digitize(obs[1], bins[1]) 
    return states

'''This function merely makes the states into strings so that we can
   later use those strings (i.e. state numbers) as the KEYs in our q-table
   dictionary.
'''
def get_all_states_as_strings():
    states = []
    for i in range(MAX_STATES):
        states.append(str(i).zfill(OBSERVATION_SPACE))
    return states

'''Convert the current state into string so that it can be used as key for dictionary '''
def current_state_to_string(state):
    current_state = ''.join(str(int(e)) for e in state)
    return current_state

'''This function initializes the q-table to zeros'''
def initialize_q():
    states = get_all_states_as_strings()
    q = {}
    for state in states:
        q[state] = {}
        for action in range(env.action_space.n):
            q[state][action] = 0 
    return q

def initialize_Q():
    Q = {}

    all_states = get_all_states_as_strings()
    for state in all_states:
        Q[state] = {}
        for action in range(env.action_space.n):
            Q[state][action] = np.random.uniform(-.5, 0.5, 1)
    return Q

'''This function returns the maximum Q-value from Q-table'''
def return_max_from_dict(dict_var, action = None):
    '''    Arguments
    # dict_var: Dictionary variable, which represent the q-table.
    
    # Return
    # max_key: Best Action
    # max_val: max q-value for the current state, taking best action
    '''
    if(action == None):
        max_val = float('-Inf')
        for key, val in dict_var.items():
            if val > max_val:
                max_val = val
                max_key = key
        return max_key, max_val
    else:
        return dict_var[action]   
        

'''Main code starts here'''

bins = digitize_states()
all_states = get_all_states_as_strings()

q = initialize_Q()    

Total_reward_matrix = []
_testing_action_matrix = []
_testing_state_matrix = [] 
_testing_states = []
_testing_random = 0
_testing_greedy = 0

for episode in range(EPISODES):
    
    done = False
    cnt = 0
    
    # Reset the observations -> then assign them to bins
    obs = env.reset()
    
    if episode%SHOW_AFTER == 0:
        print(episode)
    
    total_reward = 0
    
    while not done:
        current_state = current_state_to_string(assign_obs_to_bins(obs, bins))
        _testing_state_matrix.append(int(current_state))

        if np.random.uniform() < EPSILON:
            act = env.action_space.sample()
            best_q_value = return_max_from_dict(q[current_state], action = act)
            _testing_random+=1
        else:
            act, best_q_value = return_max_from_dict(q[current_state])
            _testing_greedy+=1
        
        obs, reward, done, _  = env.step(act)
        _testing_action_matrix.append(act)
        
        q[current_state][act] = (1-LEARNING_RATE)*q[current_state][act] + LEARNING_RATE * (reward + DISCOUNT_FACTOR * best_q_value)
        cnt+=1
        total_reward += reward
    
        if done and cnt > 200:
            print(f'reached at episode: {episode} in count {cnt}') 
            Total_reward_matrix.append(total_reward)
        elif done:
#             print('Failed to reach flag in episode ', episode)
            Total_reward_matrix.append(total_reward)
        
    
env.close()
",36710,,36710,,7/28/2020 20:05,7/28/2020 20:05,Most of state-action pairs remain unvisited in the q-table,,0,5,,,,CC BY-SA 4.0 22715,2,,22673,7/28/2020 9:50,,12,,"

Parameters is a synonym for weights, which is the term most people use for a neural network's parameters (and indeed, in my experience, it is the term that machine learners will use in general, whereas "parameters" is more often found in the statistics literature). Batch size, learning rate, etc. are hyper-parameters, which basically means they are user-specified, whereas weights are what the learning algorithm will learn through training.
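
To make that concrete, a small sketch (with hypothetical layer sizes) showing that the "parameters" being counted are just the weight and bias entries of the layers:

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Dense(4, input_shape=(3,)),   # 3*4 weights + 4 biases = 16 parameters
    layers.Dense(2),                     # 4*2 weights + 2 biases = 10 parameters
])
print(model.count_params())              # 26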

",36821,,,,,7/28/2020 9:50,,,,0,,,,CC BY-SA 4.0 22716,2,,22622,7/28/2020 10:10,,1,,"

I'm using my own implementation of A2C (Advantage Actor-Critic) in an industrial application based on a Markov process (the present state alone provides sufficient knowledge to make an optimal decision). It's simple and versatile, and its performance has been proven in many different applications. The results so far have been promising.

One of my colleagues had issues with solving a simple task of mapping images to coordinates with OpenAI's Stable Baselines implementations of PPO and TRPO. Hence I'm biased against this framework.

My suggestion is to try the simplest model and if that doesn't satisfy your expectations for performance, then try something fancier. Once you've made a pipeline for learning, switching to a different algorithm is relatively time inexpensive.

Here is a list of algorithms for continuous action and state space from the wikipedia article about RL:

",38671,,38671,,7/28/2020 10:16,7/28/2020 10:16,,,,0,,,,CC BY-SA 4.0 22717,1,,,7/28/2020 10:26,,0,402,"

As the title says: does DeepAR not require the time series to be stationary?

",38926,,,,,12/17/2022 18:02,"Why does not the deepAR model of Amazon require the time series being stationary, as opposed to ARMA model?",,1,0,,,,CC BY-SA 4.0 22721,1,,,7/28/2020 14:31,,1,129,"

I have read many mixed definitions around these two terms. For example, is it right to say deep learning is any ANN with more than two hidden layers?

What are formal definitions for these two?

",38892,,2444,,12/21/2021 18:10,12/21/2021 18:10,What is the difference between artificial neural networks and deep learning?,,1,0,,,,CC BY-SA 4.0 22722,1,22730,,7/28/2020 15:19,,2,72,"

If a policy maps states to actions in reinforcement learning, then, for path planning with obstacles, can't we simply use Artificial Potential Fields (APF) for path planning and model the policy mathematically as a field where the obstacles form a repulsive field and the goal forms an attractive field?

So, technically, is a policy simply a field?

",36047,,2444,,7/29/2020 22:56,7/29/2020 22:56,Is a policy in reinforcement learning analogous to a field such as APF?,,1,0,,,,CC BY-SA 4.0 22723,2,,15374,7/28/2020 15:28,,1,,"

In general, it's better not to use the sigmoid function in any hidden layer. There are many other great options, such as ReLU and ELU. However, if for any reason you have to use a sigmoid-like function, then go with the tanh function; at least it has a mean of ~0.

",38892,,32410,,12/21/2021 10:39,12/21/2021 10:39,,,,0,,,,CC BY-SA 4.0 22724,1,,,7/28/2020 15:31,,-1,170,"

The game of cribbage https://en.wikipedia.org/wiki/Cribbage is a two-player card game played over a series of deal with the goal to reach 121 points.

The game's elements are:

  • the discard. There are three hands, one for each player of 4 cards selected from six. The discards go face down to a hand which belongs to the dealer (the crib). On average, because non-dealer is optimising one hand, his hand scores slightly more points, say 8 points vs 7.8 for dealer, while the crib scores around 4 points, because one player aims to toss trash (while not damaging his own hand), while the other aims to toss good cards (while likewise maximising his own hand)
  • pegging. Here the dealer expects to peg on average around 4 points (he plays second, so has an advantage in responding to non-dealer's lead), and non-dealer around 2 points
  • the cut. This is a fifth card which belongs to all three hands. If the cut is a Jack, dealer scores 2 points, for free.
  • the order of play. This is:
    • the cut. Averages 2/13 = 0.15 points for dealer (if a Jack is cut)
    • pegs (played one card at a time, scoring for each). Dealer will peg at least 1 point, and non-dealer will peg 0 or more. Averages 4 and 2 points.
    • non-dealer shows his hand. Averages 8 points
    • dealer shows his hand. Averages 8 points.
    • dealer shows his crib. Average 4 points.

As should be obvious the hands are generally the main source of points, so players will tend to optimise their hands to maximise points there. This problem is OUTSIDE OF THE SCOPE of this post, since there are exhaustive analyses that have solved this problem reasonably well. E.g., https://cliambrown.com/cribbage/

There are 6C4 = 15 discards possible for each hand, and often the correct play will be unambiguous. For example, if we had the hand A49QKK with mixed suits (no more than 3 of any 1 suit) and we are non-dealer, then the correct hold is obvious - A4KK (this hand has two ways to make 15, A4K and A4K, for 2 points each, and a pair for 2 points, plus it improves with cuts of A, 4, 5, T, J, Q, K)

After we've held the cards we must then 'play the pegging game'. The scope of this post/question is therefore limited to how to play a given 4-card hand that we have already selected using an exhaustive combinatorial analysis; hence I'm assuming that the bot's inputs are a given 4-card hand, as well as the three dead cards (discards and cut), and the replies to each play from our opponent.

The scoring for pegging is:

  • each pair scores 2 for the player playing the second card, 6 for 3 of a kind, 12 for 4 of a kind
  • each time the count reaches 15, the player playing the card scores 2
  • each time the count reaches 31, the player reaching scores 2 (it is not permitted to exceed 31, and if you have no card that will keep the count at/below 31, you must call 'go', and if both players call 'go', the last player scores 1 point). After this the cards are turned over, and play continues from zero
  • each time a run of 3 or more cards is visible (between the cards played by both players), then you score 3, 4, 5, 6, 7, or even 8 points according to the length of the run.

What follows is a discussion about pegging strategy, which I have blockquoted for those who find this tl;dr.

The optimum play for a given hand is not necessarily clear-cut. For example, with the hand A4KK, then we could lead:

  • K - on the basis that if dealer replies with another K (for 2 points), then we reply with the third K (for 6 points), and then most likely dealer does not have an Ace, so we score 2 points for the 31 (K = 10, A = 1), AND dealer must lead the next round of pegging, which is a considerable disadvantage.
  • not K - because there's a bias to holding 5s, so if dealer was dealt a 5, then he can reply with the 5 scoring 2 points for the 15, to which we have no scoring reply. In addition, non-dealer generally has a bias to leading from a pair, so if we lead the K, then in general a smart dealer says 'he is likely to have a pair of Ks'. So even if dealer has a K, he may decline to peg it, especially if he doesn't hold the A himself
  • 4 - this is the most popular lead in cribbage because 15 cannot be scored on it. As indicated above there are 16 ten cards (T, J, Q, K), as opposed to 4 of every other denomination, so it's more likely to receive a ten back than any other denomination, and we hold an Ace, so we would score 2 points (for 15) following a ten-card reply
  • A - this has the same advantage that dealer cannot reach 15, and again if we get a ten card back we can score 15-2.
  • It might be that the A is better to lead because the 4 is such a popular lead that dealer is more likely to pair it (risking that we hold the third 4 for 6 points), in which case we have no good reply. If we lead the A, then the dealer is [probably!] less likely to reply with an A (which doesn't help us).
  • We might prefer to lead the 4 if we think that the A is more likely to allow us to play the last card before 31, in which case dealer is forced to lead first next round, which is a disadvantage.

During pegging we have considerable information:

  • which cards we hold and discarded. For example, clearly if we hold 3 4s, and the 4th was cut, then there is no chance that we get one played against us.
  • which cards have been played and therefore which cards are likely to remain, based on the hand selection process. For example, if a 4 is led then x% of the time that will be from A4 TJQK, y% it will be from 4456, etc. As more cards are played, then this becomes more obvious. For example, if we've seen the 3 cards 5TK, then it's VERY likely that the 4th card is another TJQK card, and somewhat likely that it's a 5. It could be ANY card because maybe of the six cards the player was dealt he ended up with an unconnected card. But we can say that the chance of this is low.

In terms of exactly what cards are held, then if we analysed millions of games, we could calculate in general knowing one, two or three cards what the remaining cards could be.

Although there are in theory 270,725 4 card hands, in pegging terms (ignoring suits), there are only 1820 distinct hands (https://docs.google.com/spreadsheets/d/1fxkLBkWC2LA6J06zhku21jcG2ATHqE1RNHlPSDcc4fQ/edit#gid=834958733)
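
These counts are easy to verify directly (a quick sanity check in Python, not code from the linked spreadsheet):

from math import comb
from itertools import combinations_with_replacement

print(comb(52, 4))                                             # 270725 hands when suits matter
print(len(list(combinations_with_replacement(range(13), 4))))  # 1820 distinct rank-multisets
for k in (3, 2, 1):
    print(comb(13 + k - 1, k))                                 # 455, 91, 13 for the smaller hands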

For any given hand, if we for example held the cards 358TTT and we were non-dealer, then we would choose to hold 5TTT and toss 38. We would then lead the T. In this spot, clearly, of the 1820 possible opponent hands, combinations such as TTxy are no longer possible.

As another example if we were dealt A778JK, then we'd toss the JK. Here we'd likely lead the 7.

Before we play the 7 it's relatively simple to calculate the odds that dealer was dealt two 7s. Since he was dealt six of the 46 cards we cannot see, that is (2/46) × (1/45) × 6C2, which is 1.45%. The chance that he chose to hold those two 7s is a different number, and we could theoretically calculate it by figuring out, for each of the 45C6 (but simplifiable!) combinations, which hold he would make. However, it won't be too far from this number of 1.45%.
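
That arithmetic can be checked directly, e.g.:

from math import comb

# chance the dealer's six dealt cards include both remaining 7s,
# out of the 46 cards we cannot see
print(comb(44, 4) / comb(46, 6))    # ~0.0145, i.e. about 1.45%
print(comb(6, 2) / comb(46, 2))     # the same number, counted the other way round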

HOWEVER, once we have played a 7 and dealer replies with a second 7, this number is now very far from accurate. Firstly because it's now a simple conditional probability where one condition is already satisfied, so the chance of any of 5 unknown cards being a specific card (e.g., the 7 of diamonds) is now almost 1/9. Of course dealer has 3 cards in hand, not 5, so again the chance is not exactly that number, but not too far off, because in those hands where dealer was dealt 77 and he holds one seven, he is seldom going to toss the other one.

In terms of our possible plays, we have at the count of 14:

  • play the Ace and score 2 points for the count of 15.
  • play the 7 and bring the count to 21, which is 6 points. However, dealer is fairly likely to have a TJQK (bringing the count to 31, scoring 2 points, and making us lead the next deal). In addition, given that dealer has a similar analysis process to us, in hands where he holds say 78 then he probably replies to the 7 lead with the 8, as it doesn't allow us to score 6 points. So given that dealer has replied with 7, this increases the chance that he holds the fourth 7 as well. It's hard to say what this chance is, but for example there is an x% chance of 31-2, a y% chance of 28-12, and a z% chance of something else. Or we could consider the chance of any given card.
  • probably not play the 8, because at 778 the count is 22, and dealer has many scoring replies: 6 is a run (3 points), 9 is a run AND 31 (5 points), and 8 is a pair (2 points)

In general it can be seen that there are:

  • up to 1820 unique 4-card hands
  • up to 455 3-card hands
  • up to 91 2-card hands
  • up to 13 1-card hands

Clearly we could weight each hand by the chance it has to be held, and this would work well BEFORE a card has been played.

However after 1 or more cards has been played this approach is going to be hopelessly naive. For example, let's say we led the 4. If dealer is holding a hand like 5JJK then he's definitely not going to reply with the 5, because we can play a 3 for 3 points (run), or a 6 for 5 points (run and count of 15).

Further, against non beginner-level players, it will be obvious that we often hold hands like A4 JK or 23 KK, etc. So if we lead the 3, and dealer replies with a ten card (TJQK), then it's likely he's NOT holding cards such as 789, which a non-beginner player would likely prefer to reply with here. It's possible of course that dealer has the same cards. For example, if we hold 23JK, dealer might be holding 23QQ. In this case, the Q reply might be preferred by dealer, because if the play goes 3-Q-2, then dealer can pair our 2, and perhaps dealer doesn't like to pair our 3 lead, for fear we have a third 3, while he lacks the fourth.

In addition, while for pegging purposes suits don't matter at all, suit information is important during play, so long as a player may have a flush. For example, if we were dealt 2468h Jd Js, we'd hold the 2468 of hearts, because that's a flush worth 4 points. So if during the play we've seen 468h from our opponent, then the probability distribution of possible hands is going to contain a good weight for every heart card. Whereas if we've seen 4h 6d 8h, then we know after two cards that he does NOT have a flush, so this weights likely hands towards hands such as 4568, 4678 and so on, and it's unlikely that the remaining card is, say, a K.

In general the goal of a pegging bot could be seen as:

  • score most points or
  • concede fewest points or
  • maximise net points

Nuance: In early game it's likely max net points is the best approach. But if we are at a score like 117-117 (121 is the winning mark) as dealer, then it's (let's say) 80% likely that non-dealer has 4 points in his hand, which means he wins if we do not peg 4 points. In this case non-dealer would try to hold cards that score 4 points (as a hand), and if there are multiple options, try to hold cards that reduce dealer's chance of pegging. Meanwhile dealer's realistic route to victory would be to hold the 4 cards that give him the best chance of pegging 4 points (remembering that dealer will on average score more pegs, while non-dealer scores his hand first). If the score was 113-113, then dealer would play differently as he has NO chance of pegging 8 points, but there's perhaps a 40% chance that non-dealer fails to score 8 points from pegging and his hand. So in this case dealer would try to stop non-dealer pegging anything. So it seems that an AI would need to take into account the current score to decide how to peg.

I have read a couple of papers on cribbage AI, but they have been superficial and tended to concentrate on the discard game, which can be optimised at least for discarding purposes (without considering the relative pegging value), by simply iterating through the possible hands.

Now the question is to what extent this is a problem of machine learning, and to what extent this is an exhaustive analysis? If we ignore the discard problem, and we say that we will input four-card hands to our bot using the output of a process similar to the one here https://cliambrown.com/cribbage/methodology.php then for example:

  • we could calculate the exact probability that our opponent holds any given four card hand, by iterating through the 45C6 hands (subject to appropriate simplifications) that our opponent could have, and then producing a weight for each of the (up to) 1820 possible hands, and for the 715 different hands of 4 unique denominations, the further chance for each to be a flush
  • I am not quite clear how computationally expensive this is, but it seems to me that we should be able to calculate the weights in a reasonable amount of time (a quick count of the enumeration size is sketched below)
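
For a rough sense of scale (my own quick count, before exploiting suit symmetry):

from math import comb
print(comb(45, 6))   # 8145060 possible six-card deals for the opponent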

So we have 4 cards, and we have weighted possibilities for each of 1820 hands.

Clearly it's not appropriate for us to simply randomly iterate through the hands. I.e. there are four choices for our first card, likewise four for our opponent, then 3 for the next. Roughly there will be 4!*4! orderings in this way (roughly, because the count resetting at 31 means that the card order after the third card is not always the same). But our opponent's reply is not random. If we lead the 5, then he will certainly reply with a TJQK if he has one. It's very unlikely he replies with a 5.

So it seems to me that some kind of learning process is appropriate. But I am not really familiar with AI to say how this should work. Do we need a pool of human training data? Or do we allow the bot to play against itself?

And what role do probability tables have in this process? Is it going to be productive on a hand level to iterate through possible hands that our opponent might have, or is a Monte Carlo process just as good?

I should note that humans might make sub-optimal plays. So for example, if we play PERFECTLY, then if we hold the hand 6699, and the 4 is led, then we SHOULD reply with the 9, rather than risk a 5 on our 6, conceding 5 points. So the play of a 6 on a 4 SHOULD indicate that we hold a hand such as 5566, in which case we are informed accordingly. But clearly the chance that we hold 6699 and just made a blunder is not zero. So the bot cannot ever be completely certain about our holds.

Likewise it might be we choose to make 'sub-optimal' plays in order to avoid becoming predictable ourselves. For example, the question 'will our opponent pair our lead' is an important one - if we have a pair, we generally want him to. But sometimes we will hold a hand such as A4 JK, in which case we don't want him to pair our lead. Some players will play more aggressively than others, and against an aggressive player, he might pair our lead every time, and a more defensive player might almost never pair our lead.

",38920,,38920,,7/30/2020 11:41,12/22/2021 18:02,Do I need a large pool of training data to train a bot to play the 'pegging' game in the cribbage card game,,1,9,,,,CC BY-SA 4.0 22725,2,,22721,7/28/2020 15:42,,2,,"

A few years ago, deep learning was a buzzword, but now it is de facto a standard term, widely used in research papers, although deep learning is almost never defined rigorously (but this doesn't seem to be a big problem!).

From my experience (this is not just an opinion, of course!), after having read so many papers on the topic, deep learning typically refers to the subset of machine learning algorithms, specifically, gradient descent and back-propagation, applied to neural networks, and, in particular, neural networks with multiple layers. However, there is no consensus on what multiple refers to. In fact, Schmidhuber (co-author of the LSTM), in [1], writes

At which problem depth does Shallow Learning end, and Deep Learning begin? Discussions with DL experts have not yet yielded a conclusive response to this question. Instead of committing myself to a precise answer, let me just define for the purposes of this overview: problems of depth > 10 require Very Deep Learning.

So, although there is typically no agreement on the definition of deep or multiple, there are several problems, such as the vanishing or exploding gradient problems, that arise as the number of layers in a neural network increases, so there are reasons behind the distinction of deep and non-deep/shallow learning.

Note that some research papers (e.g., [1]) distinguish between deep learning applied to neural networks and other types of deep learning, so as not to exclude the layered composition of other models (e.g., SVMs) other than neural networks and the use of machine learning algorithms (e.g. gradient descent) to train them.

To answer your question more directly, neural networks are models or function approximators, while deep learning is the set of machine learning techniques applied to the layered composition of models (typically, neural networks).

",2444,,2444,,12/21/2021 18:07,12/21/2021 18:07,,,,0,,,,CC BY-SA 4.0 22726,1,22727,,7/28/2020 17:56,,1,84,"

I found that the regret in Online Machine Learning is stated as:

$$\operatorname{Regret}_{T}(h)=\sum_{t=1}^{T} l\left(p_{t}, y_{t}\right)-\sum_{t=1}^{T} l\left(h(x), y_{t}\right),$$

where $p_t$ is the answer of my algorithm to the question $x$ and $y_t$ is the right answer, while $h()$ is one of the hypotheses in the hypothesis space. Intuitively, as denoted in the paper, our objective is to minimize this Regret in order to optimize our algorithm, but in the following formula

$$ \operatorname{Regret}_{T}(\mathcal{H})=\max _{h^{\star} \in \mathcal{H}} \operatorname{Regret}_{T}\left(h^{\star}\right) $$

they maximize this value. Am I interpreting the $max$ wrongly?

",32694,,2444,,7/29/2020 12:36,7/29/2020 12:36,Does this $\max$ mean that we need to maximize the regret in this regret formula?,,1,0,,,,CC BY-SA 4.0 22727,2,,22726,7/28/2020 19:49,,1,,"

Yes, you're interpreting the $\max$ there wrongly. In your second formula

$$ \operatorname{Regret}_{T}(\mathcal{H})=\max _{h^{\star} \in \mathcal{H}} \operatorname{Regret}_{T}\left(h^{\star}\right) \label{1}\tag{1} $$

The sign $=$ means "is defined as", so maybe the following notation is less confusing

$$ \operatorname{Regret}_{T}(\mathcal{H}) \triangleq \max _{h^{\star} \in \mathcal{H}} \operatorname{Regret}_{T}\left(h^{\star}\right) $$

In fact, in section 2.1 of the same paper, there's a similar but more detailed formula that should clarify the meaning of $\max$ $$ \operatorname{Regret}_{T}(\mathcal{H})=\max _{h \in \mathcal{H}}\left(\sum_{t=1}^{T}\left|p_{t}-y_{t}\right|-\sum_{t=1}^{T}\left|h\left(\mathbf{x}_{t}\right)-y_{t}\right|\right) $$

Note that $\operatorname{Regret}_{T}(\mathcal{H})$ is defined for $\mathcal{H}$, while your first formula is defined for $x$. So, \ref{1} is the regret for the whole hypotheses class $\mathcal{H}$, which is thus the maximum regret that you can have across all possible hypotheses $h \in \mathcal{H}$. This should make sense.

",2444,,,,,7/28/2020 19:49,,,,4,,,,CC BY-SA 4.0 22728,2,,16801,7/28/2020 22:04,,5,,"

I don’t know for certain, but I can make a guess. This is just my opinion, some others may disagree.

The field of ALife has four branches that I’m aware of:

Self-Organizing/self assembly behavior. This is the application you refer to, another context it’s useful is swarm control (for drone swarms, for example). While this is technically ALife, as far as I’m aware it’s not really where most of the emphasis is. Swarm control and self assembly are seen as “different” problems, as machines that can work together and also build more of themselves is interesting (and potentially dangerous), but is missing out on the diversity, the open-endedness that life on earth has. Much of ALife research is focused on trying to formally define this open-endedness and coming up with systems that achieve that. Self assembly and swarm control are interesting and difficult problems, just different. This leads to the other three sides of ALife research:

Coming up with environments, and running tests on them. This is a constant game of coming up with a definition that seems to capture open-endedness, then coming up with ALife sims that meet that criteria but fall short of our expectations. So new definitions are made and we repeat. Geb is a classic example: Geb has passed pretty much every test so far, but it’s fairly uninspiring to watch. Most of those programs you reference chose a particular ALife paradigm, but that paradigm may not be the right one, and is often disappointing. Because we still haven’t found something that really “looks like life”, new paradigms and programs are constantly being created and abandoned when they fail to work (Or perhaps some would have already worked, but the computing time is too much). That’s what you’re seeing. Without any unifying theory or sim that is really convincing, I suspect it’ll stay this way for a while. And because:

  • we still haven’t made much progress since Karl Sims in the 90s, or since Geb (this point is debatable)
  • these sorts of sims don’t really have much commercial use aside from games

the direction of making new simulators seems to be lacking funding and research interest, as far as I can tell. Commercial sim games seem to push the boundary these days.

Fortunately there’s a sub field of cellular automata life that’s pretty interesting, its software is slightly more developed due to the overlap with cellular automata and ease of implementation, and research seems to be progressing there at an okay rate.

Realistically, there seem to be two things people want: novel behavior, and novel bodies. My two cents is that these are separate problems, and achieving both is more expensive than just achieving one. But most of these sims end up not balancing development happening in both of these factors (doing this is very difficult), so one factor develops much further than the other, and this disconnect is disappointing to the sim creator. For example, Geb does behavioural diversity really well, while Karl Sims does body diversity well. Sensitivity to small details like mutation rate or genetic encoding also can be quite frustrating. Fortunately, eventually we’ll sorta get behavioural diversity for free in any sim once RL/AI is really understood well.

The third piece of ALife research I’m aware of is the theoretical side, which right now mostly isn’t really far enough along to warrant practical implementation. One big branch of this is the learning theory side, represented by Valiant’s Evolvability theory and followups. Essentially this talks about what functions are possible to evolve, and using stuff like PAC Learning theory they are able to prove some things. Some of these models are more natural than others, but it’s an interesting perpendicular approach to coming up with sims and seeing if they do what we want. Maybe eventually these two approaches will meet in the middle at some point, but they haven’t yet.

The fourth piece is Artificial Chemistry. I recommend this paper as a somewhat dated overview. While this is technically a field of ALife, and is centered around understanding a chemical system that has the necessary emergent properties, it has broken off into applications that may have industrial relevance. For example, robust self repairing and self assembling electronic systems, DNA computing (DNA is capable of simulating arbitrary chemical reaction networks which are capable of arbitrary computing), and artificial hormone systems for automatic task assignment. This has some software developed, but much of that software isn’t really considered ALife anymore since it has branched off into its own domain.

",6378,,6378,,7/30/2020 16:01,7/30/2020 16:01,,,,0,,,,CC BY-SA 4.0 22730,2,,22722,7/29/2020 0:53,,0,,"

Since most policies depend solely on actions and states/observations, if you model the space of this field as the Cartesian product of your state and action spaces, the policy learns a surface over this combined space, similar to the way a field is parameterized.

The policy an agent learns could exhibit the same behavior as the field you describe above (obstacles form a repulsive field, and goal(s) form an attractive field). However, unlike the field described above, it is not guaranteed that the learned policy will capture this behavior - the policy learned depends on:

  1. The learning algorithm used
  2. The approximators (e.g. neural networks) used for learning, and their respective hyperparameters
  3. The formulation of the reward function
  4. The number of episodes/total steps the policy/agent is trained over.

To sum this answer up, I believe you could train the policy in such a way (using the mechanisms above) such that it resembles the field you describe.

",36611,,,,,7/29/2020 0:53,,,,4,,,,CC BY-SA 4.0 22731,2,,8554,7/29/2020 3:33,,0,,"

My understanding is that it will be the same as p(s' | s, a) for any s, a, s', r combination.

The reward r(s, a, s') is already defined in terms of s, a, s'.

Since $p(s', r \mid s, a) = p(r \mid s', a, s) \cdot p(s' \mid a, s)$, and in each case $p(r \mid s', a, s)$ is equal to 1 by definition (the reward is deterministic given $s$, $a$, $s'$), the column for $p(s', r \mid s, a)$ equals $p(s' \mid s, a)$.

For instance, if $S =$ high, $A =$ search and $S' =$ high, then the reward is a single fixed value (in the Sutton & Barto recycling-robot example this is $r_\text{search}$, while $\alpha$ is the transition probability itself), so

$p(r = r_\text{search} \mid s = \text{high}, a = \text{search}, s' = \text{high}) = 1$ (you always get that reward in this case), etc.

",38874,,,,,7/29/2020 3:33,,,,0,,,,CC BY-SA 4.0 22732,1,,,7/29/2020 4:56,,1,66,"

I know there are quite a few good deep learning books out there, but most explain neural networks and deep learning via application on images. If there are examples/code, they are often done on the MNIST data set.

I was wondering if there's a book out there that goes in depth into neural networks and is equally well written but explains it on non-image data. I am particularly interested in time series application, although some talk on cross-sectional application would be helpful as well. I am particularly interested in learning about:

  • What types of layers/functions/structure are better suited for time-series data
  • Various models for time series and pros/cons/applications of each (convolutional NNs, LSTMs, etc...)
  • Typical structures of your neural network (depth, sparse connections, etc...) that seem to work well on time series data
  • Special considerations or settings you should have in your neural network when working with time series data
  • Maybe some talk/examples on how time series prediction using traditional models like ARIMA, can be reproduced or done better using neural networks. Or side by side comparison of pros/cons of using one vs the other.

Thanks!

",29801,,29801,,7/29/2020 6:49,7/29/2020 6:49,Recommendations or resources for neural network/deep learning for time series application?,,1,0,,,,CC BY-SA 4.0 22733,2,,22732,7/29/2020 5:43,,1,,"

I'm currently working with Temporal Convolutional Networks (TCNs) for making predictions with time series data (link to article here: https://medium.com/@raushan2807/temporal-convolutional-networks-bfea16e6d7d2). These types of networks use dilated causal convolutions, which, unlike standard convolutions, preserve the causal ordering of the time series input.

I've also seen CNNs used in Temporal Differencing applications, but that may be more useful in the context of video processing rather than time series analysis.

",36611,,,,,7/29/2020 5:43,,,,1,,,,CC BY-SA 4.0 22734,1,,,7/29/2020 6:36,,1,471,"

I just started working with the GPT-2 models and want to retrain one on a pretty narrow topic, so I have problems finding training material.

How large should the corpus be to optimally retrain the GPT-2 model? And what is the bare minimum size? Should it simply be as large as possible or can it flip over and make the model worse in some way?

I am also not certain how many steps the retraining should be run for. I have been using 6000 steps when testing, and it seems not much happens after that: the loss only moved from 0.2 to 0.18 over the last 1000 steps.

",38955,,38955,,7/30/2020 5:49,8/28/2021 12:08,How large should the corpus be to optimally retrain the GPT-2 model?,,1,0,,,,CC BY-SA 4.0 22735,1,,,7/29/2020 6:39,,0,131,"

I wanted to build a digit recognition neural network using MATLAB ANFIS kit.

I started with the MNIST database and found it's almost impossible to classify 784-dimensional data using ANFIS. So, I reduced the data dimensionality from 784 to 13, using an autoencoder in Python. With the new data, I had about 80 percent classification accuracy using a sequential model. I imported the reduced data into MATLAB too.

Since MATLAB treats the problem as a regression problem, I had an RMSE of about 1.5 after 10 epochs of learning, with both grid partitioning and subtractive clustering, and the error curve looks almost constant during training.

Is there any way that I can have less error?

",38957,,2444,,7/29/2020 12:53,7/29/2020 12:53,Is there a way to reduce the RMSE error when training a neural network to recognise MNIST digits using ANFIS?,,0,2,,,,CC BY-SA 4.0 22736,1,,,7/29/2020 6:46,,1,45,"

I see many papers in AAMAS talk about artificial intelligence and mechanism design simultaneously. I was wondering, for the sake of being pedantic, whether mechanism design could be classified under AI.

",8895,,,,,7/29/2020 6:46,Does Algorithmic Mechanism Design come under the field of AI?,,0,1,,,,CC BY-SA 4.0 22737,2,,20993,7/29/2020 9:48,,0,,"

The good thing about genetic algorithms is that the selection schemes are interchangeable. If you have the fitness of each individual, any selection method (e.g. roulette, rank, tournament) will do.

",15530,,,,,7/29/2020 9:48,,,,0,,,,CC BY-SA 4.0 22738,1,,,7/29/2020 10:27,,2,134,"

When I read the paper, Reformer: The Efficient Transformer, I cannot get the same complexity of the memory-efficient method in Table 1 (p. 5), which summarizes time/memory complexity of scaled dot-product, memory efficient, and LSH attention.

The memory complexity of the memory-efficient method is as follow:

$$\max \left(b n_{h} l d_{k}, b n_{h} l^{2}\right)$$

  • $b$: batch size
  • $l$: sequence length
  • $n_h$: the number of attention heads
  • $d_k$: the dimension of query or key

To the best of my knowledge, the memory-efficient method will do a loop for each query, therefore, the whole attention matrix will not show up.

So, shouldn't the memory complexity be $\max(b n_h l d_k, b n_h l)=(b n_h l d_k)$ instead of $\max(b n_h l d_k,b n_h l^2)$?

",38960,,2444,,11/30/2021 15:30,11/30/2021 15:30,What is the memory complexity of the memory-efficient attention in Reformer?,,0,0,,,,CC BY-SA 4.0 22741,2,,17203,7/29/2020 12:11,,0,,"

Some notes about VQ-VAE:

  1. In the paper, they used PixelCNN to learn the prior. PixelCNN is trained on images.
  2. The discrete latent variables are just the indices of the embedding vectors. For example, you can put your embedding vectors in an array.

For a single input image, the number of output channels of the encoder before quantization equals the dimensionality of the embedding vectors. As an analogy, the output of the encoder is like an image, but with a number of channels equal to the number of dimensions in the embedding space. With that, each pixel is a vector in the space of the embedding vectors1. The quantization is done on each pixel by mapping each pixel to the nearest embedding vector $e$.
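
As a rough illustration of that lookup (a toy numpy sketch with made-up shapes, not the code from the paper):

import numpy as np

D, K = 64, 512                       # embedding dim, codebook size (illustrative)
codebook = np.random.randn(K, D)     # the K embedding vectors
z_e = np.random.randn(8, 8, D)       # encoder output: 8x8 "pixels", D channels

flat = z_e.reshape(-1, D)
dists = ((flat[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)   # (64, K)
indices = dists.argmin(axis=1).reshape(8, 8)   # discrete latent variables
z_q = codebook[indices]                        # quantized tensor for the decoder
print(indices.shape, z_q.shape)                # (8, 8) and (8, 8, 64)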

Now, each pixel can be represented by a single integer number which equals to the index of its nearest embedding (what you call the discrete latent variables). Thus, you have two representations for the quantized output, one is a large tensor with many channels (embedding vectors), and another is a simple discrete image with one channel (discrete latent variables).

The one-channel discrete images are used to train PixelCNN. This is great because they are small images, while the large tensor with the embedding vectors is used by the decoder, as it holds the information needed for reconstruction.

So, the discrete latent variables are used for the prior model, and the embedding vectors for the decoder.

1This is just an analogy; pixel values belong to a closed interval.

",38964,,38964,,7/30/2020 13:07,7/30/2020 13:07,,,,0,,,,CC BY-SA 4.0 22742,1,,,7/29/2020 13:29,,2,159,"

For example, if I have the following architecture:

  • Each neuron in the hidden layer has a connection from each one in the input layer.
  • 3 x 1 input matrix and a 4 x 3 weight matrix (for the backpropagation we have, of course, the transposed version, 3 x 4)

But I still don't understand the point of a neuron having 3 inputs (in the hidden layer of the example). Wouldn't it work the same way if I only adjusted one weight of the 3 connections?

In the current case the information just flows distributed over several "channels", but what is the point?

With backpropagation, in some cases the weights are simply adjusted proportionally based on the error.

Or is it just done that way, because then you can better mathematically implement everything (with matrix multiplication and so on)?

Either my question is stupid or I have an error in my thinking and assume wrong ideas. Can someone please help me with the interpretation?

In tensorflow playground for example, I cut the connections (by setting the weight to 0), it just compansated it by changing the other still existing connection a bit more:

",38966,,38966,,7/29/2020 22:17,8/29/2020 4:05,Why does a neuron in a multi-layer network need several input connections?,,3,0,,,,CC BY-SA 4.0 22744,2,,22742,7/29/2020 15:06,,0,,"

It doesn't.

Whether or not this is useful is another story, but it is totally fine to build the neural net you have with just one input value. Perhaps you choose one pixel of the photo and make your classification based on the intensity in that one pixel (I guess I'm assuming a black-and-white photo), or you have some method to condense an entire photograph into one value that summarizes the photo. Then each neuron in the hidden layer only has one input connection.

Likewise, you are allowed to decide that the top neuron in the hidden layer should have only one input connection; just drop the other two.

Again, this might not give useful results, but they're still neural networks.

",25529,,,,,7/29/2020 15:06,,,,7,,,,CC BY-SA 4.0 22745,1,22746,,7/29/2020 15:58,,0,108,"

I want to implement this function on a voice searching application:

$$ Q(S, A) \leftarrow Q(S, A)+\alpha\left(R+\gamma Q\left(S^{\prime}, A^{\prime}\right)-Q(S, A)\right) $$

I am also restricted to using an epsilon-greedy policy based on a given Q-function and epsilon. I simply need an $\epsilon$-greedy policy for updating my Q-table.

",38931,,22079,,7/29/2020 23:02,7/29/2020 23:02,How can I update my Q-table in Python?,,1,1,,,,CC BY-SA 4.0 22746,2,,22745,7/29/2020 16:01,,2,,"

Just try returning a function that takes the state as input and returns the probabilities for each action in the form of a numpy array whose length equals the size of the action space (the set of possible actions). Here is one attempt:

import numpy as np

def EpsilonGreedyPolicy(Q, epsilon, no_of_actions):

    def policy(state):
        # every action gets a baseline probability of epsilon / |A| ...
        probabilities = np.ones(no_of_actions, dtype=float) * epsilon / no_of_actions
        # ... and the greedy action receives the remaining (1 - epsilon)
        best_action = np.argmax(Q[state])
        probabilities[best_action] += (1.0 - epsilon)
        return probabilities

    return policy
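
For example (with a hypothetical random Q-table, just for illustration), you can sample an action from the returned probabilities:

import numpy as np

Q = np.random.rand(5, 3)                          # 5 states, 3 actions (made up)
policy = EpsilonGreedyPolicy(Q, epsilon=0.1, no_of_actions=3)
probs = policy(2)                                 # probabilities for state 2
action = np.random.choice(len(probs), p=probs)    # sample an epsilon-greedy action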
",38803,,38803,,7/29/2020 16:20,7/29/2020 16:20,,,,0,,,,CC BY-SA 4.0 22747,2,,15685,7/29/2020 16:29,,0,,"

I eventually used the keep-efficient function from this answer to calculate the Pareto front, and used the k-means function to calculate the centroid of the front. This gave me the approximate knee-point of the front, which is usually the optimal solution. One of the calculations was to maximise the distance moved in the x direction (dx) vs. minimising the energy consumed (e). Since the x axis needed maximization and the y axis needed minimization, I inverted the y axis, since min(f(y)) = -max(-f(y)). This helped move the Pareto front toward the top-right side of the graph, so that both the x and y axes were maximization objectives. The optimal point calculated was the robot that had the best fitness.

",9268,,,,,7/29/2020 16:29,,,,0,,,,CC BY-SA 4.0 22748,1,22749,,7/29/2020 16:44,,1,95,"

I have created the virtual environment, created the Q-table and initialized the Q-parameters; then I made a training module and stored it in a numpy array. After completing training, I updated the Q-table and now I get the plots for the explorations. But how can I code the exploration rate decay? Here is my sample code for every step of the training module:

for step in range(max_steps): 
        exploration_rate_threshold = random.uniform(0,1)

        if exploration_rate_threshold > exploration_rate:
            action = np.argmax(q_table[state,:])
        else:
            action = env.action_space.sample()
",38931,,22079,,7/29/2020 23:01,7/29/2020 23:01,How can I fetch ​exploration decay rate of an iterable Q-table in Python?,,1,0,,,,CC BY-SA 4.0 22749,2,,22748,7/29/2020 16:47,,1,,"

Here is one way to calculate the exploration rate decay:

exploration_rate = min_exploration_rate + \
    (max_exploration_rate - min_exploration_rate) * np.exp(-exploration_decay_rate * episode)
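
With some hypothetical parameter values (these are placeholders, not values from your code), you can see the decay behave as expected:

import numpy as np

min_exploration_rate, max_exploration_rate = 0.01, 1.0
exploration_decay_rate = 0.005

for episode in (0, 100, 500, 1000):
    rate = min_exploration_rate + \
        (max_exploration_rate - min_exploration_rate) * np.exp(-exploration_decay_rate * episode)
    print(episode, round(rate, 3))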
",38803,,,,,7/29/2020 16:47,,,,2,,,,CC BY-SA 4.0 22750,2,,17450,7/29/2020 18:30,,1,,"

However, I’m not sure which policy is saved

The policy from the Monte Carlo tree search is stored; we can get the network's policy estimate later by passing the given state through the network, and the stored MCTS policy is used to calculate the cross-entropy loss that updates the network's policy head (summed with the mean squared error loss between the value head's prediction and the actual value/reward).

Wouldn’t it be more logical to choose the move with the highest probability calculated by the policy head?

It depends on the number of searches you've performed; after thousands of simulations the MCTS would give better results, as it approximates the minimax tree.

",37657,,,,,7/29/2020 18:30,,,,3,,,,CC BY-SA 4.0 22751,1,,,7/29/2020 18:35,,3,57,"

I have an optimization problem that I'm looking for the right algorithm to solve.

What I have: A large set of low-res 360 images that were taken on a regular grid within a certain area. Each of these images is quite sparsely sampled, and each has an accurate XYZ position assigned to its center. There are millions of these small images; clusters of close-by images obviously share a lot of information, while images farther apart can be completely different.

What I want to do is to compress these small 360 images.

If two 360 images are close to each other, they can be 'warped' into each other by projecting one onto a sphere of finite distance and then moving that sphere (so a close-by 360 image can be a good approximation of another 360 image when it has been warped that way).

Based on this idea, I want to compress these small low-res 360 images by replacing each of them with:

  • N (N being something like 2-5) indices into an archive of M (M being something like 50-500) different 'prototype' images (of possibly higher resolution than the low-res 360 images), each of which has an XYZ location assigned plus a radius
  • N blend weights

Such that if I want to reconstruct one of the small, sparsely sampled 360 images, I take the N indices stored for this image, look up the corresponding prototype images from the archive, warp them based on the radius of the archive image and the delta vector between the archive XYZ and the compressed image XYZ location, and then blend the N prototype images based on the N blend weights (and possibly scale down if the prototype images are higher res)

I guess this goes in the direction of eigenfaces, but with eigenfaces each compressed face has a weight stored for each eigenface, whereas I want each compressed sphere to have only N non-zero weights.

So my input is: a lot of small 360 images plus an XYZ location each

my output should be:

  • an archive of M "prototype" images, each assigned an XYZ location and a projection radius
  • all compressed spheres, with each sphere compressed to N indices and N weights

This seems to be some kind of non-linear least squares problem, but I wonder if someone can point me in the right direction on how to solve it?

As a completely alternative approach I also looked into spherical harmonics, but with those I only get enough high-frequency details at l=6 which takes 36 coefficients which is too much and also too slow to decompress.

",32111,,,,,7/29/2020 18:35,Looking for the proper algorithm to compress many lowres images of nearby locations,,0,0,,,,CC BY-SA 4.0 22752,1,,,7/29/2020 18:37,,0,51,"

I'm trying to generate rhymes, so it would be very helpful to have a language model where I could input a final word, and have it output a sequence of words that ends with that word.

I could train my own model and reverse the direction of the mask, but I was wondering if there was any way I could use a pretrained model but apply a different mask to achieve this goal.

If this isn't possible, what's the best way to sample a forward-predicting model to achieve the highest probability sentence ending with a particular word?

Thanks!

",37521,,,,,7/29/2020 18:37,Using transformer but masking in reverse direction/smart sampling for desired final word?,,0,2,,,,CC BY-SA 4.0 22753,2,,22724,7/29/2020 19:36,,0,,"

Deep reinforcement learning might be suitable for your requirement. If possible, I would recommend you add the DRL tag to your thread, so that it can reach a larger number of DRL researchers and practitioners who can help you out.

Thanks , Durga

",33878,,,,,7/29/2020 19:36,,,,0,,,,CC BY-SA 4.0 22755,2,,22702,7/29/2020 20:19,,0,,"

The answer above makes some great comparisons/trade-offs. To help address the non-linearity issue with eLU units that the previous answer brings up, you can also use Leaky-ReLU units, which are linear in both the positive and negative range, and piecewise linear across the whole real domain.

Please see the link here for more details.

",36611,,32410,,12/24/2021 0:08,12/24/2021 0:08,,,,1,,,,CC BY-SA 4.0 22756,2,,16214,7/29/2020 21:45,,0,,"

I think you are looking for the field known as explainable artificial intelligence. The book Interpretable Machine Learning: A Guide for Making Black Box Models Explainable will surely help you to understand the issues and existing techniques. See also the following question Which explainable artificial intelligence techniques are there?.

",2444,,,,,7/29/2020 21:45,,,,0,,,,CC BY-SA 4.0 22757,2,,22742,7/29/2020 22:35,,0,,"

If you adopt a slightly different point-of-view, then a neural network of this static kind is just a big function with parameters, $y=F(x,P)$, and the task of training the network is a non-linear fit of this function to the data set.

That is, training the network is to reduce all of the residuals $y_k-F(x_k,P)$ simultaneously. This is a balancing act, just tuning one weight to adjust one residual will in general worsen some other residuals. Even if that is taken into account, methods that adjust one variable at a time are usually much slower than methods that adjust all variables simultaneously along some gradient or Newton direction.

The usual back-propagation algorithm sequentializes the gradient descent method for the square sum of the residuals. Better variants improve that to a Newton-like method by some estimate of the Hessian of this square sum, or by following the idea of the Gauß-Newton method.

",38980,,,,,7/29/2020 22:35,,,,1,,,,CC BY-SA 4.0 22759,2,,22742,7/30/2020 2:23,,2,,"

There are a few reasons I can think of, though I have not read an explicit description of why it is done this way. It's likely that people just started doing it this way because it's the most logical, and people who have attempted your method of reduced connections have seen a performance hit, so no change was made.

The first reason is that if you allow all nodes from one layer to connect to all nodes in the next, the network will optimise unnecessary connections out. Essentially, the weighting of these connections will become 0. This, however, does not mean you can trim these connections, as ignoring them in this local minimum might be optimal, but later it might be really important that these connections remain. As such, you can never truly know if a connection between one layer and the next is necessary, so it's just better to leave it in, in case it helps improve network performance.

The second reason is it's just simpler mathematically. Networks are implemented specifically so it's very easy to apply a series of matrix calculations to perform all computations. Trimming connections means either:

  • A matrix must contain 0 values, wasting computation time (a small numpy illustration of this is given below)
  • A custom script must be written to calculate this networks structure, which in the real world can take a very long time as it must be implemented using something like CUDA (on a GPU level, making it very complicated)

Overall, it's just a lot simpler to have all nodes connected between layers, rather than on connection per node.
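
To illustrate the zero-weight point above, here is a tiny numpy sketch (numbers are made up): a fully connected layer is just a matrix product, and a "missing" connection behaves exactly like a weight fixed at 0 in that matrix.

import numpy as np

x = np.array([0.2, -1.0, 0.7])      # 3 inputs
W = np.random.randn(4, 3)           # 4 hidden neurons, fully connected
W_pruned = W.copy()
W_pruned[0, 1:] = 0.0               # first neuron keeps only its first input

h_full = np.tanh(W @ x)             # usual dense layer
h_pruned = np.tanh(W_pruned @ x)    # same computation, connections "cut"
print(h_full, h_pruned)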

",26726,,,,,7/30/2020 2:23,,,,0,,,,CC BY-SA 4.0 22760,1,22762,,7/30/2020 4:03,,-1,94,"

Similarly to the question, What is artificial intelligence?

Cognitive Intelligence, as well as being a part of Artificial Intelligence, is an area that mainly covers the technology and tools that allow our apps, websites, and bots to see, hear, speak and understand the needs of the user through natural language

What is the definition of Cognitive Intelligence?

",30725,,30725,,7/31/2020 3:38,7/31/2020 3:38,What is Cognitive Intelligence?,,1,1,,,,CC BY-SA 4.0 22761,1,,,7/30/2020 6:06,,1,399,"

I am working on an object detection model and have thought of looking into stratified splits for the dataset.

Now, since I am doing object detection, I have a variable number of "labels" for every image, because in each image there is a variable number of occurrences of each object I am looking for (car, truck, motorbike, etc.).

Obviously single-label stratification does not apply.

From what I understand, multi-label stratification is only applicable if there are basically label "features" that we know are always present, which does not seem to be the case here.

My question is... is there a way to perform stratified split in this case so that in each split there is roughly the same number of cars/trucks/bikes/etc.? (Or is it going to actually improve the results at all?)

",22885,,,,,7/30/2020 6:06,Multilabel stratified split for images/object detection,,0,6,,,,CC BY-SA 4.0 22762,2,,22760,7/30/2020 6:48,,1,,"

The term comes from cognitive science - depending on the paradigm, it can have many meanings, correlated with neuroscience as well as the brain.

Activities based on this sphere are:

  • language
  • learning
  • thinking
  • perception
  • awareness
  • decision making
  • intelligence

The mentioned cognitive processes activate the appropriate regions in the brain. Narrow cognitive intelligence is the ability to respond effectively, in particular in unforeseen and uncertain situations.

However, it requires the ability to set goals, learn, plan and think about one's own thinking.

",32352,,,,,7/30/2020 6:48,,,,0,,,,CC BY-SA 4.0 22763,1,,,7/30/2020 7:00,,0,1675,"

I have a dataset where the training instances differ in length and the data is sequential. So, I designed an LSTM, but I am wondering how to train it. With fixed-length data, we just keep all of the input in an array and pass it to the network, but here the case is different. I cannot store varying-length data in a single array, and I do not want to use padding to make it fixed length. So, how should I train the LSTM when the training instances vary in length?

",28048,,,,,7/30/2020 9:08,How to train an LSTM with varying length input?,,1,2,,,,CC BY-SA 4.0 22764,1,,,7/30/2020 7:37,,0,96,"

I built a simple X*Y grid world environment to learn and then trained my agent over it. All worked fine and the agent learned as well. Let me give some detail about the environment.

Environment:

  • A 4x4 grid world with episode starting at (0,0) and terminal state (3,3)
  • Four actions: Left, Up, Right, Down
  • A reward of -1 for moving from the previous state to a new state. A reward of 0 when reaching the terminal state. A reward of -2 for bouncing off the boundary
  • Epsilon-greedy scheme for action selection.

All works fine, and the following are the learning results of the agent.

Later I did a test run of my TRAINED QL-agent, where I used greedy action selection. All I could see in every episode was that my agent starts from (0,0), takes Right to move to (1,0), and then takes Left to move back to (0,0) again, and this goes on and on and on... I checked the Q-table and it makes sense, because the Q-values for these actions justify such behaviour. But this is not what a practical agent should be doing.

",36710,,,,,7/30/2020 7:37,Strange behavior of Q-learning agent after being trained,,0,6,,,,CC BY-SA 4.0 22765,2,,22763,7/30/2020 8:48,,2,,"

You could sequentially pass in each element of your sequential data and save the hidden and cell states in a separate buffer. In a typical LSTM implementation, you input the entire sequence and the hidden and cell states are propagated internally. In the end, the final hidden and cell states are returned as the output. This works if your inputs are all the same length. Instead, you can handle feeding the next element to the LSTM, together with the hidden and cell state, yourself. To keep it efficient you can batch your inputs along the batch dimension (batch_first=True in the PyTorch LSTM implementation).

For example, suppose you have 5 sequences of length 5, 4, 3, 2, and 1. You initialize the hidden and cell state buffers for each of the sequences and pass the first batch containing the first element of all 5 sequences. You save the output hidden and cell states in the buffers for each sequence. Next, you input the batch of the second elements of the 4 sequences with length > 1, save the states in the respective sequence buffers, and so on until you exhaust the sequence of greatest length.
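
A simplified, un-batched sketch of this idea in PyTorch (batching per timestep as described above is an optimisation on top of this; sizes are made up):

import torch
import torch.nn as nn

input_size, hidden_size = 8, 16
lstm = nn.LSTM(input_size, hidden_size, batch_first=True)

# three toy sequences of lengths 5, 3 and 1 -- no padding needed
sequences = [torch.randn(L, input_size) for L in (5, 3, 1)]

final_states = []
for seq in sequences:
    state = None                      # (hidden, cell) buffer for this sequence
    for x_t in seq:
        # feed a single timestep of shape (batch=1, time=1, input_size)
        out, state = lstm(x_t.view(1, 1, -1), state)
    final_states.append(state[0].squeeze())   # final hidden state of the sequence

print([s.shape for s in final_states])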

",38985,,38985,,7/30/2020 9:08,7/30/2020 9:08,,,,4,,,,CC BY-SA 4.0 22766,1,22771,,7/30/2020 9:25,,1,195,"

Afaik, investigating meta reinforcement learning algorithms requires a collection of two or more environments which have similar structure but are still different enough. When I read this paper it was unclear to me what the meta-training and meta-testing environments were.

For example, a graph is given for Ant-Fwd-Bkwd showing its performance against the number of gradient steps. I'm guessing these are the meta-testing performances. So, which environment was it 'meta-trained' on?

Was it meta-trained on the same Ant-Fwd-Bkwd environment?

",35679,,,,,8/29/2020 15:05,How are mujoco environments used for meta-rl?,,1,0,,,,CC BY-SA 4.0 22768,1,,,7/30/2020 12:15,,2,83,"

What I mean to say is

  1. For example, if I give the dictionary meaning of "Apple" as input to the program, it should output "Apple".
  2. Or if I say "My day-to-day job involves monitoring and managing the resources", the output should be "Project management".

The meaning and the word could come from a dictionary or be custom. I am looking for ideas and tools to go further with this.

",37468,,32410,,12/22/2021 14:45,12/22/2021 23:11,"How to predict the ""word"" based on the meaning in a document?",,1,0,,,,CC BY-SA 4.0 22769,1,,,7/30/2020 13:31,,5,944,"

What is the difference between eager learning and lazy learning?

How does eager learning or lazy learning help me build a neural network system? And how can I use it for any target function?

",38931,,2444,,7/31/2020 14:13,1/2/2023 21:30,What is eager learning and lazy learning?,,1,0,,,,CC BY-SA 4.0 22771,2,,22766,7/30/2020 13:36,,0,,"

According to this paper (PEARL):

These locomotion task families require adaptation across reward functions (walking direction for Half-CheetahFwd-Back, Ant-Fwd-Back, Humanoid-Direc-2D, target velocity for Half-Cheetah-Vel, and goal location for Ant-Goal2D) or across dynamics (random system parameters for Walker-2D-Params).

It looks like different versions of the same environment with differing reward functions are used. For example, the forward direction might be rewarded positively in one version, negatively in another, and both directions positively in yet another.

",35679,,,,,7/30/2020 13:36,,,,0,,,,CC BY-SA 4.0 22772,2,,22717,7/30/2020 13:36,,0,,"

I have have been part of a project where we implemented Amazon Forecast service in production. As per my practical experience, we need not worry about stationarity while applying DeepAR.

As per my understanding as in ARIMA the prediction function is a function around the time series moving average, but in DeepAR we are more relying on the backtest window and leg-horizon length.

DeepAR uses recurrent neural networks (RNNs) with gated recurrent units (GRUs), without a fixed assumption about the probability distribution, to learn the sequence for prediction.

For a detailed explanation, you can refer. https://aws.amazon.com/blogs/machine-learning/forecasting-time-series-with-dynamic-deep-learning-on-aws/

",25978,,,,,7/30/2020 13:36,,,,1,,,,CC BY-SA 4.0 22773,1,,,7/30/2020 13:41,,1,108,"

I understand that the actor-critic method is probably where I want to start because of how it works with continuous action spaces.

However, the problem I am trying to solve would require the action be a vector of 11 continuous values. When I go to design my training environment and the architecture of my DRL network, I am not sure how to map a vector of values to the state for the state-action pairs.

I am trying to use this article as a jumping off point, but I am not sure where to go: https://medium.com/@asteinbach/actor-critic-using-deep-rl-continuous-mountain-car-in-tensorflow-4c1fb2110f7c

",38992,,,,,7/30/2020 13:41,What is the best way to make a deep reinforcement learning environment with a continuous 2D action space?,,0,0,,,,CC BY-SA 4.0 22774,1,,,7/30/2020 13:51,,1,56,"

If I want to find a (linear) subspace onto which a data-set projects well, I can simply use PCA. However, often the data can project with much smaller error if I first separate it into a couple of classes and then perform PCA for each class individually. But what if I don't know what kind of classes there might be in my data, or into how many classes it would make sense to split the data? What kind of machine learning algorithm can do this well?

Example:

If I'd just cluster first based on distance in the high-dimensional space, I would arrive at the bad clustering. There are 5 clusters and the green and red clusters don't project very well onto a 2D subspace.

As a human looking at the data, I see however that if I separate the data as indicated, red and blue will project very well onto a plane each and green will project very well onto a line, so I can run PCA for each group individually.

How can I automate this clustering based on how well it will project onto as low-dimensional subspaces as possible?

Something like minimize E = SumOverClusters(SumOverPoints(SquaredDist(projected_point, original_point)) * (number_dims_projected / number_dims_original)) + C * number_of_clusters
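
For what it's worth, the cost above can at least be evaluated for one candidate clustering, e.g. with scikit-learn (a hedged sketch; C, the number of clusters, and the projection dimension are placeholders I picked, and every cluster is projected to the same dimension for simplicity):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

X = np.random.randn(500, 64)                # stand-in for the real data
C, n_clusters, d_proj = 0.1, 5, 2

labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
E = 0.0
for k in range(n_clusters):
    Xk = X[labels == k]
    pca = PCA(n_components=d_proj).fit(Xk)
    Xk_rec = pca.inverse_transform(pca.transform(Xk))        # project and lift back
    E += ((Xk - Xk_rec) ** 2).sum() * (d_proj / X.shape[1])  # per-cluster term
E += C * n_clusters
print(E)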

What technique is well suited to do that?

(edit: while the example shows a 3D space, I'm more interested in doing this in roughly 64-dimensional spaces)

",32111,,32111,,7/30/2020 13:56,7/30/2020 13:56,How to cluster data points such that the number of clusters is kept minimal and each cluster projects well onto a lower-dimensional subspace?,,0,1,,,,CC BY-SA 4.0 22776,1,22777,,7/30/2020 19:40,,9,1859,"

I started looking into the double DQN (DDQN). Apparently, the difference between DDQN and DQN is that in DDQN we use the main value network for action selection and the target network for outputting the Q values.

However, I don't understand why would this be beneficial, compared to the standard DQN. So, in simple terms, what exactly is the advantage of DDQN over DQN?

",37831,,2444,,11/4/2020 17:15,11/4/2020 17:15,What exactly is the advantage of double DQN over DQN?,,1,0,,,,CC BY-SA 4.0 22777,2,,22776,7/30/2020 20:08,,9,,"

In $Q$-learning there is what is known as a maximisation bias. That is because the update target is $r + \gamma \max_a Q(s',a)$. If you slightly overestimate your $Q$-value then this error gets compounded (there is a nice example in the Sutton and Barto book that illustrates this). The idea behind tabular double $Q$-learning is to have two $Q$-networks, $Q_1,Q_2$, and you choose an action $a$ from them, e.g. from $Q_1 + Q_2$. You then flip a coin to decide which to update. If you choose to update $Q_1$ then the update target becomes $r + \gamma Q_2(s', \arg\max_a Q_1(s',a))$.

The idea is that if you overshoot your estimate on one $Q$ network then having the second will hopefully control this bias when you would take the max.

In Deep Double $Q$-learning the idea is essentially the same but instead of having to maintain and train two $Q$-networks, they use the target network from vanilla DQN to provide the target. To make this more concrete, the update target they use is $$r + \gamma Q(s', \arg\max_aQ(s',a;\theta);\theta^-)\;,$$ where $Q(s,a;\theta^-)$ denotes the target network whose parameters are only updated to the current networks every $C$ time steps.

As before, the idea is that if we have overestimated our value of being state $s'$ in our current network when taking the max action, using the target network to provide the target will help control for this bias.
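
As a rough illustration (a toy numpy sketch with made-up Q-arrays and transitions, just to show where the two networks enter the target; none of this is code from the paper):

import numpy as np

gamma = 0.99
n_states, n_actions = 10, 4
q_online = np.random.rand(n_states, n_actions)   # current network Q(., .; theta)
q_target = np.random.rand(n_states, n_actions)   # target network Q(., .; theta-)

s_next = np.array([3, 7, 1])                     # next states in a small batch
r = np.array([0.0, 1.0, -1.0])                   # rewards
done = np.array([False, False, True])            # terminal flags

a_star = q_online[s_next].argmax(axis=1)         # action selection: online network
target = r + gamma * q_target[s_next, a_star] * (~done)   # evaluation: target network
print(target)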

Maximisation Bias

I will explain the maximisation bias here using the simple example given in the Sutton and Barto book.

The Markov Decision Process in the image is defined as follows: we start in state A and can take the 'right' action, which gives us 0 reward and immediately leads to termination. If we choose 'left' we get 0 immediate reward and then move to state B. From there, we have an arbitrary number of actions we can take, all of which lead to the terminal state, and the reward is drawn from a Normal(-0.1,1) distribution.

Clearly, the optimal action is always to move right from state A, as this gives 0 expected future returns. Taking the left action will give an expected future return of $\gamma \times -0.1$ (the $\gamma$ is our discount factor).

Now, if we got into state $B$ and took some random action our initial reward could be bigger than 0 -- after all it is drawn from a Normal(-0.1,1) distribution.

Now, consider that we are updating our $Q$-function for state A when taking the left action. Our update target will be $0 + \gamma \max_a Q(B,a)$. Because we are taking the max over all possible actions, this will tend to be positive, and so we back up the belief that the expected future reward from taking the left action in state A is something positive -- clearly this is wrong, since we know it should be $\gamma \times -0.1$.

I've attached an image below that shows the percentage of time the agent chose the left action (which it shouldn't be choosing). As you can see, it takes normal $Q$-learning a long time to even start to correct itself, whereas double $Q$-learning corrects the mistake almost immediately.

",36821,,36821,,7/31/2020 13:26,7/31/2020 13:26,,,,6,,,,CC BY-SA 4.0 22779,2,,22768,7/30/2020 21:52,,1,,"

While your question has some ambiguities, I'll try to answer.

From my understanding you want your model to predict the “topic” of a sentence or a description. It’s just a classification problem with a huge possible number of output classes.

The first initial issue is a very short length of documents (sentences). Most of the topic modeling algorithms such as LDA have a statistical approach and do not work very well with very short documents (less than 50 words could be a good definition of a very short document).

The second issue is how do you want to collect enough data to train your model that is supposed to predict the target out of an extremely large number of output classes? Dictionaries are not enough because they offer a single definition for each word. Examples of words in dictionaries don’t help much and they will probably affect your model adversely. How can your model be generalized by a single (or few) example(s) for each class?

So, it’s not possible, but maybe having some innovations can help.

Here is the definition of "apple" in the Oxford dictionary: "a round fruit with shiny red or green skin that is fairly hard and white inside". There are just two nouns in the definition: "fruit" and "skin". If we read the definition without considering these two words, even we, as humans, struggle to guess.

Consider the nouns in the input data and use them to build up a graph, at first considering only main classes such as "fruit" (a small extraction sketch is given below). If you're getting some good results, extend this to other word types (adjectives, adverbs, ...).
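
For instance, the noun extraction itself is straightforward with spaCy (assuming the en_core_web_sm model is installed; this only illustrates the extraction step, not the graph building):

import spacy

nlp = spacy.load("en_core_web_sm")
definition = "a round fruit with shiny red or green skin that is fairly hard and white inside"
nouns = [t.lemma_ for t in nlp(definition) if t.pos_ == "NOUN"]
print(nouns)   # typically ['fruit', 'skin']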

",38892,,32410,,12/22/2021 23:11,12/22/2021 23:11,,,,1,,,,CC BY-SA 4.0 22782,1,,,7/31/2020 2:29,,0,54,"

For instance, consider the following piece of text:

'The father of Richard is a very nice guy. He was born in a poor family. Because of that, Richard learnt very good values. Richard is also a very nice guy. However, Richard's mother embarrasses the family. She was born rich and she does not know the real value of the money. She did not have to be a hard worker to succeed in life.'

How should a perfect coreference system work? Is the following the perfect solution?

Cluster 1:

'The father of Richard' (first sentence) <-> 'He' (second sentence)

Cluster 2:

'Richard' (third sentence) <-> 'Richard' (fourth sentence)

Cluster 3:

'Richard\'s mother' (fifth sentence) <-> She (sixth sentence) <-> she (sixth sentence) <-> She (seventh sentence)  

If I use the coreference library of spacy (neuralcoref), I get these clusters:

Clusters:

[Richard: [The father of Richard, He, Richard, Richard, Richard], a poor family: [a poor family, the family], Richard's mother: [Richard's mother, She, she, She]]

Note that this output correctly says that "Richard" refers to the same entity across sentences, which is true. However, "He" in the second sentence is not related to "Richard", but to his father. Also, "Richard" and "The father of Richard" end up in the same cluster. Furthermore, "a poor family" and "the family" should not come together. However, this is really difficult, since in this case there is some level of ambiguity.
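
For reference, these clusters were produced with a call along the following lines (a minimal sketch, assuming spaCy 2.x, the en_core_web_sm model and neuralcoref 4.x are installed):

import spacy
import neuralcoref

nlp = spacy.load("en_core_web_sm")
neuralcoref.add_to_pipe(nlp)

text = ("The father of Richard is a very nice guy. He was born in a poor family. "
        "Because of that, Richard learnt very good values. Richard is also a very nice guy. "
        "However, Richard's mother embarrasses the family. She was born rich and she does not "
        "know the real value of the money. She did not have to be a hard worker to succeed in life.")

doc = nlp(text)
print(doc._.has_coref)        # True if any coreference chain was found
print(doc._.coref_clusters)   # the clusters shown above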

I know that this is a very difficult problem. The point is not to criticize this fantastic library. I am just trying to understand what I should expect as a perfect result.

If I change the text a little:

'The mother of Richard is a very nice woman. She was born in a poor family. Because of that, Richard learnt very good values. Richard is a very nice guy. However, Richard's father embarrasses the family. He was born rich and he does not know the real value of the money. He did not have to be a hard worker to succeed in life.'

The clusters are:

[Richard: [The mother of Richard, She, Richard, Richard, Richard, He, he, He], a poor family: [a poor family, the family]]

In this case, the clusters become stranger, since "She" and "Richard" are in the same cluster. Furthermore, the "He" related to the "father of Richard" belongs to the cluster, but not "Richard's father".

So, my question is:

What is the perfect result that I should expect from a "perfect" coreference system?

",38831,,,,,12/27/2022 23:00,NLP: What is expected from the output of a perfect coreference system?,,1,0,,,,CC BY-SA 4.0 22783,1,,,7/31/2020 2:48,,8,1750,"

In these slides, it is written

\begin{align} \left\|T^{\pi} V-T^{\pi} U\right\|_{\infty} & \leq \gamma\|V-U\|_{\infty} \tag{9} \label{9} \\ \|T V-T U\|_{\infty} & \leq \gamma\|V-U\|_{\infty} \tag{10} \label{10} \end{align} where

  • $\mathbb{F}$ is the space of functions on domain $\mathbb{S}$.
  • $T^{\pi}: \mathbb{F} \mapsto \mathbb{F}$ is the Bellman policy operator
  • $T: \mathbb{F} \mapsto \mathbb{F}$ is the Bellman optimality operator

In slide 19, they say that inequality \ref{9} follows from

\begin{align} {\scriptsize \left\| T^{\pi} V-T^{\pi} U \right\|_{\infty} = \max_{s} \gamma \sum_{s^{\prime}} \operatorname{Pr} \left( s^{\prime} \mid s, \pi(s) \right) \left| V\left(s^{\prime}\right) - U \left(s^{\prime}\right) \right| \\ \leq \gamma \left(\sum \operatorname{Pr} \left(s^{\prime} \mid s, \pi(s)\right)\right) \max _{s^{\prime}}\left|V\left(s^{\prime}\right)-U\left(s^{\prime}\right)\right| \\ \leq \gamma\|U-V\|_{\infty} } \end{align}

Why is that? Can someone explain to me this derivation?

They also write that inequality \ref{10} follows from

\begin{align} {\scriptsize \|T V-T U\|_{\infty} = \max_{s} \left| \max_{a} \left\{ R(s, a) + \gamma \sum_{s^{\prime}} \operatorname{Pr} \left( s^{\prime} \mid s, a \right) V \left( s^{\prime} \right) \right\} -\max_{a} \left\{R(s, a)+\gamma \sum_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, a\right) U\left(s^{\prime}\right)\right\} \right| \\ \leq \max _{s, a}\left|R(s, a)+\gamma \sum_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, a\right) V\left(s^{\prime}\right) -R(s, a)-\gamma \sum_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, a\right) U\left(s^{\prime}\right) \right| \\ = \gamma \max _{s, a}\left|\sum_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, a\right)\left(V\left(s^{\prime}\right)-U\left(s^{\prime}\right)\right)\right| \\ \leq \gamma\left(\sum_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, a\right)\right) \max _{s^{\prime}}\left|\left(V\left(s^{\prime}\right)-U\left(s^{\prime}\right)\right)\right| \\ \leq \gamma\|V-U\|_{\infty} } \end{align}

Can someone explain to me also this derivation?

",20581,,2444,,8/10/2020 13:01,1/22/2022 15:39,Why are the Bellman operators contractions?,,2,3,,,,CC BY-SA 4.0 22784,1,22794,,7/31/2020 3:53,,1,258,"

AFAIK, deep learning became popular in 2012 with the ImageNet Large Scale Visual Recognition Challenge 2012, whose winners used deep learning techniques for object recognition.

Who first coined the term deep learning? Is there any published research paper that first used that term?

",30725,,2444,,7/31/2020 14:18,8/1/2020 3:19,"Who first coined the term ""deep learning""?",,1,0,,,,CC BY-SA 4.0 22786,1,22823,,7/31/2020 8:34,,1,69,"

Normally, in practice, people use loss functions that have minima, e.g. the $L_1$ mean absolute error, the $L_2$ mean squared error, etc. All of those come with a minimum to optimize towards.

However, I'm reading about another one, the logistic loss, and I don't get why the logistic function could be used as a loss function, given that it has its so-called minimum at infinity, which isn't a normal minimum. Logistic loss function (black curve):

How can an optimizer minimize the logistic loss?

",2844,,2444,,8/2/2020 12:54,8/2/2020 12:54,How could logistic loss be used as loss function for an ANN?,,1,0,,,,CC BY-SA 4.0 22787,1,,,7/31/2020 9:02,,2,89,"

When we use DDQN, we often use the target network in case our online network overestimates a value, but this doesn't make sense to me, because

  1. What happens if our target network is the one that overestimated a value, then we’d keep using that overestimated value

  2. Why can't we use the target network for both selection and evaluation?

",37831,,2444,,7/31/2020 12:32,7/31/2020 12:32,What happens if our target network overestimates the value?,,0,0,,,,CC BY-SA 4.0 22788,1,,,7/31/2020 9:57,,0,41,"

I'm a relative beginner in deep learning (by that I mean I'm doing my first Kaggle competition right now, and I still have loads to learn) and I was just wondering something.

Let's say you have pathology/biopsy tissue images from patients dying from a disease A and patients dying from other causes (whatever causes actually but not related to disease A).

To date, I think we can say that nobody really knows what, at the level of a biopsy, causes (or indicates) disease A.

My idea, since my group can actually get a lot of these biopsies for both groups, would be to use them to train a neural network.

Why would I do that? Biopsy images are rather complex, and maybe some fine details are hard for a human being to spot, or maybe the sum of some details is actually what tells whether disease A kills the patient or not. But again, I don't think anybody could come and say: on those tissue biopsies, the sign(s) for disease A are x, y, z.

My question then becomes a bit more theoretical: given that you have enough data to actually give the algorithm a chance to find differences, is it a good idea to train a neural network without having any idea of what could differentiate the two groups? Do you know of examples of such a strategy? How hard is it afterwards - in the case of rather good accuracy - to understand what makes the classes so recognisable?

",40012,,2444,,12/12/2021 13:31,12/12/2021 13:31,Is it a good idea to train a neural network to classify images without base-hypothesis?,,0,2,,,,CC BY-SA 4.0 22789,1,,,7/31/2020 10:02,,3,194,"

I know we should scale the input and output (assuming a regression task) before we feed them to the neural network, so that gradient descent reaches a good minimum much faster. But I have a subtle confusion: does gradient descent with and without feature scaling give the same result, or is gradient descent simply not scale-invariant?

",28048,,28048,,7/31/2020 10:41,4/27/2021 14:34,Is gradient descent scale invariant or not?,,1,2,,,,CC BY-SA 4.0 22791,2,,22789,7/31/2020 10:29,,-1,,"

What are you hoping to get out of the answer to this question? Feature scaling is a method you CAN (but don't have to) use so that your algorithm converges faster and reaches better general accuracy. I would say that on a simple regression task, where the feature value ranges do not vary a lot, the output would probably be almost the same. But as soon as you introduce data that has one feature in the range 1-10 and another in the range 10000-100000, that is where you would notice that you NEED to standardize/normalize your features in order to reach optimal results. That's why it's almost a general rule to just scale your data, so your algorithm can generalize better and you don't have to worry about it giving higher importance to a feature with higher values instead of one with lower values (just an example).

",38919,,,,,7/31/2020 10:29,,,,1,,,,CC BY-SA 4.0 22792,2,,22769,7/31/2020 11:16,,3,,"

What is eager learning or lazy learning?

Eager learning is when a model does all its computation before needing to make a prediction for unseen data. For example, Neural Networks are eager models.

Lazy learning is when a model doesn't require any training, but performs all of its computation during inference. An example of such a model is k-NN. Lazy learning is also known as instance-based learning [1, 2, 3].
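
To make the contrast concrete, here is a minimal scikit-learn sketch (the dataset and model choices are arbitrary, just for illustration):

from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

lazy = KNeighborsClassifier()         # lazy: "fitting" only stores the training data
lazy.fit(X, y)                        # cheap; the real work happens at prediction time

eager = MLPClassifier(max_iter=1000)  # eager: training runs gradient descent up front
eager.fit(X, y)                       # expensive; prediction afterwards is cheap

print(lazy.predict(X[:3]), eager.predict(X[:3]))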

How does eager and lazy learning help me build a neural network system? And how can I use it for any target function?

To answer your second question, you can't employ lazy learning to train a neural network, because they are inherently eager models.

",26652,,-1,,1/2/2023 21:30,1/2/2023 21:30,,,,6,,,,CC BY-SA 4.0 22794,2,,22784,7/31/2020 12:23,,0,,"

The term was introduced to the machine learning and computer science community by Rina Dechter in Learning While Searching in Constraint-Satisfaction-Problems (1986) [1], where she writes

Discovering all minimal conflict-sets amounts to acquiring all the possible information out of a dead-end. Yet, such deep learning may require considerable amount of work.

When deep learning is used in conjunction with restricting the level of learning we get deep first-order learning (identifying minimal conflict sets of size 1) and deep second-order learning (i.e. identifying minimal conflict-sets of sizes 1 and 2).

Our experiments (implemented in LISP on a Symbolics LISP Machine) show that in most cases both performance measures improve as we move from shallow learning to deep learning and from first-order to second order.

However, note that the term deep learning was used before 1986 in other contexts, for example, in [2]. Moreover, note that Rina Dechter did not use the term in the context of neural networks, which was probably used later.

",2444,,2444,,8/1/2020 3:19,8/1/2020 3:19,,,,0,,,,CC BY-SA 4.0 22795,1,,,7/31/2020 14:27,,0,40,"

I am working on a problem similar to this one (supervised, artificial data):

import numpy as np

# Five candidate input features, sampled uniformly
x = np.random.uniform(-100, 100, 10000)
y = np.random.uniform(-100, 100, 10000)
z = np.random.uniform(-100, 100, 10000)
a = np.random.uniform(-100, 100, 10000)
b = np.random.uniform(-100, 100, 10000)

# The target is a fixed (but, to the model, unknown) function of all five features
i = x**2/1000 + 2*x*y/1000 + 4*np.sqrt(abs(z)) + z*a + 4*a + abs(b)**1.5 - 1/((b+1)**2) * a + 10*y

Since I am not creating the data myself, I want to make sure that my customer provided all the relevant input features. Is there a way to find out whether the input is complete and not lacking a feature, say "a"? Obviously, if two inputs are the same but the outputs differ, that would be evidence of a missing feature, but it isn't guaranteed that any two input samples are the same. Another way I thought of would be to use an autoencoder to find the dimension of the dataset (including the output) and hope it is exactly the input dimension, but in my case it is also possible that there are redundant features. Is there any other way to check whether the target function is computable from the given inputs?

",40019,,,,,7/31/2020 14:27,Finding whether an input column is missing,,0,2,,,,CC BY-SA 4.0 22797,1,,,7/31/2020 15:34,,1,238,"

I decided to start learning neural networks by creating a bot for a game. One of the intermediate steps is to create a global map from a series of inaccurate, overlapping sub-maps. This task can be solved using OpenCV, but that solution would be too limited and sensitive (in the future I intend to complicate the task and work directly with the map image instead of binary masks).

I've tried the following options:

  • predict the position of a new map area within the global map (as a probability distribution).

  • predict the new state of the global map from the old global map and the new minimap.

I've tried a lot of options for both the formulation of the problem and the network architecture, including the idea of Siamese (conjoined) networks, but nothing gave any relevant results.

Some articles about solving similar problems:

Here is an example of one formulation of the problem:

",38036,,38036,,8/2/2020 22:05,8/2/2020 22:05,How to choice CNN architecture for stitching images,,0,2,,,,CC BY-SA 4.0 22799,1,,,8/1/2020 1:48,,1,85,"

Most, if not all, AI systems do not imitate humans. Some of them outperform humans. Examples include using AI to play a game, classification problems, autonomous driving, and goal-oriented chatbots. Those tasks usually come with an easily and clearly defined value function, which is the objective function for the AI to optimize.

My question is: how is deep reinforcement learning, or related techniques, to be applied to an AI system that is designed to just imitate humans but not outperform humans?

Note this is different from a merely human-like system. Our objective here is for the AI to behave exactly like a human rather than like a superintelligence. For example, if a human consistently makes a mistake in image identification, then the AI system must also make the same mistake. Another example is the classic chatbot designed to pass the Turing test.

Is deep reinforcement learning useful in these kinds of tasks?

I find it is really hard to start with because the value function cannot be easily calculated.

What is some theory behind this?

",38299,,38299,,7/10/2022 12:10,12/7/2022 13:06,Are there fundamental learning theories for developing an AI that imitates human behavior?,,1,0,,,,CC BY-SA 4.0 22800,1,22822,,8/1/2020 2:33,,1,229,"

I am trying to create a CNN model that classifies whether a person is wearing a seatbelt or not, to verify that they drive safely. I know how to get images of people wearing seatbelts and people not wearing seatbelts, but I have a problem.

What if the person doesn’t submit a picture of them in a car at all? How do I construct the rest of the dataset to determine if that picture is an actual picture of a person wearing a seatbelt?

Do I insert completely random pictures in a different category? Do I classify images that don’t have a high confidence score as "wrong" images? Or leave it?

",40029,,2444,,8/2/2020 20:52,8/2/2020 20:52,How to handle images that don’t pertain to image classifier at all?,,2,1,0,,,CC BY-SA 4.0 22804,2,,22800,8/1/2020 8:25,,1,,"

Yes, a category "no person" or "random image" would make sense. Binary classification is only helpful if you know that your input always belongs to one or the other category, for example by pre-filtering the inputs.

",22993,,,,,8/1/2020 8:25,,,,2,,,,CC BY-SA 4.0 22805,1,,,8/1/2020 9:28,,2,503,"

The genetic algorithm consists of 5 phases of which 4 are repeated:

  1. Initial population (initially)
  2. Fitness function
  3. Selection
  4. Crossover
  5. Mutation

In the selection phase, the number of solutions decreases. How do we avoid running out of individuals in the population before reaching a suitable solution?

",27777,,2444,,1/30/2021 21:54,1/30/2021 21:54,How to avoid running out of solutions in genetic algorithm due to selection?,,3,0,,,,CC BY-SA 4.0 22808,1,,,8/1/2020 12:13,,2,78,"

Neural networks are known to be generally better modeling techniques as compared to tree-based models (such as decision trees). Are there any exceptions to this?

",36000,,2444,,12/31/2020 21:28,12/31/2020 21:28,What are some applications where tree models perform better than neural networks?,,1,0,,,,CC BY-SA 4.0 22809,2,,22805,8/1/2020 14:33,,2,,"

There are multiple ways to interpret those steps. The most common standard approaches are

  • select two parents and produce two offspring; repeat until child population is the same size as parent population, and let the children replace their parents unconditionally (generational GA)

  • same as the above, but allow a few parents to live on instead of a few children if the parents have higher fitness (elitism)

  • each iteration, select two parents, produce one child, let the child replace a member of the parent population if it is better (steady state GA)

But there are other ways to go. There's an algorithm called CHC that lets the child population get smaller over time, and when it reaches zero, the algorithm triggers a smart restart. The point is there's no single definition for what makes an evolutionary algorithm. It's up to you to decide how to make something that works well for your problem. When you're a beginner though, it's handy to start from known points, like the three I mentioned above.
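
To make the third (steady-state) option concrete, here is a minimal sketch on a toy bit-string problem (the fitness function, rates and sizes are all illustrative choices, not a recommendation):

import random

GENOME_LEN, POP_SIZE, ITERS, MUT_RATE = 20, 30, 2000, 1.0 / 20

def fitness(ind):                      # toy objective: count the ones
    return sum(ind)

def tournament(pop):                   # pick the better of two random individuals
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(ITERS):
    p1, p2 = tournament(pop), tournament(pop)
    cut = random.randrange(1, GENOME_LEN)          # one-point crossover -> one child
    child = [1 - g if random.random() < MUT_RATE else g for g in p1[:cut] + p2[cut:]]  # mutation
    worst = min(range(POP_SIZE), key=lambda i: fitness(pop[i]))
    if fitness(child) > fitness(pop[worst]):       # the child replaces a member only if it is better
        pop[worst] = child

print(max(fitness(ind) for ind in pop))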

",3365,,,,,8/1/2020 14:33,,,,0,,,,CC BY-SA 4.0 22810,1,22876,,8/1/2020 16:05,,3,514,"

I am trying to predict the number of likes an article or a post will get using a NN.

I have a dataframe with ~70,000 rows and 2 columns: "text" (predictor - strings of text) and "likes" (target - a continuous integer variable). I've been reading about the approaches that are taken in NLP problems, but I feel somewhat lost as to what the input for the NN should look like.

Here is what I did so far:

  1. Text cleaning: removing html tags, stop words, punctuation, etc...
  2. Lower-casing the text column
  3. Tokenization
  4. Lemmatization
  5. Stemming

I assigned the results to a new column, so now I have "clean_text" column with all the above applied to it. However, I'm not sure how to proceed.

In most NLP problems, I have noticed that people use word embeddings, but from what I have understood, it's a method used when attempting to predict the next word in a text. Learning word embeddings creates vectors for words that are similar to each other syntax-wise, and I fail to see how that can be used to derive the weight/impact of each word on the target variable in my case.

In addition, when I tried to generate a word embedding model using the Gensim library, it resulted in more than 50k words, which I think will make it too difficult or even impossible to one-hot encode. Even then, I will have to one-hot encode each row and then pad all the rows to a similar length to feed the NN model, but the length of each row in the new column I created, "clean_text", varies significantly, so it will result in very big one-hot encoded matrices that are kind of redundant.

Am I approaching this completely wrong? and what should I do?

",38887,,38887,,8/9/2020 18:21,8/9/2020 18:21,How to use text as an input for a neural network - regression problem? How many likes/claps an article will get,,1,0,,,,CC BY-SA 4.0 22811,2,,22805,8/1/2020 18:08,,1,,"

This is a more complex question than it might initially seem. A genetic algorithm models a biological process, namely population genetics. No biological population evolves to a single cloned individual; in genetic algorithms this is referred to as premature convergence, where the population converges to a single non-optimal, though possibly locally optimal, solution. The avoidance of premature convergence, or the maintenance of population diversity, is an important aspect of the genetic model that is often not well addressed, and one that the five-step model you detail definitely does not address.

The one operator that will maintain diversity is mutation, since it is a purely random operator. However, what the mutation rate should be is highly argued over. A general consensus is that if each chromosome is of length N then the mutation rate should be 1/N. Likewise, the consensus is that 60% of the population should be replaced in each breeding cycle.

However, these settings do not emerge directly from biological reality, and premature convergence remains problematic. A more realistic model reflects the fact that in biology resources are finite, and adjusts the fitness of individuals in proportion to the number of similar individuals, on the assumption that similar individuals are chasing the same resource. The fitness landscape is thus dynamically warped by the changing distribution of the population. You will still have to retain a memory of the fittest solution before adjustment. A common solution is to apply cluster analysis to the population, reducing an individual's fitness by the size of the cluster to which it is allotted. A seminal paper is by Yin and Germay, A Fast Genetic Algorithm with Sharing Scheme Using Cluster Analysis Methods in Multimodal Function Optimization. The assumption is still made that the population is modelling a single biological species. How a population not merely maintains diversity but divides into separate, reproductively isolated species is a question for another day, and one that divides biologists to the current day.
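
A minimal sketch of the cluster-based fitness adjustment described above (the clustering method and the simple division by cluster size are illustrative choices):

import numpy as np
from sklearn.cluster import KMeans

def shared_fitness(population, raw_fitness, n_clusters=5):
    """population: (N, genome_len) array; raw_fitness: length-N array of unadjusted fitnesses."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(population)
    cluster_sizes = np.bincount(labels, minlength=n_clusters)
    # Individuals in crowded clusters are assumed to chase the same resource,
    # so their selective fitness is scaled down by the size of their cluster.
    return raw_fitness / cluster_sizes[labels]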

",26382,,,,,8/1/2020 18:08,,,,0,,,,CC BY-SA 4.0 22812,1,,,8/1/2020 18:56,,0,121,"

In the concept of the vanilla policy gradient algorithm, is it possible for our trajectory size to be fixed?

For example, my environment is the space of embedded images (using a pre-trained encoder to map images into a lower-dimensional space), the action I am performing is clustering via the k-means algorithm, and the reward is the silhouette score metric applied to the clustered images.

I am thinking of using batches of size 100 (the dataset is MNIST and its training set size is 60000), taking their mean and considering this as one observation. I then feed this into the policy network, which gives me its logits, an array of size 20 (20 discrete actions). These actions tell me the number of clusters k for the k-means clustering algorithm. One value of k is sampled, the k-means algorithm is applied to these 100 images, and then the reward is calculated.

I can set a constant trajectory size, for example 20, and sum the rewards to get R(trajectory). Is this possible in the context of RL and policy gradients, or can the trajectory size not be fixed? Also, the action that our policy gives us should lead us to the next observation in the environment, but here the images are independent of the policy network's parameters.

I wonder if I can utilize RL to implement this. I appreciate any hints.

",37744,,2444,,8/2/2020 11:51,12/30/2020 14:04,Is it possible to have a fixed trajectory size in the vanilla policy gradient algorithm?,,1,0,,,,CC BY-SA 4.0 22813,2,,22805,8/1/2020 20:56,,3,,"

It is not true that the number of solutions necessarily decreases during the selection phase (if by solutions you mean the number of individuals in the population). The number of solutions is usually constant, i.e., you can start with $N$ individuals, then, every iteration (or generation), you can e.g. select two individuals from the population (typically, the fittest ones, but you can have some more sophisticated selection criteria), then you merge them to create two new individuals (i.e. crossover), which will then replace (with a certain probability) the two least fit individuals from the current population, so the population's size remains constant.

If you are talking about reaching a local minimum, i.e. none of the solutions in the population are "good enough", then, as someone has already suggested, there are potentially multiple ways to address this issue, such as

  • increase the population size
  • run the genetic algorithm for a longer time (if you have the resources)
  • change your genetic operators (i.e. the mutation and crossover) so that to introduce more diversity
  • tweak the replacement, mutation, and crossover rates
  • change your selection strategy (there are many selection strategies)
  • make sure that the representation of the solutions is suitable (e.g. once, by mistake, I was using an array of integers rather than floating-point numbers, so I couldn't ever find the correct solution, which was an array of floating-point numbers)
  • use something like novelty search

The correct approach will probably depend on the context.

",2444,,2444,,8/1/2020 22:24,8/1/2020 22:24,,,,0,,,,CC BY-SA 4.0 22815,2,,8894,8/2/2020 0:10,,1,,"

GANs are usually trained in a self-supervised fashion, i.e. they use the unlabelled data as the supervisory signal. Note that some self-supervised learning methods are unsupervised learning techniques, given that no human-annotated data is needed. However, not all SSL techniques are used for solving an unsupervised learning task. In fact, there are SSL techniques that are specifically used to generate labeled data, which can then be used to train a model in a supervised fashion.

",2444,,,,,8/2/2020 0:10,,,,0,,,,CC BY-SA 4.0 22819,2,,22808,8/2/2020 7:45,,1,,"

Hard to say in general. Speaking from my own experience and by looking at which models win Kaggle competitions (see here and here), I would say tree-based models, e.g. Random Forests, Decision Trees and Gradient Boosting, are preferable to neural networks when working with low-dimensional data and easily interpretable features (usually simple tabular data with numeric, ordinal or categorical features).

Whereas when working with everything high-dimensional like images, text, time series or other data with non-trivial features, I would recommend neural networks.

Of course there might be exceptions and the future may prove me wrong.

",37120,,,,,8/2/2020 7:45,,,,0,,,,CC BY-SA 4.0 22821,2,,22799,8/2/2020 9:03,,0,,"

There is an interesting discussion of the progress achieved in this field so far in the paper by Francois Chollet - https://arxiv.org/abs/1911.01547. At the present time, many architectures are able to outperform humans in particular tasks, because they have strong priors coded into them and the ability to process a huge amount of data.

However, when it comes to generalization, or the even more difficult task of doing things that the coder has not put into the model, the present algorithms do not perform well. The paper gives a rather sophisticated and well-developed mathematical definition of what is required for a system to be called intelligent. In a nutshell, it is the ability to develop sensible behaviour and sufficient accuracy on new tasks without strong priors being built in and without a large amount of experience.

At the end of the paper, a benchmark to measure the intelligence of an AI system is proposed.

",38846,,,,,8/2/2020 9:03,,,,0,,,,CC BY-SA 4.0 22822,2,,22800,8/2/2020 11:01,,1,,"

From what I understood, you want to be able to determine whether the input to your classifier is a valid picture or not. Where:

  • Valid picture: image of a person wearing or not wearing a seatbelt
  • Not valid picture: unrelated images (say a kitchen picture) or noise, or a black image (no input at all)

For that, you could build a Bayesian model from your current deep-learning model. Check out Pyro (from PyTorch). The main idea behind it is that the model will always predict a class: person wearing or not wearing a seatbelt. But since it is a Bayesian model, it will also tell you the confidence of the prediction in terms of the similarity of the input to the input distribution on which the model was trained. In other words, it will also tell you "how valid" that prediction is, or "how valid" the input is for that prediction.

Check out this post where it is very well explained. Hope it helps!

",26882,,,,,8/2/2020 11:01,,,,2,,,,CC BY-SA 4.0 22823,2,,22786,8/2/2020 11:24,,2,,"

I see why you might be confused. First, the logistic-loss or log-loss is technically called cross-entropy loss. This function is very simple:

$CE = -[y \log(p) + (1 - y) \log(1 - p)]$

This tells basically if the predicted class $y$ was right $y=1$ then the loss is $CE=-\log(p)$, if the predicted class was not the right one then the loss is $CE=-\log(1-p)$.

If we look at the function as a pure math concept we see that:

$CE = f(x) = - \log(x)$

And as you point out, that function is minima-unbounded as its domain is $D(f(x)) = [0, +\infty]$. You can check that in here:

However, the trick is that the inputs must be bounded, meaning the inputs to the loss function must be in the range $[0, 1]$. This bounding is achieved by applying a sigmoid activation function as the final "layer" of the network. Then, if the inputs to the loss function are bounded, the function has a clear minimum.
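
A small numeric check (numpy only, values chosen arbitrarily): with a sigmoid squashing the logit into $(0, 1)$, the cross-entropy loss is well behaved and shrinks towards its minimum as the prediction approaches the true class.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cross_entropy(y, p, eps=1e-12):
    p = np.clip(p, eps, 1 - eps)          # avoid log(0)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

y = 1.0                                    # true class
for logit in [-5.0, 0.0, 2.0, 5.0]:
    p = sigmoid(logit)
    print(f"logit={logit:+.1f}  p={p:.3f}  loss={cross_entropy(y, p):.4f}")
# The loss goes towards 0 as p -> 1 (correct) and blows up as p -> 0 (wrong).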

Check how the function looks in reality from one of the most important papers in loss function in the AI world: Focal Loss (I really encourage you to read it as the first section explains in detail the cross-entropy loss). The blue curve is the one you are looking for.

Finally, you might want to review your log-loss/CE function since it should have an asymptote for $f(x=0) = \infty$

",26882,,,,,8/2/2020 11:24,,,,0,,,,CC BY-SA 4.0 22824,2,,21045,8/2/2020 11:36,,4,,"

Yes, it is not specified because the region proposal algorithm did not change from R-CNN (the version previous to Fast R-CNN); however, in the next version, Faster R-CNN, this algorithm is replaced by a CNN.

The region proposal algorithm you are looking for is called selective search. You can find in the R-CNN paper that the algorithm is described in "Selective Search for Object Recognition"; I found a copy here.

The algorithm is based on a series of segmentation and aggregation techniques applied to the input image for generating the proposed regions. Check out 4 iterations of segmentation & aggregation over the same input image used to build the proposed regions.

All the algorithm is doing is just iterating over 4 steps:

  1. Initial regions based on segmentation by pixel light intensity are obtained by applying a segmentation algorithm described in the paper. For example, given a picture of a shepherd with his sheep in the mountain, it is segmented by light intensity, and the image of Figure (a) is obtained.
  2. Different regions are proposed based on the previous segmentation, Figure (e)
  3. The similarity between the proposed regions is calculated using the formula proposed in equation 6 in Section 3.2 of the paper, which is nothing more than an aggregate metric of the similarity of two regions based on 4 metrics: similarity in color, texture, size and fill (which measures how well a region fits within another)
  4. Merge the regions based on similarity and get Figure (b). Then return to step two.

That is how iteratively you get all the images depicted.
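
In practice, you rarely implement this by hand; here is a minimal sketch of running selective search with OpenCV (assuming opencv-contrib-python is installed; "image.jpg" is a placeholder path):

import cv2

img = cv2.imread("image.jpg")
ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
ss.setBaseImage(img)
ss.switchToSelectiveSearchFast()   # or switchToSelectiveSearchQuality() for more proposals
rects = ss.process()               # proposed regions as (x, y, w, h)
print(f"{len(rects)} region proposals")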

",26882,,26882,,2/4/2021 10:46,2/4/2021 10:46,,,,4,,,,CC BY-SA 4.0 22825,2,,20694,8/2/2020 11:47,,1,,"

As you ask, "in general...", I will answer generally, however this changes a lot from model to model and the way they handle close objects.

In general, yes, they would do a poor job detecting very close objects, switch to segmentation models for that (for class or better, instance segmentation).

In general, objects detectors learn to tell an object from other based in 2 criterion:

  • Intersection over union: for object of the same class
  • Class probability: for objects of different class

So, if two objects of the same class are very close, the 2 detected bounding boxes will be highly overlapping, and then the Non-Maximum Suppression (NMS) filter will remove one of them. This is where object detectors, in general, perform worst.

Similarly, if two objects belong to different classes, the 2 detected bounding boxes will be highly overlapping, but the NMS filter won't remove them (again, in general, NMS is applied only to same-class objects). However, when 2 objects are very close, there is a high chance they are partially occluded. Object detectors, in general, don't handle occlusions very well.
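
To make the same-class case concrete, here is a minimal IoU + NMS sketch (boxes are [x1, y1, x2, y2]; the 0.5 threshold is just a typical default):

import numpy as np

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def nms(boxes, scores, iou_thr=0.5):
    order = np.argsort(scores)[::-1]
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep

# Two heavily overlapping detections of the same class: NMS keeps only the higher-scoring one.
boxes = np.array([[10, 10, 60, 60], [12, 12, 62, 62]])
scores = np.array([0.9, 0.8])
print(nms(boxes, scores))   # -> [0]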

So, in conclusion, object detectors will perform better at detecting objects that are far apart from each other.

",26882,,,,,8/2/2020 11:47,,,,0,,,,CC BY-SA 4.0 22826,2,,8274,8/2/2020 12:51,,1,,"

The paper referenced by Martin Thoma is the go-to for semantic segmentation. However, I would also like to add the Panoptic Segmentation metric as an aggregated method to measure both the detection task and the segmentation task of the model.

It is a very well-known and widely used metric since it is the standard metric for COCO dataset (segmentation)

This is the paper where the metric is proposed.

And here is the metric:

",26882,,,,,8/2/2020 12:51,,,,0,,,,CC BY-SA 4.0 22827,1,22911,,8/2/2020 13:02,,3,222,"

Can AlphaZero be considered as Multi-Agent Deep Reinforcement Learning?

I could not find a clear answer to this. I would say yes, it is multi-agent learning, as there are two agents playing against each other.

",40042,,,,,8/7/2020 10:02,Can AlphaZero considered as Multi-Agent Deep Reinforcement Learning?,,1,0,,,,CC BY-SA 4.0 22828,1,,,8/2/2020 13:16,,1,70,"

Suppose, instead of playing against a random opponent, the reinforcement learning algorithm described above played against itself, with both sides learning. What do you think would happen in this case? Would it learn a different policy for selecting moves?

Above is an extract from Reinforcement Learning: An Introduction by Andrew Barto and Richard S. Sutton. I wasn't quite sure what the answer to the question would be, so I thought of posting it here. The algorithm being referred to is the one for playing the game tic-tac-toe.

In my opinion, if the same algorithm plays both sides, it may end up assisting itself to win every time - and not really learn anything. What do you think?

",35585,,2444,,8/2/2020 20:59,8/2/2020 20:59,What does self-play in reinforcement learning lead to?,,0,0,,,,CC BY-SA 4.0 22829,2,,22812,8/2/2020 13:55,,2,,"

The trajectory size can be fixed, but in this case the problem would be formulated as something similar to the multi-armed bandit problem, where there is a single state and a set of actions to choose from. There is no sequential decision making, since samples are not correlated; they are picked at random. So, if you take a batch of 20 examples, then you would basically have 20 single-state trajectories. For each of those trajectories, you can calculate the policy gradient and average it over the batch size (which would be 20 in this case).

",20339,,,,,8/2/2020 13:55,,,,1,,,,CC BY-SA 4.0 22830,2,,22154,8/2/2020 16:11,,1,,"

When it comes to other domains, such as images or music, using a transformer will always face the problem of the sequence length limitation. To the best of my knowledge, the bottleneck of self-attention, which uses an $n \times n$ attention matrix, quite severely limits transformers being applied to other domains. For example, a 32x32 pixel image means a sequence of 1024 tokens.

OpenAI did some related research; see the following.

Generative Modeling with Sparse Transformers: In the paper, transformers with sparse attention are applied to image and waveform.

ImageGPT: A large transformer model trained on language can generate coherent text, the same exact model trained on pixel sequences can generate coherent image completions and samples. (Abstract from the blog)

",38960,,,,,8/2/2020 16:11,,,,1,,,,CC BY-SA 4.0 22831,1,,,8/2/2020 17:18,,2,128,"

In deep Q-learning, $Q(s, a)$ and $Q'(s, a)$ are predicted or estimated by the neural network itself. In supervised learning, the target value is a true unbiased value. However, this isn't the case in reinforcement learning. So, how can we be sure that deep Q-learning converges? How do we know that the target Q values are accurate?

",37831,,2444,,8/2/2020 21:03,8/2/2020 21:03,How can deep Q-learning converge if the targets may not be correct?,,0,1,,,,CC BY-SA 4.0 22832,1,22849,,8/2/2020 19:12,,0,42,"

I'm trying to learn to use AI, and so I've followed some basic tutorials like training an MLP to predict the price of a car given properties like its age and manufacturer. Now I want to see if I can do it myself, so I thought it'd be fun to predict what score I would give a movie given some data scraped off IMDB.

I immediately got stuck, because how do you deal with the cast? A single property with multiple values, where a particular actor may impact the final score (or a combination of actors - that's for the neurons to suss out).

I haven't found a way to do this when googling, but it may just be that I'm unfamiliar with the terminology. Or have I accidentally chosen a really difficult problem?

Note that I'm completely new to all of this, so if you have suggestions, please try to put it as simply as possible.

",40048,,22079,,8/5/2020 12:39,8/5/2020 12:39,How to input dataset with multi-value properties?,,1,0,,,,CC BY-SA 4.0 22834,1,,,8/2/2020 21:13,,1,90,"

From Wikipedia:

According to the most popular version of the singularity hypothesis, called intelligence explosion, an upgradable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.

But what if the complexity of the problem of self-improving the software grows at a faster rate than the AGI intelligence self-improvement?

From experience we know that problems tend to be harder to solve at every iteration, with diminishing returns. Take as an example the theory of gravitation. Newtonian physics is relatively easy to formulate and covers the majority of high-level gravitational phenomena. A more refined picture, like General Relativity, fills a few holes in the theory with a huge increase in complexity. To describe black holes and primordial cosmology we need a theory of quantum gravity, which appears to require a further step up in complexity.

What has "saved" us so far is the economic growth of our civilisation, which has allowed more and more scientists to focus on solving the next problems. It's true that the first AGI will have the luxury of being duplicated, being a software-based intelligence, but at the same time it is likely that the first AGI will be extremely compute-intensive. But even assuming that we (or maybe it would be better to say, they) have the hardware to run $10^{2}$ instances, if the complexity of every substantial self-improvement grows, say, by $10^{d}$x with $d=3$ while the improvement to the intelligence is only $10^{l}$x with $l=1$, the self-improvement cycle will quickly slow down.

So is increasing software complexity the most likely bottleneck to the AI singularity? And what are likely values for $d$ and $l$?

",16363,,1671,,8/19/2020 1:21,8/19/2020 1:21,Is increasing software complexity the most likely bottleneck to the AI singularity?,,1,4,,,,CC BY-SA 4.0 22837,1,22840,,8/2/2020 22:40,,4,1184,"

I have been reading about deterministic and stochastic environments, when I came across an article that states that tic-tac-toe is a non-deterministic environment.

But why is that?

An action will lead to a known state of the game, and an agent has full knowledge of the board and of its opponent's past moves.

",40052,,2444,,1/7/2022 18:04,1/7/2022 18:04,Why is tic-tac-toe considered a non-deterministic environment?,,1,1,,,,CC BY-SA 4.0 22840,2,,22837,8/3/2020 8:29,,3,,"

The game of TIC-TAC-TOE can be modelled as a non-deterministic Markov decision process (MDP) if, and only if:

  • The opponent is considered part of the environment. This is a reasonable approach when the goal is to solve playing against a specific opponent.

  • The opponent is using a stochastic policy. Stochastic policies are a generalisation that include deterministic policies as a special case, so this is a reasonable default assumption.

An action will lead to a known state of the game, and an agent has full knowledge of the board and of its opponent's past moves.

Whilst this is true, the next state and reward as observed by an agent may not be due to the position it plays into (with the exception being if it wins or draws on that move), but to the position after the opponent plays.

It is also possible to frame TIC-TAC-TOE as a partially observed MDP (POMDP) if you consider the opponent to not have a fixed policy, but to be reacting to play so far, perhaps even learning from past games. In that case, the internal state of the opponent is the unknown part of the state. In standard game playing engines and in games of perfect information, this is resolved by assuming the opponent will make the best possible (or rational) move, which can be determined using a search process such as minimax. When there is imperfect information, such as in poker, it becomes much harder to allow for an opponent's action.

",1847,,1847,,8/3/2020 14:59,8/3/2020 14:59,,,,2,,,,CC BY-SA 4.0 22841,2,,7875,8/3/2020 8:45,,1,,"

Yes, it must be taken seriously. There are two main reasons:

  1. There is no sharp argument or no-go theorem against the existence of a singularity. It's unclear how fast the singularity could develop, but many authors give a non-zero probability to this event (see this reference; it contains different points of view on the singularity by leading experts).
  2. The consequences of a singularity would be dramatic. So even if the perceived probability in point 1 is very small, it's worth studying possible mitigation routes.
",16363,,,,,8/3/2020 8:45,,,,0,,,,CC BY-SA 4.0 22842,1,,,8/3/2020 8:45,,0,26,"

Let's say I have time-series data which is a bunch of observations that occur at different timestamps and intervals. For example, my observations come from a camera located at a traffic intersection. It only records when something occurs, like a car passing, a pedestrian crossing, etc... But otherwise it doesn't record information.

I want to produce an LSTM NN (or some other memory-based NN for time-series data), but since my features don't occur at even time intervals, I am not sure how having a memory would help. For example, let's consider two sequences:

Scenario 1:

  • At 1PM, I recorded a car passing.
  • At 1:05 PM, some people cross.
  • At 1:50 PM, some more people cross.
  • At 2PM, another car passes.

Scenario 2:

  • At 1 PM a car passes
  • At 2 PM a car passes

In the first scenario, the last car passed 3 observations ago. In the second scenario, the last car passed 1 observation ago. Yet in both scenarios, the last car passed 1 hour ago. I am afraid that any model would treat the last car passing in scenario 1 as 4 time periods ago and the last car passing in scenario 2 as 1 time period ago, even though, in reality, the time difference is the same. My hypothesis is that the time difference is a very important feature, probably more so than the intermediate activity between the two cars passing. In other words, knowing that the last car passed 1 hour ago is equally or likely more important than knowing that there were some people crossing in the last hour. With that said, knowing that people crossed is important too, so I can't just remove that feature.

Another example of my issue can be seen below:

Scenario 1

  • 1PM Car passes
  • 2PM Car passes

Scenario 2

  • 1PM Car passes
  • 10PM Car passes

Once again, in my dataset, these would be treated as adjacent observations, but in reality, the time gap is vastly different, and thus the two scenarios should be viewed as very dissimilar.

What are some ways to solve these issues?

  1. I've thought of just expanding the data set by creating a row for every possible time stamp, but I don't think this is the right choice as it would make my dataset humongous and most rows would have 0s across the board. I have observations that occur in microseconds so it would just become a very sparse dataset.
  2. It would be nice to include the time difference as a feature, but I am not sure if there's a way to include a dynamic feature in a dataset. For example, in the first scenario, at 1:05 PM, the 1 PM observation needs a feature that says it occurred 5 minutes ago. But at 1:50 PM, that feature needs to be changed to say it occurred 50 minutes ago, and then at 2 PM, that feature needs to say that it occurred 1 hour ago.
  3. Would the solution be to just give the NN the raw data and not worry about it? When building an NN for word prediction, I guess that if you give the model enough data, it'll learn the language even if the relevant word happened 10 paragraphs ago... However, I am not sure if there are enough examples of the exact same sequences (even with the amount of data) for it to obtain the predictability I want.

Any ideas on ways to solve this problem while keeping in mind that the goal is to build a NN? Another way to think about it is, the time when a data point occurred relative to when the prediction will be made, in my situation, is a crucial piece of information for prediction.

",29801,,,,,8/3/2020 8:45,What are some solutions for dealing with time series data that are recorded at uneven intervals?,,0,3,,,,CC BY-SA 4.0 22843,1,,,8/3/2020 8:50,,1,93,"

In the label propagation algorithm in section 3.2.3, we know the labels of some nodes and we want to predict the labels for the rest of the nodes, whose labels we don't know. The update formula for this is the following: $$F(t+1) = \alpha SF(t) + (1-\alpha)Y $$ where $F(t)$ is the predicted label from timestep $t$, $S$ can be considered as an adjacency matrix, and $Y$ holds the labels for both the unlabeled and the labeled data. In the case of labeled data, we initialize $Y$ with the ground truth, and for the unlabeled data, we randomly initialize their labels and assign them to $Y$. Now, the most problematic part is, I think, the $Y$ matrix. Since I do not know the labels of some nodes, we initialize them with some random value and keep $Y$ constant throughout this iterative process. We can calculate the optimal value of $F$ directly using: $$F^{*} = (I - \alpha S)^{-1}Y$$ But my question is, if we keep $Y$ constant (assigning random numbers to unknown nodes as labels), what kind of sense does it make?
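
A minimal numpy sketch of the update in the question, just to show the mechanics ($S$, $Y$ and $\alpha$ are assumed given; here they are tiny placeholders, with the unknown rows of $Y$ initialised arbitrarily, which is exactly the point the question is asking about):

import numpy as np

n, c, alpha = 6, 2, 0.9
W = np.random.rand(n, n); W = (W + W.T) / 2; np.fill_diagonal(W, 0)   # placeholder similarity graph
D_inv_sqrt = np.diag(1 / np.sqrt(W.sum(axis=1)))
S = D_inv_sqrt @ W @ D_inv_sqrt                                       # symmetrically normalised adjacency

Y = np.zeros((n, c))
Y[0, 0] = 1; Y[1, 1] = 1                 # ground-truth labels for the known nodes
Y[2:] = np.random.rand(n - 2, c)         # arbitrary values for the unknown nodes

F = Y.copy()
for _ in range(100):
    F = alpha * S @ F + (1 - alpha) * Y  # F(t+1) = alpha * S * F(t) + (1 - alpha) * Y

F_closed = np.linalg.inv(np.eye(n) - alpha * S) @ Y                   # F* = (I - alpha*S)^{-1} Y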

",28048,,,user9947,8/5/2020 17:55,8/5/2020 17:55,How to make sense of label propagation formula in graph neural networks?,,0,3,,,,CC BY-SA 4.0 22844,1,,,8/3/2020 9:28,,5,1066,"

I'm trying to train and use a neural network to detect a specific word in an audio file. The input of the neural network is an audio clip of 2-3 seconds' duration, and the neural network must determine whether the input audio (the voice of a person) contains the word "hello" or not.

I do not know what kind of network to use. I used the SOM network, but I did not get the desired result. My training data contains a large number of voices that contain the word "hello".

Is there any Python code for this problem?

",23216,,2444,,8/4/2020 12:15,1/27/2021 9:02,How can I find a specific word in an audio file?,,1,3,,,,CC BY-SA 4.0 22845,2,,21500,8/3/2020 10:00,,1,,"

The route/trajectory followed by the optimization algorithm basically depends on your dataset and the loss function. However, what really matters for the final accuracy performance is the final point that the trajectory converges to.

",32621,,,,,8/3/2020 10:00,,,,0,,,,CC BY-SA 4.0 22846,2,,16922,8/3/2020 10:48,,0,,"

If I understood well you have 2 questions.

  • How to get the bounding box given the network output
  • What Smooth L1 loss is

The answer to your first question lies in equation (2) in section 3.2.1 of the Faster R-CNN paper. As with all anchor-based object detectors (Faster R-CNN, YOLOv3, EfficientNets, FPN, ...), the regression output from the network is not the bounding box coordinates. The regression output predicts the shift of the predicted bounding box with respect to the selected anchor (all of these networks use more than 1 anchor per location; check section 3.1.1 of the paper).

So, basically, what your network predicts is $t_x, t_y, t_w, t_h$:
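
For reference, equation (2) of the paper parameterises these targets relative to the anchor as

$$t_x = (x - x_a)/w_a, \quad t_y = (y - y_a)/h_a, \quad t_w = \log(w/w_a), \quad t_h = \log(h/h_a)$$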

The bounding box coordinates are $x, y, w, h$, and the anchor coordinates are $x_a, y_a, w_a, h_a$. So, in order to compute $x, y, w, h$ from $t_x, t_y, t_w, t_h$, you just have to invert the equations above. However, I think you could gain more intuition about it if you take your time and read the whole of section 3.1 of the paper. I know it is sometimes a pain, but you will grasp the high-level concept.

With regard to your second question: yes, the loss is computed with the output from the network and the "coded" ground truth, meaning you compute the loss with the parameters $t$ (predicted) against $t^*$ (coded ground truth), instead of computing the loss with the real coordinates of the bounding boxes (the decoded output from the network). For the equation of the Smooth L1 loss, check this wonderful documentation.
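
For completeness, the Smooth L1 function used there is

$$\text{smooth}_{L_1}(x) = \begin{cases} 0.5x^2 & \text{if } |x| < 1 \\ |x| - 0.5 & \text{otherwise} \end{cases}$$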

",26882,,26882,,8/3/2020 10:53,8/3/2020 10:53,,,,0,,,,CC BY-SA 4.0 22848,1,,,8/3/2020 13:04,,1,48,"

I'm training a neural network on some input data. I know that loss increasing may be related to:

  • overfitting, if the loss increases on test data (while still decreases on training data)
  • oscillations near the optimal point, if the learning rate is too big

However, I find that, while for some input data the net makes good predictions, for other data the loss continues to increase, even if I only train on one data point and the learning rate is fairly low. To me, it's quite strange that, when training on only one point, the loss continues to increase rather than decrease; in fact, the only reason I can find for this is a big learning rate.

Can you think about some other reason?

",32915,,2444,,8/6/2020 11:18,8/6/2020 11:18,Why would the loss increase on a single fixed input?,,0,1,,,,CC BY-SA 4.0 22849,2,,22832,8/3/2020 14:08,,0,,"

You could use scikit-learn's MultiLabelBinarizer. It's essentially the multi-label equivalent of one-hot encoding. For each movie, create a vector of zeros, where each zero is associated with a particular actor. If an actor is in that movie, change their zero to a one. In the context of a neural network, think of it as each actor having their own input neuron, which will fire only if they are in that movie.
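
A minimal sketch with scikit-learn (the actor names are made up for illustration):

from sklearn.preprocessing import MultiLabelBinarizer

casts = [
    ["Actor A", "Actor B"],
    ["Actor B", "Actor C", "Actor D"],
    ["Actor A"],
]
mlb = MultiLabelBinarizer()
X_cast = mlb.fit_transform(casts)
print(mlb.classes_)   # ['Actor A' 'Actor B' 'Actor C' 'Actor D']
print(X_cast)         # one row per movie, one column per actor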

The caveat is that to represent all actors, you'd need a ridiculously long vector. In such cases, it's often sufficient to only look at the most common, say, 100 and ignore the rest. It intuitively makes sense that having a big actor who's done a lot of movies says more about the quality of a movie than whoever's playing Unimportant Bystander #3. This is actually how natural language processing represents words in the English language - take the top n and ignore the rest.

",40060,,,,,8/3/2020 14:08,,,,0,,,,CC BY-SA 4.0 22851,1,22854,,8/3/2020 16:30,,6,3991,"

Generally speaking, is there a best-practice procedure to follow when trying to define a reward function for a reinforcement-learning agent? What common pitfalls are there when defining the reward function, and how should you avoid them? What information from your problem should you take into consideration when going about it?

Let us presume that our environment is a fully observable MDP.

",25560,,2444,,11/2/2020 21:47,1/20/2021 12:55,What are some best practices when trying to design a reward function?,,2,0,,,,CC BY-SA 4.0 22853,1,24680,,8/3/2020 19:10,,1,76,"

I have an ensemble of 231 time series, the largest among them being 14 lines long. The task at hand is to try to predict these time series. But I'm finding this difficult due to the very small size of the data. Any suggestions about what algorithm to use? I'm thinking about going for a hidden Markov model, but I don't know if that's a wise choice.

",38261,,,,,11/17/2020 20:55,"How to deal with very, very small time-series?",,1,5,,,,CC BY-SA 4.0 22854,2,,22851,8/3/2020 22:13,,6,,"

Designing reward functions

Designing a reward function is sometimes straightforward, if you have knowledge of the problem. For example, consider the game of chess. You know that you have three outcomes: win (good), loss (bad), or draw (neutral). So, you could reward the agent with $+1$ if it wins the game, $-1$ if it loses, and $0$ if it draws (or for any other situation).

However, in certain cases, the specification of the reward function can be a difficult task [1, 2, 3] because there are many (often unknown) factors that could affect the performance of the RL agent. For example, consider the driving task, i.e. you want to teach an agent to drive e.g. a car. In this scenario, there are so many factors that affect the behavior of a driver. How can we incorporate and combine these factors in a reward function? How do we deal with unknown factors?

So, often, designing a reward function is a trial-and-error and engineering process (so there is no magic formula that tells you how to design a reward function in all cases). More precisely, you define an initial reward function based on your knowledge of the problem, you observe how the agent performs, then tweak the reward function to achieve greater performance (for example, in terms of observable behavior, so not in terms of the collected reward; otherwise, this would be an easy problem: you could just design a reward function that gives infinite reward to the agent in all situations!). For example, if you have trained an RL agent to play chess, maybe you observed that the agent took a lot of time to converge (i.e. find the best policy to play the game), so you could design a new reward function that penalizes the agent for every non-win move (maybe it will hurry up!).

Of course, this trial-and-error approach is not ideal, and it can sometimes be impractical (because maybe it takes a lot of time to train the agent) and lead to misspecified reward signals.

Misspecification of rewards

It is well known that the misspecification of the reward function can have unintended and even dangerous consequences [5]. To overcome the misspecification of rewards or improve the reward functions, you have some options, such as

  1. Learning from demonstrations (aka apprenticeship learning), i.e. do not specify the reward function directly, but let the RL agent imitate another agent's behavior, either to

    • learn the policy directly (known as imitation learning [8]), or
    • learn a reward function first to later learn the policy (known as inverse reinforcement learning [1] or sometimes known as reward learning)
  2. Incorporate human feedback [9] in the RL algorithms (in an interactive manner)

  3. Transfer the information in the policy learned in another but similar environment to your environment (i.e. use some kind of transfer learning for RL [10])

Of course, these solutions or approaches can also have their shortcomings. For example, interactive human feedback can be tedious.

Reward shaping

Regarding the common pitfalls, although reward shaping (i.e. augment the natural reward function with more rewards) is often suggested as a way to improve the convergence of RL algorithms, [4] states that reward shaping (and progress estimators) should be used cautiously. If you want to perform reward shaping, you should probably be using potential-based reward shaping (which is guaranteed not to change the optimal policy).

Further reading

The MathWorks' article Define Reward Signals discusses continuous and discrete reward functions (this is also discussed in [4]), and addresses some of their advantages and disadvantages.

Last but not least, the 2nd edition of the RL bible contains a section (17.4 Designing Reward Signals) completely dedicated to this topic.

Another similar question was also asked here.

",2444,,2444,,1/20/2021 12:55,1/20/2021 12:55,,,,0,,,,CC BY-SA 4.0 22855,2,,22851,8/3/2020 22:14,,5,,"

If your objective is for the agent to attain some goal (say, reaching a target), then a valid reward function is to assign a reward of 1 when the goal is attained and 0 otherwise. The problem with this reward function is that it's too sparse, meaning the agent has little guidance on how to modify their behavior to become better at attaining said goal, especially if the goal is hard to attain through a random policy in the first place (which is probably roughly what the agent starts with).

The practice of modifying the reward function to guide the learning agent is called reward shaping.

A good start is Policy invariance under reward transformations: Theory and application to reward shaping by Ng et al. The idea is to create a reward potential (see Theorem 1) on top of the existing reward. This reward potential should be an approximation of the true value of a given state. For instance, if you have a gridworld scenario where the goal is for the agent to reach some target square, you could create a reward potential based on the Manhattan distance to this target (without accounting for obstacles), which is an approximation to the true value of a given position.

Intuitively, creating a reward potential that is close to the true values makes the job easier for the learning agent because it reduces the disadvantage of being myopic, and the agent more quickly gets closer to a "somewhat good" policy from which it is easier to crawl toward the optimal policy.

Moreover, reward potentials have the property that they are consistent with the optimal policy. That is, the optimal policy to the true problem will not become suboptimal under the new, modified problem (with the new reward function).

",3373,,,,,8/3/2020 22:14,,,,0,,,,CC BY-SA 4.0 22856,1,,,8/4/2020 5:59,,1,61,"

I am trying to build an AI that needs to have some information about the past states as well. Therefore, LSTMs are suitable for this.

Now, I want to know: for a problem/game like Breakout, where we require previous states as well, does A3C perform better than TD3, given that TD3 does not have an LSTM?

Or, even without an LSTM, should TD3 perform better than A3C, despite the fact that A3C has an LSTM in it?

",40051,,2444,,8/4/2020 11:44,8/4/2020 11:44,"When past states contain useful information, does A3C perform better than TD3, given that TD3 does not use an LSTM?",,0,0,,,,CC BY-SA 4.0 22857,1,22872,,8/4/2020 7:10,,6,304,"

I would like to understand the difference between the standard policy gradient theorem and the deterministic policy gradient theorem. These two theorems are quite different, although the only difference is whether the policy function is deterministic or stochastic. I have summarized the relevant steps of the theorems below. The policy function is $\pi$, which has parameters $\theta$.

Standard Policy Gradient $$ \begin{aligned} \dfrac{\partial V}{\partial \theta} &= \dfrac{\partial}{\partial \theta} \left[ \sum_a \pi(a|s) Q(a,s) \right] \\ &= \sum_a \left[ \dfrac{\partial \pi(a|s)}{\partial \theta} Q(a,s) + \pi(a|s) \dfrac{\partial Q(a,s)}{\partial \theta} \right] \\ &= \sum_a \left[ \dfrac{\partial \pi(a|s)}{\partial \theta} Q(a,s) + \pi(a|s) \dfrac{\partial}{\partial \theta} \left[ R + \sum_{s'} \gamma p(s'|s,a) V(s') \right] \right] \\ &= \sum_a \left[ \dfrac{\partial \pi(a|s)}{\partial \theta} Q(a,s) + \pi(a|s) \gamma \sum_{s'} p(s'|s,a) \dfrac{\partial V(s') }{\partial \theta} \right] \end{aligned} $$ When one now expands next period's value function $V(s')$ again, one can eventually reach the final policy gradient: $$ \dfrac{\partial J}{\partial \theta} = \sum_s \rho(s) \sum_a \dfrac{\partial \pi(a|s)}{\partial \theta} Q(s,a) $$ with $\rho$ being the stationary distribution. What I find particularly interesting is that there is no derivative of $R$ with respect to $\theta$, nor of the probability distribution $p(s'|s,a)$ with respect to $\theta$. The derivation of the deterministic policy gradient theorem is different:

Deterministic Policy Gradient Theorem $$ \begin{aligned} \dfrac{\partial V}{\partial \theta} &= \dfrac{\partial}{\partial \theta} Q(\pi(s),s) \\ &= \dfrac{\partial}{\partial \theta} \left[ R(s, \pi(s)) + \gamma \sum_{s'} p(s'|a,s) V(s') \right] \\ &= \dfrac{\partial R(s, a)}{\partial a}\dfrac{\partial \pi(s)}{\partial \theta} + \dfrac{\partial}{\partial \theta} \left[\gamma \sum_{s'} p(s'|a,s) V(s') \right] \\ &= \dfrac{\partial R(s, a)}{\partial a}\dfrac{\partial \pi(s)}{\partial \theta} + \gamma \sum_{s'} \left[p(s'|\pi(s),s) \dfrac{\partial V(s')}{\partial \theta} + \dfrac{\partial \pi(s)}{\partial \theta} \dfrac{\partial p(s'|s,a)}{\partial a} V(s') \right] \\ &= \dfrac{\partial \pi(s)}{\partial \theta} \dfrac{\partial}{\partial a} \left[ R(s, a) + \gamma \sum_{s'} p(s'|s,a) V(s') \right] + \gamma \sum_{s'} p(s'|\pi(s),s) \dfrac{\partial V(s')}{\partial \theta} \\ &= \dfrac{\partial \pi(s)}{\partial \theta} \dfrac{\partial Q(s, a)}{\partial a} + \gamma \sum_{s'} p(s'|\pi(s),s) \dfrac{\partial V(s')}{\partial \theta} \\ \end{aligned} $$ Again, one can obtain the final policy gradient by expanding next period's value function. The policy gradient is: $$ \dfrac{\partial J}{\partial \theta} = \sum_s \rho(s) \dfrac{\partial \pi(s)}{\partial \theta} \dfrac{\partial Q(s,a)}{\partial a} $$ In contrast to the standard policy gradient, the equations contain derivatives of the reward function $R$ and the conditional probability $p(s'|s,a)$ with respect to $a$.

Question

Why do the two theorems differ in their treatment of the derivatives of $R$ and the conditional probability? Does determinism in the policy function make such a difference for the derivatives?

",34041,,2444,,5/11/2022 8:01,5/11/2022 8:01,Why do the standard and deterministic Policy Gradient Theorems differ in their treatment of the derivatives of $R$ and the conditional probability?,,1,2,,,,CC BY-SA 4.0 22858,1,,,8/4/2020 8:34,,3,1389,"

I've been reading a lot lately about self-supervised learning and I didn't understand very well how to generate the desired label for a given image.

Let's say that I have an image classification task, and I have very little labeled data.

How can I generate the target label from the other data in the dataset?

",38093,,2444,,11/20/2020 12:56,11/20/2020 12:56,How to generate labels for self-supervised training?,,1,3,,,,CC BY-SA 4.0 22859,1,,,8/4/2020 11:28,,0,134,"

What if I have some data, let's say I'm trying to answer whether education level and IQ affect earnings, and I want to analyze this data and put it into a regression model to predict earnings based on the IQ and education level. My confusion is, what if the data is not linear or polynomial? What if it's a mess but there are still patterns that the linear plane algorithm can't capture? How do I figure out if plotting all of the independent variables will form a line or a polynomial curve like here?

I mean, with one dependent and one independent variable it's easy because you can plot it and see, but in a situation with multiple independent variables... how do I figure out if the relationship is linear or something like this? How do I figure out if I should use a regression model?

Let's say I want to predict a store's daily revenue based on the day of the week, weather and the number of people arrived in the city. My data would look something like this:

+-----------+---------+----------------+---------+
| DAY       | WEATHER | PEOPLE ARRIVED | REVENUE |
+-----------+---------+----------------+---------+
| Monday    | Sunny   | 1115           | $500    |
+-----------+---------+----------------+---------+
| Tuesday   | Cloudy  | 808            | $250    |
+-----------+---------+----------------+---------+
| Wednesday | Sunny   | 450            | $300    |
+-----------+---------+----------------+---------+

I'm a bit confused about what ML algorithm I should use in such a scenario. I can represent the days of the week as (Monday - 1, Tuesday - 2, Wednesday - 3, etc.) and the weather as (Sunny - 1, Cloudy - 2, Normal - 3, etc.), but would a regression model work? I'm skeptical, because I'm not sure if there's a linear relationship between the variables, and I'm not sure if a hyperplane can create an accurate representation of what's going on.

",32539,,32539,,8/4/2020 12:44,1/23/2023 2:22,What ML algorithm should I use that suits this data?,,3,0,,,,CC BY-SA 4.0 22860,2,,22858,8/4/2020 11:38,,6,,"

How can I generate the target label from the other data in the dataset?

If you are asking how you can create the learning signal in SSL, when given an unlabelled dataset, for learning representations of these unlabelled data, then there is no general answer. The answer depends on the type of data that you have (which can be e.g. textual or visual), and on which features you think you want to learn or can be learned from your unlabelled data. This paper and other answers to this question provide some examples of how that can be done (depending on the type of data). Below, I also provide an example.

Let me try to explain this more in detail.

Let's assume that you have both

  1. an unlabelled dataset $U = \{ u_i \}_{i=1}^m$ and

  2. a labelled dataset $D = \{(x_i, y_i) \}_{i=1}^n$

where we may have $m \gg n$ (although this is not a strict requirement), i.e. you may have a lot more unlabelled data than labelled data (this can easily be the case, given that, in general, manual data annotation is expensive/laborious). Let's say that your ultimate task is to perform object recognition (or classification). Let's call this task the downstream task. So, you may think that $x_i$ and $u_i$ are images and $y_i$ are labels, like "cat" or "dog" (let's say that you want to differentiate between cats and dogs).

You want to solve this downstream task by supervised learning with $D$. However, given that your labeled dataset is not big enough, you may think that training a neural network from scratch (i.e. by randomly initializing its weights) with $D$ may not lead to good performance. So, you think that it could be useful to start training from a pre-trained model that already contains useful representations of data similar to your labeled data, i.e. to perform transfer learning. To pre-train such a model, you could use SSL.

So, to solve your downstream task with SSL, there are 2 different steps

  1. Self-supervised learning (SSL): learn representations of your images $u_i \in U$ by training a neural network $M$ with $U$ to solve a so-called pretext (or auxiliary) task; there are many pretext tasks: you can find many examples here, here and here (see example below too);

  2. Supervised learning (SL) by transfer learning: fine-tune $M$ with $D$ (the labeled dataset), in a supervised way; this task is known as downstream task (as stated above)

In this process, there are 2 different labels.

  • In step 1, you have the labels that are generated automatically. But how are these labels generated? As I said, there are many ways. Let me describe one way (among many others!). Let's say that your unlabelled dataset $U$ contains high-resolution images (i.e. $u_i \in U$ are high-resolution images), then you could define your pre-text task as follows. You lower the resolution of your high-resolution images to create other images. Let $v_i$ be the low-resolution image created from the high-resolution image $u_i \in U$, then the training pair to your neural network $M$ is $(v_i, u_i) \in U'$, where $u_i$ is the label (which is the original high-resolution image) and $U'$ the labeled dataset automatically generated (i.e. with the algorithm I've just explained).

    So, these labels $u_i$ (high-resolution images) are semantically different than $y_i$ ("cat" or "dog") in the pairs $(x_i, y_i) \in D$. They are different because, here, we want to learn representations and not to perform object recognition/classification: the idea is that, by solving this pre-text task, your final trained neural network, should have learned features of the images in the unlabelled data (i.e. representation learning). These learned features can then be used to bootstrap training in the downstream task.

  • In step 2, you use the labeled dataset $D$, which has been typically annotated (or labeled) by a human. As stated above, this dataset contains pairs $(x_i, y_i)$, where $y_i$ is, for example, the label "cat" or "dog".

    In this step, the pre-trained model $M$, with the SSL technique, can be fine-tuned with $D$ in a supervised fashion. Given that we start with a pre-trained model $M$, we are effectively performing transfer learning.

Note that SSL can also refer to something (slightly) different than what has been explained in this answer. See my other answer for more details. Moreover, note that you can perform representation learning with SSL without necessarily solving a downstream task later, which may also not be an SL task (in the example above, I've described a downstream task that is an SL task only for simplicity).

If this answer is still unclear, maybe you should have a look at existing implementations of SSL techniques (such as this) for more inspiration.
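
For instance, here is a minimal sketch of the pretext-label generation described above (pairing a downscaled image with its original high-resolution version); the images are random NumPy arrays standing in for real unlabelled data, and the downscaling is a crude subsampling just to keep the example dependency-free:

    import numpy as np

    def make_pretext_pair(image, factor=2):
        """Create a (low-resolution input, high-resolution label) training pair.

        The 'label' is generated automatically from the unlabelled image itself,
        which is the whole point of the pretext task: no human annotation needed.
        """
        low_res = image[::factor, ::factor]  # crude downsampling by subsampling
        return low_res, image                # (input, automatically generated label)

    # Build the automatically labelled dataset U' from the unlabelled dataset U.
    U = [np.random.rand(64, 64) for _ in range(8)]   # stand-ins for real images
    U_prime = [make_pretext_pair(u) for u in U]

    x, y = U_prime[0]
    print(x.shape, y.shape)  # (32, 32) (64, 64)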

",2444,,2444,,11/20/2020 12:47,11/20/2020 12:47,,,,0,,,,CC BY-SA 4.0 22861,1,,,8/4/2020 15:44,,1,28,"

I am a programmer, but just now attempting to enter the world of ML. I'm eyeballing a potential project/problem related to foosball.

Pro foosball is a thing, believe it or not, and I'm wondering if I can use decades' worth of game footage to determine where defensive holes are most likely to be.

The way shooting works in pro foosball is that you front-pin the ball and walk it back and forth in front of the goal. The defense, meanwhile, is attempting to randomly move two men in front of the goal. Of course, our human brains are not truly random, and this is what I'd like ML to help me understand and exploit.

Questions like, if I walk the ball left, then step right, historically where is the open hole likely to be?

If you'd like to better understand the nature of shooting, here is video of pro foosball: https://www.youtube.com/watch?v=uOdnqmwOQhA&t=16s

So what ML topics should I research and what strategies and tools do you recommend for creating such a model?

",40088,,,,,8/4/2020 15:44,Can I use ML to discover via videos the best place to shoot in foosball?,,0,0,,,,CC BY-SA 4.0 22862,2,,20778,8/4/2020 15:56,,0,,"

Your list is complete for what is considered 'popular' by most practitioners who apply AI for stock trading. Supervised learning and rule learning are at the top for accuracy. There are more academic papers published on classifiers than on regression approaches; classifiers are typically more accurate than regressors.

",5763,,,,,8/4/2020 15:56,,,,0,,,,CC BY-SA 4.0 22863,1,,,8/4/2020 17:00,,1,46,"

I am trying to get a better grasp of how object detection works. I (almost) completely understand the concept behind RPNs. However, I am a little bit confused with the selective search algorithm part. This algorithm does not really learn anything, as far as I understand.

So, for example, when I have an image containing people (even though my network does not need to classify these), will the selective search still propose these people to my network?

Of course, my CNN has not learned to classify a human and will output a very low probability for every class it did learn, and thus the human will not get a bounding box.

Also, in further iterations of the R-CNN model, they proposed using regressors to improve the bounding box.

Does this mean that this part of the model got the CNNs feature maps, and, based on this, learned to output a bounding box (this way smaller instances of a detected object would get a smaller bounding box)?

So, in this first iteration, they probably did not need bounding boxes in the training data (since there was no way to learn the size of the bounding boxes and thus no need to find a loss function for this problem)?

Lastly, I understand that the selective search algorithm is an improvement on the sliding window algorithm. It tries to have a high recall, so having false positives is not bad, as long as we have all the true positives. Again, I do not seem to understand HOW this algorithm knows when it has the object it needs without really learning. Any intuitive explanation or visual (I am a visual learner first and foremost) of how this algorithm works is greatly appreciated.

",34359,,2444,,1/19/2021 17:25,1/19/2021 17:25,Does the selective search algorithm in object detection learn?,,0,1,,,,CC BY-SA 4.0 22865,2,,20971,8/4/2020 20:39,,0,,"

1. What is typically meant by 3D-face recognition? We are usually extracting the face encoding from 2D-images, right?

Yes. The goal is to reconstruct the three-dimensional shape, as well as the texture of a face from a single or multiple images of that person.

In recent years, "the performance of 2D face recognition algorithms has significantly increased with the use of deep neural networks and the use of large-scale labeled training data" (Deep 3D Face Identification), with the latter still being a notable problem because there are so few such datasets and they are usually needed to better measure and hence train the deep networks. (The referenced paper gives you a nice overview.) Even with enough data, occlusions, lens distortions, possible variations of expression, etc., make the problem tough. So many other tricks and augmentation techniques are needed.

2. How can 3D face recognition used for liveness detection?

3D face recognition is used to differentiate 2D spoofs from actually present people. Using multiple time-delayed images, one could possibly also track whether the face moves like a living person, or whether it is static or stretches weirdly like a mask, puppet, or moving display.

Nonetheless, often even advanced 2D image-based techniques fall flat in real-world applications because of the used cameras. They might not have enough resolution to recognize if the input is a video of a face on a high-resolution display as explained here.

",40095,,,,,8/4/2020 20:39,,,,0,,,,CC BY-SA 4.0 22869,1,,,8/5/2020 2:17,,0,59,"

Let's say that I want to classify whether a document is a legal document or not. I have a list of keywords that appear only in legal documents.

What is the proper way or algorithm to calculate probability based on this list?

",40100,,40100,,8/5/2020 2:26,12/25/2022 20:02,How do I classify whether a document is legal or not given a set of keywords that appear only in legal documents?,,1,1,,,,CC BY-SA 4.0 22872,2,,22857,8/5/2020 10:23,,5,,"

In the policy gradient theorem, we don't need to write $r$ as a function of $a$ because the only time we explicitly 'see' $r$ is when we are taking the expectation with respect to the policy. For the first couple lines of the PG theorem we have \begin{align} \nabla v_\pi(s) &= \nabla \left[ \sum_a \pi(a|s) q_\pi (s,a) \right] \;, \\ &= \sum_a \left[ \nabla \pi(a|s) q_\pi(s,a) + \pi(a|s) \nabla\sum_{s',r} p(s',r|s,a)(r+ v_\pi(s')) \right] \; ; \end{align} you can see that we are taking expectation of $r$ with respect to the policy, so we don't need to write something like $r(s,\pi(a|s))$ (especially because this notation doesn't really make sense for a stochastic policy). This is why we don't need to take the derivative of $r$ with respect to the policy parameters. Now, the next line of the PG theorem is $$\nabla v_\pi(s) = \sum_a \left[ \nabla \pi(a|s) q_\pi(s,a) + \pi(a|s)\sum_{s'} p(s'|s,a) \nabla v_\pi(s') \right] \; ;$$ so now we have an equation similar to the bellman equation in terms of the $\nabla v_\pi(s)$'s, so we can unroll this repeatedly meaning we never have to take an explicit derivative of the value function.

For the deterministic gradient, this is a bit different. In general we have $$v_\pi(s) = \mathbb{E}_\pi[Q(s,a)] = \sum_a \pi(a|s) Q(s,a)\;,$$ so for a deterministic policy (denoted by $\pi(s)$ which represents the action taken in state $s$) this becomes $$v_\pi(s) = Q(s,\pi(s))$$ because the deterministic policy has 0 probability for all actions except one, where it has probability one.

Now, in the deterministic policy gradient theorem we can write $$\nabla v_\pi(s) = \nabla Q(s,\pi(s)) = \nabla \left(r(s, \pi(s)) + \sum_{s'} p(s'|s,a)v(s') \right)\;.$$

We have to write $r$ explicitly as a function of $s,a$ now because we are not taking an expectation with respect to the actions because we have a deterministic policy. Now, if you replace where I have written $\nabla$ with the notation you have used for the derivatives you will arrive at the same result and you'll see why you need to use the chain rule, which I believe you understand because your question was more why don't we use the chain rule for the normal policy gradient, which I have hopefully explained -- it is essentially because of how an expectation over the action space works with a deterministic policy vs. a stochastic policy.

Another way to think of this is as follows -- the term you're concerned with is obtained by expanding $\nabla q_\pi(s,a) = \nabla \sum_{s', r}p(s',r|s,a)(r(s,a) + v_\pi(s'))$. Because, by definition of the $Q$ function, we have conditioned on knowing $a,s$ then $a$ is completely independent of the policy in this scenario - we could even condition on an action that the policy would have 0 probability for - thus the derivative of $r$ with respect to the policy parameters is 0.

However, in the deterministic policy gradient we are taking $\nabla q_\pi(s, \pi(s)) = \nabla \left(r(s, \pi(s)) + \sum_{s'} p(s'|s,a) v_\pi(s')\right)$ -- here $r$ clearly depends on the policy parameters because the action taken was the deterministic action given by the policy in the state $s$, thus the derivative wrt the policy parameters is not necessarily 0!
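
To make this concrete, here is a minimal PyTorch sketch (not from the original papers; the tiny networks and dimensions are arbitrary) of a DDPG-style deterministic policy gradient update, where autograd applies exactly the chain rule $\frac{\partial Q}{\partial a} \frac{\partial \pi}{\partial \theta}$ discussed above:

    import torch
    import torch.nn as nn

    state_dim, action_dim = 4, 2
    policy = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, action_dim))
    q_net = nn.Sequential(nn.Linear(state_dim + action_dim, 32), nn.ReLU(), nn.Linear(32, 1))
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)   # only the policy is updated here

    s = torch.randn(16, state_dim)            # a batch of states
    a = policy(s)                             # deterministic action a = pi(s)
    q = q_net(torch.cat([s, a], dim=1))       # Q(s, pi(s))

    actor_loss = -q.mean()                    # maximise Q  <=>  minimise -Q
    opt.zero_grad()
    actor_loss.backward()                     # gradient flows from Q through a into theta
    opt.step()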

",36821,,36821,,8/6/2020 10:01,8/6/2020 10:01,,,,8,,,,CC BY-SA 4.0 22875,2,,22859,8/5/2020 15:42,,0,,"

A regression model will definitely work for this problem.

You only need to reshape the predictor variables (day, weather, people arrived) into a 1D array if you get an error. Otherwise, you can simply apply linear regression, SVM, etc., to get your output with good accuracy.
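
For example, here is a minimal scikit-learn sketch (assuming the data sits in a pandas DataFrame like the table in the question; the rows are just the ones shown there). One-hot encoding the categorical columns is usually safer than mapping them to 1, 2, 3, because it avoids imposing an artificial ordering:

    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.preprocessing import OneHotEncoder
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import Pipeline

    df = pd.DataFrame({
        "day": ["Monday", "Tuesday", "Wednesday"],
        "weather": ["Sunny", "Cloudy", "Sunny"],
        "people_arrived": [1115, 808, 450],
        "revenue": [500, 250, 300],
    })

    # One-hot encode the categorical columns; keep the numeric column as-is.
    pre = ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), ["day", "weather"])],
        remainder="passthrough",
    )
    model = Pipeline([("pre", pre), ("reg", LinearRegression())])
    model.fit(df[["day", "weather", "people_arrived"]], df["revenue"])
    print(model.predict(df[["day", "weather", "people_arrived"]]))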

",40109,,,,,8/5/2020 15:42,,,,0,,,,CC BY-SA 4.0 22876,2,,22810,8/5/2020 16:46,,3,,"

I'll answer in a couple of stages.

I feel somewhat lost as to what the input for the NN should look like.

Your choices boil down to two options, each with their own multitude of variants:

  1. Vector Representation: Your input is a vector of the same size as your vocabulary where the elements represent the tokens in the input example. The most basic version of this is a bag-of-words (BOW) encoding with a 1 for each word that occurs in the input example and a 0 otherwise. Some other variants are (normalized) word counts or TF-IDF values. With this representation padding will not be necessary as each example will be encoded as a vector of the same size as the vocabulary. However, it suffers from a variety of issues: the input is high-dimensional and very sparse making learning difficult (as you note), it does not encode word order, and the individual word representations have little (TF-IDF) to no (BOW, counts) semantic information. It also limits your NN architecture to a feed-forward network, as more "interesting" architectures such a RNNs, CNNs, and transformers assume a matrix-like input, described below.

  2. Matrix Representation: Here your input representation is a matrix with each row being a vector (i.e. embedding) representation of the token at that index in the input example. How you actually get the pretrained embeddings into the model depends on a number of implementation-specific factors, but this stackoverflow question shows how to load embeddings from gensim into PyTorch. Here padding is necessary because the input examples will have variable numbers of tokens. This stackoverflow answer shows how to add zero padding in PyTorch. This representation will be significantly better than the vector representation as it is relatively low-dimensional and non-sparse, it maintains word order, and using pretrained word-embeddings means your model will have access to semantic information. In fact, this last point leads to your next question.

Learning word embeddings creates vectors for words that are similar to each other syntax-wise, and I fail to see how that can be used to derive the weight/impact of each word on the target variable in my case.

Word embeddings are based on the assumptions of distributional semantics, the core tenet of which is often quoted as "a word is characterized by the company it keeps". That is, the meaning of a word is how it relates to other words. In the context of NLP, models can make better decisions because similar words are treated similarly from the get-go.

For example, say that articles about furry pets get a lot of likes (entirely plausible if you ask me). However, the mentions of furry pets in these articles will be varied, including words like "dog", "cat", "chinchilla", "poodle", "doggo", "good boy", etc. An input representation that treats these mentions as completely distinct (such as BOW) will need to learn individual correlations between each word and the number of likes (that's a lot of learning). A well-trained word embedding, on the other hand, will be able to immediately group these mentions together and learn general correlations between groups of similar words and likes. Fair warning, this is a very imprecise description of why word embeddings work, but I hope it gives you some intuitive understanding.

Finally, since you're doing regression, make sure you choose your objective function accordingly. Mean squared error would be my first try.
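
To tie these pieces together, here is a minimal PyTorch sketch of the matrix-representation route with mean pooling and an MSE objective; the random embedding weights and dimensions are placeholders, and in practice you would load the pretrained gensim vectors via nn.Embedding.from_pretrained as in the linked stackoverflow question:

    import torch
    import torch.nn as nn

    vocab_size, emb_dim, pad_idx = 10_000, 300, 0

    class LikesRegressor(nn.Module):
        def __init__(self):
            super().__init__()
            # Swap the random weights for pretrained vectors with
            # nn.Embedding.from_pretrained(...); kept random here for brevity.
            self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=pad_idx)
            self.head = nn.Linear(emb_dim, 1)

        def forward(self, token_ids):                  # (batch, seq_len), zero-padded
            vectors = self.emb(token_ids)              # (batch, seq_len, emb_dim)
            mask = (token_ids != pad_idx).unsqueeze(-1).float()
            pooled = (vectors * mask).sum(1) / mask.sum(1).clamp(min=1)
            return self.head(pooled).squeeze(-1)       # predicted number of likes

    model = LikesRegressor()
    batch = torch.randint(1, vocab_size, (8, 50))      # 8 padded articles of length 50
    loss = nn.MSELoss()(model(batch), torch.rand(8) * 1000)
    loss.backward()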

",37972,,,,,8/5/2020 16:46,,,,1,,,,CC BY-SA 4.0 22877,1,22881,,8/5/2020 17:35,,1,8295,"

I know it cost around $4.3 million to train, but how much computing power does it take to run the finished program? IBM's Watson chatbot AI only costs a few cents per chat message to use, and OpenAI Five seemed to run on a single gaming PC setup. So I'm wondering how much computing power is needed to run the finished AI program.

",40112,,,,,8/9/2020 16:30,How much computing power does it cost to run GPT-3?,,2,2,,8/10/2020 16:56,,CC BY-SA 4.0 22878,1,,,8/5/2020 19:37,,1,178,"

I am currently working on a public project for the National Weather Model. We are experimenting with using a recurrent neural network to replace the output of a quadratic formula that is in use. The aim of the experiment is to get a speedup in the computation by using a neural network to essentially mimic the output of the quadratic formula. We have achieved an accuracy of about +-.02 but would like to see that improve to +-.001 or so in order to make the outputs indiscernible from a usage standpoint. Despite changing or increasing the training data size, validation data size, number of layers, size of layers, optimizer, batch size, epoch number, normalizations, etc. we cannot seem to move past this level of accuracy. We have changed and tested every standard metric we can find on how to improve the model, but nothing improves the accuracy beyond that threshold.

The main question we have is whether Keras is rounding at some point between layers or has some limiting factor on the backend that limits the model's significant figures in the output. The training data resolution should allow for a finer level of accuracy, but, as stated before, whatever changes we make, the model cannot improve past what has been achieved. Any insight on what is holding the model back would be greatly appreciated and could help with applying this method elsewhere. The GitHub repository has a readme file explaining what is occurring in each file and how to run the model, as this is still a work in progress. I would be happy to dive deeper into any aspect of the model as well.

https://github.com/NOAA-OWP/t-route/tree/testing/src/lookup_routing

",40114,,40114,,8/7/2020 13:57,8/7/2020 13:57,Keras model accuracy not improving beyond threshold,,0,0,,,,CC BY-SA 4.0 22880,1,22885,,8/5/2020 21:09,,0,229,"

I am quite new to neural networks. I am trying to implement in Python a neural network having only one hidden layer with $N$ neurons and $1$ output layer.

The point is that I am analyzing time series and would like to use the output layer as the input of the next unit: by feeding the network with the input at time $t-1$ I obtain the output $O_{t-1}$ and, in the next step, I would like to use both the input at time $t$ and $O_{t-1}$, introducing a sort of auto-regression. I read that recurrent neural networks are suitable to address this issue.

Anyway I cannot imagine how to implement a network in Keras that involves multilayer recurrence: all the references I found are linked to using the output of a layer as input of the same layer in the next step. Instead, I would like to include the output of the last layer (the output layer) in the inputs of the first hidden layer.

",40115,,40115,,8/5/2020 22:30,8/5/2020 22:30,Is there a neural network that accepts both the current input and previous output?,,2,6,,,,CC BY-SA 4.0 22881,2,,22877,8/5/2020 21:35,,2,,"

I can't answer your question on exactly how much computing power you might need, but you'll need at least a small grid to run the biggest model, just looking at the memory requirements (175B parameters, so roughly 700 GB of memory at 32-bit precision). The biggest GPU has 48 GB of VRAM.
I've read that GPT-3 will come in eight sizes, 125M to 175B parameters. So, depending upon which one you run, you'll need more or less computing power and memory.
(https://lambdalabs.com/blog/demystifying-gpt-3/)
For an idea of the size of the smallest, "The smallest GPT-3 model is roughly the size of BERT-Base and RoBERTa-Base."
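
A quick back-of-the-envelope check of that memory figure (parameters times bytes per parameter, ignoring activations and other overhead):

    # 175B parameters at common numeric precisions, ignoring activations/overhead.
    n_params = 175e9
    for precision, bytes_per_param in [("fp32", 4), ("fp16", 2)]:
        print(f"{precision}: {n_params * bytes_per_param / 1e9:,.0f} GB")
    # fp32: 700 GB, fp16: 350 GB -- either way far beyond a single 48 GB GPU.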

",30100,,30100,,8/8/2020 14:26,8/8/2020 14:26,,,,0,,,,CC BY-SA 4.0 22883,2,,22880,8/5/2020 21:44,,1,,"

You want to look at recurrent neural networks.

",36821,,,,,8/5/2020 21:44,,,,0,,,,CC BY-SA 4.0 22885,2,,22880,8/5/2020 21:55,,0,,"

You could just do this: concatenate your input vector with a zeros vector that has the size of your output. Then, after the first pass, you concatenate with the previous output instead of the zeros vector. After that, repeat. At the end, just compare (compute the loss on) your entire output from t0 to t1 against your target and backprop.

You might want to look into recurrent layers: these are layers that have connections back to themselves so that the network can learn what to "remember". These have some problems with longer sequences, so the "newer" versions (LSTM and GRU) try to deal with that. You can also use attention mechanisms if you're dealing with sequences (basically, you learn what parts of your input sequence to look at given a certain "query", in your case maybe the last timestep; this is generally used in natural language processing), but that's a bit more exotic and complicated.
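
Here is a minimal sketch of the first idea (concatenating the previous output with the next input) in PyTorch; the sizes are arbitrary, and the same loop can be written in Keras by calling the model step by step:

    import torch
    import torch.nn as nn

    in_dim, out_dim, hidden = 3, 1, 16
    net = nn.Sequential(nn.Linear(in_dim + out_dim, hidden), nn.Tanh(), nn.Linear(hidden, out_dim))

    inputs = torch.randn(20, in_dim)          # the time series, one row per step
    prev_out = torch.zeros(1, out_dim)        # zeros vector for the very first step
    outputs = []
    for x in inputs:                          # feed the previous output back in
        prev_out = net(torch.cat([x.unsqueeze(0), prev_out], dim=1))
        outputs.append(prev_out)

    loss = nn.MSELoss()(torch.cat(outputs).squeeze(-1), torch.randn(20))
    loss.backward()                           # backprop through the whole unrolled loop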

",30100,,,,,8/5/2020 21:55,,,,0,,,,CC BY-SA 4.0 22886,2,,20783,8/6/2020 4:54,,0,,"

1. Does BERT have any such models?

If you mean the pretrained model, then the answer is YES. BERT has Cased and Uncased models and also models for other languages (BERT-Base, Multilingual Cased and BERT-Base, Multilingual Uncased, trained on 104 languages). You can check those models here.

2. Is it possible to check similarity between two words using BERT?

Refer to this Google Colab Notebook: word-level similarity comparisons are not appropriate with BERT embeddings.

However, sentence embedding similarity comparisons are still valid with BERT.
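
For example, once you have two pooled sentence embeddings (however you obtain them from BERT), the comparison itself is just cosine similarity; a minimal NumPy sketch, with random vectors standing in for real embeddings:

    import numpy as np

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Stand-ins for two pooled BERT sentence embeddings (e.g. 768-dimensional).
    emb_1 = np.random.rand(768)
    emb_2 = np.random.rand(768)
    print(cosine_similarity(emb_1, emb_2))  # closer to 1.0 means more similar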

That's it, hope it helps you :)

",40119,,,,,8/6/2020 4:54,,,,0,,,,CC BY-SA 4.0 22888,1,22896,,8/6/2020 7:42,,1,392,"

Could someone please help me gain some intuition as to why the optimal policy for a Markov Decision Process in the infinite horizon case (agent acts forever) is deterministic?

",35585,,2444,,3/19/2021 1:04,3/19/2021 1:04,Why is the optimal policy for an infinite horizon MDP deterministic?,,2,1,,,,CC BY-SA 4.0 22889,1,,,8/6/2020 8:09,,1,1027,"

Is it okay if I label my images with their original size and then resize them, or should I first resize them and then label them?

I mean do I need to recalibrate my labels if I resized my images?

",37414,,,,,8/6/2020 8:09,Can I resize my images after labeling them?,,0,4,,,,CC BY-SA 4.0 22890,1,,,8/6/2020 8:30,,3,71,"

I'm interested in using the sigmoid (or tanh) activation function instead of ReLU. I'm aware of ReLU's advantages: faster computation and no vanishing gradient problem. Regarding the vanishing gradient, the main problem is that the gradients computed by backpropagation go to zero quickly when using sigmoid or tanh. So I would like to try to compensate for this effect, which affects deep layers, with a variable learning rate for every layer, increasing the coefficient every time you go a layer deeper, to compensate for the vanishing gradient.

I have read about adaptive learning rates, but they seem to refer to techniques that change the learning rate on every epoch; I'm looking for a different learning rate for every layer, within any epoch.

  1. Based on your experience, do you think this is worth trying?

  2. Do you know some libraries I can use that already let you define the learning rate as a function and not a constant?

  3. If such a function exists, would it be better to define a simple function lr = (a*n)*0.001, where n is the layer number and a is a multiplier based on experience, or will we need the inverse of the activation function to sufficiently compensate for the vanishing gradient?

",25257,,2444,,8/6/2020 11:31,8/6/2020 11:31,Would a different learning rate for every neuron and layer mitigate or solve the vanishing gradient problem?,,0,3,,,,CC BY-SA 4.0 22892,1,,,8/6/2020 10:38,,1,54,"

Is it a good idea to change the learning rate at each training step as a function of the loss? i.e. for points with high loss value, put a high learning rate and for low loss value a low learning rate (using a tailored function)?

I know that the update of the parameters is done via $\gamma \nabla L$, where $\nabla L$ is the gradient and $\gamma$ the learning rate, and that points with high loss should correspond to a high gradient. Hence the dependency of the update of the parameters on the value of the loss should be already contained, although in a more indirect way. Is doing what I propose dangerous and/or useless?

",32915,,2444,,8/6/2020 11:35,8/6/2020 11:35,Is it a good idea to change the learning rate at each training step as a function of the loss?,,0,8,,,,CC BY-SA 4.0 22895,2,,22462,8/6/2020 13:48,,2,,"

The answer to your first question is that the line 'update the critic by minimising the loss $L = \frac{1}{N} \sum_i \left( y_i - Q(s_i, a_i |\theta^Q)\right)^2$' implies that you will do this by using a gradient, i.e. you calculate the gradient of the loss wrt the parameters and perform a gradient descent step.

For the second question, I am not 100% sure because I don't use tensorflow but I would assume that in the automatic differentiation used it will do the multiplication automatically behind the scenes. In PyTorch I know that it automatically calculates the chain rule when you take the derivative, so it is likely to be a reason similar to this.

",36821,,,,,8/6/2020 13:48,,,,0,,,,CC BY-SA 4.0 22896,2,,22888,8/6/2020 14:31,,3,,"

Suppose you learned your action-value function perfectly. Recall that the action-value function measures the expected return after taking a given action in a given state. Now, the goal when solving an MDP is to find a policy that maximizes expected returns. Suppose you're in state $s$. According to your action-value function, let's say actions $a$ maximizes the expected return. So, according to the goal of solving an MDP, the only action you would ever take from state $s$ is $a$. In other words $\pi(a'\mid s) = \mathbf{1}[a'=a]$, which is a deterministic policy.

Now, you might argue that your action-value function will never be perfect. However, this just means you need more exploration, which can manifest itself as stochasticity in the policy. However, in the limit of infinite data, the optimal policy will be deterministic since the action-value function will have converged to the optimum.

",37829,,,,,8/6/2020 14:31,,,,0,,,,CC BY-SA 4.0 22897,1,,,8/6/2020 14:47,,2,661,"

I've read the paper A Neural Algorithm of Artistic Style by Gatys et al. and I find the application of neural style transfer very fun.

I also read that Exploring the structure of a real-time, arbitrary neural artistic stylization network by Ghiasi et al. is a more modern approach to NST.

My question is whether the above paper by Ghiasi et al. is still the state-of-the-art method in NST, or whether newer algorithms perform even better or more efficiently.

I should specify that my goal is to deploy some NST algorithm on a web page, as a fun project to apply some deep learning and learn about backend-frontend interactions.

",38660,,,,,11/10/2021 0:04,What is the state-of-the-art algorithm for neural style transfer?,,1,1,,,,CC BY-SA 4.0 22899,1,,,8/6/2020 17:45,,1,190,"

When designing a CNN for image recognition, a commonly used sanity check to see if a model is working/designed correctly is to see whether we are able to overfit the model with a very small subset of images.

I am trying out GANs. While designing a GAN, I took a dataset with just one image (a fully black image). I used the DCGAN implementation from the PyTorch website (code link).

I tried training the model with just this one black image, and, even after training for hundreds to a thousand epochs, I am not able to overfit the model, i.e. generate a black image (or something close). All that is generated are random noise images, as below.

However, the model does work well for the CelebA dataset (the one used in the tutorial), which means the model is good. Can anybody help me understand why overfitting is very difficult/impossible when using a single image?

",40134,,,,,8/6/2020 17:45,How to overfit GANs with a single image,,0,5,,,,CC BY-SA 4.0 22900,1,22948,,8/6/2020 22:06,,12,1223,"

I'm reading Reinforcement Learning by Sutton & Barto, and in section 3.2 they state that the reward in a Markov decision process is always a scalar real number. At the same time, I've heard about the problem of assigning credit to an action for a reward. Wouldn't a vector reward make it easier for an agent to understand the effect of an action? Specifically, a vector in which different components represent different aspects of the reward. For example, an agent driving a car may have one reward component for driving smoothly and one for staying in the lane (and these are independent of each other).

",40138,,2444,,10/8/2020 14:15,9/8/2021 14:04,Why is the reward in reinforcement learning always a scalar?,,3,2,,,,CC BY-SA 4.0 22901,1,,,8/6/2020 23:01,,0,505,"

I am trying to train an AI in an environment where the states are continuous but the actions are discrete, which means I cannot apply DDPG or TD3.

Can someone please let me know what the best algorithm for discrete action spaces would be, and whether there is any version of DDPG or TD3 that can be applied to discrete action spaces on partially observable MDPs?

",40051,,2444,,8/6/2020 23:31,5/2/2022 9:40,Which is the best RL algo for continuous states but discrete action spaces problem,,0,4,,,,CC BY-SA 4.0 22904,2,,8844,8/6/2020 23:46,,1,,"

Upgrade

This film depicts a very plausible near future when drones oversee our lives (e.g. the police use them to fight crime) and common people possess self-driving cars.

This is definitely one of the best science fiction movies I have ever watched in my entire life, and I have watched many, such as 2001, Blade Runner, or The Matrix. In fact, these are the four best science fiction movies ever made, in my opinion (and I have some knowledge of cinema, cinematography, directing, etc.)

",2444,,,,,8/6/2020 23:46,,,,0,,,,CC BY-SA 4.0 22907,1,22910,,8/7/2020 3:57,,4,283,"

"If a model is not available, then it is particularly useful to estimate action values (the values of state-action pairs) rather than state values. With a model, state values alone are sufficient to determine a policy; one simply looks ahead one step and chooses whichever action leads to the best combination of reward and next state, as we did in the chapter on DP. Without a model, however, state values alone are not sufficient. One must explicitly estimate the value of each action in order for the values to be useful in suggesting a policy."

The above extract is from Sutton and Barto's Reinforcement Learning, Section 5.2 - part of the chapter on Monte Carlo Methods.

Could someone please explain in some more detail, as to why it is necessary to determine the value of each action (i.e. state-values alone are not sufficient) for suggesting a policy in a model-free setting?


P.S.

From what I know, state-values basically refer to the expected return one gets when starting from a state (we know that we'll reach a terminal state, since we're dealing with Monte Carlo methods which, at least in the book, look at only episodic MDPs). That being said, why is it not possible to suggest a policy solely on the basis of state-values; why do we need state-action values? I'm a little confused, it'd really help if someone could clear it up.

",35585,,35585,,8/7/2020 4:06,8/7/2020 9:24,Why are state-values alone not sufficient in determining a policy (without a model)?,,1,1,,,,CC BY-SA 4.0 22908,1,22909,,8/7/2020 5:19,,5,270,"

One of my friends and I were discussing the differences between Dynamic Programming, Monte-Carlo, and Temporal Difference (TD) Learning as policy evaluation methods - and we agreed on the fact that Dynamic Programming requires the Markov assumption while Monte-Carlo policy evaluation does not.

However, he also pointed out that Temporal Difference Learning cannot handle non-Markovian domains, i.e. it depends on the Markov assumption. Why is it so?

The way I understand it, the TD learning update is, in essence, the same as the Monte-Carlo update, except for the fact that the return instead of being calculated using the entire trajectory, is bootstrapped from the previous estimate of the value function, i.e. we can update the value as soon as we encounter a $(s,a,r,s')$ tuple, we don't have to wait for the episode (if finite) to terminate.

Where is the Markov assumption being used here, i.e the future is independent of the past given the present?

",35585,,2444,,9/11/2021 12:53,9/11/2021 12:53,Why does TD Learning require Markovian domains?,,1,0,,,,CC BY-SA 4.0 22909,2,,22908,8/7/2020 8:34,,6,,"

The Markov assumption is used when deriving the Bellman equation for state values:

$$v(s) = \sum_a \pi(a|s)\sum_{r,s'} p(r,s'|s,a)(r + \gamma v(s'))$$

One requirement for this equation to hold is that $p(r,s'|s,a)$ is consistent. The current state $s$ is a key argument of that function. There is no adjustment for history of previous states, actions or rewards. This is the same as requiring the Markov trait for state, i.e. that $s$ holds all information necessary to predict outcome probabilities of the next step.

The one step TD target that is sampled in basic TD learning is simply the inner part of this:

$$G_{t:t+1} = R_{t+1} + \gamma \hat{v}(S_{t+1})$$

which when sampled is equal to $v(s)$ in expectation *, when $S_t = s$. That is, when you measure a single instance of the TD target and use it to update a value function, you implicitly assume that the values or $r_{t+1}$ and $s_{t+1}$ that you observed occur with probabilities determined by $\pi(a|s)$ and $p(r,s'|s,a)$ as shown by the Bellman equation.

So the theory behind TD learning uses the Markov assumption, otherwise the sampled TD targets would be incorrect.

In practice you can get away with slightly non-Markov environments - most measurements of state for machinery are approximations that ignore details at some level, for instance, and TD learning can solve optimal control in many robotics environments. However, Monte Carlo methods are more robust against state representations that are not fully Markov.


* Technically this sample is biased because $\hat{v}(S_{t+1})$ is not correct when learning starts. The bias reduces over time and multiple updates. So the expected value during learning is approximately the same as the true value as shown by the Bellman equation.

",1847,,1847,,8/7/2020 15:34,8/7/2020 15:34,,,,3,,,,CC BY-SA 4.0 22910,2,,22907,8/7/2020 8:55,,3,,"

why is it not possible to suggest a policy solely on the basis of state-values; why do we need state-action values?

A policy function takes state as an argument and returns an action $a = \pi(s)$, or it may return a probability distribution over actions $\mathbf{Pr}\{A_t=a|S_t=s \} =\pi(a|s)$.

In order to do this rationally, an agent needs to use the knowledge it has gained to select the best action. In value-based methods, the agent needs to identify the action that has the highest expected return. As an aside, whilst learning it may not take that action because it has decided to explore, but if it is not capable of even identifying a best action then there is no hope of it ever finding an optimal policy, and it cannot even perform $\epsilon$-greedy action selection, which is a very basic exploration approach.

If you use an action value estimate, then the agent can select the greedy action simply:

$$\pi(s) = \text{argmax}_a Q(s,a)$$

If you have state values, then the agent can select the greedy action directly only if it knows the model distribution $p(r,s'|s,a)$:

$$\pi(s) = \text{argmax}_a \sum_{r,s'}p(r,s'|s,a)(r + \gamma V(s'))$$

In other words, to find the best action to take the agent needs to look ahead a time step to find out what the distribution of next states would be following that action. If the only values the agent knows are state values, this is the only way the agent can determine the impact of any specific action.
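
A small sketch of the two greedy selections above, in plain Python with made-up containers (q is a dict of action values; model maps an action to a list of (probability, reward, next state) outcomes):

    # Greedy action from action values: no model needed.
    def greedy_from_q(q, actions):
        return max(actions, key=lambda a: q[a])

    # Greedy action from state values: requires the model p(r, s'|s, a).
    def greedy_from_v(v, actions, model, gamma=0.9):
        def one_step_lookahead(a):
            return sum(p * (r + gamma * v[s_next]) for p, r, s_next in model(a))
        return max(actions, key=one_step_lookahead)

    # Tiny made-up example.
    actions = ["left", "right"]
    q = {"left": 0.2, "right": 0.7}
    v = {"A": 0.0, "B": 1.0}
    model = lambda a: [(1.0, 0.0, "B" if a == "right" else "A")]
    print(greedy_from_q(q, actions), greedy_from_v(v, actions, model))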

Although there are alternatives to this specific equation, there is no alternative that does not use a model in some form. For instance, if you can simulate the environment, you could simulate taking each action in turn, and look over multiple simulation runs to see which choice ends up with the best $(r + \gamma V(s'))$ on average. That would be a type of planning, and perhaps the start of a more sophisticated approach such as MCTS. However, that simulation is a model - it needs access to the transition probabilities in some form in order to correctly run.

It is possible to have an entirely separate policy function that you train alongside a state value function. This is the basis of Actor-Critic methods, which make use of policy gradients to adjust the policy function, and one of the value-based methods, such as TD learning, to learn a value function that assists with calculating the updates to the policy function. In that case you would not be using a value-based method on its own, so the quote from that part of Sutton & Barto does not apply.

",1847,,1847,,8/7/2020 9:24,8/7/2020 9:24,,,,4,,,,CC BY-SA 4.0 22911,2,,22827,8/7/2020 10:02,,3,,"

Depends on perspective.

On one hand, you have an agent playing in an environment with another agent also evolving. This falls under the definition of Multi-Agent Learning, as can be seen with works such as

  • Michael Bowling and Manuela Veloso. Multiagent learning using a variable learning rate. Artificial Intelligence, 136(2):215 – 250, 2002.

  • Michael Bowling. Convergence and no-regret in multiagent learning. In Proceedings of the 17th International Conference on Neural Information Processing Systems, NIPS’04, pages 209–216, Cambridge, MA, USA, 2004. MIT Press.

  • M. D. Awheda and H. M. Schwartz. Exponential moving average q-learning algorithm. In 2013 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL), pages 31–38, April 2013.

  • Sherief Abdallah and Victor Lesser. A multiagent reinforcement learning algorithm with non-linear dynamics. Journal of Artificial Intelligence Research, 33:521–549, 2008.

However, you can also claim that you simply have a single agent learning on a non-stationary environment (the environment contains both the game rules and the opponent), and you simply learn on that basis. From this perspective, there is no multi-agent learning at all.

",7496,,,,,8/7/2020 10:02,,,,2,,,,CC BY-SA 4.0 22912,1,,,8/7/2020 11:48,,2,71,"

I have two signals that I want to use to model my reward.

The first one is the CPU TIME: running mean from this diagram:

The second one is the MAX RESIDUAL from this diagram:

Since they are both equally important, I can weight them together like this:

$r = w_\rho \rho + w_\tau \tau$

where $r$ is the reward function, $\tau$ is the CPU TIME: running mean, and $\rho$ is the MAX RESIDUAL. The problem is, how to set the weights $w_\tau,w_\rho$ to make the contributions equally important if $\rho$ and $\tau$ are on very different scales?

RL algorithms will learn policies based on increases/decreases of the reward, and if one signal has values that are much smaller than the other, it will influence the reward less if the weighting is done the wrong way.

On the other hand, if the algorithm converges, they must be on different scales, as I want the CPU time to go ideally to $0$, and residuals $\rho$ to be minimized as well.

Modeling the reward function is a crucial RL step, because it decides what the algorithm will in fact optimize. How are examples like these handled? Are there any best practices for this? Also, what happens when there are $n$ such signals, that have to be combined with "equally important" weighting into a reward function?

It is possible to base the weights $w$ on the current signal values to define the reward, but then the reward contributions won't account for $\max(\rho), \min(\rho)$ and $\max(\tau), \min(\tau)$ over time.

So, how do you do feature scaling for reward signals?

",37627,,2444,,11/2/2020 21:46,11/2/2020 21:46,"How to combine two differently equally important signals into the reward function, that have different scales?",,0,2,,,,CC BY-SA 4.0 22913,1,,,8/7/2020 12:55,,0,7205,"

How do I determine the computational complexity (big-O notation) of the forward pass of a convolutional neural network?

Let's assume for simplicity that we use zero-padding such that the input size and the output size are the same.

",40147,,2444,,7/9/2021 16:48,7/9/2021 16:48,What is the computational complexity of the forward pass of a convolutional neural network?,,1,2,,,,CC BY-SA 4.0 22914,1,,,8/7/2020 13:19,,3,1220,"

For MCTS there is an expansion phase where we make a move and list down all the next states. But this is complicated by the fact that for some games, after making the move, there is a stochastic change to the environment. Consider the game 2048, after I make a move, random tile is generated. So the state of the world after my next move is a mix of possibilities!

How does MCTS work in a stochastic environment? I am having trouble understanding how to keep track of the expansion: do I expand all stochastic possibilities and weight the return by their chance of happening?

",20234,,,,,8/13/2020 16:14,How to run a Monte Carlo Tree Search MCTS for stochastic environment?,,1,3,,,,CC BY-SA 4.0 22916,2,,22869,8/7/2020 15:40,,0,,"

Maybe this is what you are looking for: https://en.wikipedia.org/wiki/Aho%E2%80%93Corasick_algorithm

Basically you would

  1. build and store a finite-state machine that resembles a trie with additional links between the various internal nodes using the given keywords.
  2. for the candidate document, go through it with the above finite-state machine.

Then based on the result

  1. If there is any match, then the document is valid.
  2. Otherwise it is possible that the document is invalid. However, in terms of the probability, I do not have any good idea yet.
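
A minimal sketch of steps 1 and 2, assuming the pyahocorasick package is available; the keywords are just placeholders:

    import ahocorasick  # the pyahocorasick package (pip install pyahocorasick)

    keywords = ["hereinafter", "plaintiff", "pursuant to"]  # placeholder keywords

    # Step 1: build the Aho-Corasick automaton (the trie-like finite-state machine).
    automaton = ahocorasick.Automaton()
    for keyword in keywords:
        automaton.add_word(keyword, keyword)
    automaton.make_automaton()

    # Step 2: scan the candidate document in a single pass.
    document = "The plaintiff, pursuant to section 3, hereby agrees ..."
    matches = [word for _, word in automaton.iter(document.lower())]

    is_possibly_legal = len(matches) > 0
    print(is_possibly_legal, matches)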
",17133,,,,,8/7/2020 15:40,,,,0,,,,CC BY-SA 4.0 22917,2,,13848,8/7/2020 16:50,,0,,"

I've been working with a TD3 implementation for bipedal hardcore. It solved the easy version (v2 and v3) in about 300 epochs (https://github.com/QasimWani/policy-value-methods). I've been training it for hardcore, and, even after about 1200 episodes, it's nowhere close to convergence. Did you end up solving it, and if so, what algorithm did you end up going with? Cheers, Q.

",40155,,,,,8/7/2020 16:50,,,,0,,,,CC BY-SA 4.0 22918,1,,,8/7/2020 17:53,,1,155,"

In supervised learning, we have an unbiased target value, but in reinforcement learning this isn't the case.

The network predicts its own target value. Now, how exactly does it converge if the network predicts its own target value?

Can someone explain this to me?

",40049,,,,,8/7/2020 17:53,How does DQN convergence work in reinforcement learning,,0,10,,,,CC BY-SA 4.0 22919,2,,18378,8/7/2020 18:06,,1,,"

In framing the problem as an episodic reinforcement learning problem, the goal is to find a policy that optimizes $\mathbb{E}[\sum_{t=0}^\tau r(s_t)],$ where $\tau$ is the random time at which the robot leaves the maze. This implicitly assigns a reward of 0 to the out-of-maze state, $s_{terminal}$. If you include this state then the transformation $r\rightarrow 1\cdot r - 1$ does not change the optimal policy.

If we rewrite this episodic objective accounting for $s_{terminal}$ (and a horizon $H$) we get the following objective: \begin{align*}\mathbb{E}\left[\sum_{t=0}^\tau r(s_t) + \sum_{\tau}^H r(s_{terminal})\right] &= \mathbb{E}\left[(H-\tau) r(s_{terminal}) + \sum_{t=0}^\tau r(s_t)\right]\\ &= \mathbb{E}\left[(H-\tau) r(s_{terminal}) + r(s_{goal}) + (\tau-1) r(s_{maze}) \right]\end{align*}

Where $s_{goal}$ is the exit state from the maze, the goal, and $s_{maze}$ represents the other states of the maze. In the question, 1 is subtracted from $s_{goal}$ and $s_{maze}$, but not $s_{terminal}$. Thus, this is not a positive affine transformation of the reward function. In effect, this changes the relative value of $s_{terminal}$ from $\min_s r(s)$ to $\max_s r(s)$, and that changes the optimal policy.

",40157,,,,,8/7/2020 18:06,,,,1,,,,CC BY-SA 4.0 22922,2,,8496,8/8/2020 6:01,,0,,"

According to http://tianlinliu.com/files/notes_exercise_RL.pdf, an MDP may not be feasible for multi-target (multi-objective) tasks.

In contrast, EA-based methods, like NSGA-II and NSGA-III, can solve multi-target tasks.

Also, tasks that need more than one state to predict the next action are not well suited to an MDP. For example, when we predict the next action of a stranger who has just met you at a party, we need to consider all the behaviors he showed in the past minutes. That is more suitable than the MDP way, which would simply label the stranger as a "good" guy or a "bad" guy (or with a continuous number between good and bad).

",40164,,40164,,8/8/2020 6:17,8/8/2020 6:17,,,,0,,,,CC BY-SA 4.0 22929,2,,22913,8/8/2020 14:18,,5,,"

What is the time complexity?

The time complexity of an algorithm is the number of basic operations, such as multiplications and summations, that the algorithm performs. The time complexity is usually expressed as a function of the input's size $n$ (but this does not always have to be the case: for instance, you can express the time complexity as a function of the output's size).

Example

Rather than giving you a full answer to your question, I will try to help you by explaining, with the simplest example, how you should calculate the time complexity.

For simplicity, let's assume that we have a kernel $\mathbf{H} \in \mathbb{R}^{3 \times 3}$ and input image $\mathbf{I} \in \mathbb{R}^{3 \times 3}$ (i.e. the kernel has the same dimensions as the input), we use a stride of $1$ and no padding. If we convolve $\mathbf{I}$ with $\mathbf{H}$, how many operations will we perform? The convolution is defined as a scalar product, so it is composed of multiplications and summations, so we need to count both of them. We have $9$ multiplications and $8$ summations, for a total of $17$ operations.

\begin{align} \mathbf{I} \circledast \mathbf{H} &= \begin{bmatrix} i_{11} & i_{12} & i_{13} \\ i_{21} & i_{22} & i_{23} \\ i_{31} & i_{32} & i_{33} \end{bmatrix} \odot \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}\\ &= \sum_{ij} \begin{bmatrix} i_{11} h_{11} & i_{12} h_{12} & i_{13} h_{13} \\ i_{21} h_{21} & i_{22} h_{22} & i_{23} h_{23} \\ i_{31} h_{31} & i_{32} h_{32} & i_{33} h_{33} \end{bmatrix}\\ &= i_{11} h_{11} + i_{12} h_{12} + i_{13} h_{13} + i_{21} h_{21} + i_{22} h_{22} + i_{23} h_{23} + i_{31} h_{31} + i_{32} h_{32} + i_{33} h_{33} \end{align}

Time complexity

What is the time complexity of this convolution? To answer this question, you first need to know the input's size, $n$. The input contains $9$ elements, so its size is $n = 9$. How many operations did we perform with respect to the input's size? We performed $17$ operations, so the time complexity is $\mathcal{O}(2n) = \mathcal{O}(n)$, i.e. this operation is linear. If you are not familiar with the big-O notation, I suggest that you get familiar with it, otherwise, you will not understand anything about computational complexity.

To calculate the time complexity in the case the input's dimensions are different than the kernel's dimensions, you will need to calculate the number of times you slide the kernel over the input. You can't ignore this (as I ignored the constant $2$ above) because the number of times you slide the kernel over the input depends on the input's size, so it's a function of the input. Anyway, the paper A guide to convolution arithmetic for deep learning contains a lot of information about convolution arithmetic, so it will be helpful.
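
As a small sketch of this counting for a single-channel, stride-1-capable, no-padding convolution (multiplications plus summations, as in the worked example above):

    def conv2d_ops(input_size, kernel_size, stride=1):
        """Count multiplications and additions of a single-channel, no-padding convolution."""
        out = (input_size - kernel_size) // stride + 1        # output width/height
        positions = out * out                                  # times the kernel is applied
        mults = positions * kernel_size * kernel_size
        adds = positions * (kernel_size * kernel_size - 1)
        return mults + adds

    print(conv2d_ops(3, 3))    # 17, matching the worked example above
    print(conv2d_ops(32, 3))   # grows with the number of input pixels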

Non-linearities, pooling and fully connected layers

Note that, in the above example, I ignored the non-linearities and pooling layer. You can easily extend my reasoning to include these operations too. I also ignored the operations in the final fully connected layers. You can find how to calculate the number of operations in an MLP in this answer. If you also perform other operations or have other layers other than convolutional, pooling and fully connected, of course, you will also need to take them into account.

Forward pass

Moreover, the time complexity of the forward pass of a CNN depends on all these operations in these different layers, so you need to compute the number of operations in each layer first. However, once you know how to compute the number of operations for one convolutional layer, one pooling layer, and one fully connected layer, you can easily compute the number of operations for the other convolutional, pooling, and fully connected layers. Then you just need to sum all these operations and express your time complexity as a function of the input (and probably number of layers).

Space complexity

If you also want to compute the space complexity, you just need to do the same thing, but as a function of the space that you use, i.e. how many variables you use to perform the convolution.

",2444,,2444,,8/8/2020 14:44,8/8/2020 14:44,,,,1,,,,CC BY-SA 4.0 22930,1,22935,,8/8/2020 22:58,,0,71,"

I am somewhat of a novice when it comes to neural networks and PyTorch.

I am trying to create a model that takes a word (that I have modified very slightly) and a 'window' of context around it and predicts one of 5 tags (the tags relate to what sort of action I should perform on that word to get its correct form).

For example, here's what I would call a window of size 7 and its tag (what it means isn't too important, it's just the 'target'):

        Sentence                    Label
here is a sentence for my network     N

sentence is the word that I want the network to predict the label for, but the 3 words on either side provide contextual meaning. My problem is, how would I get a network to know I want it to predict for that central word but not outright ignore the others? I am familiar with more normal NLP tasks such as NMT and character level classification.

I have already gotten my dataset 'padded' out so they're all of equal size.

Any help is appreciated

",40177,,,,,8/9/2020 2:11,Get Neural Network to predict a tag/class on a certain word using the surrounding words as context [PyTorch]?,,1,3,,,,CC BY-SA 4.0 22931,1,22937,,8/8/2020 23:43,,0,663,"

What are some books on reinforcement learning (RL) and deep RL for beginners?

I'm looking for something as friendly as the head first series, that breaks down every single thing.

",40049,,2444,,8/9/2020 10:43,8/10/2020 4:07,What are some (deep) reinforcement learning books for beginners?,,2,0,,8/14/2020 11:45,,CC BY-SA 4.0 22934,1,,,8/9/2020 0:38,,1,35,"

I am currently working with some models aimed at predicting time series (89 days for training, 22 for testing), including a CNN LSTM and a convLSTM.

When training these models, I had the following scenario:

In the first case, it is possible to see the val loss moving more sharply away from the train loss. In the second case, it seems to me that this also happens, but in a much smoother way.

What do these graphs mean? What causes these situations to occur? If they are problematic situations, is it possible to correct them? If so, how?

",38864,,,,,8/9/2020 0:38,Understanding graphs of the mean square error: relationships between val loss and train loss,,0,0,,,,CC BY-SA 4.0 22935,2,,22930,8/9/2020 2:01,,1,,"

You may want to take a look at this article, but I'll summarize. You can use BERT (or some other tool) to make embeddings of every word in every sentence. Then for each word, make a contextualized embedding vector using the rest of the sentence. bert-embedding does all of this itself. Then keep the embedding vector for the important words.

For each important word, you would then have two pieces of information: the embedding vector and the correct label (which could easily be made into an integer from $0$ to $4$). Depending on the size of the embedding vectors, you could use PCA to reduce the size, although this may not be needed. Using this data, you can then train a neural network or use a k-nearest neighbors classifier.
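
As a minimal sketch of that last step (assuming the contextualized embeddings have already been computed, e.g. with BERT, and stored as a NumPy array; the array sizes here are placeholders), the PCA + k-nearest-neighbors part could look like this:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data: one 768-dimensional contextual embedding per important word,
# plus an integer label in {0, ..., 4} for each word.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 768))
labels = rng.integers(0, 5, size=1000)

# Optionally reduce the dimensionality before classification.
reduced = PCA(n_components=50).fit_transform(embeddings)

# Fit a k-nearest-neighbors classifier on the (reduced) embeddings.
clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(reduced[:800], labels[:800])
print(clf.score(reduced[800:], labels[800:]))   # accuracy on held-out words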

There is more information in the article, which I suggest that you read. They do a better job explaining than me and also have some actual code you may want to look at.

",40178,,40178,,8/9/2020 2:11,8/9/2020 2:11,,,,0,,,,CC BY-SA 4.0 22936,1,28125,,8/9/2020 3:14,,2,253,"

Can a computer solve the following problem, i.e. make a proof by induction? And why?

Prove by induction that $$\sum_{k=1}^nk^3=\left(\frac{n(n+1)}{2}\right)^2, \, \, \, \forall n\in\mathbb N .$$

I'm doing a Ph.D. in pure maths. I love coding when I want to have some fun, but I've never got too far in this field. I mention my background because, if someone wants to explain this in a more abstract language, there's a chance that I will understand it.

",40179,,2444,,8/9/2020 10:41,6/7/2021 21:45,Can a computer make a proof by induction?,,2,1,,,,CC BY-SA 4.0 22937,2,,22931,8/9/2020 6:19,,2,,"

Reinforcement Learning: An Introduction by Richard Sutton and Andrew Barto is undoubtedly one of the best books, to begin with. Despite its age, the book is still the canonical introduction to reinforcement learning. It does require some patience, but I think it's very approachable and rigorous at the same time!

",35585,,,,,8/9/2020 6:19,,,,0,,,,CC BY-SA 4.0 22939,2,,11528,8/9/2020 9:58,,0,,"

GA is well suited to optimize a non-linear fitness function with a lot of variables. Each vector that is a possible solution is evaluated with the fitness function that we want to optimize.

So, GA is well suited to optimize schedules for a lot of people, optimizing resources, etc.

Your task is to define all the variables that you can tune and that affect your objectives, and to assign an estimated weight to every variable.

For example, take the total project cost as the fitness function to minimize. One term will be the number of programmers, multiplied by their monthly cost and the total number of months. If you don't want delays of more than 3 months, add a penalty term, e.g. +10000*max(0, months - 3), which penalizes the fitness of solutions where months > 3, etc.
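
As a small sketch of this idea (the cost rate and penalty weight below are made-up numbers, just for illustration):

def project_cost(n_programmers, months, monthly_rate=5000, penalty_per_month=10000):
    """Fitness to minimize: staffing cost plus a penalty for delays beyond 3 months."""
    staffing_cost = n_programmers * monthly_rate * months
    delay_penalty = penalty_per_month * max(0, months - 3)
    return staffing_cost + delay_penalty

# A GA evaluates candidate solutions (chromosomes) like these:
print(project_cost(n_programmers=4, months=3))   # 60000, no penalty
print(project_cost(n_programmers=2, months=6))   # 60000 + 30000 penalty = 90000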

When you apply your GA to this fitness function, you will find some minimum of the function, if one exists; a solution may not exist if you added too many constraints.

",25257,,2444,,1/6/2021 21:44,1/6/2021 21:44,,,,0,,,,CC BY-SA 4.0 22940,1,,,8/9/2020 10:15,,1,19,"

I wonder if there are research papers, patents, or libraries using genetic algorithms (GAs) to improve neural networks. I can't find anything on the subject. For example:

  1. use a GA to find better hyperparameters for a NN. The chromosome would be [learning rate, activation function, number of layers, layer sizes, dropout factor] and the fitness function would minimize the computational cost of reaching 95% accuracy with the NN.
  2. use a GA to mix your NN input data and generate new training data to fit.
  3. use a GA to mix several small NNs of different types and find the best combination for better predictions.
",25257,,,,,8/9/2020 10:15,experiences on using genetic algorithms as a way to improve neural networks?,,0,1,,,,CC BY-SA 4.0 22944,1,22947,,8/9/2020 15:35,,4,995,"

Here's another interesting multiple-choice question that puzzles me a bit.

In tabular MDPs, if using a decision policy that visits all states an infinite number of times, and in each state, randomly selects an action, then:

  1. Q-learning will converge to the optimal Q-values
  2. SARSA will converge to the optimal Q-values
  3. Q-learning is learning off-policy
  4. SARSA is learning off-policy

My thoughts, and question: Since the actions are being sampled randomly from the action space, learning definitely seems to be off-policy (correct me if I'm wrong, please!). So that rules 3. and 4. as incorrect. Coming to the first two options, I'm not quite sure whether Q-learning and/or SARSA would converge in this case. All that I'm able to understand from the question is that the agent explores more than it exploits, since it visits all states (an infinite number of times) and also takes random actions (and not the best action!). How can this piece of information help me deduce if either process converges to the optimal Q-values or not?


Source: Slide 2/55

",35585,,2444,,5/28/2022 10:26,5/28/2022 10:26,When do SARSA and Q-Learning converge to optimal Q values?,,1,0,,,,CC BY-SA 4.0 22945,1,22965,,8/9/2020 15:50,,1,1177,"

I am trying to implement the Deep Deterministic Policy Gradient (DDPG) algorithm, following the paper Continuous Control using Deep Reinforcement Learning, on the MountainCarContinuous-v0 gym environment. I am using 2 hidden Linear layers of size 32 for both the actor and the critic networks, with ReLU activations and a Tanh activation for the output layer of the actor network. However, for some reason, the algorithm doesn't seem to converge. I tried tuning the hyperparameters, with no success.

  • Code
import copy
import random
from collections import deque, namedtuple

import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.optim as optim

"""
Hyperparameters:

actor_layer_sizes
critic_layer_sizes
max_buffer_size
polyak_constant
max_time_steps
max_episodes
actor_lr
critic_lr
GAMMA
update_after
batch_size
"""

device = torch.device("cpu")
dtype = torch.double

Transition = namedtuple(
    "Transition", ("state", "action", "reward", "next_state", "done")
)


class agent:
    def __init__(
        self,
        env,
        actor_layer_sizes=[32, 32],
        critic_layer_sizes=[32, 32],
        max_buffer_size=2500,
    ):
        self.env = env
        (
            self.actor,
            self.critic,
            self.target_actor,
            self.target_critic,
        ) = self.make_models(actor_layer_sizes, critic_layer_sizes)
        self.replay_buffer = deque(maxlen=max_buffer_size)
        self.max_buffer_size = max_buffer_size

    def make_models(self, actor_layer_sizes, critic_layer_sizes):
        actor = (
            nn.Sequential(
                nn.Linear(
                    self.env.observation_space.shape[0],
                    actor_layer_sizes[0],
                ),
                nn.ReLU(),
                nn.Linear(actor_layer_sizes[0], actor_layer_sizes[1]),
                nn.ReLU(),
                nn.Linear(
                    actor_layer_sizes[1], self.env.action_space.shape[0]
                ), nn.Tanh()
            )
            .to(device)
            .to(dtype)
        )

        critic = (
            nn.Sequential(
                nn.Linear(
                    self.env.observation_space.shape[0]
                    + self.env.action_space.shape[0],
                    critic_layer_sizes[0],
                ),
                nn.ReLU(),
                nn.Linear(critic_layer_sizes[0], critic_layer_sizes[1]),
                nn.ReLU(),
                nn.Linear(critic_layer_sizes[1], 1),
            )
            .to(device)
            .to(dtype)
        )

        target_actor = copy.deepcopy(actor)    # Create a target actor network

        target_critic = copy.deepcopy(critic)   # Create a target critic network

        return actor, critic, target_actor, target_critic

    def select_action(self, state, noise_factor):         # Selects an action in exploratory manner
      with torch.no_grad():
        noisy_action = self.actor(state) + noise_factor * torch.randn(size = self.env.action_space.shape, device=device, dtype=dtype)
        action = torch.clamp(noisy_action, self.env.action_space.low[0], self.env.action_space.high[0])

        return action

    def store_transition(self, state, action, reward, next_state, done):             # Stores the transition to the replay buffer with a default maximum capacity of 2500
        if len(self.replay_buffer) < self.max_buffer_size:
            self.replay_buffer.append(
                Transition(state, action, reward, next_state, done)
            )
        else:
            self.replay_buffer.popleft()
            self.replay_buffer.append(
                Transition(state, action, reward, next_state, done)
            )

    def sample_batch(self, batch_size=128):                                            # Samples a random batch of transitions for training
      return Transition(
            *[torch.cat(i) for i in [*zip(*random.sample(self.replay_buffer, min(len(self.replay_buffer), batch_size)))]]
        )


    def train(
        self,
        GAMMA=0.99,
        actor_lr=0.001,
        critic_lr=0.001,
        polyak_constant=0.99,
        max_time_steps=5000,
        max_episodes=200,
        update_after=1,
        batch_size=128,
        noise_factor=0.2,
    ):
        
        self.train_rewards_list = []
        actor_optimizer = optim.Adam(self.actor.parameters(), lr=actor_lr)
        critic_optimizer = optim.Adam(
            self.critic.parameters(), lr=critic_lr
        )
        print("Starting Training:\n")
        for e in range(max_episodes):
            state = self.env.reset()
            state = torch.tensor(state, device=device, dtype=dtype).unsqueeze(0)
            episode_reward = 0
            for t in range(max_time_steps):
                #self.env.render()
                action = self.select_action(state, noise_factor)               
                next_state, reward, done, _ = self.env.step(action[0])         # Sample a transition
                episode_reward += reward

                next_state = torch.tensor(next_state, device=device, dtype=dtype).unsqueeze(0)
                reward = torch.tensor(
                    [reward], device=device, dtype=dtype
                ).unsqueeze(0)
                done = torch.tensor(
                    [done], device=device, dtype=dtype
                ).unsqueeze(0)

                self.store_transition(                               
                    state, action, reward, next_state, done
                )                # Store the transition in the replay buffer

                state = next_state
                
                sample_batch = self.sample_batch(128)

                with torch.no_grad():                 # Determine the target for the critic to train on
                  target = sample_batch.reward + (1 - sample_batch.done) * GAMMA * self.target_critic(torch.cat((sample_batch.next_state, self.target_actor(sample_batch.next_state)), dim=1))
                
                # Train the critic on the sampled batch
                critic_loss = nn.MSELoss()(
                    target,
                    self.critic(
                        torch.cat(
                            (sample_batch.state, sample_batch.action), dim=1
                        )
                    ),
                )

                critic_optimizer.zero_grad()
                critic_loss.backward()
                critic_optimizer.step()

                actor_loss = -1 * torch.mean(
                  self.critic(torch.cat((sample_batch.state, self.actor(sample_batch.state)), dim=1))
                  )

                #Train the actor  
                actor_optimizer.zero_grad()
                actor_loss.backward()
                actor_optimizer.step()
                

                #if (((t + 1) % update_after) == 0):
                for actor_param, target_actor_param in zip(self.actor.parameters(), self.target_actor.parameters()):
                  target_actor_param.data = polyak_constant * actor_param.data + (1 - polyak_constant) * target_actor_param.data
                  
                for critic_param, target_critic_param in zip(self.critic.parameters(), self.target_critic.parameters()):
                  target_critic_param.data = polyak_constant * critic_param.data + (1 - polyak_constant) * target_critic_param.data

                if done:
                    print(
                        "Completed episode {}/{}".format(
                            e + 1, max_episodes
                        )
                    )
                    break

            self.train_rewards_list.append(episode_reward)

        self.env.close()
        print(self.train_rewards_list)

    def plot(self, plot_type):
        if (plot_type == "train"):
            plt.plot(self.train_rewards_list)
            plt.show()
        elif (plot_type == "test"):
            plt.plot(self.test_rewards_list)
            plt.show()
        else:
            print("\nInvalid plot type")
  • Train code snippet
import gym

env = gym.make("MountainCarContinuous-v0")

myagent = agent(env)
myagent.train(max_episodes=150)
myagent.plot("train")

The figure below shows the plot for episode reward vs episode number:

",38895,,38895,,8/9/2020 17:45,8/10/2020 9:17,DDPG doesn't converge for MountainCarContinuous-v0 gym environment,,1,4,,,,CC BY-SA 4.0 22946,2,,22877,8/9/2020 15:56,,1,,"

I think it is premature to answer your question as OpenAI has not made GPT-3 available yet other than via a web-based API. For more information see OpenAI API.

From OpenAI will start selling its text-generation tech, and the first customers include Reddit, by James Vincent:

Access to the GPT-3 API is invitation-only, and pricing is undecided.

You can join the OpenAI wait list here: https://beta.openai.com/

I read somewhere that loading GPT-3 for inference requires about 300GB if using half-precision floating point (FP16). There are no GPU cards today that, even in a set of four, will provide 300GB of video RAM. For example, the best I believe you can do in a single desktop box is four NVLinked Nvidia RTX 8000 cards on a single motherboard. Each card has 48GB of VRAM. That would only provide a total of 192GB of VRAM.

",5763,,5763,,8/9/2020 16:30,8/9/2020 16:30,,,,0,,,,CC BY-SA 4.0 22947,2,,22944,8/9/2020 17:48,,5,,"

The true answers are 1 and 3.

1 is true because the required condition for tabular Q-learning to converge is that each state-action pair is visited infinitely often, and Q-learning learns directly about the greedy policy, $\pi(s) := \arg \max_a Q(s,a)$; because Q-learning converges to the optimal Q-value function, we know that this policy will be optimal (the optimal policy is the greedy policy with respect to the optimal Q-function).

3 is true because Q-learning is by definition an off-policy algorithm, because we learn about the greedy policy whilst following some arbitrary policy.

2 is false because SARSA is on-policy, so it will be learning the Q-function under the random policy.

4 is false because SARSA is strictly on-policy, for reasons analogous to why Q-learning is off-policy.
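
To make the difference concrete, here is a minimal sketch of the two tabular update rules (my own illustration): Q-learning bootstraps from the greedy action in the next state, regardless of what the random behaviour policy does, while SARSA bootstraps from the action the behaviour policy actually takes.

import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # Off-policy: use the greedy action in s_next, whatever the behaviour policy did.
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    # On-policy: use a_next, the action the (random) behaviour policy actually takes.
    target = r + gamma * Q[s_next, a_next]
    Q[s, a] += alpha * (target - Q[s, a])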

",36821,,2444,,5/28/2022 10:25,5/28/2022 10:25,,,,4,,,,CC BY-SA 4.0 22948,2,,22900,8/9/2020 17:50,,8,,"

If you have multiple types of rewards (say, R1 and R2), then it is no longer clear what would be the optimal way to act: it can happen that one way of acting would maximize R1 and another way would maximize R2. Therefore, optimal policies, value functions, etc., would all be undefined. Of course, you could say that you want to maximize, for example, R1+R2, or 2R1+R2, etc. But in that case, you're back at a scalar number again.
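
As a tiny illustration (the component names and weights are arbitrary), scalarizing the reward is just a weighted sum, and the chosen weights are exactly what makes "optimal" well defined again:

def scalarize(reward_components, weights):
    """Combine named reward components into the single scalar the agent maximizes."""
    return sum(weights[name] * value for name, value in reward_components.items())

r = {"R1": 3.0, "R2": -1.0}
print(scalarize(r, {"R1": 1.0, "R2": 1.0}))   # 2.0, i.e. maximize R1 + R2
print(scalarize(r, {"R1": 2.0, "R2": 1.0}))   # 5.0, i.e. maximize 2*R1 + R2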

It can still be helpful for other purposes to split up the reward into multiple components as you suggest, e.g., in a setup where you need to learn to predict these rewards. But for the purpose of determining optimal actions, you need to boil it down into a single scalar.

",17430,,,,,8/9/2020 17:50,,,,0,,,,CC BY-SA 4.0 22949,1,,,8/9/2020 19:15,,2,36,"

Consider a problem where the agent must learn to control a hierarchy of agents acting against another such agent in a competitive environment. The agents on each team need to learn to cooperate in order to compete with the other agents.

A hierarchical RL algorithm would seem to be ideal for such a problem, learning a policy that includes sub-policies for sub-agents. But are there other types of algorithms that could be used for this kind of task, perhaps ones that involve centralized cooperation but aren't considered hierarchical RL?

",40189,,,,,8/9/2020 19:15,Alternatives to Hierarchical RL for centralized control tasks?,,0,0,,,,CC BY-SA 4.0 22950,2,,22782,8/9/2020 19:31,,0,,"

The clusters you mentioned at the beginning seem right. A "perfect" coreference system should be able to find all words/phrases that refer to the same object.

However, in language, there will always be some ambiguity. For example, consider the sentences "Bob and John like building. He has fun doing it.". It is ambiguous whether "He" refers to "Bob" or "John". In that sense, a perfect coreference system may not even exist theoretically. If it did, it would either avoid classifying "He" or recognize that it could go with either "Bob" or "John".

",40178,,,,,8/9/2020 19:31,,,,2,,,,CC BY-SA 4.0 22951,1,,,8/9/2020 20:01,,1,78,"

I would like to add a short ~1-3 minute video to a presentation, to demonstrate how Reinforcement Learning is used to solve problems. I am thinking something like a short gif of an agent playing an Atari game, but for my audience it would probably be better to have something more manufacturing/industry based.

Does anyone know any good sources where I could find some stuff like this?

",36821,,,,,9/13/2020 17:00,Where can I find short videos of examples of RL being used?,,1,4,,,,CC BY-SA 4.0 22954,1,,,8/9/2020 21:58,,2,108,"

I know this may be a question of semantics, but I always see different articles explain the forward pass slightly differently. e.g. Sometimes they represent a forward pass to a hidden layer in a standard neural network as np.dot(x, W) and sometimes I see it as np.dot(W.T, x) and sometimes np.dot(W, x).

Take this image for example. They represent the input data as a matrix of [NxD] and weight data as [DxH] where H is the number of neurons in the hidden layer. This seems the most natural since input data will often be in tabular format with rows as samples and columns as features.

Now an example from the CS231n course notes. They talk about this below example and cite the code used to compute it as:

f = lambda x: 1.0/(1.0 + np.exp(-x)) # activation function (use sigmoid)
x = np.random.randn(3, 1) # random input vector of three numbers (3x1)
h1 = f(np.dot(W1, x) + b1) # calculate first hidden layer activations (4x1)
h2 = f(np.dot(W2, h1) + b2) # calculate second hidden layer activations (4x1)
out = np.dot(W3, h2) + b3 # output neuron (1x1)

Where W is [4x3] and x is [3x1]. I would expect the weight matrix to have dimensions equal to [n_features, n_hidden_neurons] but in this example it just seems like they transposed it naturally before it was used.

I guess I am just confused about the general nomenclature for how data should be shaped and used consistently when computing neural network forward passes. Sometimes I see a transpose, sometimes I don't. Is there a standard, preferred way to represent data in accordance with diagrams like these? This question may be silly, but I just wanted to discuss it a bit. Thank you.

",40192,,2444,,10/10/2020 22:07,10/10/2020 22:07,What is the Preferred Mathematical Representation for a Forward Pass in a Neural Network?,,1,0,,,,CC BY-SA 4.0 22955,1,,,8/9/2020 23:02,,1,22,"

I've been considering starting a project for some time on sound source identification.

To be more specific, my goal is to be able to identify the "sources" for sound in videos. Moving parts clanging, lips speaking, hands clapping etc. I'd like to think a model trained to be able to do this might be helpful for:

  • Identifying who is saying what in a crowd
  • Discovering noise sources caught on video (ex. a carpenter's saw as he is talking to someone)
  • Extending this to design a model for reading lips, to discern speech in silent video.

(Taken from https://www.youtube.com/watch?v=8ch_H8pt9M8)

You might think of this task like grounding in NLP, except for sound/speech instead. I'm sure this has been done before, and I'd like to conduct a literature review. So is there a name for this kind of sound-source identification?

I've tried Googling "Sound Source Identification", but it only returns Speech Classification results (Is this sound a car or a truck etc.)

",35768,,,,,8/9/2020 23:02,"Is there a problem for ""Sound Source Identification in Video Footage""?",,0,2,,,,CC BY-SA 4.0 22956,2,,22954,8/10/2020 3:54,,1,,"

I don't think there's a "standard way" of expressing the forward pass: you use the transpose when you need to use it, and this depends on how you define the weights and inputs matrices, and on the architecture of your neural network. For example, in a fully connected feedforward neural network, you know that every neuron in the previous layer is connected to every neuron in the current layer, so, as long as this is satisfied when you multiply the matrices, it does not matter whether you use transposes or not, and I don't think that, in computational terms, it makes any difference if you use transposes or not. (By the way, if you are writing something, I suggest that you always specify the dimensions of your matrices and your conventions).

Of course, if you want to use a library like TensorFlow, you will probably need to follow the conventions of the library, but this is another story.

",2444,,,,,8/10/2020 3:54,,,,0,,,,CC BY-SA 4.0 22957,1,22960,,8/10/2020 4:01,,24,8242,"

The transformer, introduced in the paper Attention Is All You Need, is a popular new neural network architecture that is commonly viewed as an alternative to recurrent neural networks, like LSTMs and GRUs.

However, having gone through the paper, as well as several online explanations, I still have trouble wrapping my head around how they work. How can a non-recurrent structure be able to deal with inputs of arbitrary length?

",12201,,2444,,9/17/2020 13:56,1/4/2023 18:15,How can Transformers handle arbitrary length input?,,2,1,,,,CC BY-SA 4.0 22958,2,,22931,8/10/2020 4:07,,0,,"

If you are looking for a book that is more beginner friendly than the Sutton and Barto book (which you should of course check out also), try out:

Deep Reinforcement Learning Hands On

",12201,,,,,8/10/2020 4:07,,,,0,,,,CC BY-SA 4.0 22959,1,22991,,8/10/2020 5:26,,4,165,"

I recently read some introductions to AI alignment, AIXI and decision theory things.

As far as I understood, one of the main problems in AI alignment is how to define a utility function well, not causing something like the paperclip apocalypse.

Then a question comes to my mind: whatever the utility function is, we need a computer to compute the utility and reward, so there seems to be no way to prevent an AGI from seeking to manipulate the utility function so that it always gives the maximum reward.

Just like we humans know that we can give ourselves happiness in chemical ways, and some people actually do so.

Is there any way to prevent this from happening? Not just protecting the utility calculator physically from the AGI (how can we be sure it works forever?), but preventing the AGI from thinking of it?

",40196,,2444,,8/12/2020 23:37,8/12/2020 23:37,How can we prevent AGI from doing drugs?,,2,0,,,,CC BY-SA 4.0 22960,2,,22957,8/10/2020 5:43,,21,,"

Actually, there is usually an upper bound on the input length of transformers, due to the difficulty of handling long sequences. At the current stage, this value is typically set to 512 or 1024 tokens.

However, if you are asking about handling various input sizes below that bound, adding a padding token such as [PAD], as in the BERT model, is a common solution. The positions of [PAD] tokens can be masked in self-attention and therefore have no influence. Let's say we use a transformer model with a sequence-length limit of 512, and we pass an input sequence of 103 tokens. We pad it to 512 tokens. In the attention layers, positions 104 to 512 are all masked, that is, they neither attend nor are attended to.
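
As a minimal sketch of such a padding mask (sizes are illustrative, and this is just one way of building it, not the exact BERT implementation): positions corresponding to [PAD] get a large negative value added to the attention scores before the softmax, so they receive (almost) zero attention weight.

import torch

max_len, real_len, pad_id = 512, 103, 0

# Token ids: 103 real tokens followed by padding up to 512.
tokens = torch.cat([torch.randint(1, 30000, (real_len,)),
                    torch.full((max_len - real_len,), pad_id)])

key_padding_mask = tokens.eq(pad_id)             # True where the token is [PAD]

# Example self-attention scores for one head: (query positions x key positions).
scores = torch.randn(max_len, max_len)
scores = scores.masked_fill(key_padding_mask.unsqueeze(0), float("-inf"))
weights = torch.softmax(scores, dim=-1)          # padded keys get ~0 attention weight
print(weights[0, real_len:].sum())               # ~0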

",38960,,,,,8/10/2020 5:43,,,,2,,,,CC BY-SA 4.0 22961,1,,,8/10/2020 5:48,,2,121,"

I have been reading more about computer vision and I'm bothered by YOLO and similar deep learning architectures.

The thing I am confused about is how non-class image sections are dealt with. In particular, it's not clear to me at all why YOLO doesn't consider every part of an image a possible class.

What actually sets the cutoff for detection and then classification?

",32390,,2444,,1/29/2021 0:03,9/22/2022 6:48,How does YOLO handle non-class objects?,,2,0,,,,CC BY-SA 4.0 22962,1,,,8/10/2020 6:06,,2,105,"

I am new to reinforcement learning, but, for a finite-horizon application problem, I am considering using the average reward instead of the sum of rewards as the objective. Specifically, there are at most $T$ possible time steps (e.g., the usage rate of an app in each time-step); in each time-step, the reward may be 0 or 1. The goal is to maximize the daily average usage rate.

Episode length ($T$) is maximally 10. $T$ is the maximum time window the product can observe about a user's behavior of the chosen data. There is an indicator value in the data indicating whether an episode terminates. From the data, it is offline learning, so in each episode, $T$ is given in the data. As long as an episode doesn't terminate, there is a reward of $\{0, 1\}$ in each time-step.

I heard that if I use an average reward for the finite horizon, the optimal policy is no longer a stationary policy, and the optimal $Q$ function depends on time. I am wondering why this is the case.

I see that, normally, the objective is defined as maximizing

$$\sum_t^T \gamma^t r_t$$

And I am considering two types of average reward definition.

  1. $\frac{1}{T}\sum^{T}_{t=0}\gamma^t r_t$, where $T$ varies in each episode.

  2. $\frac{1}{T-t}\sum^{T}_{i=t-1}\gamma^i r_i$

",40193,,2444,,8/11/2020 10:34,8/11/2020 10:34,Should I use the discounted average reward as objective in a finite-horizon problem?,,0,9,,,,CC BY-SA 4.0 22963,1,22973,,8/10/2020 8:05,,4,1419,"

Transfer learning consists of taking features learned on one problem and leveraging them on a new, similar problem.

In the Transfer Learning, we take layers from a previously trained model and freeze them.

Why is this layer freezing required and what are the effects of layer freezing?

",30725,,,,,9/22/2020 22:10,What is layer freezing in transfer learning?,,2,0,,,,CC BY-SA 4.0 22964,2,,22959,8/10/2020 8:31,,1,,"

You make a lot of assumptions about AGI, namely that 'we need a computer to compute the utility and reward'. It is not clear to me that (1) we can achieve AGI, (2) AGI will run on a computer as we know it and (3) AGI will work with a utility/reward function as we know them.

One thing I am sure of, though, is that ML is known for "cheating" (see, for example, the many documented cases of models exploiting loopholes in their objectives). Avoiding such cheating is part of the building process. So, when you assume that we can achieve AGI, you assume that we can build an AGI that does not "cheat". Thus, the answer is mostly contained in your assumptions.

Whether we are able to build an AGI, what "cheating" we would have to overcome to do so, and how we would be able to do so are mostly undetermined.

",22654,,,,,8/10/2020 8:31,,,,0,,,,CC BY-SA 4.0 22965,2,,22945,8/10/2020 9:17,,1,,"

I had to change the action selection function for this and tune some hyperparameters. Here's what I did to make it converge:

  • Sampled the noise from a standard normal distribution instead of sampling randomly.
  • Changed the polyak constant (tau) from 0.99 to 0.001 (I didn't have an idea of what it should be, so I had just set it randomly in the first try)
  • Changed the hidden layer sizes of the critic network to [64, 64].
  • Removed the ReLU activation after the second layer in the critic network. Earlier, the layers were stacked as (Linear, ReLU, Linear, ReLU, Linear). I changed this to (Linear, ReLU, Linear, Linear); see the sketch after this list.
  • Changed max buffer size to 1000000
  • Changed the size of the batch_size to be sampled from 128 to 64
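
For reference, this is roughly what the revised critic looks like after the changes above (a sketch based on the description, using the MountainCarContinuous-v0 state/action dimensions):

import torch.nn as nn

state_dim, action_dim = 2, 1          # MountainCarContinuous-v0

critic = nn.Sequential(
    nn.Linear(state_dim + action_dim, 64),
    nn.ReLU(),
    nn.Linear(64, 64),                # no ReLU after this layer anymore
    nn.Linear(64, 1),
)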

This is the plot that I get now after training it for 75 episodes :

",38895,,,,,8/10/2020 9:17,,,,0,,,,CC BY-SA 4.0 22966,1,,,8/10/2020 12:13,,1,70,"

I was reading the book "Reinforcement Learning: An Introduction" by Sutton and Barto. In section 7.3, they write the formula for n-step off-policy TD as

$$V(S_t) = V(S_{t-1}) + \alpha \rho_{t:t+n-1}[G_{t:t+n} - V(S_{t-1})],$$

where $V(S_{t})$ is state value function of the state $S$ at time $t$ and $ G_{t:t+n} \doteq \sum_{i=t}^{t+n-1}\gamma^{i-t}R_{i+1} + \gamma^n V(S_{t+n})$ and $\rho_{t:t+n-1}$ is the importance sampling ratio.

I tried to prove this equation for $n = 1$ using the incremental update of the value function. Now I end up with this formula: $$V(S_t) = \frac{1}{t} \sum_{j=1}^{t} \rho_{j}G_{j} $$ $$V(S_t)= \frac{1}{t}(\rho_{t}G_{t} + \sum_{j=1}^{t-1}\rho_{j}G_{j}) $$ $$V(S_t) = \frac{1}{t}(\rho_t G_t + (t-1)V(S_{t-1}))$$ $$V(S_t)=V(S_{t-1}) + \frac{1}{t}(\rho_{t}G_{t} - V(S_{t-1}))$$ I know I'm wrong because this does not match with the above equation. But can anyone please show me where I am wrong?

",28048,,2444,,11/5/2020 22:39,11/5/2020 22:39,How can I derive n-step off-policy temporal difference formula?,,0,2,,,,CC BY-SA 4.0 22967,1,,,8/10/2020 13:04,,1,116,"

I am playing around with a stock trading agent trained via (deep) reinforcement learning, including memory replay. The agent is trained for 1000 episodes, where each episode consists of 180 timesteps (e.g. daily stock prices).

My question is concerning the sampling of episodes for training.

Assuming I've got daily stock prices going back 3 years, that's about 750 trading days/prices.

How should I sample this data set to get enough episodes for training?

With an episode length of 180 and an episode count of 1000, I'd need 180k "days" to choose from, if I wouldn't want any duplication.

Do I even need to sample 1000 non-overlapping windows from my dataset or can I sample my episodes using a sliding window approach? Could I even just randomly sample the dataset for episodes? For example, calculate a random date and build the episode from the 180 days following that random starting date?

The reward for each action is calculated as follows, p are the prices and t is the current timestep of the episode.

  • CASH: 0
  • BUY: p(t+1) - p(t) - fee
  • HOLD: p(t+1) - p(t)
",40202,,2444,,12/20/2021 21:55,12/20/2021 21:55,"Given the daily stock prices of the last 3 years, how should I sample the training data for episodic RL?",,0,10,,,,CC BY-SA 4.0 22969,1,23079,,8/10/2020 13:42,,1,418,"

I am trying to determine the complexity of the neural network we use. The neural network is a U-net generator with an input shape of NxN (not an image, but image-like data) and an output of the same shape. There is 7x downsampling and 7x upsampling. Downsampling is a simple convolutional layer, for which I have no problem determining the complexity, as stated here:

$$ O\left(\sum_{l=1}^{d} n_{l-1} \cdot s_{l}^{2} \cdot n_{l} \cdot m_{l}^{2}\right) $$

However, I cannot find what the big-O complexity is for the upsampling stage, where the UpSampling2D layer is used before the convolution.

Any idea what the time complexity of the upsampling convolutional layer is, or where I might find this information? Thanks in advance!

",40205,,2444,,8/10/2020 14:56,8/17/2020 7:51,What is the time complexity of the upsampling stage of the U-net?,,1,2,,,,CC BY-SA 4.0 22970,2,,22783,8/10/2020 14:38,,4,,"

The inequality \begin{align} \left\|T^{\pi} V-T^{\pi} U\right\|_{\infty} & \leq \gamma\|V-U\|_{\infty} \label{1}\tag{1}, \end{align} where $U$ and $V$ are two value functions, follows from the definition of Bellman policy operator (at slide 16)

\begin{align} T^{\pi} V(s) &\triangleq R(s, a)+\gamma \sum_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, a\right) V\left(s^{\prime}\right) \\ &=R(s, \pi(s))+\gamma \sum_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, \pi(s)\right) V\left(s^{\prime}\right), \; \forall s \in S \tag{2}\label{2}, \end{align} where $\triangleq$ means "defined as". Note the $\pi$ in the definition, hence the name Bellman policy operator (BPO), and note that the BPO holds for all $s$.

To prove (\ref{1}), first recall that

\begin{align} \left\|\mathbf {x} \right\|_{\infty } \triangleq \max _{i}\left|x_{i}\right| \label{3}\tag{3}. \end{align}

In the case of value functions $V$ and $U$, we have

\begin{align} \left\|V - U \right\|_{\infty } \triangleq \max_{s \in S}\left|V(s) - U(s) \right|. \label{4}\tag{4} \end{align}

Note also that $Pr$ is always non-negative (specifically, between $0$ and $1$).

Successively, we expand the left-hand side of (\ref{1}) by applying the definition (\ref{2}) and using the properties just mentioned

\begin{align} &\left\|T^{\pi} V-T^{\pi} U\right\|_{\infty} = \\ &\left\| \left( R(s, \pi(s))+\gamma \sum_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, \pi(s) \right) V\left(s^{\prime}\right) \right) - \\ \left( R(s, \pi(s))+\gamma \sum_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, \pi(s) \right) U\left(s^{\prime}\right) \right) \right\|_{\infty} =\\ &\max_{s \in S} \left| \left( R(s, \pi(s))+\gamma \sum_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, \pi(s) \right) V\left(s^{\prime}\right) \right) - \\ \left( R(s, \pi(s))+\gamma \sum_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, \pi(s) \right) U\left(s^{\prime}\right) \right) \right| = \\ & \max_{s \in S} \left| \gamma \sum_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, \pi(s) \right) V\left(s^{\prime}\right) - \gamma \sum_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, \pi(s) \right) U\left(s^{\prime}\right) \right| = \\ & \gamma \max_{s \in S} \left| \sum_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, \pi(s) \right) V\left(s^{\prime}\right) - \sum_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, \pi(s) \right) U\left(s^{\prime}\right) \right| = \\ & \gamma \max_{s \in S} \left| \sum_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, \pi(s)\right) \left ( V\left(s^{\prime}\right) - U\left(s^{\prime}\right) \right) \right| = \\ & \gamma \max_{s \in S} \sum_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, \pi(s)\right) \left| V\left(s^{\prime}\right) - U\left(s^{\prime}\right) \right| \\ & \leq \gamma \max_{s \in S} \sum_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, \pi(s)\right) \max_{x \in S }\left| V\left(x\right) - U\left(x\right) \right| \label{5}\tag{5} \\ & \leq \gamma \max _{x \in \mathcal{S}}\left|V\left(x\right)-U\left(x\right)\right| \label{6}\tag{6} \\ &= \gamma \| V - U \|_{_{\infty}} \label{7}\tag{7} \end{align}

Here are a few notes to help you understand this derivation

  • Equation \ref{7} is just the direct application of the definition of the $\infty$-norm in equation \ref{4}

  • The inequalities \ref{5} and \ref{6} come from the fact that $\mathbb{E}[f(x)] \leq \max_x f(x)$. When we take $\max_s$, we choose among all conditional distributions $p$ (which are conditioned on $s$), but the differences $\left| V\left(s^{\prime}\right) - U\left(s^{\prime}\right) \right|$ don't change in that process. So, no matter which $p$ we choose, i.e. no matter which distribution of the function $\left| V\left(s^{\prime}\right) - U\left(s^{\prime}\right) \right|$ we choose, we know that $\mathbb{E} \left[ \left| V\left(s^{\prime}\right) - U\left(s^{\prime}\right) \right| \right] \leq \max _{x \in \mathcal{S}}\left|V\left(x\right)-U\left(x\right)\right|$
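
As a quick numerical sanity check of the contraction property derived above (a small sketch with a random MDP and a fixed policy, not part of the original slides):

import numpy as np

rng = np.random.default_rng(0)
n_states, gamma = 5, 0.9

# Random MDP under a fixed policy pi: rewards R(s) and transition matrix P(s' | s).
R = rng.random(n_states)
P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)          # each row is a probability distribution

def bellman_policy_operator(V):
    # (T^pi V)(s) = R(s, pi(s)) + gamma * sum_{s'} Pr(s' | s, pi(s)) V(s')
    return R + gamma * P @ V

V, U = rng.random(n_states), rng.random(n_states)
lhs = np.max(np.abs(bellman_policy_operator(V) - bellman_policy_operator(U)))
rhs = gamma * np.max(np.abs(V - U))
print(lhs <= rhs)                          # True: T^pi is a gamma-contraction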

",2444,,2444,,1/22/2022 15:39,1/22/2022 15:39,,,,0,,,,CC BY-SA 4.0 22971,2,,22783,8/10/2020 14:59,,0,,"

I am assuming you are aware of the meaning of the notations. I will provide an informal explanation.

From your comment, I am guessing you have difficulty with this portion of the 1st equation:

\begin{align} {\scriptsize \max_{s} \gamma \sum_{s^{\prime}} \operatorname{Pr} \left( s^{\prime} \mid s, \pi(s) \right) \left| V\left(s^{\prime}\right) - U \left(s^{\prime}\right) \right| \\ \leq \gamma \left(\sum \operatorname{Pr} \left(s^{\prime} \mid s, \pi(s)\right)\right) \max _{s^{\prime}}\left|V\left(s^{\prime}\right)-U\left(s^{\prime}\right)\right| \\ \leq \gamma\|U-V\|_{\infty} } \end{align}

The first inequality arises simply from the fact that you are assigning a probability of $1$ to the successor state which has the maximum difference under the $2$ value functions. Previously, you were maximizing the entire equation with respect to a state $s$, and hence some probability gets assigned to low-value-difference states as well (i.e. states where $|U(s') - V(s')|$ is small compared to the largest value difference), whereas now you just pick the maximum difference of a successor state under the 2 value functions $V, U$ and assign the entire probability to it (i.e. $\sum_{s'}Pr(s'|s, \pi(s)) = 1$).

The second inequality is due to the fact that now, instead of selecting from the successor states, you select the maximum difference under the 2 value functions ($U(s), V(s)$) over the entire state space.

In the 2nd equation:

\begin{align} {\scriptsize \gamma \max _{s, a}\left|\sum_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, a\right)\left(V\left(s^{\prime}\right)-U\left(s^{\prime}\right)\right)\right| \\ \leq \gamma\left(\sum_{s^{\prime}} \operatorname{Pr}\left(s^{\prime} \mid s, a\right)\right) \max _{s^{\prime}}\left|\left(V\left(s^{\prime}\right)-U\left(s^{\prime}\right)\right)\right| \\ \leq \gamma\|V-U\|_{\infty} } \end{align}

The first inequality is again due to the same reasoning as above: you assign the entire probability to the successor state with the highest value difference (under $U, V$). The second inequality is also due to the same reasoning as in the 1st equation: you look for the maximum difference over the entire state space instead of just among the successor states.

NOTE: In general, the successor states can be the entire state space, with those unreachable from state $s$ having $Pr(s'|s) = 0$; in that case, the last inequality will become an equality in both equations.

",,user9947,,user9947,8/14/2020 11:06,8/14/2020 11:06,,,,0,,,,CC BY-SA 4.0 22973,2,,22963,8/10/2020 20:05,,1,,"

Why is this layer freezing required?

It's not.

What are the effects of layer freezing? The consequences are:

(1) Should be faster to train (the gradient will have far fewer components)

(2) Should require less data to train on

If you do unfreeze the weights, I'd think your performance would be better, because you are adjusting (i.e., fine-tuning) the parameters to your specific problem at hand. I am not sure what the marginal improvements are in practice, as I have not experimented much with fine-tuning (e.g., are the improvements typically a 0.01% reduction in error rate? Not sure.)
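
As a small sketch of how freezing is typically done in practice (using Keras here; the particular base model and input size are just placeholders):

import tensorflow as tf

# Pretrained convolutional base, used as a fixed feature extractor.
base_model = tf.keras.applications.MobileNetV2(include_top=False,
                                               input_shape=(160, 160, 3),
                                               pooling="avg")
base_model.trainable = False      # freeze: these weights are not updated during training

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.Dense(1, activation="sigmoid"),   # new head, trained from scratch
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# To fine-tune later, set base_model.trainable = True and recompile,
# usually with a much smaller learning rate.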

",40213,,,,,8/10/2020 20:05,,,,0,,,,CC BY-SA 4.0 22974,1,,,8/10/2020 23:36,,1,244,"

The definition I see for representational capacity is "the family of functions the learning algorithm can choose from when varying the parameters in order to reduce a training objective." (Goodfellow's Deep learning book).

However, to me this seems to be the same as the definition of the hypothesis space. Is the key difference the "in order to reduce a training objective" part, in that some functions may not be chosen when reducing a training objective? Or are these identical definitions?

",40216,,2444,,8/11/2020 1:23,8/11/2020 1:23,What is the representational capacity of a learning algorithm?,,1,1,,8/12/2020 23:17,,CC BY-SA 4.0 22975,2,,22974,8/11/2020 0:51,,0,,"

Learning algorithms (some others too, like search) aim at generating functions that get as close as possible to the "shape" of the training data (so we can then feed values to the generated functions and get outputs like, say, a prediction).

In 2D, the "shape" may be easy to visualize. If the data in 2D seems to line up, learning algorithms generating linear/affine functions (e.g. y = ax + b), should fit fairly well. Their representational capacity extends to lines. If the data seems to form a parabola, a representational capacity bound to lines will do poorly. We then need more "capable" representations, which can cope with the quadratic terms.

So this should be what the "family of functions" in Goodfellow's book refers to. The "training objective" should not be in the definition (to me), as a line-fitting solution is unable to represent, say, a parabola, whatever the training objective is. However, the book's definition may mean that a line can be fit to a parabola, strictly speaking, although it will do so very poorly.

The hypothesis space resembles the definition, perhaps (I have not checked it), but what matters is the "mindset". I would draw a parallel with prior and posterior distributions in Statistics, where we usually make some hypothesis on the shape of the distribution.

Concrete examples like linear regressions generate linear functions, as hinted by their names. Decision trees generate sets of linear functions, thus more complex and "capable". SVM generates functions depending on its kernel. The RBF kernel, for example, allows generating functions that can map quite complex, non-linear data "shapes". And arbitrary neural networks can map "arbitrary" data "shape" (easier written than actually done).

",169,,169,,8/11/2020 1:05,8/11/2020 1:05,,,,1,,,,CC BY-SA 4.0 22976,1,23844,,8/11/2020 5:16,,2,81,"

I have been doing a course which teaches you about Deep Neural Networks, during one of the exercises I was made to make an RNN for sentiment classification which I did, but I did not understand how an RNN is able to deal with sentences of different lengths while conducting sentiment classification.

",32636,,32636,,8/11/2020 9:14,10/30/2020 18:04,How do RNN's for sentiment classification deal with different sentence lengths?,,1,0,,,,CC BY-SA 4.0 22981,2,,3847,8/11/2020 10:25,,1,,"

The question is based on two concepts:

  1. First artificial intelligence (AI)
  2. The transistor is an intelligent device.

Let us talk about the first AI. Why transistors? The same definition of intelligence can be applied to vacuum tubes, and they definitely existed before transistors. So, no matter what definition of intelligence you decide on, transistors are not the first AI.

Now we come to the next part: what is artificial intelligence? Like intelligence, the definition of artificial intelligence has undergone changes in the last 6 decades.

If you use the definition loosely, almost any device can be intelligent, even an electric bulb.

",12211,,,,,8/11/2020 10:25,,,,0,,,,CC BY-SA 4.0 22985,1,22990,,8/11/2020 15:31,,1,130,"

I'm following the guide as outlined at this link: http://neuralnetworksanddeeplearning.com/chap2.html

For the purposes of this question, I've written a basic network with 2 hidden layers, one with 2 neurons and one with one neuron. For a very basic task, the network will learn how to compute an OR logic gate, so the training data will be:

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
Y = [0, 1, 1, 1]

And the diagram:

For this example, the weights and biases are:

w = [[0.3, 0.4], [0.1]]
b = [[1, 1], [1]]

The feedforward part was pretty easy to implement so I don't think I need to post that here. The tutorial I've been following summarises calculating the errors and the gradient descent algorithm with the following equations:

For each training example $x$, compute the output error $\delta^{x, L}$ where $L =$ Final layer (Layer 1 in this case). $\delta^{x, L} = \nabla_aC_x \circ \sigma'(z^{x, L})$ where $\nabla_aC_x$ is the differential of the cost function (basic MSE) with respect to the Layer 1 activation output, and $\sigma'(z^{x, L})$ is the derivative of the sigmoid function of the Layer 1 output i.e. $\sigma(z^{x, L})(1-\sigma(z^{x, L}))$.

That's all good so far and I can calculate that quite straightforwardly. Now for $l = L-1, L-2, ...$, the error for each previous layer can be calculated as

$\delta^{x, l} = ((w^{l+1})^T \delta^{x, l+1}) \circ \sigma'(z^{x, l})$

Which, again, is pretty straightforward to implement.

Finally, to update the weights (and bias), the equations are for $l = L, L-1, ...$:

$w^l \rightarrow w^l - \frac{\eta}{m}\sum_x\delta^{x,l}(a^{x, l-1})^T$

$b^l \rightarrow b^l - \frac{\eta}{m}\sum_x\delta^{x,l}$

What I don't understand is how this works with vectors of different numbers of elements (I think the lack of vector notation here confuses me).

For example, Layer 1 has one neuron, so $\delta^{x, 1}$ will be a scalar value since it only outputs one value. However, $a^{x, 0}$ is a vector with two elements since layer 0 has two neurons. Which means that $\delta^{x, l}(a^{x, l-1})^T$ will be a vector even if I sum over all training samples $x$. What am I supposed to do here? Am I just supposed to sum the components of the vector as well?

Hopefully my question makes sense; I feel I'm very close to implementing this entirely and I'm just stuck here.

Thank you

[edit] Okay, so I realised that I've been misrepresenting the weights of the neurons and have corrected for that.

weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])]

Which has the output

[array([[0.27660583, 1.00106314],
        [0.34017727, 0.74990392]]),
 array([[ 1.095244  , -0.22719165]])]

Which means that layer0 has a weight matrix with shape 2x2 representing the 2 weights on neuron01 and the 2 weights on neuron02.

My understanding then is that $\delta^{x,l}$ has the same shape as the weights array because each weight gets updated independently. That's also fine.

But the bias term (according to the link I sourced) has 1 term for each neuron, which means layer 0 will have two bias terms (b00 and b01) and layer 1 has one bias term (b10).

However, to calculate the update for the bias terms, you sum the deltas over x i.e $\sum_x \delta^{x, l}$; if delta has the size of the weight matrix, then there are too many terms to update the bias terms. What have I missed here?

Many thanks

",40230,,40230,,8/12/2020 11:42,8/12/2020 15:00,"Implementing Gradient Descent Algorithm in Python, bit confused regarding equations",,1,0,,,,CC BY-SA 4.0 22986,1,23062,,8/11/2020 15:42,,6,784,"

I am implementing OpenAI gym's cartpole problem using Deep Q-Learning (DQN). I followed tutorials (video and otherwise) and learned all about it. I implemented the code myself and thought it should work, but the agent is not learning. I would really appreciate it if someone could pinpoint where I am going wrong.

Note that I already have a target neural network and a policy network. The code is below.

import numpy as np
import gym
import random
from keras.optimizers import Adam
from keras.models import Sequential
from keras.layers import Dense
from collections import deque

env = gym.make('CartPole-v0')

EPISODES = 2000
BATCH_SIZE = 32
DISCOUNT = 0.95
UPDATE_TARGET_EVERY = 5
STATE_SIZE = env.observation_space.shape[0]
ACTION_SIZE = env.action_space.n
SHOW_EVERY = 50

class DQNAgents:
    
    def __init__(self, state_size, action_size):
        self.state_size = state_size
        self.action_size = action_size
        self.replay_memory = deque(maxlen = 2000)
        self.gamma = 0.95
        self.epsilon = 1
        self.epsilon_decay = 0.995
        self.epsilon_min = 0.01
        self.model = self._build_model()
        self.target_model = self.model
        
        self.target_update_counter = 0
        print('Initialize the agent')
        
    def _build_model(self):
        model = Sequential()
        model.add(Dense(20, input_dim = self.state_size, activation = 'relu'))
        model.add(Dense(10, activation = 'relu'))
        model.add(Dense(self.action_size, activation = 'linear'))
        model.compile(loss = 'mse', optimizer = Adam(lr = 0.001))
        
        return model

    def update_replay_memory(self, current_state, action, reward, next_state, done):
        self.replay_memory.append((current_state, action, reward, next_state, done))
        
    def train(self, terminal_state):
        
        # Sample from replay memory
        minibatch = random.sample(self.replay_memory, BATCH_SIZE)
        
        #Picks the current states from the randomly selected minibatch
        current_states = np.array([t[0] for t in minibatch])
        current_qs_list= self.model.predict(current_states) #gives the Q value for the policy network
        new_state = np.array([t[3] for t in minibatch])
        future_qs_list = self.target_model.predict(new_state)
        
        X = []
        Y = []
        
        # This loop will run 32 times (actually minibatch times)
        for index, (current_state, action, reward, next_state, done) in enumerate(minibatch):
            
            if not done:
                new_q = reward + DISCOUNT * np.max(future_qs_list)
            else:
                new_q = reward
                
            # Update Q value for given state
            current_qs = current_qs_list[index]
            current_qs[action] = new_q
            
            X.append(current_state)
            Y.append(current_qs)
        
        # Fitting the weights, i.e. reducing the loss using gradient descent
        self.model.fit(np.array(X), np.array(Y), batch_size = BATCH_SIZE, verbose = 0, shuffle = False)
        
       # Update target network counter every episode
        if terminal_state:
            self.target_update_counter += 1
            
        # If counter reaches set value, update target network with weights of main network
        if self.target_update_counter > UPDATE_TARGET_EVERY:
            self.target_model.set_weights(self.model.get_weights())
            self.target_update_counter = 0
    
    def get_qs(self, state):
        return self.model.predict(np.array(state).reshape(-1, *state.shape))[0]
            

''' We start here'''

agent = DQNAgents(STATE_SIZE, ACTION_SIZE)

for e in range(EPISODES):
    
    done = False
    current_state = env.reset()
    time = 0 
    total_reward = 0
    while not done:
        if np.random.random() > agent.epsilon:
            action = np.argmax(agent.get_qs(current_state))
        else:
            action = env.action_space.sample()
        
        next_state, reward, done, _ = env.step(action)

        agent.update_replay_memory(current_state, action, reward, next_state, done)
        
        if len(agent.replay_memory) < BATCH_SIZE:
            pass
        else:
            agent.train(done)
            
        time+=1    
        current_state = next_state
        total_reward += reward
        
    print(f'episode : {e}, steps {time}, epsilon : {agent.epsilon}')
    
    if agent.epsilon > agent.epsilon_min:
        agent.epsilon *= agent.epsilon_decay

Results for the first ~50 episodes are below (look at the number of steps; it should increase and eventually reach the maximum of 199).

episode : 0, steps 14, epsilon : 1
episode : 1, steps 13, epsilon : 0.995
episode : 2, steps 17, epsilon : 0.990025
episode : 3, steps 12, epsilon : 0.985074875
episode : 4, steps 29, epsilon : 0.9801495006250001
episode : 5, steps 14, epsilon : 0.9752487531218751
episode : 6, steps 11, epsilon : 0.9703725093562657
episode : 7, steps 13, epsilon : 0.9655206468094844
episode : 8, steps 11, epsilon : 0.960693043575437
episode : 9, steps 14, epsilon : 0.9558895783575597
episode : 10, steps 39, epsilon : 0.9511101304657719
episode : 11, steps 14, epsilon : 0.946354579813443
episode : 12, steps 19, epsilon : 0.9416228069143757
episode : 13, steps 16, epsilon : 0.9369146928798039
episode : 14, steps 14, epsilon : 0.9322301194154049
episode : 15, steps 18, epsilon : 0.9275689688183278
episode : 16, steps 31, epsilon : 0.9229311239742362
episode : 17, steps 14, epsilon : 0.918316468354365
episode : 18, steps 21, epsilon : 0.9137248860125932
episode : 19, steps 9, epsilon : 0.9091562615825302
episode : 20, steps 26, epsilon : 0.9046104802746175
episode : 21, steps 20, epsilon : 0.9000874278732445
episode : 22, steps 53, epsilon : 0.8955869907338783
episode : 23, steps 24, epsilon : 0.8911090557802088
episode : 24, steps 14, epsilon : 0.8866535105013078
episode : 25, steps 40, epsilon : 0.8822202429488013
episode : 26, steps 10, epsilon : 0.8778091417340573
episode : 27, steps 60, epsilon : 0.8734200960253871
episode : 28, steps 17, epsilon : 0.8690529955452602
episode : 29, steps 11, epsilon : 0.8647077305675338
episode : 30, steps 42, epsilon : 0.8603841919146962
episode : 31, steps 16, epsilon : 0.8560822709551227
episode : 32, steps 12, epsilon : 0.851801859600347
episode : 33, steps 12, epsilon : 0.8475428503023453
episode : 34, steps 10, epsilon : 0.8433051360508336
episode : 35, steps 30, epsilon : 0.8390886103705794
episode : 36, steps 21, epsilon : 0.8348931673187264
episode : 37, steps 24, epsilon : 0.8307187014821328
episode : 38, steps 33, epsilon : 0.8265651079747222
episode : 39, steps 32, epsilon : 0.8224322824348486
episode : 40, steps 15, epsilon : 0.8183201210226743
episode : 41, steps 20, epsilon : 0.8142285204175609
episode : 42, steps 37, epsilon : 0.810157377815473
episode : 43, steps 11, epsilon : 0.8061065909263957
episode : 44, steps 30, epsilon : 0.8020760579717637
episode : 45, steps 11, epsilon : 0.798065677681905
episode : 46, steps 34, epsilon : 0.7940753492934954
episode : 47, steps 12, epsilon : 0.7901049725470279
episode : 48, steps 26, epsilon : 0.7861544476842928
episode : 49, steps 19, epsilon : 0.7822236754458713
episode : 50, steps 20, epsilon : 0.778312557068642
",36710,,,,,8/16/2020 4:37,My Deep Q-Learning Network does not learn for OpenAI gym's cartpole problem,,2,2,,,,CC BY-SA 4.0 22990,2,,22985,8/12/2020 0:04,,0,,"

There seems to be a mismatch between the weights you provide and your network diagram. Since w[0] (the yellow connections) is meant to transform $ x \in \mathbb{R}^2 $ into the layer 0 activations, which are in $ \mathbb{R}^2 $, w[0] should be a matrix $ \in \mathbb{R}^{2 \times 2} $, not a vector in $\mathbb{R}^2 $ as you have. Likewise, your w[1] (the red connections) should be a vector $ \in \mathbb{R}^2 $ and not a scalar. Finally, if you are indeed scaling the output of layer 1 (the blue connection), then you'll need an additional scalar value. However, the blue connection confuses me a bit, as usually the activated output is used directly in the loss function, not a scaled version of it. Unless the blue connection stands for the loss function.

In short, I believe if you change the shapes of your weight matrices to actually represent your network diagram, your update equations will work. I'll go through the network below to make sure I illustrate my point.

$ x \in \mathbb{R}^{2} $, an input example

$ W^0 \in \mathbb{R}^{2 \times 2} $, the yellow connections

$ W^1 \in \mathbb{R}^2 $, the red connections

$ z^0 = xW^0 \in \mathbb{R}^{2} $, the weighted inputs to the layer 0 nodes. The dimensions of this should match the number of nodes at layer 0.

$ a^0 = \sigma(z^0) \in \mathbb{R}^{2} $, the output of the layer 0 nodes. The dimensions of this should match the number of nodes at layer 0.

$ z^1 = a^0 W^1 \in \mathbb{R} $, the weighted inputs to the layer 1 nodes. The dimensions of this should match the number of nodes at layer 1.

$ a^1 = \sigma(z^1) \in \mathbb{R} $, the output of the layer 1 nodes and thus the output of the network. The dimensions of this should match the number of nodes at layer 1.
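To make the shape bookkeeping concrete, here is a minimal NumPy sketch of the forward pass described above (the weight values are just random placeholders, and $x$ is treated as a row vector):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.random.randn(1, 2)    # one input example in R^2, as a row vector
W0 = np.random.randn(2, 2)   # the yellow connections
W1 = np.random.randn(2, 1)   # the red connections (a vector in R^2)

z0 = x @ W0                  # (1, 2) weighted inputs to layer 0
a0 = sigmoid(z0)             # (1, 2) layer 0 activations
z1 = a0 @ W1                 # (1, 1) weighted input to layer 1
a1 = sigmoid(z1)             # (1, 1) network output

print(z0.shape, a0.shape, z1.shape, a1.shape)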

Weight Updates

As you say before your edit, $\delta^1$, as the product of two scalars $\nabla_a C$ and $\sigma'(z^1)$, is also a scalar. Since $a^0$ is a vector in $\mathbb{R}^2$, then $\delta^1(a^0)^T$ is also a vector in $\mathbb{R}^2$. This matches what we expect, as it should match the dimensions of $W^1$ to allow the element-wise subtraction in the weight update equation.

NB. It is not the case, as you say in your edit, that the shape of $\delta^l$ should match the shape of $W^l$. It should instead match the number of nodes, and it is the shape of $\delta^l(a^{l-1})^T$ that should match the shape of $W^l$. You had this right in your original post.

Bias Updates

This brings us to the bias updates. There should be one bias term per node in a given layer, so the shapes of your biases are correct (i.e. $\mathbb{R}^2$ for layer 0 and $\mathbb{R}$ for layer 1). Now, we saw above that the shape of $\delta^l$ also matches the number of nodes in layer $l$, so again the element-wise subtraction in your original bias update equation works.

I also tried using this book to learn backprop, but I had a hard time connecting the variables with the different parts of the network and the corresponding code. I finally understood the algorithm in depth only after deriving all the update equations by hand for a very small network (2 inputs, one output, no hidden layers) and working my way up to larger networks, making sure to keep track of the shapes of the inputs and outputs along the way. If you're having trouble with the update equations I highly recommend this.

A final piece of advice that helped me: drop the $x$ and the summations over input examples from your formulations and just treat everything as matrices (e.g. a scalar becomes a matrix in $\mathbb{R}^{1 \times 1}$, $X$ is a matrix in $\mathbb{R}^{N \times D}$). First, this allows you to better interpret matrix orientations and debug issues such as a missing transpose operation. Second, this is (in my limited understanding) how backprop should actually be implemented in order to take advantage of optimized linalg libraries and GPUs, so it's perhaps a bit more relevant.

",37972,,37972,,8/12/2020 15:00,8/12/2020 15:00,,,,5,,,,CC BY-SA 4.0 22991,2,,22959,8/12/2020 1:08,,1,,"

This is known as reward hacking in the literature; see, e.g., https://medium.com/@deepmindsafetyresearch/designing-agent-incentives-to-avoid-reward-tampering-4380c1bb6cd for discussion and further links.

",17430,,,,,8/12/2020 1:08,,,,1,,,,CC BY-SA 4.0 22992,2,,18290,8/12/2020 4:52,,0,,"

The term "intentionality" has two quite different senses. One is a very technical concept in philosophy of AI and means (roughly) aboutness in the sense that beliefs, desires, fears, etc., are about things (snakes, tax, chocolate). That's the sense central to searle's Chinese room argument. The other sense is the common idea of intending to do (or not do) something. A really key issue for AI according to Searle is how can a computer have intentionality in the first sense.

",17709,,,,,8/12/2020 4:52,,,,0,,,,CC BY-SA 4.0 22993,1,22995,,8/12/2020 12:39,,9,1627,"

I'm new to the artificial intelligence field. In our first chapters, there is one topic called "problem-solving by searching". After searching for it on the internet, I found the depth-first search algorithm. The algorithm is easy to understand, but no one explains why this algorithm is included in the study of artificial intelligence.

Where do we use it? What makes it an artificial intelligence algorithm? Is every search algorithm an AI algorithm?

",40245,,2444,,8/12/2020 13:59,8/12/2020 23:20,Why is depth-first search an artificial intelligence algorithm?,,2,2,,,,CC BY-SA 4.0 22994,1,22996,,8/12/2020 12:53,,2,135,"

I'm slightly confused about the experience replay process. I understand why we use batch processing in reinforcement learning, and from my understanding, a batch of states is input into the neural network model.

Suppose there are 2 valid moves in the action space (UP or DOWN)

Suppose the batch size is 5, and the 5 states are this:

$$[s_1, s_2, s_3, s_4, s_5]$$

We put this batch into the neural network model and output Q values. Then we put $[s_1', s_2', s_3', s_4', s_5']$ into a target network.

What I'm confused about is this:

Each state in $[s_1, s_2, s_3, s_4, s_5]$ is different.

Are we computing Q values for UP and DOWN for ALL 5 states after they go through the neural network?

For example, $$[Q_{s_1}(\text{UP}), Q_{s_1}(\text{DOWN})], \\ [Q_{s_2} (\text{UP}), Q_{s_2}(\text{DOWN})], \\ [Q_{s_3}(\text{UP}), Q_{s_3}(\text{DOWN})], \\ [Q_{s_4}(\text{UP}), Q_{s_4}(\text{DOWN})], \\ [Q_{s_5}(\text{UP}), Q_{s_5}(\text{DOWN})]$$

",26159,,2444,,8/12/2020 21:07,8/12/2020 21:07,"When using experience replay in reinforcement learning, which state is used for training?",,1,0,,,,CC BY-SA 4.0 22995,2,,22993,8/12/2020 13:13,,15,,"

This is fundamentally a philosophical question. What makes AI AI? But first things first, why would DFS be considered an AI algorithm?

In its most basic form, DFS is a very general algorithm that is applied to wildly different categories of problems: topological sorting, finding all the connected components in a graph, etc. It may be also used for searching. For instance, you could use DFS for finding a path in a 2D maze (although not necessarily the shortest one). Or you could use it to navigate through more abstract state spaces (e.g. between configuration of chess or in the towers of Hanoi). And this is where the connection to AI arises. DFS can be used on its own for navigating such spaces, or as a basic subroutine for more complex algorithms. I believe that in the book Artificial Intelligence: A Modern Approach (which you may be reading at the moment) they introduce DFS and Breadth-First Search this way, as a first milestone before reaching more complex algorithms like A*.

Now, you may be wondering why such search algorithms should be considered AI. Here, I'm speculating, but maybe the source of the confusion comes from the fact that DFS does not learn anything. This is a common misconception among new AI practitioners. Not every AI technique has to revolve around learning. In other words, AI != Machine Learning. ML is one of the many subfields within AI. In fact, early AI (around the 50s-60s) was more about logic reasoning than it was about learning.

AI is about making an artificial system behave "intelligently" in a given setting, whatever it takes to reach that intelligent behavior. If what it takes is applying well-known algorithms from computer science like DFS, then so be it. Now, what is it that intelligent means? This is where we enter more philosophical grounds. My interpretation is that "intelligence" is a broad term to define the large set of techniques that we use to approach the immense complexity that reality and certain puzzle-like problems have to offer. Often, "intelligent behavior" revolves around heuristics and proxy methods away from the perfect, provable algorithms that work elsewhere in computer science. While certain algorithms (like DFS or A*) may be proven to give optimal answers if infinitely many resources can be devoted to the task at hand, only in sufficiently constrained settings would such techniques be affordable. Fortunately, we can make them work in many situations (like A* for chess or for robot navigation, or Monte Carlo Tree Search for Go), but only if reasonable assumptions and constraints over the state space are imposed. For all the rest is where learning techniques (like Markov Random Fields for image segmentation, or Neural Nets paired with Reinforcement Learning for situated agents) may come handy.

Funny enough, even if intelligence is often regarded as a good thing, my interpretation can be summed up as imperfect modes of behavior to address immensely complex problems for which no known perfect solution exists (with rare exceptions in sufficiently bounded problems). If we had a huge table that, for each chess position, gives the best possible move you can make, and put that table inside a program, would this program be intelligent? Maybe you'd think so, but in any case it seems more arguable than a program that makes real-time reasoning and spits a decision after some reasonable time, even if it's not the best one. Similarly, do you consider sorting algorithms intelligent? Again, the answer is arguable, but the fact is that algorithms exist with optimal time and memory complexities, we know that we can't do better than what those algorithms do, and we do not have to resort to any heuristic or any learning to do better (disclaimer: I haven't actually checked if there's some madman out in the wild applying learning to solve sorting with better average times).

",37359,,37359,,8/12/2020 15:33,8/12/2020 15:33,,,,5,,,,CC BY-SA 4.0 22996,2,,22994,8/12/2020 15:05,,1,,"

The way the states are used is as follows:

Typically your $Q$-network will take a state as input and output scores over the action space, i.e. $Q : \mathcal{S} \rightarrow \mathbb{R}^{|\mathcal{A}|}$. So, in your replay buffer you should store $s_t, a_t, r_{t+1}, s_{t+1}, \mbox{done}$ (note that done just represents whether the episode ended on this transition; I add it for completeness).

Now, when you are doing your batch updates you sample uniformly at random from this replay buffer. This means you get $B$ tuples of $s_t, a_t, r_{t+1}, s_{t+1}, \mbox{done}$. Now, I will assume $B=1$ as it is easier to explain and the extension to $B > 1$ should be easy to see.

For our state-action tuple $s_t, a_t$ we want to shift what the network predicts for this pair to be closer to $r_{t+1} + \gamma \max_a Q(s_{t+1},a)$. However, our neural network only takes the state as input, and outputs a vector of scores for each action. That means we want to shift the output of our network for the state $s_t$ towards the target I just mentioned, but only for the action $a_t$ that we took. To do this we just calculate the target, i.e. we calculate $r_{t+1} + \gamma \max_a Q(s_{t+1},a)$, and then we do gradient descent like we would for a normal neural network, where the target vector is the same as the predicted vector everywhere except the $a_t$th element, which we will change to $r_{t+1} + \gamma \max_a Q(s_{t+1},a)$. This way, our network moves closer to our Q-learning update for only the action we want, in line with how Q-learning works.
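As an illustration, here is a minimal PyTorch sketch of this batch update; the names and shapes are only for illustration, and q_net and target_net are assumed to map a batch of states to a vector of action scores. Instead of building the full target vector, it uses gather so that only the $a_t$-th entry contributes to the loss, which is equivalent:

import torch
import torch.nn.functional as F

def dqn_update(q_net, target_net, optimizer, states, actions, rewards, next_states, dones, gamma=0.99):
    # Q(s_t, a_t) for the actions that were actually taken
    q_pred = q_net(states).gather(1, actions.long().unsqueeze(1)).squeeze(1)

    # Target: r_{t+1} + gamma * max_a Q_target(s_{t+1}, a), with no bootstrap on terminal steps
    with torch.no_grad():
        q_next = target_net(next_states).max(dim=1).values
        q_target = rewards + gamma * (1 - dones.float()) * q_next

    loss = F.mse_loss(q_pred, q_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()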

It is also worth noting that you can parameterise your Neural Network to be a function $Q: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$, which would make training more in line with how tabular Q-learning works, but this is seldom used in practice as it becomes much more expensive to compute (you have to do a forward pass for each action, rather than one forward pass per state).

",36821,,36821,,8/12/2020 15:53,8/12/2020 15:53,,,,8,,,,CC BY-SA 4.0 22997,1,,,8/12/2020 15:52,,4,209,"

In Convolutional Neural Networks, do all filters of the same convolutional layer need to have the same dimensions and stride?

If they don't, then it would seem the channel produced by each filter would have different sizes. Or is there some way to get around that?

",40250,,18758,,1/16/2022 8:57,1/16/2022 8:57,Do all filters of the same convolutional layer need to have the same dimensions and stride?,,1,1,,,,CC BY-SA 4.0 22999,1,23004,,8/12/2020 18:15,,0,485,"

I am trying to re-implement the SDNE algorithm for graph embedding by PyTorch.

I get stuck at some issues about evaluation metric Precision@K.

precision@k is a metric which gives equal weight to the returned instance. It is defined as follows

$$precision@k(i) = \frac{\left| \, \{ j \, | \, i, j \in V, index(j) \le k, \Delta_i(j) = 1 \} \, \right|}{k}$$

where $V$ is the vertex set, $index(j)$ is the ranked index of the $j$-th vertex and $\Delta_i(j) = 1$ indicates that $v_i$ and $v_j$ have a link.

I don't understand what "ranked index of the $j$-th vertex" means.

Beside, I am also confused about the MAP metric in section 4.3. I don't understand how to calculate it.

Mean Average Precision (MAP) is a metric with good discrimination and stability. Compared with precision@k, it is more concerned with the performance of the returned items ranked ahead. It is calculated as follows: $$AP(i) = \frac{\sum_j precision@j(i) \cdot \Delta_i(j)}{\left| \{ \Delta_i(j) = 1 \} \right|}$$ $$MAP = \frac{\sum_{i \in Q} AP(i)}{|Q|}$$ where $Q$ is the query set.

If anyone is familiar with these metrics, could you help me to explain them?

",25645,,40178,,8/13/2020 3:27,5/16/2021 15:12,What is Precision@K for link prediction in graph embedding meaning?,,2,0,,,,CC BY-SA 4.0 23001,1,,,8/12/2020 19:11,,1,195,"

I understand we use a target network because it helps resolve issues regarding stability, however, that's not what I'm here to ask.

What I would like to understand is why a target network is used as a measure of ground truth as opposed to the expectation equation.

To clarify, here is what I mean. This is the process used for DQN:

  1. In DQN, we begin with a state $S$
  2. We then pass this state through a neural network which outputs Q values for each action in the action space
  3. A policy e.g. epsilon-greedy is used to take an action
  4. This subsequently produces the next state $S_{t+1}$
  5. $S_{t+1}$ is then passed through a target neural network to produce target Q values
  6. These target Q values are then injected into the Bellman equation which ultimately produces a target Q value via the Q-learning update rule equation
  7. MSE is used on 6 and 2 to compute the loss
  8. This is then back-propagated to update the parameters for the neural network in 2
  9. The target neural network has its parameters updated every X epochs to match the parameters in 2

Why do we use a target neural network to output Q values instead of using statistics? Statistics seems like a more accurate way to represent this. By statistics, I mean this:

Q values are the expected return, given the state and action under policy π.

$$Q(S_{t+1},a) = V^\pi(S_{t+1}) = \mathbb{E}(r_{t+1}+ \gamma r_{t+2}+ \gamma^2 r_{t+3} + \dots \mid S_{t+1}) = \mathbb{E}\left(\sum_k \gamma^k r_{t+k+1}\mid S_{t+1}\right)$$

We can then take the above and inject it into the Bellman equation to update our target Q value:

$$Q(S_{t},a_t) + \alpha\,\big(r_t+\gamma \max_a Q(S_{t+1},a)-Q(S_{t},a_t)\big)$$

So, why don't we set the target to the sum of discounted returns? Surely a target network is very inaccurate, especially since the parameters in the first few epochs for the target network are completely random.

",26159,,26159,,8/12/2020 21:03,8/13/2020 8:52,Why are Target Networks used in Deep Q-Learning as opposed to the Expected Value equation?,,1,0,,,,CC BY-SA 4.0 23003,1,23014,,8/12/2020 22:01,,2,235,"

I'm aware that we back-propagate after computing the loss between:

The Neural Network Q values and the Target Network Q values

However, all this is doing is updating the parameters of the Neural Network to produce an output that matches the Target Q values as close as possible.

Suppose one epoch is run and the reward is +10; surely we need to update the parameters using this too, to tell the network to push up the probability of these actions, given these parameters.

How does the algorithm know +10 is good? Suppose the reward range is -10 for loss and +10 for win.

",26159,,26159,,8/13/2020 13:30,8/13/2020 13:30,"In DQN, when do the parameters in the Neural Network update based on the reward received?",,1,0,,,,CC BY-SA 4.0 23004,2,,22999,8/12/2020 22:44,,0,,"

These measures are used for evaluating how "good" an embedding of a graph is or how "good" the graph reconstructed from the embedding resembles the original.

Given the embedding and vertex $i$, it seems to be that the rank of the vertices is dependent on the probability of there being a link between vertex $i$ and vertex $j$ in the original graph. If there is a higher probability of there being a link between $i$ and $j$ in the original graph, $j$ has a lower rank.

In other words, $precision@k(i)$ is the proportion of vertices $j$ that vertex $i$ has a link to in the original graph out of the $k$ vertices for which vertex $i$ has the highest probability of having a link to, recovered from the embedding.

This matches up with the common definition of $precision@n$ used in evaluating information/document retrieval, defined as the proportion of relevant documents out of the $n$ best retrieved documents.

The average precision of a vertex, $AP(i)$, is the average of $precision@j$ over all $j$ such that there is a link between vertex $i$ and vertex $j$. Perhaps a more clear definition would have been $$AP(i) = \frac{\sum_{j \in S_i} precision@j(i)}{\left| S_i \right|}$$

where $S_i = \{j \, |\, \Delta_i(j) = 1 \}$, the set of all $j$ such that there is a link from $i$ to $j$.

$MAP$ for a query set $Q$ is then the mean of the average precision ($AP$) over all vertices in $Q$.
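For illustration, here is a small NumPy sketch of how these quantities can be computed for a single vertex $i$, assuming scores_i holds the link scores reconstructed from the embedding and links_i is the binary ground truth $\Delta_i$ (self-links are ignored here for simplicity; the names are just illustrative):

import numpy as np

def precision_at_k(scores_i, links_i, k):
    ranked = np.argsort(-scores_i)        # vertex indices sorted by decreasing score
    return links_i[ranked[:k]].mean()     # fraction of the top-k that are true links

def average_precision(scores_i, links_i):
    ranked = np.argsort(-scores_i)
    hits, ap = 0, 0.0
    for rank, j in enumerate(ranked, start=1):
        if links_i[j] == 1:
            hits += 1
            ap += hits / rank             # precision@rank at every true link
    return ap / max(links_i.sum(), 1)

# MAP over a query set Q is then just np.mean([average_precision(scores_i, links_i) for i in Q]).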

",40178,,40178,,8/19/2020 20:49,8/19/2020 20:49,,,,9,,,,CC BY-SA 4.0 23005,2,,7683,8/12/2020 23:20,,2,,"

A hypothesis space/class is the set of functions that the learning algorithm considers when picking one function to minimize some risk/loss functional.

The capacity of a hypothesis space is a number or bound that quantifies the size (or richness) of the hypothesis space, i.e. the number (and type) of functions that can be represented by the hypothesis space. So a hypothesis space has a capacity. The two most famous measures of capacity are VC dimension and Rademacher complexity.

In other words, the hypothesis class is the object and the capacity is a property (that can be measured or quantified) of this object, but there is not a big difference between hypothesis class and its capacity, in the sense that a hypothesis class naturally defines a capacity, but two (different) hypothesis classes could have the same capacity.

Note that representational capacity (not capacity, which is common!) is not a standard term in computational learning theory, while hypothesis space/class is commonly used. For example, this famous book on machine learning and learning theory uses the term hypothesis class in many places, but it never uses the term representational capacity.

Your book's definition of representational capacity is bad, in my opinion, if representational capacity is supposed to be a synonym for capacity, given that that definition also coincides with the definition of hypothesis class, so your confusion is understandable.

",2444,,2444,,8/14/2020 11:34,8/14/2020 11:34,,,,1,,,,CC BY-SA 4.0 23006,2,,22993,8/12/2020 23:20,,1,,"

DFS on its own would not typically be considered AI imo. It is a standard computer science deterministic algorithm. Instead an intelligent agent might use DFS to inform its decision making as part of an AI package.

",33059,,,,,8/12/2020 23:20,,,,2,,,,CC BY-SA 4.0 23009,2,,7683,8/13/2020 3:57,,2,,"

A hypothesis space is defined as the set of functions $\mathcal H$ that can be chosen by a learning algorithm to minimize loss (in general).

$$\mathcal H = \{h_1, h_2,....h_n\}$$

The hypothesis class can be finite or infinite. For example, a discrete set of shapes to encircle a certain portion of the input space is a finite hypothesis space, whereas the hypothesis spaces of parametrized functions like neural nets and linear regressors are infinite.

Although the term representational capacity is not in vogue, a rough definition would be: the representational capacity of a model is the ability of its hypothesis space to approximate a complex function with 0 error; such a function can only be approximated by hypothesis spaces whose representational capacity is equal to or exceeds the representational capacity required to approximate it.

The most popular measure of representational capacity is the VC dimension of a model. For a finite hypothesis class, an upper bound for the VC dimension ($d$) of a model is: $$d \leq \log_2| \mathcal H|$$ where $|\mathcal H|$ is the cardinality of the hypothesis space.
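For instance, for a finite hypothesis class with $|\mathcal H| = 2^{10} = 1024$ hypotheses, this bound gives $$d \leq \log_2 2^{10} = 10.$$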

",,user9947,,,,8/13/2020 3:57,,,,0,,,,CC BY-SA 4.0 23011,1,,,8/13/2020 6:55,,1,31,"

I am a newbie to Reinforcement Learning. This is my idea: the agent (food provider) has to select a food based on the environment (i.e. based on the user profile). Here, the reward will be given to the agent based on the user's feedback. This is for a single person; what if I wanted to make it work for multiple people? I want the system to learn on its own and create a policy such that it can identify certain groups of people based on their profiles and what type of food will be suitable for them.

  • Is this possible to implement in Reinforcement learning?
  • If so what type of problem is this and what type of solution I can use to solve this.
",40262,,,,,8/13/2020 6:55,Customized food for persons based on their profile using Reinforcement learning,,0,2,,,,CC BY-SA 4.0 23012,1,23013,,8/13/2020 7:05,,4,1748,"

From what I understand, if the rewards are sparse the agent will have to explore more to get rewards and learn the optimal policy, whereas if the rewards are dense in time, the agent is quickly guided towards its learning goal.

Are the above thoughts correct, and are there any other pros and cons of the two contrasting settings? On a side-note, I feel that the inability to specify rewards that are dense in time is what makes imitation learning useful.

",35585,,2444,,11/2/2020 21:42,11/2/2020 21:42,What are the pros and cons of sparse and dense rewards in reinforcement learning?,,1,0,,,,CC BY-SA 4.0 23013,2,,23012,8/13/2020 7:34,,3,,"

What are the pros and cons of sparse and dense rewards in reinforcement learning?

It is unusual to refer to this difference as "pros and cons" because that term is often used to make comparisons between different choices. Assuming you have a specific problem to solve, then whether or not the rewards are naturally sparse or dense is not a choice. You cannot say "I want to solve MountainCar, I will use a dense reward setting", because MountainCar has (relatively, for a starting problem) sparse rewards. You can only say "I won't attempt MountainCar, it is too difficult".

In short however, your assessment is correct:

if the rewards are sparse the agent will have to explore more to get rewards and learn the optimal policy, whereas if the rewards are dense in time, the agent is quickly guided towards its learning goal

There is not really any other difference at the top level. Essentially, sparser rewards make for a harder problem to solve. All RL algorithms can cope with sparse rewards to some degree, the whole concept of returns and value backup is designed to deal with sparseness at a theoretical level. In practical terms however, it may take some algorithms an unreasonable amount of time to determine a good policy beyond certain levels of sparseness.

On a side-note, I feel that the inability to specify rewards that are dense in time is what makes imitation learning useful.

Imitation learning is one of many techniques available to work around or deal with problems that have sparse reward structure. Others include:

  • Reward shaping, which attempts to convert a sparse reward scheme to a dense one using domain knowledge of the researcher (a minimal example is sketched after this list).

  • Eligibility traces, which back up individual TD errors across multiple time steps.

  • Prioritised sweeping, which focuses updates on "surprising" reward data.

  • Action selection planning algorithms that look ahead from the current state.

  • "Curiousity" driven reinforcement learning that guides exploration to new state spaces independently of any reward signal.

",1847,,,,,8/13/2020 7:34,,,,2,,,,CC BY-SA 4.0 23014,2,,23003,8/13/2020 7:47,,1,,"

However, all this is doing is updating the parameters of the Neural Network to produce an output that matches the Target Q values as close as possible.

Yes. That is all it needs to do because we have defined the policy around the Q values like so:

$$\pi(s) = \text{argmax}_a \hat{q}(s,a,\theta)$$

Where $\theta$ is the neural network weights.

Therefore, if the estimates of Q are approximately the same as the action value of the optimal policy, the policy in DQN is approximately the optimal policy.

How does the algorithm know +10 is good?

It does not, at least not directly. The algorithm knows, approximately, what the action values are if it acts consistently with their current estimates by always choosing the maximising action at each step.

The learning process will learn that +10 is relatively good in your scenario because it never finds anything better when exploring.

",1847,,,,,8/13/2020 7:47,,,,0,,,,CC BY-SA 4.0 23015,2,,23001,8/13/2020 8:30,,1,,"

Why are Target Networks used in Deep Q-Learning as opposed to the Expected Value equation?

In short, because for many problems, this learns more efficiently.

It is the difference between Monte Carlo (MC) methods and Temporal Difference (TD) learning.

You can use MC estimates for expected return in deep RL. They are slower for two reasons:

  • It takes far more experience to collect enough data to train a neural network, because to fully sample a return you need a whole episode. You cannot just use one episode at a time because that presents the neural network with correlated data. You would need to collect multiple episodes and fill a large experience table.

    • As an aside, you would also need to discard all the experience after each update, because sampled full returns are on-policy data. Or you could implement importance sampling for off-policy Monte Carlo control and re-calculate the correct updates when the policy starts to improve, which is added complexity.
  • Samples of full returns have a higher variance, so the sampled data is noisier.

In comparison, TD learning starts with biased samples. This bias reduces over time as estimates become better, but it is the reason why a target network is used (otherwise the bias would cause runaway feed back).

So you have a bias/variance trade off with TD representing high bias and MC representing high variance.

It is not clear theoretically which is better in general, because it depends on the nature of MDPs that you are solving with each method. In practice, on the types of problems Deep RL has been tried on, single-step TD learning appears to do better than MC sampling of returns, in terms of goals such as sample efficiency and learning time.

You can compromise between TD and MC using eligibility traces, resulting in TD($\lambda$). However, this is awkward to implement in Deep RL due to the experience replay table. A simpler compromise is to use $n$-step returns e.g. $r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \gamma^3 \text{max}_a(Q(s_{t+4},a))$, which was one of the refinements used in the "Rainbow" DQN paper - note that, strictly speaking, in their case this handled off-policy data incorrectly (it should use importance sampling, but they didn't bother), yet it still worked well enough for low $n$ on the Atari problems.
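For reference, here is a small sketch of how such an $n$-step target can be computed from a stored trajectory; the names are hypothetical, rewards holds $r_{t+1}, \dots, r_{t+n}$ and q_bootstrap is $\max_a Q(s_{t+n}, a)$ from the target network:

def n_step_target(rewards, q_bootstrap, gamma, n):
    # Discounted sum of the first n rewards, then bootstrap from the value estimate
    target = sum(gamma**k * r for k, r in enumerate(rewards[:n]))
    return target + gamma**n * q_bootstrap

# e.g. n_step_target([1.0, 0.0, 1.0], q_bootstrap=2.5, gamma=0.9, n=3)
#      = 1 + 0.9*0 + 0.81*1 + 0.729*2.5 = 3.6325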

",1847,,1847,,8/13/2020 8:52,8/13/2020 8:52,,,,0,,,,CC BY-SA 4.0 23016,1,23020,,8/13/2020 9:15,,2,164,"

I've been reading A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning lately, and I can't understand what they mean by the surrogate loss function.

Some relevant notation from the paper -

  • $d_\pi$ = average distribution of states if we follow policy $\pi$ for $T$ timesteps
  • $C(s,a)$ = the expected immediate cost of performing action a in state s for the task we are considering (assume $C$ is bounded in [0,1]
  • $C_\pi(s) = \mathbb{E}_{a\sim\pi(s)}[C(s,a)]$ is the expected immediate cost of $π$ in $s$.
  • $J(π) = T\mathbb{E}_{s\sim d_\pi}[C_\pi(s)]$ is the total cost of executing policy $\pi$ for $T$ timesteps

In imitation learning, we may not necessarily know or observe true costs $C(s,a)$ for the particular task. Instead, we observe expert demonstrations and seek to bound $J(π)$ for any cost function $C$ based on how well $π$ mimics the expert’s policy $π^{*}$. Denote $l$ the observed surrogate loss function we minimize instead of $C$. For instance, $l(s,π)$ may be the expected 0-1 loss of $π$ with respect to $π^{*}$ in state $s$, or a squared/hinge loss of $π$ with respect to $π^{*}$ in $s$. Importantly, in many instances, $C$ and $l$ may be the same function – for instance, if we are interested in optimizing the learner’s ability to predict the actions chosen by an expert.

I don't understand how exactly the surrogate loss is different from the true costs, and what are the possible cases in which both are the same. It'd be great if someone could throw some light on this. Thank you!

",35585,,2444,,8/13/2020 10:40,8/13/2020 14:08,"What is the surrogate loss function in imitation learning, and how is it different from the true cost?",,1,1,,,,CC BY-SA 4.0 23019,1,24519,,8/13/2020 11:01,,6,146,"

I just read the following points about the number of required expert demonstrations in imitation learning, and I'd like some clarifications. For the purpose of context, I'll be using a linear reward function throughout this post (i.e. the reward can be expressed as a weighted sum of the components of a state's feature vector)

The number of expert demonstrations required scales with the number of features in the reward function.

I don't think this is obvious at all - why is it true? Intuitively, I think that as the number of features rises, the complexity of the problem does too, so we may need more data to make a better estimate of the expert's reward function. Is there more to it?

The number of expert demonstration required does not depend on -

  • Complexity of the expert’s optimal policy $\pi^{*}$
  • Size of the state space

I don't see how the complexity of the expert's optimal policy plays a role here - which is probably why it doesn't affect the number of expert demonstrations we need; but how do we quantify the complexity of a policy in the first place?

Also, I think that the number of expert demonstrations should depend on the size of the state space. For example, if the train and test distributions don't match, we can't do behavioral cloning without falling into problems, in which case we use the DAGGER algorithm to repeatedly query the expert and make better decisions (take better actions). I feel that a larger state space means that we'll have to query the expert more frequently, i.e. to figure out the expert's optimal action in several states.

I'd love to know everyone's thoughts on this - the dependence of the number of expert demonstrations on the above, and if any, other factors. Thank you!


Source: Slide 20/75

",35585,,35585,,8/13/2020 20:55,11/10/2020 13:44,What does the number of required expert demonstrations in Imitation Learning depend on?,,1,0,,,,CC BY-SA 4.0 23020,2,,23016,8/13/2020 11:18,,2,,"

A surrogate loss is a loss than you use "instead of", "in place of", "as a proxy for" or "as a substitute for" another loss, which is typically the "true" loss.

Surrogate losses are actually common in machine learning (although almost nobody realizes that they are surrogate losses). For example, the empirical risk (which the mean squared error is an instance of) is a surrogate for the expected risk, which is incomputable in almost all cases, given that you do not know the underlying probability distribution. See An Overview of Statistical Learning Theory by V. N. Vapnik for more details. In fact, discussions on generalization arise because of this issue, i.e. you use surrogate losses rather than true losses.

The term "surrogate" is also used in conjunction with the term "model", i.e. "surrogate model", for example, in the context of Bayesian optimization, where a Gaussian process is the surrogate model for the unknown model/function that you want to know about, i.e. you use the Gaussian process to approximate the unknown function/model.

Regarding the excerpt you are quoting and your specific concerns, although I didn't read the paper and I am not an expert in imitation learning, let me try to explain what I understand from this excerpt. Essentially, in imitation learning, you use the expert's policy $\pi^*$ to train the agent, rather than letting him just explore and exploit the environment. So, what you know is $\pi^*$ and you can calculate the "loss" between $\pi^*$ and $\pi$ (the current agent's policy), denoted by $l$. However, this loss $l$ that you calculate is not necessarily the "true" loss (i.e. it is a surrogate loss), given that our goal is not really to imitate the "expert" but to learn an optimal policy to behave in the environment. If the goal was to just imitate the "expert", then $C$ and $l$ would coincide, because, in that case, $l$ would represent the "discrepancy" or "loss" between $\pi$ and the expert's policy $\pi^*$.

",2444,,2444,,8/13/2020 14:08,8/13/2020 14:08,,,,0,,,,CC BY-SA 4.0 23021,1,,,8/13/2020 13:31,,5,476,"

In Sutton and Barto's book about reinforcement learning, policy iteration and value iterations are presented as separate/different algorithms.

This is very confusing because policy iteration includes an update/change of value and value iteration includes a change in policy. They are the same thing, as also shown in the Generalized Policy Iteration method.

Why then, in many papers as well, they (i.e. policy and value iterations) are considered two separate update methods to reach an optimal policy?

",38958,,2444,,8/13/2020 23:00,8/14/2020 18:36,Why are policy iteration and value iteration studied as separate algorithms?,,2,0,,,,CC BY-SA 4.0 23022,2,,6329,8/13/2020 13:55,,1,,"

I also struggled with this when I was implementing NEAT.

What worked for me was cycle detection using DFS search in this video https://www.youtube.com/watch?v=tg96sZqhXyU

Simply put, I did DFS on all my input nodes, recording all the nodes visited; if I encounter a node I've already visited, then it's a cycle, so I force my NEAT implementation to discard this connection and attempt to make another one.
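For reference, here is one way such a check can be written, assuming connections are stored as (in_node, out_node) pairs; adding src -> dst creates a cycle exactly when dst can already reach src:

def creates_cycle(connections, src, dst):
    if src == dst:
        return True
    visited = {dst}
    stack = [dst]
    while stack:
        node = stack.pop()
        for a, b in connections:
            if a == node and b not in visited:
                if b == src:
                    return True
                visited.add(b)
                stack.append(b)
    return False

# creates_cycle([(1, 2), (2, 3)], src=3, dst=1)  ->  True (1 -> 2 -> 3 -> 1 would be a loop)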

",40272,,,,,8/13/2020 13:55,,,,0,,,,CC BY-SA 4.0 23023,2,,22997,8/13/2020 13:57,,1,,"

It seems that a similar question has been raised here: https://stackoverflow.com/questions/57438922/different-size-filters-in-the-same-layer-with-tensorflow-2-0

As answered in the link above, you could combine several Conv2D ops with different kernel sizes on the same input. You would have to adapt each output with padding, or cropping, so that you could concatenate all of them.
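Here is a minimal Keras sketch of that idea (an Inception-style block; the input shape and filter counts are arbitrary), using padding="same" so the outputs can be concatenated along the channel axis:

import tensorflow as tf

inputs = tf.keras.Input(shape=(32, 32, 3))

branch3 = tf.keras.layers.Conv2D(16, kernel_size=3, padding="same", activation="relu")(inputs)
branch5 = tf.keras.layers.Conv2D(16, kernel_size=5, padding="same", activation="relu")(inputs)
branch7 = tf.keras.layers.Conv2D(16, kernel_size=7, padding="same", activation="relu")(inputs)

# Same spatial size thanks to "same" padding and stride 1, so the channels can be stacked
merged = tf.keras.layers.Concatenate(axis=-1)([branch3, branch5, branch7])

model = tf.keras.Model(inputs, merged)
model.summary()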

Hope this helps!

",34409,,,,,8/13/2020 13:57,,,,0,,,,CC BY-SA 4.0 23024,2,,23021,8/13/2020 14:11,,3,,"

Policy iteration is made up of two steps. The first is a full policy evaluation, where a value function is calculated for the current policy. The second is policy improvement, where the policy is made greedy with respect to the value function.

Value iteration looks to speed things up by stopping policy evaluation after one iteration, make the policy greedy with respect to that value function, and repeat until convergence.

Clearly, these are two different algorithms, hence why they are considered to be different. They are, however, very closely linked, which is why you might consider them to be 'the same thing'. I guess you could say they belong to the same family of algorithm.

",36821,,,,,8/13/2020 14:11,,,,0,,,,CC BY-SA 4.0 23025,1,23033,,8/13/2020 14:15,,1,77,"

This is an open-ended question. Suppose I have a reinforcement learning task that is being solved using many different fixed policies, one of which is optimal. The goal of the agent is not to figure out what the optimal policy is, but rather which policy (from a set of predefined fixed policies) is the optimal one.

Are there any algorithms/methods that handle this?

I was wondering if meta learning is the right area to look into?

",40275,,,,,8/13/2020 19:03,Finding the optimal policy from a set of fixed policies in reinforcement learning,,1,3,,,,CC BY-SA 4.0 23026,1,,,8/13/2020 14:24,,3,228,"

I've already read the original paper about double DQN but I do not find a clear and practical explanation of how the target $y$ is computed, so here's how I interpreted the method (let's say I have 3 possible actions (1,2,3)):

  1. For each experience $e_{j}=(s_{j},a_{j},r_{j},s_{j+1})$ of the mini-batch (consider an experience where $a_{j}=2$) I compute the output through the main network in the state $s_{j+1}$, so I obtain 3 values.

  2. I look which of the three is the highest so: $a^*=arg\max_{a}Q(s_{j+1},a)$, let's say $a^*=1$

  3. I use the target network to compute the value in $a^*=1$ , so $Q_{target}(s_{j+1},1)$

  4. I use the value at point 3 to substitute the value in the target vector associated with the known action $a_{j}=2$, so: $Q_{target}(s_{j+1},2)\leftarrow r_{j}+\gamma Q_{target}(s_{j+1},1)$, while $Q_{target}(s_{j+1},1)$ and $Q_{target}(s_{j+1},3)$, which complete the target vector $y$, remain the same.

Is there anything wrong?

",37169,,2444,,11/4/2020 21:13,11/4/2020 21:13,How to compute the target for double Q-learning update step?,,1,0,,,,CC BY-SA 4.0 23028,2,,22914,8/13/2020 16:14,,3,,"

I am having trouble understanding how to keep track of the expansion, do I expand all stochastic possibilities and weight the return via their chance of happening?

This is indeed one option you can take. This would be very similar in spirit to the idea of "Expectimax" as a variant of minimax for non-deterministic games, in the sense that you'll include explicit "chance nodes" in your tree. When running into such a chance node later on again during a Selection phase, of a later MCTS iteration, you can just select a path of the tree to follow based on a "dice roll". Importantly, note that this option is only actually available if you have explicit knowledge of exactly when chance events occur, which states they can lead to, and with which probabilities they lead to different states. We also assume that this is feasible, i.e. that you don't have a crazy high (or infinite) number of slightly different game states you could reach.

An alternative option is to use an "open-loop" variant of MCTS. Your nodes would no longer represent game states, but only be representative of the sequence of actions leading to them. You would no longer store any game states in any nodes, but always regenerate them from scratch when traversing the tree, starting from the root node. You would no longer have any explicit chance nodes, but instead have states being representative of larger sets of states that could possibly be reached by following the corresponding path from the root node. For more on this, see my answer to this other question. The advantage of this approach is that it does not require explicit knowledge of all the possible states you can reach due to chance events, do not need explicit knowledge of the probabilities, and can just sample instead of explicitly enumerating every possible outcome.

",1641,,,,,8/13/2020 16:14,,,,0,,,,CC BY-SA 4.0 23029,2,,17540,8/13/2020 16:19,,1,,"

I might be able to help with the theory, but the coding... it is a non standard API such as Tensorflow or Pytorch (it might be custom code for what I can tell).

The key element here is that the bounding boxes are removed only if they hold a prediction for the same class as the box they are overlapping with (but with less confidence, which is why they get removed).

Here is an example, where we have:

  • Two classes $c \in [c_1, c_2] = [$ "star"$, $ "moon" $]$
  • Three bounding boxes

The blue bounding boxes hold predictions for the class $c_1$, so their predictions are $p(c_1)_{box1} = 0.8$ and $ p(c_1)_{box2} = 0.9$. On the other hand, the green box holds a prediction for the class $c_2$.

The three boxes are highly overlapping, so the overlap between any box $x$ and any box $y$ will be above the IoU threshold: $IoU(box_x, box_y) > 0.5$. So, in principle, all boxes are susceptible to being removed.

However, the NMS only applies to boxes predicting the same class (in this case the blue ones). So the NMS algorithm is: if the boxes are overlapping, $IoU(box_1, box_2) > 0.5$, which is true, remove all non-maximal class probability boxes. Said differently, take just the box with the highest $p(c_1)$ and remove the rest. So $box_1$, with class probability $p(c_1) = 0.8$, would be removed.

So what happens with the green box? Isn't it overlapping as well? Yes, but consider that the green box is not trying to predict the same object; it is trying to predict another object, $c_2$, which happens to be very close to the first object, $c_1$. This way object detectors support detection of different overlapping objects.
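To make the per-class behaviour explicit, here is a minimal NumPy sketch of class-wise NMS (the function names and the [x1, y1, x2, y2] box format are just for illustration):

import numpy as np

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms_per_class(boxes, scores, classes, iou_threshold=0.5):
    keep = []
    for c in np.unique(classes):
        idx = np.where(classes == c)[0]
        idx = idx[np.argsort(-scores[idx])]   # highest confidence first, within this class only
        while len(idx) > 0:
            best = idx[0]
            keep.append(best)
            rest = idx[1:]
            # Drop boxes of the same class that overlap too much with the kept box
            mask = np.array([iou(boxes[best], boxes[j]) < iou_threshold for j in rest], dtype=bool)
            idx = rest[mask]
    return keep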

",26882,,,,,8/13/2020 16:19,,,,0,,,,CC BY-SA 4.0 23033,2,,23025,8/13/2020 18:39,,0,,"

The quickest way to do this would be to use policy evaluation methods. Most of the standard optimal control algorithms consist of policy evaluation plus a rule for updating the policy.

It may not be possible to rank arbitrary policies by performance when considering all states. So you will want to rank them according to some fixed distribution of state values. The usual distribution of start states would be a natural choice (this is also the objective when learning via policy gradients in e.g. Actor-Critic).

One simple method would be to run multiple times for each policy, starting each time according to the distribution of start states, and calculate the return (discounted sum of rewards) from each one. A simple Monte Carlo run from each start state would be fine, and is very simple to code. Take the mean value as your estimate, and measure the variance too so you can establish a confidence for your selection.

Then simply select the policy with the best average value in start states. You can use the variance to calculate a standard error for this, so you will have a feel for how robust your selection is.
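Here is a minimal sketch of that evaluation, assuming a Gym-style environment and policies given as callables from state to action (the names are hypothetical):

import numpy as np

def estimate_start_value(env, policy, n_episodes=100, gamma=0.99):
    returns = []
    for _ in range(n_episodes):
        state, done, G, t = env.reset(), False, 0.0, 0
        while not done:
            state, reward, done, _ = env.step(policy(state))
            G += (gamma ** t) * reward
            t += 1
        returns.append(G)
    returns = np.array(returns)
    # Mean return from the start-state distribution, plus its standard error
    return returns.mean(), returns.std(ddof=1) / np.sqrt(n_episodes)

# best_policy = max(policies, key=lambda pi: estimate_start_value(env, pi)[0])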

If you have a large number of policies to select between, you could do a first pass with a relatively low number of samples, and try to rule out policies that perform badly enough that even adding, say, 3 standard errors to the estimated value would not cause them to be preferred. Other than that, the more samples you can take, the more accurate your estimates of mean starting value for each policy will be, and the more likely you will be to select the right policy.

I was wondering if meta learning is the right area to look into?

In general no, but you might want to consider meta learning if:

  • You have too many policies to select between by testing them all thoroughly.

  • The policies have some meaningful low dimension representation that is driving their behaviour. The policy function itself would normally be too high dimensional.

You could then use some form of meta-learning to predict policy performance directly from the representation, and start to skip evaluations from non-promising policies. You may need your fixed policies to number in the thousands or millions before this works though (depending on the number of parameters in the representation and complexity of mapping between parameters and policy function), plus you will still want to thoroughly estimate performance of candidates selected as worth evaluating by the meta-learning.

In comments you suggest treating the list of policies as context-free bandits, using a bandit solver to pick the policy that scores the best on average. This might offer some efficiency over evaluating each policy multiple times in sequence. A good solver will try to find best item in the list using a minimal number of samples, and you could use something like UCB or Gibbs distribution to focus more on the most promising policies. I think the main problem with this will be finding the right hyperparameters for the bandit algorithm. I would suggest if you do that to seed the initial estimates with an exhaustive test of each policy multiple times, so you can get a handle on variance and scale of the mean values.

",1847,,1847,,8/13/2020 19:03,8/13/2020 19:03,,,,0,,,,CC BY-SA 4.0 23035,1,23039,,8/13/2020 20:00,,2,255,"

In section 4.4 Value Iteration, the authors write

One important special case is when policy evaluation is stopped after just one sweep (one update of each state). This algorithm is called value iteration.

After that, they provide the following pseudo-code

It is clear from the code that updates of each state occur until $\Delta$ is sufficiently small. Not one update of each state as the authors write in the text. Where is the mistake?

",40285,,2444,,8/13/2020 22:54,8/13/2020 22:54,Is value iteration stopped after one update of each state?,,1,0,,,,CC BY-SA 4.0 23036,1,23046,,8/13/2020 20:08,,3,78,"

I am trying to find the best algorithm to create a list of recommendations for a user based on the interests of all other users.

Say I have a list of of samples:

$samples = [
    ['hot dog', 'big mac', 'whopper'],
    ['hot dog', 'big mac'],
    ['hot dog', 'whopper'],
    ['big mac', 'dave single'],
    ['whopper', 'mcnuggets', 'mcchicken'],
    ['mcchicken', 'original chicken sandwich'],
    ['mcchicken', 'mcrib']
];

And we will say each array in the sample list is unique user's food preferences.

Let's say now I have a user with this food preference:

['hot dog', 'mcchicken']

I want to be able to recommend to this user other foods that other users have in their preferences.

So in the simplest terms, it should return:

['whopper', 'big mac', 'original chicken sandwich', 'mcrib', 'mcnuggets']

Obviously I will also introduce other variables such as how each user rates each item in their preference list and also the percentage of users that need to have that item in order to use their other food items as recommendations.

But I would like to find the best algorithm to start working on it.

At first I thought Apriori might be my best guess, but I wasn't having luck once I introduced multiple items.

",40284,,40284,,8/13/2020 21:52,8/14/2020 8:56,What is the most appropriate ML algorithm for creating recommendations,,1,4,,,,CC BY-SA 4.0 23037,1,,,8/13/2020 20:19,,1,72,"

Suppose you have a ground plane and can use a stereo vision system to detect things that are possibly separate objects.

Suppose also your robot or agent can attempt to pick up and move these objects around in real-time.

Is there any current system in computer vision that allows new objects to be learned in real-time?

",32390,,2444,,8/15/2020 9:47,8/18/2020 14:48,Is there any real-time computer vision system that can learn to detect new objects of new classes?,,1,2,,,,CC BY-SA 4.0 23038,1,,,8/13/2020 20:45,,4,209,"

2015 was a milestone year for AI--"deep learning" was validated in a very public way with AlphaGo. However, at the time, the question was raised: "What else is deep learning good for?"

5 years later, I want to gauge:

  • How is deep learning applied to real world problems in 2020? What real world applications is it currently used for?
",1671,,,,,10/2/2021 0:56,What is the scope of real-world deep learning applications in 2020?,,1,0,0,,,CC BY-SA 4.0 23039,2,,23035,8/13/2020 21:01,,3,,"

Where the author mentions the policy evaluation being stopped after one sweep, they are referring to the part of the algorithm that evaluates the policy -- the pseudocode you have listed is the pseudocode for Value Iteration, which consists of iterating between policy evaluation and policy improvement.

In normal policy evaluation, you would apply the update $v_{k+1}(s) = \mathbb{E}_\pi[R_{t+1} + \gamma v_k(S_{t+1})|S_t = s]$ until convergence. In the policy iteration algorithm, you perform policy evaluation until the value functions converge in each state, then apply policy improvement, and repeat. Value iteration will perform policy evaluation for one update, i.e. not until convergence, and then improve the policy, and repeat this until the value functions converge.

The line

$$V(s) \leftarrow \max_a \sum_{s', r} p(s',r|s,a)[r + \gamma V(s')]$$

performs both the early-stopping policy evaluation and policy improvement. Let's examine how:

The $\sum_{s', r} p(s',r|s,a)[r + \gamma V(s')]$ is the same as the expectation I wrote earlier, so we can see clearly that this is policy evaluation for just one iteration. Then, we take a max over the actions -- this is policy improvement. Policy improvement is defined as (for a deterministic policy) \begin{align} \pi'(s) &= \arg\max_a q_\pi(s,a) \\ &= \arg\max_a \sum_{s', r} p(s',r|s,a)[r + \gamma V(s')]\;. \end{align} Here, we assign the action that satisfies the $\mbox{argmax}$ to the improved policy in state $s$. This is essentially what we are doing in the line from your pseudocode when we take the max. We are evaluating our value function for a policy that is greedy with respect to said value function.

If you keep applying the line from the pseudocode of value iteration it will eventually converge to the optimal value function as it will end up satisfying the Bellman Optimality Equation.
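For completeness, here is a minimal tabular sketch of that update loop, using a simplified model where P[s][a] is assumed to be a list of (probability, next_state) pairs and R[s][a] the expected immediate reward (rather than the full $p(s',r|s,a)$):

import numpy as np

def value_iteration(P, R, gamma=0.9, theta=1e-8):
    n_states = len(P)
    V = np.zeros(n_states)
    while True:
        delta = 0.0
        for s in range(n_states):
            v = V[s]
            # One sweep of evaluation and improvement folded into a single max
            V[s] = max(R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                       for a in range(len(P[s])))
            delta = max(delta, abs(v - V[s]))
        if delta < theta:
            return V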

",36821,,36821,,8/13/2020 21:51,8/13/2020 21:51,,,,4,,,,CC BY-SA 4.0 23042,2,,23038,8/13/2020 23:48,,5,,"

Deep learning is used to perform language translation in Google Translate [1]. Specifically, Google Translate now uses transformers and RNNs rather than the original GNMT system (proposed in 2016), which was also based on neural networks. Deep learning is also used in DeepL (though I cannot find a good resource to cite apart from Wikipedia [2] given that the system is closed-source), a strong alternative to Google Translate. However, note that, in general, machine translation is still far from perfect and it is probably not adopted to perform serious translations.

Tesla's autopilot also uses neural networks [3].

Built on a deep neural network, Tesla Vision deconstructs the car's environment at greater levels of reliability than those achievable with classical vision processing techniques.

DeepFakes are also developed using deep learning techniques [4].

Neural networks are also being used for chords and beat detection of songs [5].

",2444,,2444,,10/2/2021 0:56,10/2/2021 0:56,,,,2,,,,CC BY-SA 4.0 23043,1,,,8/14/2020 4:49,,0,35,"

I am attempting to forecast a time series using tensorflow with the following code:

from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

X = mytimeseries                 # my raw series, shape (n_samples, n_features)
n_features = X.shape[1]          # number of columns in the series
scaler = MinMaxScaler()
scaled = scaler.fit_transform(X)

length = len(X) - 1
generator = TimeseriesGenerator(scaled, scaled,
                                length=length, batch_size=1)

model = Sequential()
model.add(LSTM(units=100, activation='relu', input_shape=(length, n_features)))
model.add(Dense(units=100))
model.add(Dense(units=1))

model.fit(generator, epochs=20)

Then I just run a loop to forecast, but it's giving me nothing more than a straight line after a few points, as observed below.

Obviously there is a trend for the data to go down, and I would expect to see that.

Is this because my architecture is not sophisticated enough / not the right one to pick up on the general decline of the known data? Have I inappropriately chosen any parameters?

I have tried increasing the number of neurons in the dense layer, units in the LSTM cell, etc. At the moment, the thing that seems to most affect the resultant curve is changing the length parameter in my code above. But all this does is make the predictions more sinusoidal.

Thanks for your help!

",40293,,,,,8/14/2020 4:49,Time Series Forecasting - Recurrent Neural Networks (tensorflow),,0,2,,,,CC BY-SA 4.0 23046,2,,23036,8/14/2020 8:56,,3,,"

You can use Collaborative Filtering, and specifically its memory-based approach. The problem that you have discussed in the question should probably be solved using User-Item collaborative filtering, which will calculate the similarity between users and then recommend items. The similarity can be calculated using cosine similarity or Pearson's similarity formulae.
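Here is a minimal NumPy sketch of the user-based variant with cosine similarity; the names and ratings matrix are only illustrative (rows are users, columns are items, 0 means the item was not rated/chosen):

import numpy as np

def recommend(user_vector, ratings, items, top_n=5):
    # Cosine similarity between the target user and every user in the matrix
    norms = np.linalg.norm(ratings, axis=1) * np.linalg.norm(user_vector)
    sims = ratings @ user_vector / np.where(norms == 0, 1, norms)

    # Score each item by the similarity-weighted ratings of all users
    scores = sims @ ratings

    # Do not recommend items the target user already has
    scores[user_vector > 0] = -np.inf
    best = np.argsort(-scores)[:top_n]
    return [items[i] for i in best]

# items would be the list of food names, user_vector the target user's row over those items.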

The best thing about this approach is that no training is required, but if you have very large amounts of data, this approach's performance decreases.

",35612,,,,,8/14/2020 8:56,,,,0,,,,CC BY-SA 4.0 23049,2,,22010,8/14/2020 10:42,,1,,"

Foundations of Deep Reinforcement Learning: Theory and Practice in Python (Addison-Wesley Data & Analytics Series) 1st Edition

This book does not give detailed background information on Markov Decision Processes, the different Bellman equations, the relationships between the value function and action-value function, etc. It focuses on Deep Reinforcement Learning and goes straight to policy- and value-based algorithms using neural networks. It might be good for someone trying to quickly understand what Deep RL algorithms are out there and apply them.

",37627,,,,,8/14/2020 10:42,,,,0,,,,CC BY-SA 4.0 23051,2,,22951,8/14/2020 16:57,,1,,"

You can take some ideas from this YouTube video .

In addition, you should consider that page which is about Deep Reinforcement Learning used in a game (Pong from Pixels) .

",32076,,,,,8/14/2020 16:57,,,,1,,,,CC BY-SA 4.0 23054,2,,23021,8/14/2020 18:36,,1,,"

Policy iteration is based on the insight that for a given policy, it is straightforward to compute the value function (the long-run expected discounted value of being in a given stage) exactly -- it is a set of linear equations at that point. So, we update the policy, then calculate the exact values of the states for always following that particular policy, and based on that we update the policy again, etc.

Value iteration, in contrast, does not use that insight. It just updates estimates of the values of being in the states one step at a time. If these values are initialized at 0, you can think of this of the $i$th iteration computing the value of what would be the optimal policy if we knew the MDP would end after $i$ iterations. We never really have to think explicitly about policies (though we are in effect computing a policy each iteration), and never directly calculate the infinite sum of expected discounted rewards.

These are just the vanilla variants and it is possible to mix and match these ideas -- e.g., you might not evaluate a policy by explicitly solving a system of linear equations but rather just do some iterations -- but the vanilla variants are clearly distinct.

",17430,,,,,8/14/2020 18:36,,,,0,,,,CC BY-SA 4.0 23056,1,,,8/15/2020 9:46,,1,165,"

I would like to bind kernel parameters across channels/feature-maps for each filter. In a conv2d operation, each filter consists of HxWxC parameters; I would like to have filters that have HxW parameters, but the same (HxWxC) form.

The scenario I have is that I have 4 gray pictures of bulb samples (yielding similar images from each side), which I overlay as channels, but a possible failure that needs to be detected might only appear on one side (a bulb has 4 images and a single classification). The rotation of the object when the picture is taken is arbitrary. Now I solve this by shuffling the channels at training, but it would be more efficient if I could just bind the kernel parameters. Pytorch and Tensorflow solutions are both welcome.

",40315,,,,,11/5/2022 10:00,How can I implement 2D CNN filter with channelwise-bound kernel weights?,,3,0,,,,CC BY-SA 4.0 23057,1,23058,,8/15/2020 12:30,,3,162,"

Here's a screenshot of the popular policy-gradient algorithm from Sutton and Barto's book -

I understand the mathematical derivation of the update rule - but I'm not able to build intuition as to why this algorithm should work in the first place. What really bothers me is that we start off with an incorrect policy (i.e. we don't know the parameters $\theta$ yet), and we use this policy to generate episodes and do consequent updates.

Why should REINFORCE work at all? After all, the episode it uses for the gradient update is generated using the policy that is parametrized by parameters $\theta$ which are yet to be updated (the episode isn't generated using the optimal policy - there's no way we can do that).

I hope that my concern is clear and I request y'all to provide some intuition as to why this works! I suspect that, somehow, even though we are sampling an episode from the wrong policy, we get closer to the right one after each update (monotonic improvement). Alternatively, we could be going closer to the optimal policy (optimal set of parameters $\theta$) on average.

So, what's really going on here?

",35585,,35585,,8/15/2020 12:54,8/15/2020 21:52,Why does REINFORCE work at all?,,1,0,,,,CC BY-SA 4.0 23058,2,,23057,8/15/2020 14:16,,4,,"

The key to REINFORCE working is the way the parameters are shifted towards $G \nabla \log \pi(a|s, \theta)$.

Note that $ \nabla \log \pi(a|s, \theta) = \frac{ \nabla \pi(a|s, \theta)}{\pi(a|s, \theta)}$. This makes the update quite intuitive - the numerator shifts the parameters in the direction that gives the highest increase in probability that the action will be repeated, given the state, proportional to the returns - this is easy to see because it is essentially a gradient ascent step. The denominator controls for actions that would have an advantage over other actions because they would be chosen more frequently, by inversely scaling with respect to the probability of the action being taken; imagine if there had been high rewards but the action at time $t$ has low probability of being selected (e.g. 0.1) then this will multiply the returns by 10 leading to a larger update step in the direction that would increase the probability of this action being selected the most (which is what the numerator controls for, as mentioned).

That is the intuition -- to see why it does work, think about what we've done. We defined an objective function, $v_\pi(s)$, that we are interested in maximising with respect to our parameters $\theta$. We find the derivative of this objective with respect to our parameters, and then we perform gradient ascent on our parameters to maximise our objective, i.e. to maximise $v_\pi(s)$. Thus, if we keep performing gradient ascent, our policy parameters will converge (eventually) to values that maximise $v$, and thus our policy would be optimal.
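
As a rough illustration of that update (a minimal sketch, not the book's pseudocode verbatim; it assumes log_probs is a list of log pi(a_t|s_t, theta) tensors collected during the episode and rewards the corresponding rewards):

import torch

def reinforce_update(optimizer, log_probs, rewards, gamma=0.99):
    # Compute the return G_t for every time step of the episode.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)

    # Minimising -sum_t G_t * log pi(a_t|s_t) is gradient ascent on the objective above.
    loss = -(returns * torch.stack(log_probs)).sum()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()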

",36821,,36821,,8/15/2020 21:52,8/15/2020 21:52,,,,5,,,,CC BY-SA 4.0 23059,2,,3397,8/15/2020 15:56,,0,,"

Amazingly, I just found a claim about that. I just read it on Twitter: the model is simply GPT-2 (the 355M-parameter version) trained on 200,000 raw title- and body-based jokes. What is amazing is that GPT-2 is one of the most advanced text-generating models; it can even translate or answer math questions if trained well.

Let's see example output from twitter.

  • "I asked my girlfriend if she knew what sex was like || she said that you can kiss her and she'll think you're a queer."
  • Why does the teacher have her own car? || She's a car company for Santa.

https://twitter.com/lgbtinethiopia/status/1294644776772472834?s=20

",40320,,,,,8/15/2020 15:56,,,,1,,,,CC BY-SA 4.0 23060,2,,22986,8/15/2020 15:57,,0,,"

I think the problem is with the OpenAI Gym CartPole-v0 environment's reward structure. The reward is always +1 for each time step, so even when the pole falls the reward is still +1. We need to check for this case and redefine the reward. So, in the train function, try this:

if not done:
    new_q = reward + DISCOUNT * np.max(future_qs_list)
else:
    # if done assign some negative reward
    new_q = -20

(Or change the reward during replay buffer update)

Check the lines 81 and 82 in Qlearning.py code in this repo for further clarification.

",40321,,40321,,8/16/2020 4:37,8/16/2020 4:37,,,,4,,,,CC BY-SA 4.0 23062,2,,22986,8/15/2020 22:54,,3,,"

There is a really small mistake in here that causes the problem:


for index, (current_state, action, reward, next_state, done) in enumerate(minibatch):
            if not done:
                new_q = reward + DISCOUNT * np.max(future_qs_list) #HERE 
            else:
                new_q = reward
                
            # Update Q value for given state
            current_qs = current_qs_list[index]
            current_qs[action] = new_q
            
            X.append(current_state)
            Y.append(current_qs)

np.max(future_qs_list) should be np.max(future_qs_list[index]): as written, you're taking the highest Q value over the entire batch, instead of the highest Q value for the current next state.

After changing that, it looks like this (remember that an epsilon of 1 means 100% of your actions are taken by a dice roll, so I let it run for a few more epochs; I also tried it with the old code but indeed didn't get more than 50 steps, even after 400 epochs/episodes):

episode : 52, steps 16, epsilon : 0.7705488893118823
episode : 53, steps 25, epsilon : 0.7666961448653229
episode : 54, steps 25, epsilon : 0.7628626641409962
episode : 55, steps 36, epsilon : 0.7590483508202912
episode : 56, steps 32, epsilon : 0.7552531090661897
episode : 57, steps 22, epsilon : 0.7514768435208588
episode : 58, steps 55, epsilon : 0.7477194593032545
episode : 59, steps 24, epsilon : 0.7439808620067382
episode : 60, steps 46, epsilon : 0.7402609576967045
episode : 61, steps 11, epsilon : 0.736559652908221
episode : 62, steps 14, epsilon : 0.7328768546436799
episode : 63, steps 13, epsilon : 0.7292124703704616
episode : 64, steps 113, epsilon : 0.7255664080186093
episode : 65, steps 33, epsilon : 0.7219385759785162
episode : 66, steps 33, epsilon : 0.7183288830986236
episode : 67, steps 39, epsilon : 0.7147372386831305
episode : 68, steps 27, epsilon : 0.7111635524897149
episode : 69, steps 22, epsilon : 0.7076077347272662
episode : 70, steps 60, epsilon : 0.7040696960536299
episode : 71, steps 40, epsilon : 0.7005493475733617
episode : 72, steps 67, epsilon : 0.697046600835495
episode : 73, steps 115, epsilon : 0.6935613678313175
episode : 74, steps 61, epsilon : 0.6900935609921609
episode : 75, steps 43, epsilon : 0.6866430931872001
episode : 76, steps 21, epsilon : 0.6832098777212641
episode : 77, steps 65, epsilon : 0.6797938283326578
episode : 78, steps 45, epsilon : 0.6763948591909945
episode : 79, steps 93, epsilon : 0.6730128848950395
episode : 80, steps 200, epsilon : 0.6696478204705644
episode : 81, steps 200, epsilon : 0.6662995813682115
",30100,,30100,,8/15/2020 23:26,8/15/2020 23:26,,,,0,,,,CC BY-SA 4.0 23063,2,,23026,8/15/2020 23:51,,1,,"

$$Y_{t}^{\text {DoubleDQN }} \equiv R_{t+1}+\gamma Q\left(S_{t+1}, \underset{a}{\operatorname{argmax}} Q\left(S_{t+1}, a ; \boldsymbol{\theta}_{t}\right), \boldsymbol{\theta}_{t}^{-}\right)$$

The only difference between the "original" DQN and this one is that you use your $Q_\text{est}$ with the next state to get your action (by choosing the action with the highest Q).

Afterward, you just figure out what the target $Q$ is given that action, by selecting the $Q$ belonging to that action from the target_network (instead of using the argmax a directly on the target Q network).

About the formula

  • $\theta_{t}^{-}$ above denotes frozen weights, so it represents the target Q network.

  • the other $\theta_{t}$ represents the "learnable weights" so the estimate Q network.
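
Putting those two pieces together, here is a minimal sketch of the Double DQN target computation (my own illustration, assuming Keras-style q_net and target_net models and a batch of transitions stored as numpy arrays):

import numpy as np

def double_dqn_targets(q_net, target_net, rewards, next_states, dones, gamma=0.99):
    # Action selection with the online (learnable) network ...
    best_actions = np.argmax(q_net.predict(next_states), axis=1)
    # ... but action evaluation with the frozen target network.
    target_q = target_net.predict(next_states)
    chosen_q = target_q[np.arange(len(rewards)), best_actions]
    # Bootstrapped target; terminal transitions use the reward only.
    return rewards + gamma * chosen_q * (1.0 - dones)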

",30100,,2444,,8/17/2020 0:13,8/17/2020 0:13,,,,3,,,,CC BY-SA 4.0 23065,1,,,8/16/2020 9:30,,6,185,"

I have two functions $f(x)$ and $g(x)$, and each of them can be computed with a neural network $\phi_f$ and $\phi_g$.

My question is, how can I write a neural net for $f(x)g(x)$?

So, for example, if $g(x)$ is constant and equal to $c$ and $\phi_f = ((A_1,b_1),...(A_L,b_L))$, then $\phi_{fg} = ((A_1,b_1),...,(cA_L,cb_L))$.

Actually, I need to show it for $f(x)=x$ and $g(x)=x^2$, if this makes anything easier.

",40330,,2444,,8/17/2020 23:46,8/23/2020 17:03,"Given two neural networks that compute two functions $f(x)$ and $g(x)$, how can I create a neural network that computes $f(x)g(x)$?",,2,4,,,,CC BY-SA 4.0 23066,2,,23065,8/16/2020 11:44,,1,,"

Use another network h which takes f(x) and g(x) as input i.e. h(f(x), g(x)).

Training pseudocode (PyTorch):

# Stage 1: train f and g on the dataset first (train() is a placeholder)
for epoch in epochs:
    for batch, x in dataset:
        train(f)
        train(g)


# Stage 2: train h to approximate the product f(x) * g(x), keeping f and g frozen
for epoch in epochs:
    for batch, x in dataset:
        # freeze f and g (no gradients flow through them)
        with torch.no_grad():
            fx = f(x)
            gx = g(x)
        pred = h(fx, gx)
        loss = loss_fn(pred, fx * gx)  # target is the element-wise product
        backpropagate()
        optimise()
",40321,,,,,8/16/2020 11:44,,,,1,,,,CC BY-SA 4.0 23067,1,,,8/16/2020 12:47,,2,171,"

Recently, I came across the paper Robust and Stable Black Box Explanations, which discusses a nice framework for global model-agnostic explanations.

I was thinking to recreate the experiments performed in the paper, but, unfortunately, the authors haven't provided the code. The summary of the experiments are:

  1. use LIME, SHAP and MUSE as baseline models, and compute fidelity score on test data. (All the 3 datasets are used for classification problems)

  2. since LIME and SHAP give local explanations, the idea is to use K points from the training dataset and create K explanations using LIME. LIME is supposed to return a local linear explanation. Now, for a new test data point, find the nearest point among the K points used earlier and use the corresponding explanation to classify this new point.

  3. measure the performance using the fidelity score (% of points for which $E(x) = B(x)$, where $E(x)$ is the explanation's prediction for the point and $B(x)$ is the classification of the point using the black box).

Now, the issue is, I am using LIME and SHAP packages in Python to achieve the results on baseline models.

However, I am not sure how I'll get a linear explanation for a point (one from the set K), and use it to classify a new test point in the neighborhood.

Every tutorial on YouTube and Medium discusses visualizing the explanation for a given point, but none talks about how to get the linear model itself and use it for newer points.

",40332,,2444,,6/4/2021 11:41,6/29/2022 15:00,Black Box Explanations: Using LIME and SHAP in python,,1,1,,,,CC BY-SA 4.0 23068,2,,23056,8/16/2020 13:41,,0,,"

Assuming you want an HxWx1 kernel to perform convolution on HxWxC images.

Here's sample code which uses a single-channel kernel to operate on multi-channel feature maps:

import torch
import torch.nn as nn
import torch.nn.functional as F


class model(nn.Module):
    def __init__(self, in_ch=4):
        super().__init__()
        self.in_ch  = in_ch
        # single channel kernel initialization
        self.kernel = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=1, stride=1)

    def forward(self, x):
        bs, ch, w, h = x.shape
        x   = x.view((bs, ch, 1, w, h))
        out = self.kernel(x[:, 0])
        # reusing of same kernel
        for i in range(1, self.in_ch):
            out = torch.cat((out, self.kernel(x[:, i])), 1)
        return out

net = model(4)
print(net)
inp = torch.randn((10, 4, 100, 100))
out = net(inp)
print(out.shape)

(The main hack is in the forward function)

",40321,,,,,8/16/2020 13:41,,,,0,,,,CC BY-SA 4.0 23071,1,,,8/16/2020 21:45,,1,33,"

I have an RBM model which takes extremely long to train and evaluate because of the large number of free parameters and the large amount of input data. What would be the most efficient way of tuning its hyperparameters (batch size, number of hidden units, learning rate, momentum and weight decay)?

",37354,,,,,8/16/2020 21:45,Best/quickest approach for tuning the hyperparameters of a restricted boltzmann machine,,0,0,,,,CC BY-SA 4.0 23072,1,23074,,8/16/2020 21:58,,3,1848,"

Q-learning seems to be related to A*. I am wondering if there are (and what are) the differences between them.

",10135,,2444,,8/16/2020 22:10,8/17/2020 14:56,What are the differences between Q-Learning and A*?,,1,0,,,,CC BY-SA 4.0 23073,2,,50,8/16/2020 23:41,,1,,"

Error Estimation is a subject with a long history. The test-set method is only one way to estimate generalization error. Others include resubstitution, cross-validation, bootstrap, posterior-probability estimators, and bolstered estimators. These and more are reviewed, for instance, in the book: Braga-Neto and Dougherty, "Error Estimation for Pattern Recognition," IEEE-Wiley, 2015.

",40338,,,,,8/16/2020 23:41,,,,0,,,,CC BY-SA 4.0 23074,2,,23072,8/17/2020 0:26,,16,,"

Q-learning and A* can both be viewed as search algorithms, but, apart from that, they are not very similar.

Q-learning is a reinforcement learning algorithm, i.e. an algorithm that attempts to find a policy or, more precisely, value function (from which the policy can be derived) by taking stochastic moves (or actions) with some policy (which is different from the policy you want to learn), such as the $\epsilon$-greedy policy, given the current estimate of the value function. Q-learning is a numerical (and stochastic optimization) algorithm that can be shown to converge to the optimal solution in the tabular case (but it does not necessarily converge when you use a function approximator, such as a neural network, to represent the value function). Q-learning can be viewed as a search algorithm, where the solutions are value functions (or policies) and the search space is some space of value functions (or policies).

On the other hand, A* is a general search algorithm that can be applied to any search problem where the search space can be represented as a graph, where nodes are positions (or locations) and the edges are the weights (or costs) between these positions. A* is an informed search algorithm, given that you can use an (informed) heuristic to guide the search, i.e. you can use domain knowledge to guide the search. A* is a best-first search (BFS) algorithm, a family of search algorithms that explore the search space by following the next best location according to some objective function, which varies depending on the specific BFS algorithm. For example, in the case of A*, the objective function is $f(n) = h(n) + g(n)$, where $n$ is a node, $h$ the heuristic function and $g$ the function that calculates the cost of the path from the starting node to $n$. A* is also known to be optimal (provided that the heuristic function is admissible).
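
For concreteness, here is a minimal sketch of the A* best-first expansion with that objective $f(n) = g(n) + h(n)$ (my own illustration; graph is assumed to map each node to (neighbour, cost) pairs and h to be an admissible heuristic):

import heapq

def a_star(graph, start, goal, h):
    # Frontier ordered by f(n) = g(n) + h(n).
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbour, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = g2
                heapq.heappush(frontier, (g2 + h(neighbour), g2, neighbour, path + [neighbour]))
    return None, float("inf")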

",2444,,2444,,8/17/2020 14:56,8/17/2020 14:56,,,,2,,,,CC BY-SA 4.0 23075,1,,,8/17/2020 3:17,,4,621,"

I'm trying to get a better understanding of Multi-Arm Bandits, Contextual Multi-Arm Bandits and Markov Decision Process.

Basically, Multi-Arm Bandits is a special case of Contextual Multi-Arm Bandits where there is no state(features/context). And Contextual Multi-Arm Bandits is a special case of Markov Decision Process, where there is only one state (features, but no transitions).

However, since MDP has Markov property, I wonder if every MDP problem can also be converted into a Contextual Multi-Arm Bandits problem, if we simply treat each state as a different input context (features)?

",40340,,2444,,8/17/2020 23:54,1/16/2021 19:02,Can you convert a MDP problem to a Contextual Multi-Arm Bandits problem?,,1,0,,,,CC BY-SA 4.0 23076,1,23078,,8/17/2020 3:34,,1,51,"

I have made an RNN from scratch in Tensorflow.js. In order to update my weights (without needing to calculate the derivatives), I thought of using the normal equation to find the optimal values for my RNN's weights. Would you recommend this approach and if not why?

",32636,,2444,,8/18/2020 10:11,8/18/2020 10:11,Can the normal equation be used to optimise the RNN's weights?,,1,0,,,,CC BY-SA 4.0 23078,2,,23076,8/17/2020 7:20,,2,,"

Unfortunately, this is not possible. The normal equation can only directly optimise a single layer that connects input and output. There is no equivalent for multiple layers such as those in any neural network architecture.

",1847,,,,,8/17/2020 7:20,,,,0,,,,CC BY-SA 4.0 23079,2,,22969,8/17/2020 7:51,,0,,"

After further investigating the problem I have found the answer:

U-net generators' up-sampling stage consists of two steps:

  1. Use UpSampling2D layer
  2. Apply convolution on the output

The UpSampling2D layer is in the keras documentation described as:

Repeats the rows and columns of the data by size[0] and size[1] respectively.

From this information, we can calculate the time cost for UpSampling2D alone. Let's set size to (2,2), as in the basic configuration of the U-net generator. The output of UpSampling2D is then doubled along each spatial dimension. If we start with (4,4,3), where the last index corresponds to the number of channels, the output shape will be (8,8,3). We can see that each row and column needs to be copied twice in each channel. From this, we can define the time complexity of a single up-sampling step as:

$$ O\left(2 \cdot c \cdot n \cdot s\right) $$

Where c corresponds to the number of channels, n to the input length (one side of the matrix) and s to the filter size. Assuming that the input and the filter are square, the complexity is multiplied by 2. Since in this case the filter size is known and equal to (2,2), the notation can be simplified to:

$$ O\left(4 \cdot c \cdot n \right) = O\left(c \cdot n \right) $$

In my case, with only 1 channel, the complexity is simply

$$ O\left(n \right) $$

This means the up-sampling stage is linear, and the only important factor is the input size, which is negligible compared to the complexity of the following convolutional layer and can be ignored.
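
A quick way to see the row/column doubling in practice (a minimal sketch using the Keras layer discussed above):

import numpy as np
import tensorflow as tf

# A (batch, height, width, channels) = (1, 4, 4, 3) input ...
x = np.arange(1 * 4 * 4 * 3, dtype="float32").reshape((1, 4, 4, 3))
up = tf.keras.layers.UpSampling2D(size=(2, 2), interpolation="nearest")
y = up(x)
# ... comes out as (1, 8, 8, 3): every row and column is simply repeated.
print(y.shape)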

",40205,,,,,8/17/2020 7:51,,,,0,,,,CC BY-SA 4.0 23080,2,,23075,8/17/2020 10:44,,3,,"

The main difference between an MDP and contextual bandit setting is time steps and state progression. If those are important to the problem you want to solve, then it is not possible to convert.

Essentially MDPs are a strict generalisation of contextual bandits. You can model a CB as an MDP but not vice-versa.

In some very specific cases you can convert MDP to CB - any one of these situations means that the MDP can be simplified to CB, and then you could use bandit solving algorithms to optimise it:

  • When there is only one time step for an episodic problem.

  • When the discount factor is zero.

  • When state transition rules are completely independent of action choice, but reward is not.

",1847,,1847,,8/19/2020 17:14,8/19/2020 17:14,,,,2,,,,CC BY-SA 4.0 23082,1,,,8/17/2020 12:25,,0,32,"

When training policies, is there a reason we need on-policy samples? For expensive simulations, it makes sense to try and reuse samples. Say we're interested in hyperparameter tuning. Can we collect a bunch of episodes using randomly sampled actions (or maybe by following an old policy) one time, and train multiple policies using this set of samples to find the most effective hyperparameters? Every time we train a new policy, does it make sense to replay all the episodes generated by the previous policy? I'm mostly interested in actor-critic methods.

",40354,,2444,,8/17/2020 23:50,8/17/2020 23:50,Learning only using off-policy samples,,1,0,,,,CC BY-SA 4.0 23086,2,,23082,8/17/2020 13:46,,1,,"

What you're describing is off-policy learning. A classic example is $Q$-learning, where you follow some policy $\pi$ whilst learning about the greedy policy.
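
As a rough sketch of that pattern (my own illustration of the tabular $Q$-learning update, where the action a may have been chosen by any behaviour policy, e.g. $\epsilon$-greedy or an old policy, while the target bootstraps from the greedy action):

import numpy as np

def q_learning_step(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # Off-policy: we bootstrap from the greedy action in s_next,
    # regardless of how the behaviour action a was actually chosen.
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q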

If you're interested in actor-critic methods then a popular off-policy method is the Deep Deterministic Policy Gradient.

",36821,,,,,8/17/2020 13:46,,,,0,,,,CC BY-SA 4.0 23087,1,25440,,8/17/2020 14:02,,1,1199,"

I am relatively new to reinforcement learning, and I am trying to implement a reinforcement learning algorithm that can do continuous control in a custom environment. The state of the environment is composed of 5 sequential observations of 11 continuous values that represent cells (making the observation space 55 continuous values total), and the action is 11 continuous values representing multipliers of those cell totals.

After preliminary research, I decided to use Deep Deterministic Policy Gradient (DDPG) as my control algorithm because of its ability to deal with both discrete states and actions. However, most of the examples, including the one that I am basing my implementation off of, have only a single continuously valued action as the output. I have tried to naively change the agent network from outputting a single value to output a vector of values, but the agent does not improve as all, and the set of outputs seems to split into two groups near either the maximum value or the minimum value (I believe the tanh activation on the output has something to do with it) with the values in those groups changing in unison.

I have two questions about my problems.

  1. First, is it even possible to use DDPG for multi-dimensional continuous action spaces? My research leads me to believe it is, but I have not found any code examples to learn from and many of the papers I have read are near the limit of my understanding in this area.

  2. Second, why might my actor network be outputting values clustered near its max/min values, and why would the values in either cluster all be the same?

Again, I am fairly new to reinforcement learning, so any advice or recommendations would be greatly appreciated, thanks.

",38992,,,,,5/4/2022 3:42,Using DDPG for control in multi-dimensional continuous action space?,,1,1,,,,CC BY-SA 4.0 23089,1,24679,,8/17/2020 14:44,,1,157,"

I want to use RNN for classifying whole sequences of events, generated by website visitors. Each event has some categorical properties and a Unix timestamp:

sequence1 = [{'timestamp': 1597501183, 'some_field': 'A'}, {'timestamp': 1597681183, 'some_field': 'B'}]
sequence2 = [{'timestamp': 1596298782, 'some_field': 'B'}]
sequence3 = [{'timestamp': 1596644362, 'some_field': 'A'}, {'timestamp': 1596647951, 'some_field': 'C'}]

Unfortunately, they can't be treated as classic time series, because they're of variable length and irregular, so timestamps contain essential information and cannot be ignored. While categorical features can be one-hot encoded or made into embeddings, I'm not sure what to do with the timestamps. It doesn't look like a good idea to use them raw. I've come up with two options so far:

  • Subtract the minimum timestamp from every timestamp in the sequence, so that all sequences start at 0. But in this case the numbers can still be high, because the sequences run over a month.
  • Use offsets from previous event instead of absolute timestamps.

I'm wondering if there are common ways to deal with this? I haven't found much on this subject.

",40359,,2444,,6/30/2022 22:35,6/30/2022 22:35,"How to deal with Unix timestamps features of sequences, which will be classified with RNNs?",,1,0,,,,CC BY-SA 4.0 23091,1,,,8/17/2020 17:16,,3,1774,"

I have been dealing with a problem that I'm trying to solve with DQN. A general question that I have is regarding the target's update frequency. How should it change? Depending on what factor do we increase or decrease this hyperparameter?

",40367,,2444,,8/18/2020 10:21,8/18/2020 10:21,How should I choose the target's update frequency in DQN?,,1,0,,,,CC BY-SA 4.0 23096,1,,,8/17/2020 18:43,,2,185,"

I came across the term 'principal angle between subspaces' as a tool for comparing objects in images. All material that I found on the internet seems to deal with this idea in a highly mathematical way and I couldn't understand the real physical meaning behind the term.

I have some knowledge of linear algebra. Any help to understand the physical significance of this term and its application in object recognition would be appreciated.

",35576,,2444,,1/15/2021 11:36,10/12/2021 15:04,What do we mean by 'principal angle between subspaces'?,,1,1,,,,CC BY-SA 4.0 23098,1,,,8/17/2020 19:10,,0,1345,"

Here is the Question:

Describe the PEAS descriptions for the following agents:

a) A grocery store scanner that digitally scans a fruit or vegetable and identifies it.

b) A GPS system for an automobile. Assume that the destination has been preprogrammed and that there is no ongoing interaction with the driver. However, the agent might need to update the route if the driver misses a turn.

c) A credit card fraud detection agent that monitors an individual’s transactions and reports suspicious activity.

d) A voice activated mobile-phone assistant

For each of the agents described above, categorize it with respect to the six dimensions of task environments as described on pages 41-45 (Section 2.3.2 of AIMA). Be sure that your choices accurately reflect the way you have specified your environment, especially the sensors and actuators. Give a short justification for each property

Here is what I think the answers to the above questions might be. Can you correct me if I answered wrong at any point?

",40371,,,,,8/18/2021 19:23,Need some reviews in PEAS descriptions,,1,1,,,,CC BY-SA 4.0 23101,2,,23096,8/17/2020 23:23,,1,,"

Let's consider the case where you have two photos, one base photo and one other photo which is a scaled version of the base photo. Consider then that you could create a 'mapping' from the base photo to the scaled photo as defined by a set of vector changes for each pixel in the base photo.

That is to say, if you had a pixel at point (0,0) in the base photo it would be in position (0,0) on the scaled photo. If you had a pixel at point (0,1) on the base photo, it would be at point (0,s) on the scaled photo, where s is the scale factor between them. The same would be true for (1,0) mapping to (s,0) and (1,1) mapping to (s,s).

We can understand each of these pixels as a subspace on the vector space of "all possible pixels" (possibly with an included vector for the 'value' of each pixel as a third+ dimension, based on how it's represented). Note though, that they are different subspaces. There are vectors that exist in the scaled picture for scale factors greater than 1 which are not in the original picture. For scale values less than one, there are vectors which are in the original picture which are not in the scaled picture. For scale vector equal to one, it's basically the identity and they're the same picture.

What's more, we can do the same for rotation. If you have base photo and rotated photo, then there is a mapping from one pixel on the base photo to one pixel on the rotated photo, I'll leave the math up to you. The same is further true for translating, skewing, and even changing the colors of the image. Each one denotes a type of mapping that you could make from one subspace to another.

So what does this mean? Well, one thing that you could do is take two images, one "base" photo and two "scaled" photos and determine "how different are these two photos from the base photo"? That is, if the first scaled photo has a scaling factor of 10 and the second has a scaling factor of 100, then you could say that the larger scaled-photo is "further away" from the base photo. That is, we can compare the two mappings together to determine which mapping is closer to the original.

What's more, these mapping functions are often composable and commutative. That is if you do a "scale-then-rotate", that is very similar if not exactly identical to doing "rotate-then-scale". Let's take a look at why this is. Let's consider that we take a unit vector from either one. We then apply "scale" which scales both the x and y components, and then rotate this unit vector about some axis (let's assume the origin, though maybe there's a translation that happens as well). You will then yield a new vector which represents all of these applied changes. What's more, we can do this to both of the "scale-then-rotate" function and the "rotate-then-scale" function.

Now we have two vectors, and we want to determine if they are the same. Well, one way to do that is to calculate the angle between two vectors. If the angle is 0, then we can say that the vectors are the same (with the assumption that they are also of the same magnitude). In this case, both would yield the same vector, the angle between them is 0, so we can say they are the same.
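
For example, a minimal way to compute that angle between two vectors (my own sketch, using the standard dot-product formula):

import numpy as np

def angle_between(u, v):
    # cos(theta) = <u, v> / (||u|| * ||v||); the clip guards against rounding errors.
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

print(angle_between(np.array([1.0, 0.0]), np.array([2.0, 0.0])))  # 0.0 -> same direction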

So then how do we get back to the question of "how different are these two objects"? Well, one way that we could do it is to attempt to answer the question "What set of composed functions maps from one object onto the other one?". If we can create a new "vector" which is the representation of the full set of changes from one picture to the other, we can then attempt to determine how different these two objects are, as represented in the dimensional spaces of their image.

But the question that you may then ask is "why is this useful"? Well, imagine that you are a neural network. You've been given a picture and tasked with answering the question "Is this a cat?". How would you go about answering this? Well, one thing as a neural net that you've learned is "cats have ears". You have a canonical representation of a "cat ear" encoded in the weights of your neural network, so you go about looking at different subsets of the picture to say "is this a cat ear"? You can do this, for example, by taking a cropped section of the picture, determining the angle and dimensions apart that cropped section is from your canonical "cat ear" is, and if the angle between your canonical "cat ear" and your sample is below some amount, you can say with confidence that what you have found is a "cat ear". A couple cat ears, eyes, a nose, some fur, and a tail (though maybe the tail is occluded), and you can be confident that you have found a "cat".

(Note: This is an oversimplification and this is usually done in deep networks by breaking pictures out into a continually composing series of features, for instance first "lines" vs "curves" vs "textures", which then yield "circles" vs "rectangles" vs "cross-hashes", which then yield "eyes" vs "fur" vs "skin", etc. Each different layer could be implemented with its own comparison in this same method though!)

",16857,,,,,8/17/2020 23:23,,,,6,,,,CC BY-SA 4.0 23104,2,,16004,8/18/2020 4:16,,0,,"

There is a commonly used method, also used in machine learning: Independent Component Analysis (ICA). It is commonly used to find specific noise sources in the data; however, you need to have some EEG knowledge to do this, because automatic rejection is not completely solved at this time. Software like EEGLab is available (as a standalone tool and as a Matlab toolbox).

Doing this in real time is also not impossible, once you have collected initial data for a while, provided you don't have too many channels. You can isolate relatively constant noise sources with ICA, like the heartbeat; other temporal noise can be rejected globally (on all channels), because EEG normally does not exceed certain levels.

Useful documentation is EEGLabs artifacts Wikipedia page: https://sccn.ucsd.edu/wiki/Chapter_01:_Rejecting_Artifacts

",40315,,,,,8/18/2020 4:16,,,,0,,,,CC BY-SA 4.0 23105,2,,15504,8/18/2020 5:53,,1,,"

I would suggest converting the documents into TF-IDF vectors (using Gensim) and then comparing them using various similarity-calculation techniques, like cosine similarity.
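
A minimal sketch of that pipeline with Gensim (my own illustration; docs is an assumed list of raw strings and the tokenisation is deliberately naive):

from gensim import corpora, models, similarities

docs = ["the cat sat on the mat", "a dog chased the cat", "stocks fell sharply today"]
texts = [d.lower().split() for d in docs]

dictionary = corpora.Dictionary(texts)              # token -> id mapping
corpus = [dictionary.doc2bow(t) for t in texts]     # bag-of-words vectors
tfidf = models.TfidfModel(corpus)                   # TF-IDF weighting

# Cosine-similarity index over the TF-IDF vectors.
index = similarities.MatrixSimilarity(tfidf[corpus], num_features=len(dictionary))

query = tfidf[dictionary.doc2bow("the cat and the dog".split())]
print(index[query])  # similarity of the query to each document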

You should read this amazing article for the same. I once used it while working on my project.

https://medium.com/@adriensieg/text-similarities-da019229c894

",35612,,,,,8/18/2020 5:53,,,,0,,,,CC BY-SA 4.0 23106,1,,,8/18/2020 6:22,,1,52,"

I have run into a strange behavior of my multi label classification ANN

model = Sequential()
model.add(Dense(6, input_shape=(input_size,), activation='elu'))
#model.add(BatchNormalization(axis=-1))
model.add(Dropout(0.2))
#model.add(BatchNormalization(axis=-1))
model.add(Dense(6, activation='elu'))
model.add(Dropout(0.2))
#model.add(BatchNormalization(axis=-1))
model.add(Dense(6, activation='elu'))
model.add(Dropout(0.2))

# model.add(keras.layers.BatchNormalization(axis=-1))
model.add(Dense(6, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='nadam',
              metrics=['accuracy'])
history = model.fit(X_train, Y_train,batch_size=64 ,epochs=300,
                    validation_data = (X_test, Y_test), verbose=2)

The result is quite strange; I have the feeling that my model cannot improve any more. Why do the loss and the accuracy not change over time?

P.S. For clarification, I have 6 outputs and the value of each output is 0 or 1, that is, output 1: can be 0 or 1

output 2: can be 0 or 1

output 3: can be 0 or 1

output 4: can be 0 or 1

output 5: can be 0 or 1

output 6: can be 0 or 1

",40383,,40383,,8/18/2020 7:50,8/18/2020 7:50,Why does loss and accuracy for a multi label classification ann does not change overtime?,,0,4,,,,CC BY-SA 4.0 23107,1,23108,,8/18/2020 6:36,,1,84,"

I am trying to use DDPG augmented with Hindsight Experience Replay (HER) on pybullet's KukaGymEnv.

To formulate the feature vector for the goal state, I need to know what the features of the state of the environment represent. To be precise, a typical state vector of KukaGymEnv is an object of the numpy.ndarray class with a shape of (9,).

What does each of these elements represent, and how can I formulate the goal state vector for this environment? I tried going through the source code of KukaGymEnv, but was unable to understand anything useful.

",38895,,2444,,11/21/2020 12:52,11/21/2020 12:52,What do the state features of KukaGymEnv represent?,,1,0,,12/27/2021 19:16,,CC BY-SA 4.0 23108,2,,23107,8/18/2020 7:09,,1,,"

Here's an incomplete answer, but it may help.

Your state is read by the function getExtendedObservation(). This function does two things: it calls the function getObservation() from this source code to get a state, and then extends this state with three components:

relative x,y position and euler angle of block in gripper space

But what are the first 5 components returned by getObservation()? From what I read, they are positions, then Euler angles describing the orientation. But that would make 6 + 3 = 9 features, so there are either only 2 positions or only 2 Euler angles. You may know Kuka better than me and know the answer to this one :).

So, to sum up :

state = [X, Y, (Z, ) , Alpha, Gamma, (Beta, ), gripX, gripY, gripAlpha]

(Either Z or Beta is absent)

",17759,,17759,,8/23/2020 12:31,8/23/2020 12:31,,,,1,,,,CC BY-SA 4.0 23110,2,,23091,8/18/2020 8:05,,1,,"

As you said yourself, it is a hyperparameter. Hence, no one (not even you) can say in advance what the ideal update frequency is. You have to test and try.

Having said that, remember one thing: the target network should mimic the actual network as closely as possible. Hence, if you update it only after a large number of runs, then I think you will start losing accuracy. On the contrary, if you update it too often, then you lose the benefit of using the target network (which is to boost the training rate and reduce training time) and the training will take a larger amount of time.

My suggestion is to try updating after every 5 to 8 episodes.
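
A minimal sketch of what that looks like in a Keras-style training loop (my own illustration; model and target_model are assumed to share the same architecture, and run_episode_and_train is a hypothetical helper that collects experience and fits the online network):

UPDATE_EVERY = 5  # the hyperparameter discussed above; try values around 5 to 8 episodes

for episode in range(num_episodes):
    run_episode_and_train(model)  # hypothetical helper
    if episode % UPDATE_EVERY == 0:
        # Copy the online network's weights into the frozen target network.
        target_model.set_weights(model.get_weights())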

",36710,,,,,8/18/2020 8:05,,,,0,,,,CC BY-SA 4.0 23111,1,,,8/18/2020 9:32,,2,27,"

In reinforcement learning, when we talk about the principle of optimality, do we assume the policy to be deterministic?

",40229,,2444,,8/18/2020 10:18,8/18/2020 10:18,Do we assume the policy to be deterministic when proving the optimality?,,0,4,,,,CC BY-SA 4.0 23112,1,23661,,8/18/2020 12:33,,1,1049,"

I am using a deep autoencoder for my problem. However, the way I choose the number of hidden layers and hidden units in a hidden layer is still based on my feeling.

The size of the model that indicates the number of hidden layers and units should not be too much or too few for the model can capture useful features from the dataset.

So, how do I choose the right size of the deep autoencoder model is enough to good?

",25645,,2444,,10/5/2020 21:20,10/5/2020 21:20,How to determine the number of hidden layers and units of a deep auto-encoder?,,1,0,,,,CC BY-SA 4.0 23113,2,,23037,8/18/2020 14:48,,1,,"

You are probably looking for incremental learning (sometimes known as lifelong learning) techniques, i.e. machine learning techniques that attempt to address the catastrophic forgetting effect of neural networks when trained incrementally, i.e. as new classes or data are added to the original training data.

There are different techniques and some of them store (or compress) the old data in order to fully or partially re-train the neural network with the new classes or data. However, note that this is a relatively new area of research and significant progress still needs to be made to produce serious tools. If you are specifically interested in incremental class learning, maybe have a look at this paper Class-incremental Learning via Deep Model Consolidation (2020).

",2444,,,,,8/18/2020 14:48,,,,1,,,,CC BY-SA 4.0 23115,1,,,8/18/2020 16:36,,1,27,"

I'm evaluating the state-of-the-art techniques for translating legal text into simple text. What are the best approaches for a non-English language (Portuguese)?

",40396,,40396,,8/18/2020 22:43,8/18/2020 22:43,What are the best techniques to perform text simplification?,,0,1,,,,CC BY-SA 4.0 23116,2,,16610,8/18/2020 17:23,,4,,"

What if a scalar reward is insufficient, or it's unclear how to collapse a multi-dimensional reward to a single dimension? For example, for someone eating a burger, both taste and cost are important. Agents may prioritize taste and cost differently, so it's not clear how to aggregate the two. It is also not clear how a subjective categorical taste value can be combined with a numerical cost.

",40398,,,,,8/18/2020 17:23,,,,4,,,,CC BY-SA 4.0 23117,1,,,8/18/2020 17:24,,3,348,"

In my understanding, domain randomization is one method of diversifying the dataset to achieve a better shot at domain adaptation. Am I wrong?

",40397,,2444,,11/10/2020 12:59,10/19/2022 21:03,What's the difference between domain randomization and domain adaptation?,,1,1,,,,CC BY-SA 4.0 23118,2,,22834,8/19/2020 1:19,,1,,"

What if the complexity of the problem of self-improving the software grows at a faster rate than the AGI intelligence self-improvement?

If this turns out to be the case, then the complexity would indeed be the bottleneck.

There's a formal term for this: Kolmogorov complexity, or "K-complexity". (The best formal definition of intelligence I've come across is Hutter & Legg's, which is broken down in the article "On the definition of intelligence" by our own nbro. It will shed some light on the formal structure of the question at hand.)

  • Any attempt to provide values for your d & l would be speculative, since we're still so far from AGI. The question would be: what is the effect of any given (d, l)?

The subject can definitely be treated formally in a mathematical sense and the values analyzed. (There may even be some formulae out there, although they are likely to be more complex, per Hutter & Legg.)

Sorry I can't give you a more formal answer--it's a good question!


Notes:

Another bottleneck would be the speed of light, essentially, "bits and bytes across a pipeline". This would be a factor in both self-replication and networked processing and memory. You can expand bandwidth/throughput, but not infinitely.

This relates to space complexity (how much volume/memory does the algorithm require?) in addition to the k-complexity, which itself is likely to be non-trivial for anything approaching AGI.

AI Threat:

From the standpoint of neoluddism, I'm skeptical that "infinitely" self-optimizing superintelligence is the greatest threat. Evolution, and economic theory, suggest that the greater danger would come from not what is most expensive, complex, & capable, but that which is "just good enough", cheaper and more easily replicable. This is because decision time becomes a critical factor in a competitive environment, as does replication time.

I always bring up grey goo because it only has to be good at 1 thing to eat everything else!

",1671,,,,,8/19/2020 1:19,,,,0,,,,CC BY-SA 4.0 23121,1,,,8/19/2020 3:43,,0,54,"

I am curious to know how neural networks are built in practice. Are they hand coded using weight matrices, activation functions etc OR are there ways to build the NN by mentioning the number of layers, number of neurons in each layer, activation to be used, etc as parameters?

Similar question on training, once built is there a ‘fit’ method or does the training need to be hand coded?

Any reference for understanding these basics will be of great help.

",15267,,,user9947,8/19/2020 4:00,8/21/2020 4:09,How are neural networks built in practice?,,1,1,,,,CC BY-SA 4.0 23122,1,,,8/19/2020 4:28,,3,56,"

On page 27 of the DeepMind AlphaGo paper appears the following sentence:

The first hidden layer zero pads the input into a $23 \times 23$ image, then convolves $k$ filters of kernel size $5 \times 5$ with stride $1$ with the input image and applies a rectifier nonlinearity.

What does "convolves $k$ filters" mean here?

Does it mean the following:

The first hidden layer is a convolutional layer with $k$ groups of $(19 \times 19)$ neurons, where there is a kernel of $(5 \times 5 \times numChannels + 1)$ parameters (input weights plus a bias term) used by all the neurons of each group. $numChannels$ is 48 (the number of feature planes in the input image stack).

All $(19 \times 19 \times k)$ neurons' outputs are available to the second hidden layer (which happens to be another convolutional layer, but could in principle be fully connected).

?

",40405,,2444,,8/21/2020 11:36,8/21/2020 11:36,"What does ""convolve k filters"" mean in the AlphaGo paper?",,0,2,,,,CC BY-SA 4.0 23123,2,,23121,8/19/2020 6:14,,3,,"

A "software library" is a codebase that includes many commonly used functions/algorithms, you do not need to write the same function again if you "import" the library.

In practice, practitioners/researchers don't often code the nitty-gritty details of well-used algorithms; they just use existing libraries, such as sklearn or TensorFlow, that already have an implementation.

In the case of sklearn, ignoring some details, you can create a multi-layer perceptron with this line of code:

>>> clf = MLPClassifier(activation='relu', alpha=1e-5, hidden_layer_sizes=(5, 2))

...in which clf is an "object" that refers to the neural network, MLPClassifier is a function provided by sklearn that "creates" a neural net depending on the parameters provided.

In this case, the parameters provided are that the activation function is ReLU, where the network has two layers, with the first having 5 nodes, and the second has 2 nodes.

All other parameters that are not explicitly provided are set to a default value, in this case, the learning rate, among others.

It should be noted that this is not complete yet, we still have not completed training. Luckily, this is as simple as:

>>> clf.fit(X, y)

Where X and y are some array-shaped data/target set.
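
Once fitted, getting predictions for new data is just as simple (a minimal sketch, assuming X_new has the same number of features as X):

>>> clf.predict(X_new)        # predicted class labels
>>> clf.predict_proba(X_new)  # class membership probabilities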

I recommend sklearn, as I think it's more beginner-friendly.

https://scikit-learn.org/stable/getting_started.html

",6779,,6779,,8/21/2020 4:09,8/21/2020 4:09,,,,0,,,,CC BY-SA 4.0 23124,1,,,8/19/2020 6:38,,2,62,"

I have an image dataset, which is composed of 113695 images for training and 28424 images for validation. Now, when I use ImageDataGenerator and flow_from_dataframe, it has the parameter batch_size.

How can I choose the correct number for batch_size, given that the two numbers cannot be divided by the same number? Do I need to drop four images from the validation data to make a batch_size of 5 work? Or is there another way?

",38737,,2444,,8/21/2020 11:14,8/21/2020 11:14,How to take the optimal batch_size for training a model?,,2,1,,,,CC BY-SA 4.0 23125,1,23129,,8/19/2020 6:53,,2,344,"

I'm training a DQN in a real environment where I do not have a natural terminal state, so I've built the episode in an artificial way (i.e. it starts in a random condition and after T steps it ends). My question is about the terminal state: should I consider it when I have to compute $y$ (so using only the reward) or not?

",37169,,37169,,8/21/2020 7:02,8/21/2020 7:02,How should I compute the target for updating in a DQN at the terminal state if I have pseudo-episodes?,,1,1,,,,CC BY-SA 4.0 23127,2,,23124,8/19/2020 7:41,,1,,"

This Cross Validated post might answer your question.

In a nutshell:

  • A single batch (that is all your data in one batch) will result in a smooth trajectory on the loss surface. The drawback is that all your data might not fit into your memory. Which is highly likely for ~100k images.

  • One image per batch (batch size = 1, so the number of batches equals the number of examples) will result in a more stochastic trajectory, since the gradients are calculated on a single example. The advantages are of a computational nature, with faster training time.

The middle way is to choose the batch size in such a way that your batch fits into memory and gradients behave less 'noisy'. To be honest there is no 'golden' number, personally I like to choose powers of two.

Don't worry that your data is not divisible by the batch size. Libraries will take care of that internally; the last batch will just be smaller than the defined batch size ($N \text{ mod } b$).

",37120,,,,,8/19/2020 7:41,,,,0,,,,CC BY-SA 4.0 23129,2,,23125,8/19/2020 8:20,,3,,"

If the episode does not terminate naturally, then if you are breaking it up into pseudo-episodes for training purposes, the one thing you should not do is use the TD target $G_{T-1} = R_T$ used for an end of episode, which assumes a return of 0 from any terminal state $S_{T}$. Of course that is because it is not the end of the episode.

You have two "natural" options to tweak DQN to match to theory at the end of a pseudo-episode:

  • Store the state, action, reward, next_state tuple as normal and use the standard one step TD target $G_{t:t+1} = R_{t+1} + \gamma \text{max}_{a'} Q(S_{t+1}, a')$

  • Completely ignore the last step and don't store it in memory. There is no benefit to this as opposed to the above option, but it might be simpler to implement if you are using a pre-built RL library.

Both these involve ignoring any done flag returned by the environment for the purposes of calculating TD targets. You still can use that flag to trigger the end of a loop and a reset to new starting state.
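
For the first option, a minimal sketch of the target at the end of a pseudo-episode (my own illustration; q_net is assumed to be a Keras-style model, and the time-limit done flag is deliberately not used to zero out the bootstrap term):

import numpy as np

def td_target(reward, next_state, q_net, gamma=0.99):
    # Bootstrap from the next state even though the pseudo-episode was cut off;
    # only a genuinely terminal state would justify dropping this term.
    return reward + gamma * np.max(q_net.predict(next_state[None, :])[0])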

You should also take this approach if you terminate an episodic problem early after hitting a time step limit, in order to reset for training purposes.


As an aside (and mentioned in comment by Swakshar Deb), you can also look into the average reward setting for non-episodic environments. This solves the problem of needing to pick a value for $\gamma$. If you have no reason to pick a specific $\gamma$ in a continuing problem, then it is common to pick a value close to 1 such as 0.99 or 0.999 in DQN - this is basically an approximation to average reward.

",1847,,1847,,8/19/2020 9:35,8/19/2020 9:35,,,,2,,,,CC BY-SA 4.0 23133,1,,,8/19/2020 10:52,,1,515,"

Since the discriminator defines how the generator is updated, building a discriminator with a higher number of parameters/more layers should lead to better quality of the generated samples. So, assuming that it won't lead to overwhelming the generator (discriminator loss towards 0) or mode collapse, when engineering a GAN, should I build a discriminator that is as good as possible?

",31324,,2444,,8/20/2020 10:09,1/7/2023 23:03,Does a better discriminator in GANs mean better sample generation by the generator?,,1,0,,,,CC BY-SA 4.0 23134,1,,,8/19/2020 10:59,,0,71,"

I am trying to implement safe exploration technique in [Ref.1]. I am using Soft Actor-Critic algorithm to teach an agent to introduce a bias between 0 and 1 to a specific state of interest in my environment.

I would like to ask for your help in order to modify the critic update equation -which is originally based on the return or the RL cost function- from this:

which is based on the return functions:


to be based on the following cost function to make the RL objective risk-sensitive and avoid large-valued actions (bias values) at the start of the agent's learning,

How can I include the second part of the cost function - in which the variance of the reward is evaluated- in the update equation?

[Ref.1] Heger, M., Consideration of risk in reinforcement learning. 11th International Machine Learning Conference (1994)

",40409,,40409,,9/2/2020 12:42,9/2/2020 12:42,How to modify the Actor-Critic policy gradient algorithm to perform Safe exploration in Reinforcement Learning,,0,2,,,,CC BY-SA 4.0 23139,1,,,8/19/2020 18:15,,0,67,"

When an AI is trained to play an opposing game, such as chess or go, it can become very strong.

I have read in a (non-scientific) article the claim that AI strategies were identified by scientists while an AI was playing Go games, as well as StarCraft games. However, it did not say what these strategies actually were, how they were identified, nor did it explain the configuration in which the AI played (AI vs AI? AI vs human?)

Can someone explain it to me? I am familiar with go, not with starcraft, so an explanation about go is appreciated.

I also note that chess is not mentioned. Is there any specific feature of chess that makes it inappropriate for strategies? Or is it the behavior of an AI in chess that does not allow strategies to be identified?

I understand there are plenty of definitions of strategy, and the article did not give one. So let's focus on the following meaning: a strategy is a group of principles that tell which areas are important to fight for and which are not. A strategy gives long-term rewards, as opposed to tactics, which give short-term rewards obtained through calculation in a specific situation. With this definition, Go is a strategic game with a few well-known tactical situations, such as line versus line.

",26866,,2444,,8/20/2020 9:49,8/20/2020 9:49,Were AI strategies identified at go or starcraft games and how?,,1,2,,,,CC BY-SA 4.0 23140,2,,23139,8/19/2020 22:11,,1,,"

Truth be told, I have no idea how to play Go, but luckily this is an AI forum and not a Go forum. Addressing your questions about the specific strategies that AI discovered, there's a paper released by DeepMind titled "Mastering the game of Go without human knowledge" (https://deepmind.com/research/publications/mastering-game-go-without-human-knowledge). Here, there's a section titled "Knowledge learned by AlphaGo Zero." In it, they mention that AlphaGo Zero discovered some variants of common corner sequences. I think it may be worth checking out. The section also lists some other common Go strategies that AlphaGo Zero learned. I believe that, during the training process, AlphaGo Zero was trained using self-play, where the AI played itself over and over again to get better.

To address your questions about chess: I don't think there's anything about chess that limits AI from learning an effective strategy. Check out this paper for more: https://deepmind.com/research/publications/Mastering-Atari-Go-Chess-and-Shogi-by-Planning-with-a-Learned-Model Here the authors present a general reinforcement learning model capable of mastering multiple common board games. I'm not too familiar with strategies in StarCraft; however, here's a link containing some other projects where strategies that emerge from the AI are shown: https://openai.com/blog/competitive-self-play/

",40428,,,,,8/19/2020 22:11,,,,0,,,,CC BY-SA 4.0 23141,1,23144,,8/19/2020 23:27,,2,124,"

Let's say I have pairs of keys and values of the form $(x_1, y_1), \dots, (x_N, y_N)$. Then I give a neural net a key and a value, $(x_i, y_i)$. For example, $x_i$ could be $4$ and $y_i$ could be $3$, but this does not have to be the case.

Is there a way to teach the neural net to output the $y_i$ variable every time it receives the corresponding $x_i$?

By the way, how do our brains perform this function?

",4744,,2444,,8/20/2020 12:22,8/23/2020 13:33,How can we teach a neural net to make arbitrary data associations?,,1,5,,,,CC BY-SA 4.0 23144,2,,23141,8/20/2020 8:11,,3,,"

In a nutshell : Memorizing is not Learning

So, first, let's recall the classical use of a neural net in supervised learning:

  • You have a set of $(x_{train}, y_{train}) \in X \times Y$ pairs, and you want to extract a general mapping law from $X$ to $Y$
  • You use a neural net function $f_{\theta} : x \rightarrow f_{\theta}(x)$, with $\theta$ the weights (parameters) of your net.
  • You optimise $f_{\theta}$ by minimizing the prediction error, represented by the loss function.

Can this solve your question ? Well, I don't think so. With this scheme, your neural net will learn an appropriate mapping from the set $X$ to the set $Y$, but this mapping is appropriate according to your loss function , not to your $(x_{train}, y_{train})$ pairs.

Imagine that a small part of the data is wrongly labelled. A properly trained net learns to extract relevant features and thus will predict the correct label, not the wrong one you provided. So the net doesn't memorize your pairs; it infers a general law from the data, and this law may not respect each $(x_{train}, y_{train})$ pair. So classical supervised deep learning should not memorize $(x_{train}, y_{train})$ pairs.

However, you could memorize using a net with too many parameters : it's Overfitting !

  • In this case, you set up the net with too many parameters. That gives too many degrees of freedom to your net, and the net will use these DoFs to exactly fit each $(x_{train}, y_{train})$ pair you feed it during training.
  • However, for an input $x$ that it never saw during training, $f_{\theta}(x)$ would have no meaning. That's why we say an overfitted net did not learn, and why overfitting is feared by many DL practitioners.

But as long as you want only to memorize, and not to learn, an overfitted net may be a solution. Another solution for memorization may be expert systems; I don't know them well enough to explain them, but you may check them out if you want.

What about the brain ?

The difficulty in answering this question is that we don't really know how the brain works. I highly recommend this article discussing neural networks and the brain.

Some thoughts to start :

  1. The brain has an incredibly huge number of parameters and great plasticity. In that sense, we could draw a parallel with overfitted neural networks: the brain could also be able to overfit, and thus to memorize by this means.
  2. Our brain is not a feed-forward network at all; we can't delimit any layers, just some rough zones where we know that some specific information is processed. This makes any parallel between neural nets and the brain difficult.
  3. It's still unclear how our brain updates itself. There's no backpropagation, for instance. Our overfitted networks also stem from the update process (for instance, adding regularization to the loss helps avoid overfitting), but we have no idea how this works in the brain, so that's another hurdle to drawing parallels!
  4. A more personal thought: the brain is able to both learn and memorize (the "exception that proves the rule" motto shows that, I think), while learning and memorizing are antonyms for neural nets...
",17759,,17759,,8/23/2020 13:33,8/23/2020 13:33,,,,2,,,,CC BY-SA 4.0 23145,2,,23133,8/20/2020 8:33,,0,,"

If you train a GAN in a way in which the discriminator becomes much more powerful than the generator, then the training of the generator will not be very successful. In the same way, if the discriminator is too weak in comparison with the generator, it will let it produce anything. So, either way, it will affect the training of the entire system.

So, as you can see here, the generator and discriminator are competing against each other and, at the same time, they are dependent on each other for efficient training.

But I've read in a research paper that adding noise to the discriminator's inputs is good for overall stability. Sometimes people also change the labelling scheme: for instance, the value for the "real" label is 1 and people change it to 0.9 (label smoothing). This keeps the discriminator from being overconfident.

",35612,,,,,8/20/2020 8:33,,,,1,,,,CC BY-SA 4.0 23146,1,,,8/20/2020 9:44,,2,446,"

Additionally, by default, the UpSampling2D layer will use a nearest neighbor algorithm to fill in the new rows and columns. This has the effect of simply doubling rows and columns, as described and is specified by the ‘interpolation‘ argument set to ‘nearest‘. Alternately, a bilinear interpolation method can be used which draws upon multiple surrounding points. This can be specified via setting the ‘interpolation‘ argument to ‘bilinear‘.

How exactly does the nearest neighbor algorithm mentioned above work? Also, what does interpolation mean in this context (nearest and bilinear)?

Source: Section on Upsampling2D layer

",35585,,,,,8/20/2020 9:44,What's the nearest neighbor algorithm used for upsampling?,,0,1,,,,CC BY-SA 4.0 23147,2,,20773,8/20/2020 10:59,,3,,"

DDPG is an off-policy algorithm simply because of the objective taking expectation with respect to some other distribution that we are not learning about, i.e. the deterministic policy gradient can be expressed as

$$\nabla _{\theta^\mu} J \approx \mathbb{E}_{s_t \sim \rho^\beta} \left[ \nabla _{\theta^\mu} Q(s,a|\theta^Q) | s=s_t, a=\mu(s_t ; \theta ^\mu) \right]\;.$$

We are interested in learning about the parameters of the policy $\mu$, denoted by $\theta$, but we take the expectation with respect to some discounted state distribution induced by a policy $\beta$, which we will denote as $\rho^\beta$.

To summarise, we are learning off-policy as the expectation of the gradient is taken with respect to some state distribution that occurs under some policy that we are not learning about.

Given that on-policy learning is a special case of off-policy learning, if the replay buffer had a size of one, i.e. we use only the most recent experience tuple to perform parameter updates, then DDPG would be on-policy.

",36821,,36821,,5/9/2021 23:13,5/9/2021 23:13,,,,0,,,,CC BY-SA 4.0 23148,1,,,8/20/2020 11:07,,1,25,"

This does not answer my question. I struggled very hard to understand the SVD from a linear-algebra point of view. But in some cases I failed to connect the dots. So, I started to see all the application of SVD. Like movie recommendation system, Google page ranking system, etc.

Now in the case of movie recommendation system, what I had as a mental picture is...

The SVD is a technique that falls under collaborative filtering. What the SVD does is factor a big data matrix into smaller matrices. As an input to the SVD, we give an incomplete data matrix, and the SVD gives us a probable complete data matrix. Here, in the case of a movie recommendation system, we try to predict the ratings of users. An incomplete input data matrix means that some users didn't give ratings to certain movies, so the SVD will help to predict those users' ratings. I still don't know how the SVD breaks down a large matrix into smaller pieces, and I don't know how the SVD determines the dimensions of the smaller matrices.

It would be helpful if anyone could judge my understanding. And I will very much appreciate any resources which can help me to understand the SVD from scratch to its application to Netflix recommendation systems. Also for the Google Page ranking system or for other applications.

I am looking forward to seeing an explanation both at a human-intuition level and from a linear-algebra point of view. Because I am interested in using this algorithm in my research, I need to understand as soon as possible how the SVD works deep down, from the core.

",40437,,,,,8/20/2020 11:07,Human intuition behind SVD in case of recommendation system,,0,0,,,,CC BY-SA 4.0 23151,1,,,8/20/2020 15:14,,0,80,"

I have been reading the research paper Tell Me Where to Look: Guided Attention Inference Network.

In this paper, they calculate the attention loss, but I don't understand how to calculate it. Do we have to calculate it like outcome[c]? If so, then why do arrows connect the middle FC and the last FC to each other?

Here is the image:

",36107,,2444,,8/21/2020 11:48,8/21/2020 11:48,"How to calculate the attention loss in the paper ""Tell Me Where to Look: Guided Attention Inference Network""?",,0,2,,,,CC BY-SA 4.0 23154,1,,,8/20/2020 17:45,,1,1323,"

I am designing a deep autoencoder for graph embedding (node embedding, to be exact), following the paper SDNE. In the original paper, they used the sigmoid activation for all hidden layers in the autoencoder model, even for the embedding layer.

However, I think the embedding layer should use the tanh activation and the reconstruction layer should use the ReLU activation, because the embedding would then be in the range $[-1, 1]$ and the reconstruction layer in the range $[0, x]$, which should generate better results due to a larger range for representation (and for a directed graph). In contrast, the range $[0,1]$ from the sigmoid would lead to a loss of embedding information.

So, what is the best activation function for deep autoencoders to capture good information about the structure of graph?

",25645,,2444,,8/21/2020 11:52,2/28/2021 18:05,What is the best activation function for the embedding layer in a deep auto-encoder?,,1,0,,,,CC BY-SA 4.0 23155,2,,23124,8/20/2020 18:04,,0,,"

From Andrew Ng's lessons on Coursera, the batch_size should be a power of 2, e.g. 512, 1024, 2048. This tends to make training faster.

You also don't need to drop your last images to fit the batch_size. In libraries like TensorFlow or PyTorch, the last batch will simply contain number_training_images % batch_size images.

Last but not least, the batch_size needs to fit in your training memory (CPU or GPU). You can try several large values of batch_size to find one that does not run out of memory. The smaller the number of mini-batches (number_training_images // batch_size + 1), the faster the training time.
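
As a rough sketch of the arithmetic (plain Python; the numbers are made up just for illustration):

number_training_images = 1003
batch_size = 32  # a power of 2

number_full_batches = number_training_images // batch_size  # 31 full batches
last_batch_size = number_training_images % batch_size       # 11 leftover images form the last, smaller batch

# Libraries like TensorFlow or PyTorch handle this smaller last batch automatically,
# so you do not need to drop the leftover images.
print(number_full_batches, last_batch_size)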

I hope this helps!

",25645,,,,,8/20/2020 18:04,,,,0,,,,CC BY-SA 4.0 23158,1,,,8/20/2020 21:32,,1,296,"

When I build a convolution layer for image processing, the filter parameters should have 3 dimensions, (filter_length, filter_width, color_depth). Is that correct?

Why is this convolution layer called Conv2D? Where does the 2 come from?

",37178,,2444,,12/30/2021 10:47,12/30/2021 10:47,Why is the convolution layer called Conv2D?,<2d-convolution>,1,0,,,,CC BY-SA 4.0 23159,1,23168,,8/21/2020 1:08,,1,2010,"
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(1000, 16, input_length=20), 
    tf.keras.layers.Dropout(0.2),                           # <- How does the dropout work?
    tf.keras.layers.Conv1D(64, 5, activation='relu'),
    tf.keras.layers.MaxPooling1D(pool_size=4),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

I can understand when dropout is applied between Dense layers, which randomly drops and prevents the former layer neurons from updating parameters. I don't understand how dropout works after an embedding layer.

Let's say the output shape of the Embedding layer is (batch_size,20,16) or simply (20,16) if we ignore the batch size. How is dropout applied to the embedding layer's output?

Does it randomly drop out rows or columns?

",37178,,2444,,8/21/2020 12:08,8/21/2020 13:45,How is dropout applied to the embedding layer's output?,,1,0,,,,CC BY-SA 4.0 23160,1,,,8/21/2020 4:27,,1,41,"

Suppose the following properties of a board game:

  1. High branching factor in the beginning of the game (~500) which slowly tends towards 0 at the end of the game

  2. An evaluation function for any given board state isn't hard to create and can be quite accurate

And that we want to create an AI to play such board game.

What method of tree searching should be applied for the AI?

Considering the absurd branching factor (at least for most of the game), Monte Carlo tree search is appealing. The problem is that, from what I've seen, Monte Carlo search methods are usually used on games with both a high branching factor and no easy evaluation function. However, that is not the case for this board game, as previously stated.

I'm simply curious how this property of evaluation should influence my decision. For example: Should I replace simulations and playouts with an evaluation function? At that point, would alpha-beta pruning minimax work better? Is there some hybrid which would be optimal?

",40455,,,,,8/21/2020 4:27,Which method of tree searching should be used for this board game?,,0,1,,,,CC BY-SA 4.0 23161,1,23163,,8/21/2020 5:01,,5,108,"

From Goodfellow et al. (2014), we have the adversarial loss:

$$ \min_G \, \max_D V (D, G) = \mathbb{E}_{x∼p_{data}(x)} \, [\log \, D(x)] + \, \mathbb{E}_{z∼p_z(z)} \, [\log \, (1 − D(G(z)))] \, \text{.} \quad$$

In practice, the expectation is computed as a mean over the minibatch. For example, the discriminator loss is:

$$ \nabla_{\theta_{d}} \frac{1}{m} \sum_{i=1}^{m}\left[\log D\left(\boldsymbol{x}^{(i)}\right)+\log \left(1-D\left(G\left(\boldsymbol{z}^{(i)}\right)\right)\right)\right] $$

My question is: why is the mean used to compute the expectation? Does this imply that $p_{data}$ is uniformly distributed, since every sample must be drawn from $p_{data}$ with equal probability?

The expectation, expressed as an integral, is:

$$ \begin{aligned} V(G, D) &=\int_{\boldsymbol{x}} p_{\text {data }}(\boldsymbol{x}) \log (D(\boldsymbol{x})) d x+\int_{\boldsymbol{z}} p_{\boldsymbol{z}}(\boldsymbol{z}) \log (1-D(g(\boldsymbol{z}))) d z \\ &=\int_{\boldsymbol{x}} p_{\text {data }}(\boldsymbol{x}) \log (D(\boldsymbol{x}))+p_{g}(\boldsymbol{x}) \log (1-D(\boldsymbol{x})) d x \end{aligned} $$

So, how do we go from an integral involving a continuous distribution to summing over discrete probabilities, and further, that all those probabilities are the same?

The best I could find from other StackExchange posts is that the mean is just an approximation, but I'd really like a more rigorous explanation.

This question isn't exclusive to GANs, but is applicable to any loss function that is expressed mathematically as an expectation over some sampled distribution, which is not implemented directly via the integral form.

(All equations are from the Goodfellow paper.)

",40197,,2444,,12/10/2021 16:10,12/10/2021 16:10,Why is the mean used to compute the expectation in the GAN loss?,,1,0,,,,CC BY-SA 4.0 23162,1,23250,,8/21/2020 5:37,,3,121,"

When training a network using word embeddings, it is standard to add an embedding layer to first convert the input vector to the embeddings.

However, assuming the embeddings are pre-trained and frozen, there is another option. We could simply preprocess the training data prior to giving it to the model so that it is already converted to the embeddings. This will speed up training, since this conversion need only be performed once, as opposed to on the fly for each epoch.

Thus, the second option seems better. But the first choice seems more common. Assuming the embeddings are pre-trained and frozen, is there a reason I might choose the first option over the second?

",12201,,12201,,8/21/2020 14:12,8/25/2020 13:36,When to convert data to word embeddings in NLP,,3,0,,,,CC BY-SA 4.0 23163,2,,23161,8/21/2020 10:15,,4,,"

It seems your question is concerned with how an empirical mean works.

It is indeed true that, if all $x^{(i)}$ are independent identically distributed realisations of a random variable $X$, then $\lim_{n \rightarrow \infty} \frac{1}{n}\sum_{i=1}^n f(x^{(i)}) = \mathbb{E}[f(X)]$. This is a standard result in statistics known as the law of large numbers.
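
As a small numerical illustration of this (a sketch in Python with numpy; the distribution and the function $f$ are arbitrary choices just for the demo):

import numpy as np

rng = np.random.default_rng(0)

# X ~ N(2, 1); we estimate E[f(X)] for f(x) = x**2.
# The true value is Var(X) + E[X]^2 = 1 + 4 = 5.
for n in [10, 1_000, 100_000]:
    samples = rng.normal(loc=2.0, scale=1.0, size=n)
    estimate = np.mean(samples ** 2)   # empirical mean of f over i.i.d. samples
    print(n, estimate)                 # approaches 5 as n grows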

",36821,,,,,8/21/2020 10:15,,,,0,,,,CC BY-SA 4.0 23167,2,,23158,8/21/2020 12:02,,2,,"

A 2D convolution is a convolution where the kernel has the same depth as the input, so, in theory, you do not need to specify the depth of the kernel, if you know the depth of the input.

I don't know which library you are referring to (although you tagged your post with TensorFlow and Keras), but, in TensorFlow, you only need to specify the width and height of the kernel in the Conv2D class, given that the depth is automatically calculated from the depth of the input.

Thus, the $2$ comes from the fact that you slide across two dimensions (i.e. width and height).

On the other hand, in a 3D convolution, the depth of the kernel does not necessarily have the same depth of the input, so, in that case, you also slide across the depth. In TensorFlow, you need to specify the width, height and depth of the Conv3D class.
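
As a quick illustration (a minimal TensorFlow sketch; the filter count and kernel size are arbitrary), you can check that the kernel depth is inferred from the input depth, and that only the two sliding dimensions are specified:

import tensorflow as tf

x = tf.random.normal((1, 32, 32, 3))                     # a batch with one 32x32 RGB image (depth 3)
conv2d = tf.keras.layers.Conv2D(filters=8, kernel_size=(5, 5))
y = conv2d(x)

print(conv2d.kernel.shape)  # (5, 5, 3, 8): the depth 3 was inferred from the input
print(y.shape)              # (1, 28, 28, 8): the kernel slid only over width and height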

",2444,,,,,8/21/2020 12:02,,,,0,,,,CC BY-SA 4.0 23168,2,,23159,8/21/2020 13:45,,1,,"

It doesn't drop rows or columns; it acts directly on scalars. The Keras documentation for the Dropout layer explains it and illustrates it with an example:

The Dropout layer randomly sets input units to 0 with a frequency of rate

After a Dense layer, the Dropout inputs are directly the outputs of the Dense layer's neurons, as you said. After your embedding layer, in your case, you should have rate * (16 * input_length) = 0.2 * 20 * 16 = 64 inputs set to 0, out of the 320 input scalars. These 64 dropped inputs are randomly selected in the 20x16 grid. Note that Dropout rescales the non-dropped inputs by multiplying them by a factor of $\frac{1}{1-rate}$.
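
You can check this empirically with a small sketch (TensorFlow; the rate and shapes follow the question, the input values are arbitrary):

import tensorflow as tf

emb_out = tf.random.uniform((1, 20, 16), minval=0.5, maxval=1.0)  # stand-in for the embedding output
dropout = tf.keras.layers.Dropout(0.2)

dropped = dropout(emb_out, training=True)

# On average ~64 of the 320 scalars are set to 0, chosen independently of any
# row or column, and the surviving values are scaled by 1 / (1 - 0.2) = 1.25.
print(int(tf.reduce_sum(tf.cast(dropped == 0.0, tf.int32))))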

",17759,,,,,8/21/2020 13:45,,,,0,,,,CC BY-SA 4.0 23169,1,,,8/21/2020 14:27,,1,62,"

This is a generic question. Still posting it to get insights from experts in the field.

I am interested in knowing if Neural Networks are used in general apart from specific hi-tech organizations.

If so, which type of NN is used in which industry and for what purpose?

",15267,,2444,,8/21/2020 17:30,9/20/2020 18:00,Are neural networks really used apart from specific hi-tech organisations?,,1,3,,,,CC BY-SA 4.0 23172,1,,,8/21/2020 16:59,,0,191,"

I am practicing with an image dataset whose images have different dimensions (the smallest width is around 300 and the largest around 2400, and the widths and heights of the images are not all the same). If I simply crop and pad them to 1024x1024, I am not getting good val_accuracy: it's just giving 49% accuracy. How should I preprocess these images, given that the brightness of the images also changes? My task is to classify them into 5 classes.

",38737,,,,,10/31/2022 5:59,How to train the images of various sizes?,,3,2,,,,CC BY-SA 4.0 23173,2,,23169,8/21/2020 17:02,,2,,"

Before putting up my experience I'd like to provide some facts about AI startups and established Hi-Tech companies.

  1. Most companies claiming to use AI don't actually use ML/DL
  2. There are products by some companies that are classified as using AI even though they just use linear regression

Coming to my experience, I can say that AI is both passively and actively utilized in product development and services. Though neural networks are the driving force behind current AI tech, the things we study in textbooks are just basic building blocks that are used to design complex network architectures. Though most of the hi-tech companies you mention might not be willing to share their secrets, the publications they make at conferences are good enough. I would suggest going through the papers published by companies like NVIDIA, Microsoft, and Google to get an understanding of how complex networks are designed to address specific problems.

",25676,,25676,,8/21/2020 17:52,8/21/2020 17:52,,,,0,,,,CC BY-SA 4.0 23175,1,,,8/21/2020 17:56,,1,34,"

I am trying to solve the following problem: classify the red points and the green points in image 1 into two classes. A cluster of green or red points can be anywhere, and there can be any number of green or red clusters; different coloured clusters do not mix or bleed into each other; at least one green and one red cluster always exists. An example of points to classify is given in image 1.

So I guess there are two ways to do this

  1. Classify them with boundaries, as shown in image 2, using some algorithm, then use some post-processing step to link the separate classes that have the same colour.

  2. Use some algorithm to directly find the classification boundaries as shown in image 3.

So my question is: what algorithm or algorithms can I use for 1) and 2)?

It seems it can be solved using the MLE algorithm in some way.

",40467,,40467,,8/21/2020 18:03,8/21/2020 18:03,What Classification Algorithm Do I need to Use to Solve this Problem?,,0,5,,,,CC BY-SA 4.0 23179,2,,22602,8/21/2020 19:59,,0,,"

Intuitively I think there's definitely something to be said for your idea, but it's not a 100% clear case, and there are also some arguments to be made for the case that we should also be training the policy from data where $z_t = -1$.

So, first let's establish that if we do indeed choose to discard any and all data where $z_t = -1$, we are in fact discarding a really significant part of our data; we're discarding 50% of all the data we generate in games like Go where there are no draws (less than that in games like Chess where there are many draws, but still a significant amount of data). So this is not a decision to be made lightly (it has a major impact on our sample efficiency), and we should probably only do it if we really believe that policy learning from any data where $z_t = -1$ is actually harmful.


The primary idea behind the self-play learning process in AlphaGo Zero / AlphaZero may intuitively be explained as:

  1. When we run an MCTS search biased by a trained policy $\pi_t$, we expect the resulting distribution of visits to be slightly better than what was produced by $\pi_t$ alone.
  2. According to the expectation from point 1., we may use the visit counts of MCTS as a training target for the policy $\pi_t$, and hence we expect to get a slight improvement in the quality of that trained policy.
  3. If we were to now run a new MCTS search biased by the updated policy in the same situation again, we would expect that to perform even better than it previously did because it is now biased by a new policy which has improved in comparison to the policy we previously used.

Of course, there can be exceptions to point 1. if we get unlucky, but on average we expect that to be true. Crucially for your question, we don't expect this to only be true in games where we actually won, but still also be true in games that we ultimately end up losing. Even if we still end up losing the game played according to the MCTS search, we expect that we at least put up a slightly better fight with the MCTS + $\pi_t$ combo than we would have done with just $\pi_t$, and so it may still be useful to learn from it (to at least lose less badly).

On top of this, it is important to consider that we intentionally build in exploration mechanisms in the self-play training process, which may "pollute" the signal $z_t$ without having polluted the training target for the policy. In self-play, we do not always pick the action with the maximum visit count (as we would in an evaluation match / an important tournament game), but we pick actions proportionally to the MCTS visit counts. This is done for exploration, to introduce extra variety in the experience that we generate, to make sure that we do not always learn from exactly the same games. This can clearly affect the $z_t$ signal (because sometimes we knowingly make a very very bad move just for the sake of exploration), but it does not affect the policy training targets encountered throughout that game; MCTS still tries to make the best that it can out of the situations it faces. So, these policy training targets are still likely to be useful, even if we "intentionally" made a mistake somewhere along the way which caused us to lose the game.

",1641,,,,,8/21/2020 19:59,,,,1,,,,CC BY-SA 4.0 23180,1,,,8/21/2020 20:00,,5,2045,"

In section 3 of the paper Continuous control with deep reinforcement learning, the authors write

As detailed in the supplementary materials we used an Ornstein-Uhlenbeck process (Uhlenbeck & Ornstein, 1930) to generate temporally correlated exploration for exploration efficiency in physical control problems with inertia (similar use of autocorrelated noise was introduced in (Wawrzynski, 2015)).

In section 7, they write

For the exploration noise process we used temporally correlated noise in order to explore well in physical environments that have momentum. We used an Ornstein-Uhlenbeck process (Uhlenbeck & Ornstein, 1930) with θ = 0.15 and σ = 0.2. The Ornstein-Uhlenbeck process models the velocity of a Brownian particle with friction, which results in temporally correlated values centered around 0.

In a few words, what is the Ornstein-Uhlenbeck process? How does it work? How exactly is it used in DDPG?

I want to implement the Deep Deterministic Policy Gradient algorithm, and, in the initial actions, noise has to be added. However, I cannot understand how this Ornstein-Uhlenbeck process works. I have searched the internet, but I have not understood the information that I found.

",40477,,2444,,8/21/2020 22:23,1/25/2022 9:33,"How does the Ornstein-Uhlenbeck process work, and how it is used in DDPG?",,1,1,,,,CC BY-SA 4.0 23184,1,,,8/22/2020 4:01,,1,73,"

If the title was not very clear, I want a method that takes an input image like this,

[[0, 0, 0, 0],
 [1, 1, 1, 0],
 [1, 1, 1, 0],
 [0, 1, 1, 0]]

and output the 2D coordinates of the 1s of the image (So that I can recreate the image)

The application is a robot creating the image using some kind of building blocks, placing one block after the other

I want the output to be sequential because I need to reconstruct the input image pixel by pixel and there are some conditions on the image construction order (e.g. You cannot place a 1 somewhere when it is surrounded by 1s)

The image can change and the number of 1s in the image too.

  1. What is an appropriate AI method to apply in this case?
  2. How should I feed the image to the network? (Will flattening it to 1D affect my need for an output order?)
  3. Should I get the output coordinates one by one or as an ordered 2xN matrix?
  4. If one by one, should I feed the same image for each output or the image without the 1s already filled?

I have tried to apply "NeuroEvolution of Augmenting Topologies" for this using neat-python but was unsuccessful. I am currently looking at RNNs but I am not sure if it is the best choice either.

",40482,,,,,8/22/2020 9:41,Choosing an AI method to recreate a given binary 2D image,,1,1,,,,CC BY-SA 4.0 23186,2,,23184,8/22/2020 7:07,,2,,"

This sounds like a problem that's best solved with a simple non-AI algorithm. If you just enumerate the coordinates in a regular order (rows, columns, zig-zag, Hilbert curve) and emit the coordinates where the image has a '1', you're meeting your requirements; a sketch is shown below. Is there any specific reason you want to use an AI algorithm, which is most likely worse than this?
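
For instance, a plain Python enumeration in row-major order (a sketch using the example image from the question; the placement constraint mentioned in the question could be checked on top of this if needed):

image = [[0, 0, 0, 0],
         [1, 1, 1, 0],
         [1, 1, 1, 0],
         [0, 1, 1, 0]]

# Emit the (row, col) coordinates of the 1s in a fixed row-major order.
coordinates = [(r, c)
               for r, row in enumerate(image)
               for c, value in enumerate(row)
               if value == 1]
print(coordinates)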

",22993,,22993,,8/22/2020 9:41,8/22/2020 9:41,,,,2,,,,CC BY-SA 4.0 23187,1,,,8/22/2020 9:58,,2,82,"

In Goodfellow et al. book Deep Learning chapter 12.1.4 they write

These large models learn some function $f(x)$, but do so using many more parameters than are necessary for the task. Their size is necessary only due to the limited number of training examples.

I am not able to understand this. Large models are expressive, but if you train them on few examples they should also overfit.

So, what do the authors mean by saying large models are necessary precisely because of the limited number of training examples?

This seems to go against the spirit of using more bias when training data is limited.

",21511,,21511,,8/23/2020 16:12,4/22/2021 20:02,Why are large models necessary when we have a limited number of training examples?,,1,0,,,,CC BY-SA 4.0 23190,1,,,8/22/2020 12:52,,2,50,"

I was going through the Sutton book, and it says that the update formula for Q-learning comes from a weighted average of the returns, i.e.

$$\text{new estimate} = \text{old estimate} + \alpha \, [\text{return} - \text{old estimate}]$$

So, by the law of large numbers, this will converge to the optimal true Q-value.

Now, when we go to deep Q-networks, how exactly is the weighted average computed? All they do is try to reduce the error between the target and the estimate, and keep in mind this isn't the true target, it's just an unbiased estimate. Since it's only an unbiased estimate, how is the weighted average (i.e. the expectation) computed?

Can someone help me out here? Thanks in advance.

",40049,,,,,8/22/2020 12:58,How is weighted average computed in Deep Q networks,,1,0,,,,CC BY-SA 4.0 23191,2,,23190,8/22/2020 12:58,,2,,"

Let's say $Q$ is the old estimate, $Q'$ the new estimate, and $R$ is the return.

We have

$$ Q' = Q + \alpha(R-Q) $$

This can be rewritten as

$$ Q' = (1-\alpha)Q + \alpha R $$

When $\alpha$ is a constant, this is an exponential weighted average of returns. If $n$ is the number of samples we get and $\alpha=1/n$ (so it decreases with each sample), we get

$$ Q' = \frac{n-1}{n}Q + \frac{1}{n}R $$

This simply represents the average return. So, playing with $\alpha$ tunes the weighting of the estimate.
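
If we denote by $Q_{n+1}$ the estimate obtained after the $n$-th return $R_n$, unrolling the constant-$\alpha$ update makes the exponential weighting explicit:

$$ Q_{n+1} = (1-\alpha)^n Q_1 + \sum_{k=1}^{n} \alpha (1-\alpha)^{n-k} R_k $$

so each past return $R_k$ receives a weight $\alpha(1-\alpha)^{n-k}$ that decays exponentially with its age, which is why this is called an exponential (recency-weighted) average.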

",37829,,,,,8/22/2020 12:58,,,,9,,,,CC BY-SA 4.0 23193,1,,,8/22/2020 16:24,,0,98,"

What is the error function? Is it the same as the cost function?

Is the error function known or unknown?

When I get the outcome of a neural net, I compare it with the target value. The difference between the two is called the error. When I get multiple error values, e.g. when I pass a batch through the NN, I will get as many error values as the size of my batch. Is the error function the plot of these points? If yes, to me, the error function would be unknown: I would only know some points on the graph of the error function.

",27777,,2444,,8/22/2020 17:14,8/23/2020 13:30,Is the error function known or unknown?,,1,0,,,,CC BY-SA 4.0 23194,2,,23193,8/22/2020 17:26,,2,,"

In deep learning, the error function is sometimes known as loss function or cost function (but I do not exclude the possibility that these terms/expressions have also been used to refer to different although related functions, so you should take into account your context).

In statistical learning theory, the function that you want to minimize is known as expected risk

\begin{align} R(\theta) &= \int L(y, f_{\theta}(x)) d P(x, y) \\ &= \mathbb{E}_{P(x, y)} \left[ L(y, f_{\theta}(x)) \right], \end{align} where $\theta$ are the parameters of your model $f$, and the function that you actually minimize is the empirical risk given the dataset $D = \{ (x_i, y_i) \}_{i=1}^n$,

$$ R_{\mathrm{emp}}(\theta)=\frac{1}{n} \sum_{i=1}^{n}L(y_i, f_{\theta}(x_i)), $$ which is a generalization of the commonly used cost functions, such as mean squared error (MSE) or binary cross-entropy. For example, in the case of MSE, $L = \left(y_{i}-f_\theta(x_i)\right)^{2}$, which can be called the loss function (though this terminology may not be standard).

You optimize the empirical risk, a proxy for the expected risk, because the expected risk is incomputable (given that $P(x, y)$ is generally unknown), so, in this sense, the expected risk is unknown.

If you use the term error function to refer to the expected risk, then, yes, the error function is typically unknown, but error function is typically used to refer to an instance of the empirical risk, so, in that case, it is known and computable.

Note that I purposely used the term loss function above to refer to $L$ and the term cost function to refer to the empirical risks, such as the MSE (i.e., in this case, I did not use loss function and cost function as synonyms), which shows that terminology is not always used consistently, so take into account your context.

",2444,,2444,,8/23/2020 13:30,8/23/2020 13:30,,,,3,,,,CC BY-SA 4.0 23196,1,,,8/22/2020 23:36,,2,131,"

I have to deal with a non-episodic task, which additionally has a continuous state space; more specifically, at each time step there is always a new state that has never been seen before. I want to use the DQN algorithm. As stated in Sutton's book (Chapter 10), the average reward setting, that is, the undiscounted setting with a differential value function, should be preferred for non-episodic (continuing) tasks with function approximation.

(a) Are there any reference papers that use DQN with the average reward setting?

(b) Why should the classic discounted setting (with no average reward) fail in such tasks, compared to the average reward setting, taking into account that the highest reward my agent can gain in a time step is 1.0, and thus the maximum $G_t = \frac{1}{1-\gamma}$, which is not infinite?

",36055,,36055,,8/23/2020 0:08,8/23/2020 0:08,Combine DQN with the Average Reward setting,,0,0,,,,CC BY-SA 4.0 23197,2,,23187,8/23/2020 3:40,,1,,"

If you read the relevant section, it also says:

Model compression is applicable when the size of the original model is driven primarily by a need to prevent overfitting. In most cases, the model with the lowest generalization error is an ensemble of several independently trained models. Evaluating all $n$ ensemble members is expensive. Sometimes, even a single model generalizes better if it is large (for example, if it is regularized with dropout).

The keyword (I think) here is dropout. Dropout learning in the referenced book has been interpreted as training an ensemble of models, where the probability of each model is the same as the probability of a particular dropout architecture of the large neural network. Thus, this effectively turns training into training multiple smaller neural nets. According to this paper on dropout, by the original authors, dropout prevents co-adaptation, which effectively means you are just training an ensemble of neural nets. But this intuition lacks any theoretical justification.

Another paper (understanding the paper might require familiarity with certain statistical ideas of ML) claims this is not true, and dropout doesn't reduce co-adaptation but more likely reduces the variance over dropout patterns. They have provided better empirical and theoretical justifications to this end. So it is still up for debate what actually happens.

But, in general, the generalization error upper bound is, very roughly, directly proportional to the size of the neural net. So yes, the authors' statement, taken at face value, is oversimplified and most likely wrong in the general case.

",,user9947,,,,8/23/2020 3:40,,,,0,,,,CC BY-SA 4.0 23198,2,,23172,8/23/2020 4:06,,1,,"

I would use ImageDataGenerator.flow_from_directory. The documentation is here. Make a train directory. Within it, create 5 subdirectories, one for each class, and give them the desired class names. Place the associated images into the 5 class subdirectories. Then use something like the code below. Set the image size to something standard, particularly if you are using transfer learning; for MobileNet, the standard size is 224 x 224. You can use the ImageDataGenerator to augment your data, for example by setting horizontal_flip=True. You can also vary the brightness and rotation of the images.

import tensorflow as tf

# Variable names like `datagen` and `train_gen` and the "train" directory are illustrative.
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    featurewise_center=False,
    samplewise_center=False,
    featurewise_std_normalization=False,
    samplewise_std_normalization=False,
    zca_whitening=False,
    zca_epsilon=1e-06,
    rotation_range=0,
    width_shift_range=0.0,
    height_shift_range=0.0,
    brightness_range=None,
    shear_range=0.0,
    zoom_range=0.0,
    channel_shift_range=0.0,
    fill_mode="nearest",
    cval=0.0,
    horizontal_flip=True,
    vertical_flip=False,
    rescale=None,
    preprocessing_function=None,
    data_format=None,
    validation_split=0.0,
    dtype=None)

# flow from the train directory described above
train_gen = datagen.flow_from_directory(
    "train",
    target_size=(256, 256),
    color_mode="rgb",
    classes=None,
    class_mode="categorical",
    batch_size=32,
    shuffle=True,
    seed=None,
    save_to_dir=None,
    save_prefix="",
    save_format="png",
    follow_links=False,
    subset=None,
    interpolation="nearest",
)
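
Assuming you have already built and compiled a Keras model (called model below; it is not shown in this answer), a minimal way to train it from the generator would be:

# `model` is assumed to be an already-compiled Keras model.
history = model.fit(train_gen, epochs=10)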
",33976,,62466,,10/31/2022 5:59,10/31/2022 5:59,,,,0,,,,CC BY-SA 4.0 23200,2,,23180,8/23/2020 6:02,,6,,"

The Ornstein-Uhlenbeck process is defined as (in the continuous setting):

$$dX_t = -\beta(X_t - \alpha)dt + \sigma dW_t$$

The analogue of this process in the discrete-time case, which I assume is what's applicable in the RL setting, is: $$X_{t+1} = X_t -\beta(X_t - \alpha) + \sigma \{W_{t+1}-W_t\} = (1 -\beta)X_t + \alpha\beta + \sigma \{W_{t+1}-W_t\}$$

In the RL setting, the terms in the equation probably mean the following:

  • $X_t$ is the value of the process at time step $t$, i.e. the number $\in \mathbb R$ where the particle has moved to at time $t$. In DDPG, this value is used as the exploration noise that is added to the deterministic action (it is not the environment state).
  • $\beta$ and $\alpha$ are just constants which decide certain movement characteristics of the particle. Check here for graphs plotted for various $\beta$.
  • $W_t$ is a Wiener process which starts at $W_0 = 0$ and then adds independent increments of $\mathcal N(\mu,\sigma)$ as $W_{t+1} = W_t+\mathcal N(\mu,\sigma)$, which is basically a random walk. Usually we use $\mathcal N(0,1)$. This is formulated as $W_t-W_s = \sqrt{t-s}\, \mathcal N(0,1)$, because $W_t$ can be written recursively as $W_t = \mathcal N(0,1)+W_{t-1} = \mathcal N(0,1) + \mathcal N(0,1) + \dots + W_s$, and, since the samplings at each step are independent, the means add as $\mu_t+\mu_{t-1}+\dots$ and the variances as $\sigma_t^2 + \sigma_{t-1}^2+\dots$. Since here the means and variances are $0$ and $1$ respectively, the final mean is $\mu = 0$ and the variance is $\sigma^2 = (t-s)$. Hence, by properties of Gaussian random variables, you can write (it is easy to show this via a variable transformation) $W_t-W_s = \sqrt{t-s}\, \mathcal N(0,1)$. Here is the formulation of the standard Wiener process.
  • $\sigma$ is the weighting factor of the Wiener process, which just controls the amount of noise being added to the process.

Another useful resource on the discrete Ornstein-Uhlenbeck process, much less generalized. I think you can now extend this to whatever scenario you are interested in within the RL setting.
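
As an illustration, here is a minimal discrete-time implementation of the kind typically used to generate action noise in DDPG (a sketch with numpy; the class and parameter names are mine, and the defaults $\theta = 0.15$, $\sigma = 0.2$ follow the values quoted in the question):

import numpy as np

class OrnsteinUhlenbeckNoise:
    def __init__(self, size, mu=0.0, theta=0.15, sigma=0.2, dt=1.0, seed=None):
        self.mu = mu * np.ones(size)
        self.theta = theta          # pull strength towards mu (beta in the notation above)
        self.sigma = sigma          # scale of the Wiener increments
        self.dt = dt
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        self.x = np.copy(self.mu)   # start the process at its long-run mean

    def sample(self):
        # X_{t+1} = X_t + theta * (mu - X_t) * dt + sigma * sqrt(dt) * N(0, I)
        dx = (self.theta * (self.mu - self.x) * self.dt
              + self.sigma * np.sqrt(self.dt) * self.rng.standard_normal(len(self.x)))
        self.x = self.x + dx
        return self.x

# In DDPG the sampled value is added to the deterministic action, e.g.
# action = actor(state) + noise.sample()
noise = OrnsteinUhlenbeckNoise(size=2, seed=0)
print([noise.sample() for _ in range(3)])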

",,user9947,32668,user9947,1/25/2022 9:33,1/25/2022 9:33,,,,0,,,,CC BY-SA 4.0 23202,1,,,8/23/2020 10:37,,1,279,"

In normal Q-learning, the update rule is an implementation of the exponential moving average, which then converges to the optimal true Q values. However, looking at DQN, how exactly is the exponential moving average implemented in deep networks?

",40049,,2444,,8/23/2020 11:11,5/20/2021 14:05,How is exponential moving average computed in deep Q networks?,,1,0,,,,CC BY-SA 4.0 23204,2,,23202,8/23/2020 11:39,,1,,"

However, looking at DQN, how exactly is the exponential moving average implemented in deep networks?

It is not implemented directly as exponential moving average.

Instead, the ability of neural networks to learn online and incrementally forget older input/output associations is used to achieve the same goal.

If you use the simplest mini-batch stochastic gradient descent methods - i.e. just a simple gradient step $\mathbf{w} \leftarrow \mathbf{w} - \alpha \nabla_{\mathbf{w}}\sum_i(g_i - \hat{q}_i)^2$, where $g_i$ is the measured (or bootstrap-estimated) discounted return for a single state/action pair and $\hat{q}_i$ is the current estimate - then the learning rate $\alpha$ is analogous to the same factor in the exponential moving average approach, and in fact would be the same thing mathematically if you one-hot-encoded the states and only had a single layer in the neural network.

Typical implementations of DQN will have deeper networks, will not one-hot-encode the entire state space, and will typically use some gradient modifier such as momentum or Adam to improve performance. So the match to exponential moving average is not exact. But the behaviour is similar in the most important aspect for RL - the ability to learn online and forget older values as the target distribution of expected returns changes due to changes in policy.

",1847,,,,,8/23/2020 11:39,,,,5,,,,CC BY-SA 4.0 23206,2,,67,8/23/2020 12:05,,4,,"

One real-world application of SAT solvers is finding a set of compatible package versions in the Python conda package manager.

(see, for example, https://www.anaconda.com/blog/understanding-and-improving-condas-performance)

Other practical applications of SAT solvers include (see http://www.carstensinz.de/talks/RISC-2005.pdf):

  • Product Configuration
  • Hardware Verification
  • Bounded Model Checking
  • (Hardware) Equivalence Checking
  • Asynchronous circuit synthesis (IBM)
  • Software-Verification
  • Expert system verification
  • Planning (air-traffic control, telegraph routing)
  • Scheduling (sport tournaments)
  • Finite mathematics (quasigroups)
  • Cryptanalysis
",40514,,,,,8/23/2020 12:05,,,,0,,,,CC BY-SA 4.0 23207,1,,,8/23/2020 13:07,,0,439,"

The 'by the book' method of delivering a final machine learning model is to include all data in the final training (including the validation and test sets). To check the robustness of my model, I use a randomly chosen population for the training and validation sets with each training run (no fixed random seed). The results on the validation and then test sets are pretty satisfactory for my case; however, they are different each time, with precision spanning between 0.7 and 0.9. This is due to the fact that each time different data points fall into the set with which the model is trained.

My question is: how do I know that the final training will also generate a good model, and how can I estimate its precision when I no longer have any unseen data?

",22659,,22659,,8/24/2020 10:45,10/24/2020 4:02,"How can I be sure that the final model, trained on all data, is correct?",,2,1,,,,CC BY-SA 4.0 23209,2,,7755,8/23/2020 15:12,,2,,"

Change the action space at each step, depending on the internal_state. I assume this is nonsense.

Yes, this seems like overkill and makes the problem unnecessarily complex; there are other things you can do.

Do nothing : let the model understand that choosing an unavailable action has no impact.

While this will not harm your model in any negative way, the downside is that the model could take a long time to understand that some actions don't matter for some values of the state.

Do -almost- nothing : impact slightly negatively the reward when the model chooses an unavailable action.

Same response as for the previous point, except that this could harm your model (though I'm not sure whether this is significant). Assume you give a reward of -100 for every illegal action it takes. Looking only at the negative rewards, we have:

  • -100 when internal_state == 0
  • -200 when internal_state == 1

By doing this, you might be implicitly favoring situations where internal_state == 0. Plus, I don't see the point of the -100 reward anyway, since, once the agent comes to state 0, it will still have to choose a value for the illegal actions too (it's not like it can ignore the values for those actions and escape the -100 reward).

Help the model : by incorporating an integer into the state/observation space that informs the model what's the internal_state value + bullet point 2 or 3

To me, this seems like the most ideal thing to do. You could branch the final output layer (of your actor-critic agent) into 2 heads instead of 1.

For example: input layer, fc1 (from the input layer), output1 (from fc1), output2 (from fc1). Based on internal_state, you take the output from output1 or output2; a minimal sketch is shown below.
",40518,,,,,8/23/2020 15:12,,,,1,,,,CC BY-SA 4.0 23211,2,,23065,8/23/2020 16:56,,1,,"

You need to express the product of the activation functions of your neurons via some combination of individual activation functions.

For example, if the product of your activation functions is equal to the linear combination of such functions then the network for $f(x)g(x)$ can be derived from the networks for $f(x)$ and $g(x)$ exactly.

Let's assume for simplicity that $f(x)$ and $g(x)$ are approximated with trained networks that consist of just one neuron, $$f(x) = A(w_f x + b_f)$$ $$g(x) = A(w_g x + b_g)$$ Then, if the activation function $A$ is such that $$A(w_f x + b_f)A(w_g x + b_g) = \sum\limits_{i=1}^N \alpha_iA(w_i x + b_i)$$ then $f(x)g(x)$ is approximated by a neural network consisting of $N$ neurons: $$f(x)g(x) = \sum\limits_{i=1}^N \alpha_iA(w_i x + b_i)$$ This is easily generalized to a network that consists of multiple neurons.

An example is when some of your neurons have activation functions $A(z)=\cos(z)$ and others are $A(z)=\sin(z)$. Then you can build the network computing $f(x)g(x)$ from the networks computing $f(x)$ and $g(x)$ using the product-to-sum identities, like $2\sin(\theta)\sin(\phi) = cos(\theta-\phi)-cos(\theta+\phi)$, etc.

If the product of your activation functions cannot be expressed as a linear combination of such functions (as in the case of sigmoids) then the network for $f(x)g(x)$ can be derived from the networks for $f(x)$ and $g(x)$ approximately. You need to figure out the parameters of the approximation $$A(z_f)A(z_g) \approx \sum\limits_{i=1}^N \alpha_iA(z_i)$$ Such parameters are: the number of terms, $N$, the coefficients $\alpha_i$, and the range $[z_{min}, z_{max}]$ where this approximation works with a desired accuracy.

Linear combination is not the only opportunity. Say, your activation is $A(z)=a^z$. Then, $$A(w_f x + b_f)A(w_g x + b_g)=a^{(w_f+w_g) x + (b_f+b_g)}=a^{w_{fg} x + b_{fg}}$$ so the weights and the biases of the product are exactly expressed via the weights and the biases of the multiplicands.

",15524,,15524,,8/23/2020 17:03,8/23/2020 17:03,,,,0,,,,CC BY-SA 4.0 23212,1,23214,,8/23/2020 17:49,,5,525,"

I know that the notation $\mathcal{N}(\mu, \sigma)$ stands for a normal distribution. But I'm reading the book "An Introduction to Variational Autoencoders" and in it, there is this notation: $$\mathcal{N}(z; 0, I)$$ What does it mean?

picture of the book:

",23811,,,,,8/23/2020 19:19,"What does the notation $\mathcal{N}(z; \mu, \sigma)$ stand for in statistics?",,1,1,,,,CC BY-SA 4.0 23213,1,23215,,8/23/2020 18:27,,1,445,"

TensorFlow allows users to save the weights and the model architecture, however, that will be insufficient unless the values of certain other variables are also stored. For instance, in DQN, if $\epsilon$ is not stored the model will start exploring from scratch and a new model will have to be trained.

What are the variables that need to be saved and loaded, so that a DQN model starts where it left off? Some pseudocode will be highly appreciated!

Here is my current model with code

## Slightly modified from the following repository - https://github.com/gsurma/cartpole

from __future__ import absolute_import, division, print_function, unicode_literals

import os
import random
import gym
import numpy as np
import tensorflow as tf

from collections import deque
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam


ENV_NAME = "CartPole-v1"

GAMMA = 0.95
LEARNING_RATE = 0.001

MEMORY_SIZE = 1000000
BATCH_SIZE = 20

EXPLORATION_MAX = 1.0
EXPLORATION_MIN = 0.01
EXPLORATION_DECAY = 0.995

checkpoint_path = "training_1/cp.ckpt"


class DQNSolver:

    def __init__(self, observation_space, action_space):
        self.exploration_rate = EXPLORATION_MAX

        self.action_space = action_space
        self.memory = deque(maxlen=MEMORY_SIZE)

        self.model = Sequential()
        self.model.add(Dense(24, input_shape=(observation_space,), activation="relu"))
        self.model.add(Dense(24, activation="relu"))
        self.model.add(Dense(self.action_space, activation="linear"))
        self.model.compile(loss="mse", optimizer=Adam(lr=LEARNING_RATE))

    def remember(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def act(self, state):
        if np.random.rand() < self.exploration_rate:
            return random.randrange(self.action_space)
        q_values = self.model.predict(state)
        return np.argmax(q_values[0])

    def experience_replay(self):
        if len(self.memory) < BATCH_SIZE:
            return
        batch = random.sample(self.memory, BATCH_SIZE)
        for state, action, reward, state_next, terminal in batch:
            q_update = reward
            if not terminal:
                q_update = (reward + GAMMA * np.amax(self.model.predict(state_next)[0]))
            q_values = self.model.predict(state)
            q_values[0][action] = q_update
            self.model.fit(state, q_values, verbose=0)
        self.exploration_rate *= EXPLORATION_DECAY
        self.exploration_rate = max(EXPLORATION_MIN, self.exploration_rate)


def cartpole():
    env = gym.make(ENV_NAME)
    #score_logger = ScoreLogger(ENV_NAME)
    observation_space = env.observation_space.shape[0]
    action_space = env.action_space.n
    dqn_solver = DQNSolver(observation_space, action_space)
    checkpoint = tf.train.get_checkpoint_state(os.getcwd()+"/saved_networks")
    print('checkpoint:', checkpoint)
    if checkpoint and checkpoint.model_checkpoint_path:
        dqn_solver.model = tf.keras.models.load_model('cartpole.h5')
        dqn_solver.model.load_weights('cartpole_weights.h5')
        
    run = 0
    i = 0
    while i<5:
        i = i + 1
        #total = 0
        run += 1
        state = env.reset()
        state = np.reshape(state, [1, observation_space])
        step = 0
        while True:
            step += 1
            #env.render()
            action = dqn_solver.act(state)
            state_next, reward, terminal, info = env.step(action)
            #total += reward
            reward = reward if not terminal else -reward
            state_next = np.reshape(state_next, [1, observation_space])
            dqn_solver.remember(state, action, reward, state_next, terminal)
            state = state_next
            dqn_solver.model.save('cartpole.h5')
            dqn_solver.model.save_weights('cartpole_weights.h5')
            if terminal:
                print("Run: " + str(run) + ", exploration: " + str(dqn_solver.exploration_rate) + ", score: " + str(step))
                #score_logger.add_score(step, run)
                break
            dqn_solver.experience_replay()


if __name__ == "__main__":
    cartpole()
",31755,,2444,,8/23/2020 21:20,9/1/2020 11:44,"What are the variables that need to be saved and loaded, so that a DQN model starts where it left off?",,1,4,,,,CC BY-SA 4.0 23214,2,,23212,8/23/2020 19:19,,7,,"

It means that $z$ has a (multivariate) normal distribution with 0 mean and identity covariance matrix. This essentially means each individual element of the vector $z$ has a standard normal distribution.
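
For example, with numpy (a small sketch; the dimensionality 4 is an arbitrary choice):

import numpy as np

rng = np.random.default_rng(0)

d = 4
z = rng.multivariate_normal(mean=np.zeros(d), cov=np.eye(d))

# Equivalently, draw each component as an independent standard normal:
z_alt = rng.standard_normal(d)
print(z, z_alt)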

",36821,,,,,8/23/2020 19:19,,,,1,,,,CC BY-SA 4.0 23215,2,,23213,8/23/2020 21:49,,2,,"

Typically you would need to save the network weights, hyper-parameters and the replay buffer if you wanted to stop training and then come back at a later date and carry on training. Usually, I do this by writing it all as a class in Python (the agent, the memory buffer, hyper-parameters etc.) and saving the final object with Pickle.

Looking at your code, the only thing I would personally have done differently would be to define the model outside of the class and have the class take a network as input; however, I usually use PyTorch as opposed to Keras/TensorFlow, so I'm not sure which method works better.

As per the OP's request in the comments, here is a snippet of code I used for Cart-Pole.

class DQN:

    def __init__(self, observation_space, action_space):
        self.exploration_rate = epsilon_max
        self.observation_space = observation_space
        self.batch_size = batch_size
        self.gamma = gamma

        self.action_space = action_space

        self.memory = deque(maxlen=max_memory)

        self.model = Net()
        self.loss_fn = nn.MSELoss()
        self.optimizer = optim.Adam(self.model.parameters(), lr=0.0001)

    def remember(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def act(self, state):
        self.model.eval()
        state = torch.from_numpy(state).type(torch.FloatTensor)
        self.exploration_rate *= exploration_decay
        self.exploration_rate = max(epsilon_min, self.exploration_rate)
        if np.random.uniform() < self.exploration_rate:
            return np.random.randint(0, self.action_space)
        q_values = self.model(state)
        action = torch.argmax(q_values, dim=1)
        return int(action)

    def experience_replay(self):
        if len(self.memory) < batch_size:
            return

        self.optimizer.zero_grad()
        states,actions,rewards,next_states,dones = self.get_batch()
        states = torch.FloatTensor(states).reshape((self.batch_size,4))
        next_states = torch.FloatTensor(next_states).reshape((self.batch_size,4))
        rewards = torch.FloatTensor(rewards).reshape((self.batch_size,1))
        dones = torch.FloatTensor(dones).reshape((self.batch_size,1))

        # get the q update ready
        # first take max and set up target
        max_vals,argmax = torch.max(self.model(next_states),axis=1)
        q_target = rewards + self.gamma * max_vals.reshape((self.batch_size,1)) * (1-dones)

        q_values = self.model(states)
        for _ in range(self.batch_size):
            q_values[_][actions[_]] = q_target[_][0]
        input = self.model(states)
        q_values = q_values.detach()
        loss = self.loss_fn(input, q_values)
        loss.backward()
        self.optimizer.step()

    def get_batch(self):
        batch = random.sample(self.memory, self.batch_size)

        states = []
        actions = []
        rewards = []
        next_states = []
        dones = []
        for state,action,reward,next_state,done in batch:
            states.append(state)
            actions.append(action)
            rewards.append(reward)
            next_states.append(next_state)
            dones.append(done)
        return states,actions,rewards,next_states,dones


    def tensor_max(self, ten):
        ten = ten.numpy()
        ten = np.amax(ten, 1).reshape(1, 1)
        ten = torch.from_numpy(ten).type(torch.FloatTensor)
        return ten
",36821,,36821,,9/1/2020 11:44,9/1/2020 11:44,,,,2,,,,CC BY-SA 4.0 23217,1,,,8/24/2020 7:53,,1,192,"

A homograph is a word that shares the same written form as another word but has a different meaning.

They can even be different parts of speech. For example:

  1. close(verb) - close(adverb)
  2. lead(verb) - lead(noun)
  3. wind(noun) - wind(verb)

And there is rather a big list https://en.wikipedia.org/wiki/List_of_English_homographs.

As far as I understand, after processing the text data in any conventional way (lemmatization, building an embedding), these words, despite having different meanings and appearing in different contexts, would look exactly the same to the algorithm, and, in the end, we would get some averaged context between the two or more meanings of the word. And this embedding would be meaningless.

How is this problem treated, or are these words regarded as too rare to have a significant impact on the quality of the resulting embeddings?

I would appreciate comments and references to relevant papers or sources.

",38846,,,,,3/26/2021 8:21,How homographs is an NLP task can be treated?,,0,2,,,,CC BY-SA 4.0 23219,1,,,8/24/2020 12:17,,1,262,"

I have devised a gridworld-like environment where an RL agent is tasked with covering all the blank squares by passing through them. Possible actions are up, down, left, right. The reward scheme is the following: +1 for covering a blank cell, and -1 per step. So, if a cell was colored after a step, the summed reward is (+1) + (-1) = 0, otherwise it is (0) + (-1) = -1. The environment is a tensor whose layers encode the positions to be covered and the position of the agent.

Under this reward scheme, DQN fails to find a solution (implementation: stable_baselines3). However, when the rewards are reduced by a factor of 10 to +0.1/-0.1, then the algorithm learns an optimal path.

I wonder why that happens. I have tried reducing the learning rate and clipping the gradients (by norm) for the first case to see whether it improves the learning, but it does not.

The activation function used is ReLU

",2254,,2254,,8/24/2020 17:58,8/24/2020 17:58,Why scaling reward drastically affects performance?,,0,9,,,,CC BY-SA 4.0 23221,1,23683,,8/24/2020 14:56,,19,10568,"

As far as I can tell, BERT is a type of Transformer architecture. What I do not understand is:

  1. How is BERT different from the original transformer architecture?

  2. What tasks are better suited for BERT, and what tasks are better suited for the original architecture?

",12201,,2444,,9/21/2020 16:48,10/21/2020 22:02,How is BERT different from the original transformer architecture?,,1,0,,,,CC BY-SA 4.0 23222,1,,,8/24/2020 19:15,,0,97,"

Researcher here. I just read this piece about medical imaging ai with object recognition and it left me wondering why there are still 100,000+ deaths a year in the US due to misdiagnosis - anyone out there working on these problems? Vinod Khosla famously said that he'd rather get surgery from AI than from a human - so where are we at with that?

",40545,,,,,9/21/2022 6:06,Why isn't medical imaging improving faster with AI?,,1,3,,,,CC BY-SA 4.0 23223,1,,,8/24/2020 19:33,,1,30,"

How can edge detection algorithms, which are not based on deep learning, such as the canny edge detector, be implemented on a GPU? For example, how are non-edge pixels removed from an image once it detects all the edges?

The reason why I am asking this question is that when writing data to memory the GPU cores can't see what memory locations the other cores are writing to, so I am interested in knowing how traditional edge detectors can be implemented in on GPU.

",40546,,2444,,8/24/2020 21:01,8/24/2020 21:01,How can traditional edge detection algorithms be implemented on a GPU?,,0,0,,,,CC BY-SA 4.0 23226,1,23227,,8/24/2020 20:21,,14,4642,"

The definitions for these two appear to be very similar, and frankly, I've been only using the term "active learning" the past couple of years. What is the actual difference between the two? Is one a subset of the other?

",40250,,23503,,10/30/2020 13:01,10/30/2020 13:01,What is the difference between active learning and online learning?,,2,0,,,,CC BY-SA 4.0 23227,2,,23226,8/24/2020 20:44,,12,,"

Active learning (AL) is a weakly supervised learning (WSL) technique where you can have both labelled and unlabelled data [1]. The main idea behind AL is that the learner (or learning algorithm) can query an "oracle" (e.g. a human) to label some unlabelled instances. AL is similar to semi-supervised learning (SSL), which is also a WSL technique, given that both deal with unlabelled and labeled data, but do that differently (i.e. SSL does not use an oracle).

Online learning are machine learning techniques that update the models as new data is collected or arrives sequentially, as opposed to batch learning (or offline learning), where you first collect a dataset of multiple instances and then you train a model once (although you can later update it when you update your dataset). Batch learning is currently the common way of training machine learning models, given that it avoids problems like the known catastrophic interference (aka catastrophic forgetting) problem, which can occur if you learn online. For example, neural networks are known to face this problem when learning online. There are incremental learning (aka lifelong learning) algorithms that attempt to address this catastrophic interference problem.

",2444,,2444,,8/25/2020 11:09,8/25/2020 11:09,,,,4,,,,CC BY-SA 4.0 23228,2,,23226,8/24/2020 21:03,,4,,"

As it is referred in the survey paper "Active Learning Literature Survey":

The key idea behind active learning is that a machine learning algorithm can achieve greater accuracy with fewer training labels if it is allowed to choose the data from which it learns. An active learner may pose queries, usually in the form of unlabeled data instances to be labeled by an oracle (e.g., a human annotator). Active learning is well-motivated in many modern machine learning problems, where unlabeled data may be abundant or easily obtained, but labels are difficult, time-consuming, or expensive to obtain.

Online learning uses data which become available in a sequential order. It's main goal is to update the best predictor for future data at each step.

So, online learning is a more general method of machine learning that is opposed to offline learning, or batch learning, where the whole dataset has already been generated and used for training / updating the model's parameters. Moreover, a common technique for training Machine Learning models is to first perform online learning, in order to acquire an adequate data size, and then perform offline learning on the whole dataset and finaly compare the results generated by the two learning processes.

On the other hand, active learning can be performed both with online learning[1] and offline learning, in order to reduce manual annotation effort during the annotation of training data for machine learning classifiers. That is, independently of how data have been generated and with what order, active learning should make the least queries, to an Oracle, needed for annotation of a subset of the data.

",36055,,36055,,8/24/2020 21:12,8/24/2020 21:12,,,,0,,,,CC BY-SA 4.0 23231,2,,23162,8/24/2020 21:29,,0,,"

Assuming that the dictionary of words that your model comes up with is a subset of the vocabulary of the pretrained embeddings, for example Google's pretrained word2vec, then it is maybe a better option to use these embeddings, if your model can handle that dimensionality.

However, that would not always be the best solution, taking into account the nature of the problem. For example, if you are trying to use NLP on medical texts that contain rare and specialized words, then maybe you should use your own embedding layer (assuming that you have an adequate data size), or both of them. That is just a thought of mine. For sure, there can be several other use cases which would favor the embedding layer.

",36055,,,,,8/24/2020 21:29,,,,0,,,,CC BY-SA 4.0 23232,2,,23207,8/24/2020 23:56,,1,,"

The purpose of the test set is to test your model before deploying, otherwise, you would not need the test set in the first place. If you retrain your model by also including the validation and test datasets, of course, you cannot test your model anymore. You need to leave the test dataset separate and not use it for retraining, unless you have more data for testing.

",2444,,,,,8/24/2020 23:56,,,,4,,,,CC BY-SA 4.0 23233,2,,20342,8/25/2020 1:16,,0,,"

To anyone who reads this, I still haven't solved this completely. At the moment, I'm doing a lot better with much cleaner data, using a loss metric that matches what I'm after (F1 score), using a very deep model (a custom Inception-ResNet V2), using a custom learning rate function that depends on the training round's F1 score, and, every training round, computing an F1 score for test sets at various dB of signal/noise, from which I compute a model wellness score that determines whether the model is good enough. Pretty close.

",36092,,,,,8/25/2020 1:16,,,,0,,,,CC BY-SA 4.0 23234,1,,,8/25/2020 1:32,,0,97,"

I'm not sure how to describe this in the most accurate way but I'll give it a shot.

I've developed an Inception-ResNet V2 model for detecting audio signals via spectrograms. It does a pretty good job but is not exactly the way I'd like it to be.

Some details: I use 5 sets of data to evaluate my model during training. They are all similar but slightly different. Once I get to a certain threshold of F1 scores for each of these sets, I stop training. My overall threshold is pretty hard to get to. Every time training produces a model with a "best yet" score on one of these data sets, I save that model.

What I've noticed is that, during training, some rounds produce a high F1 score for one particular set while the other sets remain mediocre. Then, several dozen rounds later, another data set peaks while the others are mediocre. Overall the entire model gets better, but there are always some saved models that work better for some data sets.

What I would like to know is, given I might have 5 different models that each work better for a particular subset of data, is there a way that I can combine these models (either as a whole or better yet their particular layers) to produce a single model that works the best for all my data validation subsets?

Thank you. Mecho

",36092,,,,,10/22/2022 14:04,How to combine specific CNN models that work better at slightly different tasks?,,1,0,,,,CC BY-SA 4.0 23236,2,,23162,8/25/2020 1:40,,1,,"

There are multiple ways to get word embeddings (or, more generally, word/document vectors) from a corpus; a minimal sketch of all three approaches is given after the list.

  • Count Vectorizer: You can use CountVectorizer() from sklearn.feature_extraction.text and then call its fit_transform() method, once the corpus has been converted into a list of sentences
  • TF-IDF Vectorizer: You can use TfidfVectorizer from sklearn.feature_extraction.text and again call fit_transform() on a list of sentences
  • word2vec: You can train a word2vec model with gensim, using the gensim.models.Word2Vec class.
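
To make the list concrete, here is a minimal sketch of all three approaches. The tiny corpus, the gensim hyper-parameters and the variable names are just placeholder assumptions for illustration:

    from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
    from gensim.models import Word2Vec  # gensim >= 4.0

    # A tiny placeholder corpus (one string per document)
    corpus = ["the cat sat on the mat", "the dog sat on the log"]

    # 1) Count vectors: one sparse count vector per document
    count_matrix = CountVectorizer().fit_transform(corpus)

    # 2) TF-IDF vectors: one sparse tf-idf-weighted vector per document
    tfidf_matrix = TfidfVectorizer().fit_transform(corpus)

    # 3) word2vec: dense vectors per word, trained on tokenized sentences
    tokenized = [doc.split() for doc in corpus]
    w2v = Word2Vec(tokenized, vector_size=50, window=2, min_count=1)  # use size= in gensim < 4.0
    cat_vector = w2v.wv["cat"]  # 50-dimensional embedding for "cat"

Note that the count and TF-IDF approaches give one vector per document, while word2vec gives one dense vector per word.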
",40434,,,,,8/25/2020 1:40,,,,0,,,,CC BY-SA 4.0 23237,1,,,8/25/2020 2:23,,0,76,"

Would anybody share their experience on how to train a hierarchical DQN to play Montezuma's Revenge? How should I design the reward function? How should I balance the annealing rates of the two levels?

I've been trying to train an agent to solve this game. The agent has 6 lives. So, every time the agent fetches the key and then loses a life because of the instability or weakness of the sub-goal network, it restarts at the original location and simply goes through the door, thus getting a huge reward. With an $\epsilon$-greedy rate of 0.1, the agent only occasionally chooses the key sub-goal network and fetches the key, so the agent ends up always choosing the door as the sub-goal.

Would anyone show me how to train this agent in the setting of one life?

",40552,,2444,,8/25/2020 10:27,8/25/2020 10:27,How to train a hierarchical DQN to play the Montezuma's Revenge game?,,0,2,,,,CC BY-SA 4.0 23238,1,23244,,8/25/2020 3:59,,1,63,"

Based on the Turing test, what would be the criteria for an agent to be considered smart?

",40555,,2444,,8/25/2020 10:29,8/25/2020 15:08,"Based on the Turing test, what would be the criteria for an agent to be considered smart?",,1,0,,,,CC BY-SA 4.0 23239,2,,20658,8/25/2020 4:23,,1,,"

Perhaps it could be due to a lack of inputs? Many videos of Snake AIs that use NNs and GAs to learn seem to use more than double or triple the number of inputs you're feeding to your neural network (see here and here). I would recommend adding more inputs by giving the NN the distance to the snake's body and to the wall for every direction you're looking in.

",40556,,,,,8/25/2020 4:23,,,,0,,,,CC BY-SA 4.0 23241,1,,,8/25/2020 5:41,,0,550,"

I'm working on A2C and I have an environment where the number of agents increases or decreases. The action space in the environment will not change, but the state will change when new agents join or leave the game. I have tried an encoder-decoder model with attention, but the problem is that the state and the model change when the number of agents changes. I also tried this approach, where an LSTM is used to get the Q value for the agent, but I got this message:

Cannot interpret feed_dict key as Tensor: Tensor Tensor("state:0", shape=(137,), dtype=float32) is not an element of this graph.

or an error like this because of the changing state size:

ValueError: Cannot feed value of shape (245,) for Tensor 'state:0', which has shape '(161,)'

(1) Are there any reference papers that deal with such a problem?

(2) What is the best way to deal with the new agents that join or leave the game?

(3) How can I deal with the changing state space?

",21181,,21181,,8/27/2020 8:41,8/29/2020 18:28,How to handle a changing in the Reinforcement Learning environment where there is increasing or decreasing in number of agents?,,1,0,,,,CC BY-SA 4.0 23243,1,,,8/25/2020 9:57,,0,471,"

How would one extract the feature vector from a given input image using YOLOv4 and pass that data into an LSTM to generate captions for the image?

I am trying to make an image captioning software in PyTorch using YOLO as the base object classifier and an LSTM as the caption generator.

Can anyone help me figure out what part of the code I would need to call and how I would achieve this?

Any help is much appreciated.

",40561,,40561,,8/25/2020 10:20,1/18/2023 16:02,Feeding YOLOv4 image data into LSTM layer?,,1,0,,,,CC BY-SA 4.0 23244,2,,23238,8/25/2020 10:47,,1,,"

If an artificial agent (AA) passes the (standard) Turing test (i.e. where you have to imitate a human that speaks), then, on average, the AA should be able to imitate any human in any situation that mainly requires the conversation abilities and common-sense knowledge of a human, without being ever recognized as an AA.

For example, if you want to talk about football, you don't expect the AA only to say "I don't know" or to clearly avoid a topic (e.g. by redirecting you to a search engine) when it doesn't know something (although some humans behave in this way), but you expect it to have common-sense knowledge, such as that Messi, Cristiano Ronaldo, Pelé, Maradona, etc., are among the best players of all time, and this should be precious information that the AA should have in any case, even if it doesn't know much about football.

You also expect it to be emotional and have a personality, given that humans are emotional and have personalities. So, in the example above, maybe the AA could say that Maradona is its favorite player, and then it could explain why in an emotional way (e.g. by changing the tone of the voice).

The AA should also be able to keep track of (almost) everything you and it said, and it should be able to contextualize very well, as humans do. Some personal assistants already take context into account, but they don't do this very well or just do it to a little extent.

The AA should also be able to reason given the current situation. For example, if you explain something to the AA, you expect it to infer or predict something based on the information it has acquired and the common-sense knowledge.

Moreover, when you speak or write to the AA, you don't expect it to regularly hear badly or not understand what you say or ask (and ask you to repeat), but you expect it to understand well what you say almost always, provided you don't talk or write trash. You also don't expect big delays and you expect the AA to at least say something while it searches for a more appropriate answer, although not all humans behave in this way, but I think that interjections or words such as "hm", "well", "let me think...", etc., will be very important to make the AA look or sound like a human. In general, the AA should be as interactive as a human.

These are some traits that the AA absolutely needs to have in order to pass the Turing test (and be considered intelligent according to that test), but there are probably many others.

",2444,,2444,,8/25/2020 15:08,8/25/2020 15:08,,,,0,,,,CC BY-SA 4.0 23245,1,23247,,8/25/2020 12:27,,1,93,"

Basic deep reinforcement learning methods use as input an image for the current state, do some convolutions on that image, apply some reinforcement learning algorithm, and it is solved.

Let us take the game Breakout or Pong as an example. What I do not understand is: how does the agent understand when an object is moving towards it or away from it? I believe that the action it chooses must be different in these two scenarios, and, from a single image as input, there is no notion of motion.

",36447,,2444,,1/7/2022 16:01,1/7/2022 16:01,How does the RL agent understand motion if it gets only one image as input?,,1,0,,,,CC BY-SA 4.0 23246,1,23251,,8/25/2020 12:35,,3,107,"

I was looking at the Bellman equation, and I noticed a difference between the equations used in policy evaluation and value iteration.

In policy evaluation, there was the presence of $\pi(a \mid s)$, which indicates the probability of choosing action $a$ given $s$, under policy $\pi$. But this probability seemed to be omitted in the value iteration formula. What might be the reason? Maybe an omission?

",40049,,2444,,8/25/2020 15:13,8/25/2020 15:13,Why doesn't value iteration use $\pi(a \mid s)$ while policy evaluation does?,,1,2,,,,CC BY-SA 4.0 23247,2,,23245,8/25/2020 12:50,,3,,"

In the article Playing Atari with Deep Reinforcement Learning, Mnih et al, 2013, which was a major breakthrough in deep reinforcement learning (especially in deep Q-learning), they don't feed only the last image to the network. They stack the last 4 images:

For the experiments in this paper, the function φ from algorithm 1 applies this preprocessing to the last 4 frames of a history and stacks them to produce the input to the Q-function

So they add motion information through sequentiality. From various articles and my own coding experience, this seems to be the most common approach; I don't know if other techniques have been implemented.
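
For illustration, here is a minimal sketch of the frame-stacking idea using a deque; the 84x84 frame size and the stack size of 4 follow the usual DQN setup, but the preprocessing is omitted and the class itself is just an assumption for the sketch:

    from collections import deque
    import numpy as np

    STACK_SIZE = 4  # number of most recent frames fed to the network

    class FrameStacker:
        # Keeps the last STACK_SIZE preprocessed frames and returns them as one state
        def __init__(self):
            self.frames = deque(maxlen=STACK_SIZE)

        def reset(self, first_frame):
            # At the start of an episode, fill the stack by repeating the first frame
            for _ in range(STACK_SIZE):
                self.frames.append(first_frame)
            return self.state()

        def step(self, new_frame):
            self.frames.append(new_frame)
            return self.state()

        def state(self):
            # Shape (height, width, STACK_SIZE): this is what the Q-network sees
            return np.stack(list(self.frames), axis=-1)

    # Example with dummy 84x84 grayscale frames
    stacker = FrameStacker()
    state = stacker.reset(np.zeros((84, 84)))
    state = stacker.step(np.ones((84, 84)))
    print(state.shape)  # (84, 84, 4)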

One thing we could imagine would be to compute the cross-correlation between a previous frame and the last one, and then feed the cross-correlation product to the net.

Another idea would be to pre-train a CNN to extract motion features from a sequence of frames, and feed these extracted features to your net. The article Performing Particle Image Velocimetry using Artificial Neural Networks: a proof-of-concept, Rabault et al, 2017, is an example of a CNN that extracts motion features.

",17759,,17759,,8/25/2020 12:57,8/25/2020 12:57,,,,2,,,,CC BY-SA 4.0 23250,2,,23162,8/25/2020 13:36,,2,,"

If you have to move a lot of data around during training (like retrieving batches from disk/network/what have you), it's much faster to do so as a rank-3 tensor of [batches, documents, indices] than as a rank-4 tensor of [batches, documents, indices, vectors]. In this case, while the embedding is O(1) wherever you put it, it's more efficient to do so as part of the graph.

",29873,,,,,8/25/2020 13:36,,,,1,,,,CC BY-SA 4.0 23251,2,,23246,8/25/2020 14:15,,4,,"

You appear to be comparing the value table update steps in policy iteration and value iteration, which are both derived from Bellman equations.

Policy iteration

In policy iteration, a policy lookup table is generated, which can be arbitrary. It usually maps a deterministic policy $\pi(s): \mathcal{S} \rightarrow \mathcal{A}$, but can also be of the form $\pi(a|s): \mathcal{A} \times \mathcal{S} \rightarrow \mathbb{R} = Pr\{A_t = a |S_t =s\}$. Policy iteration then alternately evaluates then improves that policy, with the improvement always being to act greedily with respect to expected return. Because the policy function can be arbitrary, and also the current value estimates during evaluation might not relate to it directly, the function $\pi(s)$ or $\pi(a|s)$ needs to be shown.

Typically with policy iteration, you will see this update rule:

$$V(s) \leftarrow \sum_{r,s'} p(r,s'|s,\pi(s))(r + \gamma V(s'))$$

The above rule is for evaluating a deterministic policy, and is probably more commonly used. There is no real benefit in policy iteration to working with stochastic policies.

For completeness, the update rule for an arbitrary stochastic policy is:

$$V(s) \leftarrow \sum_a \pi(a|s) \sum_{r,s'} p(r,s'|s,a)(r + \gamma V(s'))$$

Value iteration

In value iteration, the current policy to evaluate is to always take the greedy action with respect to the current value estimates. As such, it does not need to be explicitly written, because it can be derived from the value function, and so can the terms in the Bellman equation (specifically, the Bellman equation for the optimal value function is used here, which usually does not refer to the policy). What you would typically write for the update step is:

$$V(s) \leftarrow \text{max}_a \sum_{r,s'} p(r,s'|s,a)(r + \gamma V(s'))$$

However, you can write this out as if there was a policy table:

$$\pi(s) \leftarrow \text{argmax}_a \sum_{r,s'} p(r,s'|s,a)(r + \gamma V(s'))$$ $$a \leftarrow \pi(s)$$ $$V(s) \leftarrow \sum_{r,s'} p(r,s'|s,a)(r + \gamma V(s'))$$

This is not the usual way to implement it though, because of the extra maximum value search required to identify the action. In simple value iteration it does not matter what the interim action choices and policies actually are, and you can always derive them from the value function if you want to know.
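
For concreteness, here is a minimal sketch of the max-based value iteration update, assuming the dynamics are given as arrays P[s, a, s'] (transition probabilities) and R[s, a, s'] (rewards); these array names and shapes are just assumptions for the sketch:

    import numpy as np

    def value_iteration(P, R, gamma=0.9, theta=1e-8):
        # P: (S, A, S') transition probabilities, R: (S, A, S') rewards
        n_states, n_actions, _ = P.shape
        V = np.zeros(n_states)
        while True:
            # Q[s, a] = sum_{s'} P[s, a, s'] * (R[s, a, s'] + gamma * V[s'])
            Q = np.einsum("ijk,ijk->ij", P, R + gamma * V[None, None, :])
            V_new = Q.max(axis=1)            # the value iteration update (max over actions)
            if np.max(np.abs(V_new - V)) < theta:
                break
            V = V_new
        greedy_policy = Q.argmax(axis=1)     # the policy can be derived afterwards, if needed
        return V, greedy_policy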

Other value-based methods

You will find other algorithms that drive the current policy directly from a value function, and when they are described in pseudo-code they might not have an explicit policy function. It is still there, but the Bellman update is easily calculated directly from the value function, so the policy is not shown in the update step. Descriptions of SARSA and Q-learning are often like that.

",1847,,1847,,8/25/2020 14:44,8/25/2020 14:44,,,,0,,,,CC BY-SA 4.0 23254,1,23302,,8/25/2020 22:16,,3,1549,"

I am training an RL agent using deep Q-learning with experience replay. At each frame, I am currently sampling 32 random transitions from a queue which stores a maximum of 20000, and training as described in the Atari with Deep RL paper. All is working fine, but I was wondering whether there is any logical way to select the proper batch size for training, or if simply using a grid search is best. At the moment, I'm simply using 32, since it's small enough that I can render the gameplay throughout training at a stunning rate of 0.5 fps. However, I'm wondering how much of an effect the batch size has, and if there are any criteria we could generalize across all deep Q-learning tasks.

",40575,,,,,8/27/2020 23:33,Is there a logical method of deducing an optimal batch size when training a Deep Q-learning agent with experience replay?,,1,0,,,,CC BY-SA 4.0 23256,1,,,8/25/2020 23:30,,4,812,"

In the convolutional layer for CNNs, when you specify the stride of a filter, typical notes show some examples of this but only for the horizontal panning. Is this same stride applied for the vertical direction too when you're done with the current row?

In other words, say our input volume is 7x7, and we apply a stride of 1 for a 3x3 filter. Is the output volume 5x5? (which would mean you applied the stride in both the horizontal and vertical panning).

Is it possible to apply a different stride for each direction?

",20358,,2444,,1/24/2021 1:55,1/24/2021 1:55,Is the stride applied both in the horizontal and vertical directions in convolutional neural networks?,,2,0,,,,CC BY-SA 4.0 23257,2,,23256,8/26/2020 1:35,,3,,"

Yes, in Keras this is simply implemented by using a tuple for the stride argument of a convolutional layer, with each element of the tuple corresponding to the stride of each dimension.
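
For example, a minimal sketch (the filter count and input shape are arbitrary):

    import tensorflow as tf

    # Stride of 1 vertically and 2 horizontally
    layer = tf.keras.layers.Conv2D(filters=16, kernel_size=(3, 3), strides=(1, 2), padding="valid")
    x = tf.random.normal((1, 7, 7, 3))   # (batch, height, width, channels)
    print(layer(x).shape)                # (1, 5, 3, 16): 5 rows (stride 1), 3 columns (stride 2)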

",40575,,,,,8/26/2020 1:35,,,,0,,,,CC BY-SA 4.0 23258,1,,,8/26/2020 4:10,,1,47,"

I know the training of neural nets involves some sort of dimension manipulation to separate classes of different features.

If there is no variation in the features, then neither neural nets nor simple dimension reduction methods (e.g. PCA, LDA) + clustering are going to distinguish different classes.

In that sense, I would like to understand the true power of neural nets:

How classification neural nets are different from simple dimension reduction + clustering?

or rephrase the question:

What value do neural nets add to solving classification problems in terms of its algorithmic architecture compared with simple dimension reduction + clustering?

",40581,,,,,8/26/2020 4:50,How classification neural nets are different from simple dimension reduction + clustering?,,1,0,,,,CC BY-SA 4.0 23259,2,,23258,8/26/2020 4:50,,2,,"

PCA is linear, while NNs are nonlinear; more generally, a NN is a universal function approximator.

That said, basic NNs are not terribly useful; the real value of NNs is for structured data whose structure is obvious to us but hard to describe analytically. Basically, NN architectures have been designed to learn this structure using operations like convolution and max pooling.

There are still problems of course, and in some ways I think clustering algorithms still have value for things like anomaly detection.

",32390,,,,,8/26/2020 4:50,,,,0,,,,CC BY-SA 4.0 23260,1,,,8/26/2020 8:27,,1,158,"

In Simulated Annealing, a worse solution is accepted with this probability:

$$p=e^{-\frac{E(y)-E(x)}{kT}}.$$

If that understanding is correct, why is this probability function used? It means that the bigger the energy difference, the smaller the probability of accepting the new solution. I would say that the bigger the difference, the more we want to escape a local minimum. I plotted that function in MATLAB in two dimensions:

",27777,,2444,,1/23/2021 18:51,1/23/2021 18:51,Why does Simulated Annealing not take worse solution if the energy difference becomes higher?,,2,1,,,,CC BY-SA 4.0 23261,1,23271,,8/26/2020 9:16,,0,287,"

I am working on a binary classification problem with continuous variables (gene expression values). My goal is to classify the samples as case or control using the gene expression values (from Gene-A, Gene-B and Gene-C) with a decision tree classifier. I am using the entropy criterion for node splitting and am implementing the algorithm in Python. The classifier is easily able to differentiate the samples.

Below is the sample data,

sample training set with labels

Gene-A    Gene-B    Gene-C    Sample
   1        0         38       Case
   0        7         374      Case
   1        6         572      Case
   0        2         538      Control
   33       5         860      Control

sample testing set labels

Gene-A    Gene-B    Gene-C    Sample
  1         6        394       Case
  13        4        777       Control

I have gone through a lot of resources and have learned how to mathematically calculate Gini impurity, entropy and information gain.

I am not able to comprehend how the actual training and testing work. It would be really helpful if someone could show the calculations for training and testing with my sample datasets, or provide an online resource.

",40584,,11539,,8/1/2021 11:38,8/1/2021 11:38,Mathematical calculation behind decision tree classifier with continuous variables,,1,0,,,,CC BY-SA 4.0 23262,1,,,8/26/2020 10:56,,3,141,"

I'm trying to understand how to calculate the strength of every arc in a Bayesian Network.

I came across this report Measuring Connection Strengths and Link Strengths in Discrete Bayesian Networks, but I got lost in the calculation.

In particular, how are the values of Link Strength true, Link Strength blind, and Mutual Information computed in Table 1?

",40586,,2444,,5/24/2021 0:39,10/16/2022 5:03,"How are the ""Link Strength true"", ""Link Strength blind"" and ""Mutual Information"" calculated in this report on Bayesian networks?",,1,0,,,,CC BY-SA 4.0 23263,1,,,8/26/2020 12:11,,1,80,"

Has there been any effort to compress text (and maybe other media) using prediction of the next word, and thus sending only the rank of the word/token as it will be predicted on the client side? I.e.:
Server text: This is an example of a long text example, custom word flerfom inserted to confuse, that may appear on somewhere
Compressed text transmitted: This [choice no 3] [choice no 4] [choice no 1] [choice no 6] [choice no 1] [choice no 3] [choice no 1], custom word flerfom [choice no 4] inserted [choice no 4] confuse [choice no 5] [choice no 4] [choice no 6] [choice no 5] on somewhere

(Note: of course, [choice no 3] will be shortened to [3] to save bytes, and maybe we can do much better in some cases by also sending the first letter of the word.)

Of course, this means that the client-side neural network has to be static or only updated in a predictable fashion, so that the server knows for sure that the client network's predictions will follow the given choice orders. I tried an example with https://demo.allennlp.org/next-token-lm, but the prediction is not that good. Maybe GPT-3 can do better, but it's too heavy for use on a normal PC / mobile device.

In more detail, the process is:

  1. Deploy the same model on both sides.
  2. Predict the next word after the starting word.
  3. Keep the prediction limit at, say, 100.
  4. For any word which has more than 2 characters, we do the prediction.
  5. If the current word is predicted within the top 100 predictions of the model, we can essentially replace it with a number between 0-99 (inclusive), so we are replacing a, say, 5-character word with a 2-character number.
  6. If the word is not predicted in the top 100, we send the word as it is.
  7. The better the model predicts, the better the compression.
  8. Under no scenario will it work worse than the existing method.

",40588,,1671,,8/27/2020 0:08,8/27/2020 0:08,Compressing text using AI by sending only prediction rank of next word,,1,2,,,,CC BY-SA 4.0 23264,2,,23263,8/26/2020 12:37,,1,,"

If you have a fixed predictor, then yes. If the predictor is not fixed but deterministic, the feasibility depends on the effort needed to update the predictor, and ensuring that messages include a time stamp to ensure the correct version of the predictor is used for compression and inflation.

You get a really nice property if the prediction order is in order of probability that the word occurs, and if you use fewer bits for lower numbers. You would get something that is pretty close to Shannon coding, which is not optimal but is still valid.
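
A minimal sketch of the scheme, assuming a shared deterministic predict_ranked(context) function (a placeholder, not a real API) that both sides use and that returns candidate next tokens in order of decreasing probability:

    def compress(tokens, predict_ranked, top_k=100):
        # Replace each token by its rank in the shared predictor's top-k list, when possible
        out, context = [], []
        for token in tokens:
            candidates = predict_ranked(context)[:top_k]
            if token in candidates:
                out.append(candidates.index(token))   # small integer instead of the word
            else:
                out.append(token)                     # fall back to sending the literal word
            context.append(token)
        return out

    def decompress(symbols, predict_ranked, top_k=100):
        # Invert compress() by running the exact same predictor on the receiving side
        tokens, context = [], []
        for symbol in symbols:
            if isinstance(symbol, int):
                token = predict_ranked(context)[:top_k][symbol]
            else:
                token = symbol
            tokens.append(token)
            context.append(token)
        return tokens

Assigning shorter bit codes to smaller ranks in a separate entropy-coding step is then what gives you the Shannon-coding-like property mentioned above.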

",40573,,,,,8/26/2020 12:37,,,,2,,,,CC BY-SA 4.0 23265,1,23273,,8/26/2020 15:18,,1,585,"

I understand that the batch size is the number of examples you pass into the neural network (NN). If the batch size is 10, it means you feed the NN 10 examples at once.

Assuming I have an NN with a single Dense layer. This Dense layer of 20 units has an input shape (10, 3). This means that I am feeding the NN 10 examples at once, with every example being represented by 3 values. This Dense layer will have an output shape of (10, 20).

I understand that the 20 in the 2nd dimension comes from the number of units in the Dense layer. However, what does the 10 (batch size) in the first dimension mean? Does this mean that the NN learns 10 separate sets of weights (with each set of weights corresponding to one example, and one set of weights being a matrix of 60 values: 3 features x 20 units)?

",40592,,2444,,8/30/2020 12:23,8/30/2020 12:23,Why does the output shape of a Dense layer contain a batch size?,,1,1,0,,,CC BY-SA 4.0 23266,1,23281,,8/26/2020 15:31,,0,53,"

Currently, I'm working on a 6-axis IMU (Inertial Measurement Unit) dataset. This dataset contains 6-axis IMU data of 7 different drivers. The IMU sensor is attached to the vehicle, and the drivers drive the same path. So, the dataset includes 6 feature columns and a label column.

I tried multiple neural network models. The sensor data is sequential, so I tried LSTM (Long Short-Term Memory) layers as well as classical fully-connected layers. Some of my architectures (in the Keras framework):


Layer (type)                 Output Shape              Param #   

lstm_4 (LSTM)                (None, 1, 128)            69120     
_________________________________________________________________
lstm_5 (LSTM)                (None, 1, 64)             49408     
_________________________________________________________________
lstm_6 (LSTM)                (None, 1, 32)             12416     
_________________________________________________________________
dense_8 (Dense)              (None, 1, 64)             2112      
_________________________________________________________________
dropout_2 (Dropout)          (None, 1, 64)             0         
_________________________________________________________________
dense_9 (Dense)              (None, 1, 7)              455       


2nd Architecture:

=================================================================
dense_10 (Dense)             (None, 32)                224       
_________________________________________________________________
dense_11 (Dense)             (None, 64)                2112      
_________________________________________________________________
dense_12 (Dense)             (None, 128)               8320      
_________________________________________________________________
dense_13 (Dense)             (None, 256)               33024     
_________________________________________________________________
dropout_3 (Dropout)          (None, 256)               0         
_________________________________________________________________
dense_14 (Dense)             (None, 512)               131584    
_________________________________________________________________
dense_15 (Dense)             (None, 256)               131328    
_________________________________________________________________
dense_16 (Dense)             (None, 128)               32896     
_________________________________________________________________
dense_17 (Dense)             (None, 64)                8256      
_________________________________________________________________
dropout_4 (Dropout)          (None, 64)                0         
_________________________________________________________________
dense_18 (Dense)             (None, 128)               8320      
_________________________________________________________________
dense_19 (Dense)             (None, 7)                 903     

The best accuracy among my models was 70%, which is not good. What kind of layers should I use to handle this data? Or which type of model would increase the accuracy?

",28129,,,,,8/27/2020 0:14,What type of model should I fit to increase accuracy?,,1,0,,,,CC BY-SA 4.0 23267,2,,23260,8/26/2020 15:50,,0,,"

Nice question!

My guess is that, if the probability of acceptance increased with the difference between the current and new solutions, then there would be a risk that you need to search a lot again to find a good solution, i.e. you may oscillate between different subspaces, or you could often end up in subspaces where there are only bad solutions. Your reasoning probably makes sense at the beginning of the search, when the initial solutions may not be good enough, or maybe if you have parallel searches (and you want to explore different subspaces of the search space), but once you have a good solution, you don't want to completely discard it and replace it with a much worse one.

If you perform some experiments with your idea, I would like to see the results.

",2444,,2444,,8/26/2020 16:03,8/26/2020 16:03,,,,0,,,,CC BY-SA 4.0 23268,2,,23260,8/26/2020 16:16,,1,,"

Note that you can't really predict whether your escape from a local minimum will work or not - you might just wind up in another, worse local minimum. The probability function you describe increases the likelihood of this happening. By upweighting the likelihood of allowing small energy differences, you allow for the possibility of escaping local minima, while ensuring that whatever new minimum you find can't be that much worse than where you started. If you make the acceptance of large energy differences more likely, you can escape local minima more often, but you increase the likelihood that you'll just wind up in a region with an even higher local minimum.

",2841,,,,,8/26/2020 16:16,,,,0,,,,CC BY-SA 4.0 23269,2,,23262,8/26/2020 18:10,,0,,"

The $MI$ is easy enough. For $MI(X \to Z)$, we get $U([0.5, 0.5]) - 0.5 U([0.9, 0.1]) - 0.5 U([0.1, 0.9])$. The $[0.5, 0.5]$ comes from $0.5 [0.9, 0.1] + 0.5[0.1, 0.9]$ -- this is how to calculate $P(Z)$ from $P(Z|X)$ and $P(X)$. For $MI(X \to Y)$, we need to marginalize out $Z$. This takes a bit of work, but is not too hard. I manually verified all of the reported values in the figure, and they are all correct.
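
As a sanity check, here is a minimal sketch of the $MI(X \to Z)$ computation described above, with $U$ taken to be the entropy in bits; the distributions are the ones quoted in this paragraph, not values read from the paper's table:

    import numpy as np

    def U(p):
        # Entropy in bits of a discrete distribution
        p = np.asarray(p, dtype=float)
        return -np.sum(p * np.log2(p, where=p > 0, out=np.zeros_like(p)))

    P_X = np.array([0.5, 0.5])
    P_Z_given_X = np.array([[0.9, 0.1],    # P(Z | X = 0)
                            [0.1, 0.9]])   # P(Z | X = 1)

    P_Z = P_X @ P_Z_given_X                # P(Z) = sum_x P(x) P(Z | x)
    MI_X_Z = U(P_Z) - np.sum(P_X * np.array([U(row) for row in P_Z_given_X]))
    print(P_Z, MI_X_Z)                     # [0.5 0.5] and roughly 0.53 bits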

The $\mathrm{LS}_{\mathrm{true}}$ values are a bit trickier, but still feasible. I managed to get the same values. It is important when calculating $\mathrm{LS}_{\mathrm{true}}(X \to Z)$ that $Y$ is not a parent, so it does not come into it -- which is why the MI gives exactly the same value.

The "blind" version took me a bit more effort to implement, but again, I am getting the same values as reported in the table. I used the "simple formula" at the top of page 5.

",40573,,,,,8/26/2020 18:10,,,,9,,,,CC BY-SA 4.0 23270,1,23277,,8/26/2020 18:52,,2,161,"

I was reading this article about the question "Why do we dream?" in which the author discusses dreams as a form of rehearsal for future threats, and presents it as an evolutive advantage. My question is whether this idea has been explored in the context of RL.

For example, in a competition between AIs on a shooter game, one could design an agent that, besides the behavior it has learned in a "normal" training, seeks for time in which is out of danger, to then use its computation time in the game to produce simulations that would further optimize its behavior. As the agent still needs to be somewhat aware of its environment, it could alternate between processing the environment and this kind of simulation. Note that this "in-game" simulation has an advantage with respect to the "pre-game" simulations used for training; the agent in the game experiences the behavior of the other agents, which could not have been predicted beforehand, and then simulates on top of these experiences, e.g. by slightly modifying them.

For more experienced folks, does this idea make sense? Has something similar been explored?

I have absolutely no experience in the field, so I apologize if this question is poorly worded, dumb or obvious. I would appreciate suggestions on how to improve it if this is the case.

",40596,,2444,,8/26/2020 22:49,8/26/2020 23:19,"Have agents that ""dream"" been explored in Reinforcement Learning?",,2,2,,,,CC BY-SA 4.0 23271,2,,23261,8/26/2020 19:13,,1,,"

Of course, it depends on what algorithm you use. Typically, a top-down algorithm is used.

You gather all the training data at the root. The base decision is going to be whatever class you have most of. Now, we see if we can do better.

We consider all possible splits. For categorical variables, every value gets its own node. For continuous variables, we can use any possible midpoint between two values (if the values were sorted). For your example, possible splits are Gene-A < 0.5, Gene-A < 17, Gene-B < 1, Gene-B < 3.5, and so on. There is a total of 10 possible splits.

For each of those candidate splits, we measure how much the entropy decreases (or whatever criterion we selected) and, if this decrease looks significant enough, we introduce this split. For example, our entropy in the root node is $-0.4 \log_2 0.4 - 0.6 \log_2 0.6 \approx 0.97$. If we introduce the split Gene-A < 0.5, we get one leaf with entropy $1$ (with 2 data points in it), and one leaf with entropy $0.918$ (with 3 data points). The total decrease of entropy is $0.97 - (\frac25 \times 1 + \frac35 \times 0.918) \approx 0.02$. For the split Gene-A < 17 we get a decrease of entropy of about $0.3219$.

The best splits for the root are Gene-B < 5.5 and Gene-C < 456. These both reduce the entropy by about $0.42$, which is a substantial improvement.
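
A minimal sketch of this split-scoring step on the sample training set, which reproduces the numbers above:

    import numpy as np

    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def info_gain(feature, labels, threshold):
        left, right = labels[feature < threshold], labels[feature >= threshold]
        weighted = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
        return entropy(labels) - weighted

    gene_a = np.array([1, 0, 1, 0, 33])
    gene_b = np.array([0, 7, 6, 2, 5])
    labels = np.array(["Case", "Case", "Case", "Control", "Control"])

    print(info_gain(gene_a, labels, 0.5))   # ~0.02 (a weak split)
    print(info_gain(gene_b, labels, 5.5))   # ~0.42 (one of the best splits at the root)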

When you choose a split, you introduce a leaf for the possible outcomes of the test. Here it's just 2 leaves: "yes, the value is smaller than the threshold" or "no, it is not smaller". In every leaf, we collect the training data from the parent that corresponds to this choice. So, if we select Gene-B < 5.5 as our split, the "yes" leaf will contain the first, fourth and fifth data points, and the "no" leaf will contain the other data points.

Then we continue, by repeating the process for each of the leaves. In our example, the "yes" branch can still be split further. A good split would be Gene-C < 288, which results in pure leaves (they have 0 entropy).

When a leaf is "pure enough" (it has very low entropy) or we don't think we have enough data, or the best split for a leaf is not a significant improvement, or we have reached a maximum depth, you stop the process for that leaf. In this leaf you can store the count for all the classes you have in the training data.

If you have to make a prediction for a new data point (from the test set), you start at the root and look at the test (the splitting criterion). For example, for the first test point, we have that Gene-B < 5.5 is false, so we go to the 'no' branch. You continue until you get to a leaf.

In a leaf, you would predict whatever class you have most of. If the user wants, you can also output a probability by giving the proportion. For the first test point, we go to the "no" branch of the first test, and we end up in a leaf; our prediction would be "Case". For the second test point, we go to the "yes" branch of the first test. Here we test whether 777 < 288, which is false, so we go to the "no" branch, and end up in a leaf. This leaf contains only "Control" cases, so our prediction would be "Control".

",40573,,,,,8/26/2020 19:13,,,,0,,,,CC BY-SA 4.0 23272,2,,23270,8/26/2020 19:34,,0,,"

Model-based RL is obviously the correct approach. Mainly because it lets you simulate the environment internally without having direct interaction.

And all successful RL algorithms essentially are model-based because nobody has done real-time RL and been successful.

",32390,,2444,,8/26/2020 22:07,8/26/2020 22:07,,,,7,,,,CC BY-SA 4.0 23273,2,,23265,8/26/2020 19:41,,1,,"

The Dense layer outputs 20 values per example. And since you have 10 examples in the batch, the output is (10, 20) (one set of 20 values per example in the batch). The NN doesn't learn 10 separate sets of weights. Each set of 20 values is computed with the same weights (and biases, if you have any). So if, say, examples 2 and 5 had the same input values, they'll always have the same output values.

",40597,,,,,8/26/2020 19:41,,,,2,,,,CC BY-SA 4.0 23276,1,,,8/26/2020 20:02,,2,425,"

I have a conceptual question for you all that hopefully I can convey clearly. I am building an RL agent in Keras using continuous PPO to control a laser attached to a pan/tilt turret for target tracking. My question is how the new policy gets updated. My current implementation is as follows

  1. Make observation (distance from laser to target in pan and tilt)
  2. Pass observation to actor network which outputs a mean (std for now is fixed)
  3. I sample from a gaussian with the mean output from step 2
  4. Apply the command and observe the reward (1/L2 distance to target)
  5. collect N steps of experience, compute advantage and old log probabilities,
  6. train actor and critic

My question is this. I have my old log probabilities (probabilities of the actions taken given the means generated by the actor network), but I don't understand how the new probabilities are generated. At the onset of the very first minibatch, my new policy is identical to my old policy, as they are the same neural net. Given that in the model.fit function I am passing the same set of observations to generate 'y_pred' values, and I am passing in the actual actions taken as my 'y_true' values, the new policy should generate the exact same log probabilities as my old one. The only (slight) variation that makes the network update is from the entropy bonus, but my ratio np.exp(new_log_probs - old_log_probs) is almost exactly 1 because the policies are the same.

Should I be using a pair of networks similar to DDQN so there are some initial differences in the policies between the one used to generate the data and the one used for training?

",40600,,,,,8/27/2020 7:30,Generation of 'new log probabilities' in continuous action space PPO,,1,1,0,,,CC BY-SA 4.0 23277,2,,23270,8/26/2020 22:17,,2,,"

Yes, the concept of dreaming or imagining has already been explored in reinforcement learning.

For example, have a look at Metacontrol for Adaptive Imagination-Based Optimization (2017) by Jessica B. Hamrick et al., which is a paper that I gave a talk/presentation on 1-2 years ago (though I don't remember well the details anymore).

There is also a blog post about the topic Agents that imagine and plan (2017) by DeepMind, which discusses two more recent papers and also mentions Hamrick's paper.

In 2018, another related and interesting paper was also presented at NIPS, i.e. World Models, by Ha and Schmidhuber.

If you search for "imagination/dreaming in reinforcement learning" on the web, you will find more papers and articles about this interesting topic.

",2444,,2444,,8/26/2020 23:19,8/26/2020 23:19,,,,0,,,,CC BY-SA 4.0 23280,1,23311,,8/26/2020 22:52,,0,258,"

Background: In C51 DQNs you must specify a v-min/max to be used during training. The way this is generally done is you take the max score possible for the game and set that to v-max, then v-min is just negative v-max. For a game like Pong deciding the v-min/max is simple because the max score possible is 20, therefore, v_min=-20 and v_max=20.

Question: In a game like Space Invaders, there is no max score, so how would I calculate the v-min/max for a C51 DQN?

",38062,,38062,,8/28/2020 1:37,8/28/2020 14:49,How to calculate v min and v max for C51 DQN,,1,0,,,,CC BY-SA 4.0 23281,2,,23266,8/27/2020 0:14,,2,,"

After reading some literature in the area, I'd recommend the following:

  • Try using Convolutional Neural Networks (CNNs); this paper outlines some really good points on why you should use CNNs.
  • Try a combination of the different layers in the same model: start with some convolutional layers, then some LSTMs, and then a couple of Dense layers followed by Dropout (a minimal sketch of such a model is given below this list).
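
A minimal Keras sketch of such a hybrid model, assuming the IMU data is reshaped into windows of consecutive samples with shape (timesteps, 6); the window length, layer sizes and dropout rate are placeholder assumptions, not values tuned for this dataset:

    import tensorflow as tf
    from tensorflow.keras import layers

    TIMESTEPS = 100   # assumed window length of consecutive IMU samples
    N_FEATURES = 6    # 6-axis IMU
    N_DRIVERS = 7

    model = tf.keras.Sequential([
        layers.Conv1D(64, kernel_size=5, activation="relu", input_shape=(TIMESTEPS, N_FEATURES)),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, kernel_size=3, activation="relu"),
        layers.LSTM(64),                       # summarizes the sequence of local features
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(N_DRIVERS, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",   # assumes integer driver labels
                  metrics=["accuracy"])
    model.summary()

Feeding windows of consecutive samples matters here: the (None, 1, ...) shapes in the question suggest the LSTMs currently see sequences of length 1, so they cannot exploit the temporal structure of the signal.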
",40434,,,,,8/27/2020 0:14,,,,0,,,,CC BY-SA 4.0 23282,2,,23256,8/27/2020 0:21,,4,,"

Yes, in Keras you can apply different strides by giving a tuple/list, specifying the value of strides along the height and width. If you just give a single value the API assumes the same value for all spatial dimensions.

You can find the official documentation here

In PyTorch, too, you can specify the values in a tuple for the stride argument. Link to the PyTorch documentation for stride.
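
A minimal PyTorch sketch (the channel counts are arbitrary):

    import torch
    import torch.nn as nn

    conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=(1, 2))
    x = torch.randn(1, 3, 7, 7)      # (batch, channels, height, width)
    print(conv(x).shape)             # torch.Size([1, 16, 5, 3]): stride 1 vertically, 2 horizontally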

",40434,,,,,8/27/2020 0:21,,,,0,,,,CC BY-SA 4.0 23283,2,,23276,8/27/2020 7:30,,1,,"

The idea in PPO is that you want to reuse the batch many times to update the current policy. However, you cannot update mindlessly in a regular actor-critic fashion, because your policy might stray too far away from the optimal point.

This means you repeat your step 6 epoch times for the same batch of trajectories. Usually epoch is somewhere between 3 and 30, but it is a hyper-parameter you need to adjust. For the first repeat, the old and the new policies are the same, so their ratio should be 1. After the first update, the new probabilities will change due to the updated policy, whereas you will still need to use the old probabilities, giving you a ratio different from 1. The old probabilities stay the same during these epoch update steps, whereas your new probabilities keep changing.
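
A minimal runnable sketch of that loop; the toy linear Gaussian "actor" and the random parameter perturbation are only placeholders standing in for your real network and optimizer step. The point is that old_log_probs is computed once and held fixed, while new_log_probs (and hence the ratio) is recomputed every epoch with the updated parameters:

    import numpy as np

    EPOCHS, EPS_CLIP, STD = 10, 0.2, 0.5            # STD fixed, as in the question

    def log_prob(theta, obs, actions):
        # Log-density of the actions under a Gaussian policy with mean = obs @ theta
        mean = obs @ theta
        return np.sum(-0.5 * ((actions - mean) / STD) ** 2 - np.log(STD * np.sqrt(2 * np.pi)), axis=1)

    # Dummy batch standing in for the collected trajectories
    obs, actions, advantages = np.random.randn(64, 2), np.random.randn(64, 2), np.random.randn(64)
    theta = np.zeros((2, 2))                        # toy linear "actor"

    old_log_probs = log_prob(theta, obs, actions)   # computed ONCE with the pre-update policy

    for _ in range(EPOCHS):
        new_log_probs = log_prob(theta, obs, actions)         # recomputed with the current theta
        ratio = np.exp(new_log_probs - old_log_probs)         # exactly 1 only on the first pass
        clipped = np.clip(ratio, 1 - EPS_CLIP, 1 + EPS_CLIP)
        loss = -np.mean(np.minimum(ratio * advantages, clipped * advantages))
        theta += 0.01 * np.random.randn(*theta.shape)         # stand-in for a real gradient step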

",8448,,,,,8/27/2020 7:30,,,,3,,,,CC BY-SA 4.0 23286,1,,,8/27/2020 10:16,,2,51,"

Hado van Hasselt, a researcher at DeepMind, mentioned in one of his videos (from 7:20 to 8:20) on Youtube (about policy gradient methods) that there are cases when the policy is very simple compared to the value function - and it makes more sense to learn the policy directly rather than first learning the value function and then doing control. He gives a very simple example at minute 7:20.

What are some other real-life examples (even just one example) of simple policies but complex value functions?

By real-life example I mean an example that is not as simple as a robot in a grid world, but some relatively complex real-world situations (say, autonomous driving).

",35585,,2444,,1/24/2022 14:28,1/24/2022 14:28,What are some other real-life examples of simple policies but complex value functions?,,0,0,,,,CC BY-SA 4.0 23287,1,23294,,8/27/2020 10:22,,5,675,"

A deterministic policy in the rock-paper-scissors game can be easily exploited by the opponent - by doing just the right sequence of moves to defeat the agent. More often than not, I've heard that a random policy is the optimal policy in this case - but the argument seems a little informal.

Could someone please expound on this, possibly adding more mathematical details and intuition? I guess the case I'm referring to is that of a game between two RL agents, but I'd be happy to learn about other cases too. Thanks!

EDIT: When would a random policy be optimal in this case?

",35585,,35585,,8/27/2020 16:02,8/27/2020 16:22,What's the optimal policy in the rock-paper-scissors game?,,1,3,,,,CC BY-SA 4.0 23288,1,23316,,8/27/2020 10:36,,3,129,"

I came across the following proof of what's commonly referred to as the log-derivative trick in policy-gradient algorithms, and I have a question -

While transitioning from the first line to the second, the gradient with respect to policy parameters $\theta$ was pushed into the summation. What bothers me is how it skipped over $\mu (s)$, the distribution of states - which (the way I understand it), is induced by the policy $\pi_\theta$ itself! Why then does it not depend on $\theta$?

Let me know what's going wrong! Thank you!

",35585,,,,,9/4/2020 0:11,Why does (not) the distribution of states depend on the policy parameters that induce it?,,2,1,,,,CC BY-SA 4.0 23290,1,23376,,8/27/2020 11:05,,3,292,"

I can't seem to understand why we need importance sampling in prioritized experience replay (PER). The authors of the paper write on page 5:

The estimation of the expected value with stochastic updates relies on those updates corresponding to the same distribution as its expectation. Prioritized replay introduces bias because it changes this distribution in an uncontrolled fashion, and therefore changes the solution that the estimates will converge to (even if the policy and state distribution are fixed).

My understanding of this statement is that sampling non-uniformly from the replay memory is an issue.

So, my question is: Since we are working 1-step off-policy, why is it an issue? I thought that in an off-policy setting we don't care how transitions are sampled (at least in the 1-step case).

The one possibility for an issue that came to my mind is that in the particular case of PER, we are sampling transitions according to the errors and rewards, which does seem a little fishy.

A somewhat related question was asked here, but I don't think it answers my question.

",40603,,,,,9/1/2020 19:35,Why is sampling non-uniformly from the replay memory an issue? (Prioritized experience replay),,1,0,,,,CC BY-SA 4.0 23291,2,,23288,8/27/2020 13:12,,3,,"

The reason you are confused is because this is not the full derivation of the Policy Gradient Theorem. You are correct in thinking that $\mu(s)$ depends on the policy $\pi$ which in turn depends on the policy parameters $\theta$, and so there should be a derivative of $\mu$ wrt $\theta$, however the Policy Gradient Theorem doesn't require you to take this derivative.

In fact, the great thing about the Policy Gradient Theorem is that the final result does not require you to take a derivative of the state distribution with respect to the policy parameters. I would encourage you to read and go through the derivation of the Policy Gradient Theorem from e.g. Sutton and Barto to see why you don't need to take the derivative.

Above is an image of the Policy Gradient Theorem proof from the Sutton and Barto book. If you carefully go through this line by line you will see that you are not required to take a derivative of the state distribution anywhere in the proof.

",36821,,36821,,8/28/2020 9:24,8/28/2020 9:24,,,,6,,,,CC BY-SA 4.0 23292,1,23293,,8/27/2020 14:17,,4,567,"

I have a scheduling problem in which there are $n$ slots and $m$ clients. I am trying to solve the problem using Q-learning so I have made the following state-action model.

A state $s_t$ is given by the current slot $t=1,2,\ldots,n$ and an action $a_t$ at slot $t$ is given by one client, $a_t\in\{1,2,\ldots,m\}$. In my situation, I do not have any reward associated with a state-action pair $(s_t,a_t)$ until the terminal state which is the last slot. In other words, for all $s_t\in\{1,2,\ldots,n-1\}$, the reward is $0$ and for $s_t=n$ I can compute the reward given $(a_1,a_2,\ldots,a_n)$.

In this situation, the Q table, $Q(s_t,a_t)$, will contain only zeros except for the last row in which it will contain the updated reward.

Can I still apply Q-learning in this situation? Why do I need a Q table if I only use the last row?

",37642,,2444,,1/24/2021 3:36,1/24/2021 3:36,How to apply Q-learning when rewards is only available at the last state?,,2,1,,,,CC BY-SA 4.0 23293,2,,23292,8/27/2020 14:59,,3,,"

Having only a non-zero reward at the very end is not uncommon. When rewards are sparse, it becomes a bit harder to learn compared to having lots of different rewards along the way, but for your problem, the goal state is always reached, so that should not be a problem. (The real problem with sparse rewards is that, if an agent can do a lot of exploration without ever finding the goal, it essentially receives no feedback and will behave randomly, until it happens to stumble upon the very rare reward state.)

What concerns me more about your problem is that the final reward depends not just on the last state visited, but also on the chain of actions taken so far. That means that, to make this a proper MDP, you need to keep the chain of actions in the state. So, your state would be something of the type $(s_k, [a_1, a_2, \ldots, a_{k-1}])$.

This kind of combinatorial problem is not what RL is really great at. RL is really good when the state and action together give a lot of information about the next state. Here it seems that, in your formulation, the next state is independent of the previous action.

Instead of seeing this as a RL problem, you might want to express this as sequences of actions with an associated reward, and look at it as a combinatorial optimization problem.

",40573,,,,,8/27/2020 14:59,,,,2,,,,CC BY-SA 4.0 23294,2,,23287,8/27/2020 16:22,,6,,"

For this, we will need game theory.

In game theory, an optimal strategy is one that cannot be exploited by the opponent even if they know your strategy.

Let's say you want a strategy where your move selection is not based on what happened before (so you are not trying to model your opponent, or trick them into believing you will always play scissors and then throw them off, anything like that). A strategy will look like $(P, S, R)$, where $P, S, R \in [0, 1], P+S+R = 1$. You select paper with probability $P$, scissors with probability $S$, rock with probability $R$. Now, if your probabilities are a bit uneven (for example $(0.5, 0.2, 0.3)$) an opponent can abuse that strategy. If your opponent plays with probabilities $(p, s, r)$, their expected reward (counting +1 for win, -1 for loss, 0 for draw) would be $0.5(s - r) + 0.2(r - p) + 0.3(p - s) = 0.1p + 0.2s - 0.3r$. If they wish to maximize their wins, they would play scissors all the time against you, and expect to have a distinct advantage over you.

In general, for a strategy $(P, S, R)$ for you and $(p, s, r)$ for your opponent, your opponent's winnings would be $P(s - r) + S(r - p) + R(p - s) = p(R-S) + s(P-R) + r(S - P)$. If all the partial derivatives of this, with respect to $p$, $s$ and $r$ are 0, the opponent has no way to maximize his winnings; they would have no incentive to play a particular move over any other move. This occurs when $P = S = R = \frac13$.
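
A small numerical check of the argument above (the payoff matrix uses the same +1 win / -1 loss / 0 draw convention):

    import numpy as np

    # Opponent's payoff: rows are the opponent's pure moves (paper, scissors, rock),
    # columns are your moves (paper, scissors, rock)
    A = np.array([[ 0, -1,  1],   # opponent plays paper:    loses to scissors, beats rock
                  [ 1,  0, -1],   # opponent plays scissors: beats paper, loses to rock
                  [-1,  1,  0]])  # opponent plays rock:     loses to paper, beats scissors

    for you in (np.array([0.5, 0.2, 0.3]), np.array([1/3, 1/3, 1/3])):
        coeffs = A @ you          # opponent's expected payoff for each pure response
        print(you, "->", coeffs)  # uneven strategy: scissors pays 0.2; uniform strategy: all zeros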

That's basically how to approach game theory: find a strategy so your opponent has no incentive to choose one action over another. The approach seems a bit counter-intuitive at first (you're trying to find the optimal strategy for your opponent instead of for yourself) but it works for many similar problems.

",40573,,,,,8/27/2020 16:22,,,,8,,,,CC BY-SA 4.0 23295,2,,19873,8/27/2020 16:41,,2,,"

Yes, it should be no problem.

When you decide to use a CNN, you have to make sure that this makes sense. Another answer mentioned using 3x3 convolutions -- which I would recommend against. For that to work, you would need to turn your vector into a rectangular array, and you would be implying a structure that isn't there.

Use one-dimensional convolutions instead.

",40573,,,,,8/27/2020 16:41,,,,0,,,,CC BY-SA 4.0 23296,1,,,8/27/2020 17:30,,6,672,"

Are there any examples of single-player games that use modern ML techniques in their game AI? By this I mean AI that plays with or against the human player, and not AI that just plays the game by itself (like the Atari-playing agents).

"Modern ML techniques" is a vague term, but for example, Neural Networks, Reinforcement Learning, or probabilistic methods. Basically anything that goes above and beyond traditional search methods that most games use nowadays.

Ideally, the AI would be:

  • widely available (i.e. not like the OpenAI Five, which was only available for a limited amount of time and requires a high amount of computational power)
  • human level (not overpowered)

Ideally, the game would be:

  • symmetrical (the AI has the same agent capabilities as the player, though answers similar to The Director would be very interesting as well)
  • "complex environment" (more complex than, say, a board game, but a CIV5 game might work)

But any answer would be appreciated, as some of the criteria above are quite vague.

Edit: the ideal cases listed above are not meant to discourage other answers, nor are they intended as strict requirements (i.e. a game would not need to satisfy all of the above).

",6779,,6779,,8/29/2020 23:07,9/3/2020 15:23,Examples of single player games that use modern ML techniques in the AI?,,5,0,,,,CC BY-SA 4.0 23297,2,,23292,8/27/2020 17:47,,1,,"

If by,

I can compute the reward given $(a_1, a_2, \dots, a_n)$

you simply mean that your game is deterministic, this is absolutely fine. I feel another answer had assumed you were implying your terminal reward is a matter of some sequence. RL does, however, struggle more greatly in games with indeterminable reward until the terminal state, however, it is perfectly normal, but a significant challenge.

In terms of implementation, simply record the reward at the end of each match, and perform a train step only after each terminal state, assigning this reward to each transition recorded in that game. Instead of updating your target network after a given number of steps, update it after some number of terminal states. I suggest 20 games as a starter for the frequency of updating your target network.
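
A minimal sketch of that bookkeeping; the episode buffer layout and the train_step / update_target callables are placeholders for your own DQN machinery, not a specific library API:

    TARGET_UPDATE_EVERY = 20          # e.g. refresh the target network every 20 finished games
    replay_memory, games_since_update = [], 0

    def on_episode_end(episode_transitions, final_reward, train_step, update_target):
        # episode_transitions: list of (state, action, next_state, done) tuples for one game
        global games_since_update
        # Label every transition of the finished game with the terminal reward
        replay_memory.extend((s, a, final_reward, s2, d) for (s, a, s2, d) in episode_transitions)
        train_step()                              # train only after a terminal state
        games_since_update += 1
        if games_since_update >= TARGET_UPDATE_EVERY:
            update_target()                       # copy online weights into the target network
            games_since_update = 0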

",40575,,,,,8/27/2020 17:47,,,,0,,,,CC BY-SA 4.0 23298,2,,23296,8/27/2020 17:56,,1,,"

Beating the World’s Best at Super Smash Bros. Melee with Deep Reinforcement Learning

Firoiu, Whitney, Tenenbaum created a RL agent that plays and defeats professional players in Super Smash Bros Melee. The RL agent first played against the built-in AI, and then via self-play.

Only one character playing on a single stage was trained. The character picked (Captain Falcon) has no "projectile attacks" to simplify training.

",6779,,6779,,8/30/2020 4:18,8/30/2020 4:18,,,,0,,,,CC BY-SA 4.0 23299,1,23310,,8/27/2020 19:39,,2,98,"

I am training an RL agent with deep Q-learning + experience replay on the Q*bert Atari environment. After 400,000 frames, my agent appears to have learned strategic information about the game, but none about the environment. It has learned that a good immediate strategy is to simply jump down both diagonals and fall off the board, thus completing a large portion of the first level. However, it has yet to understand either the boundaries of the board (to prevent jumping off) or anything about avoiding enemies. I'm asking this here, instead of Stack Overflow, because it is a more general question with less of a need for programming understanding. Simply, I am asking whether or not this is a matter of a poor exploration policy (which I presume). If you agree, what would be a better exploration policy for Q*bert that would facilitate my agent's learning?

As per the request of a comment:

Could you add what your current exploration approach is, and what options you are using for your Deep Q Learning implementation (e.g. replay size, batch size, NN architecture, steps per target network copy, or if you are using a different update mechanism for the target network). Also if you are using any other approach different to the classic DQN paper such as in state representation.

Here are my parameters:

  • Exploration policy: epsilon = min(1.0, 1000 / (frames + 1))
  • Replay Memory = 20,000 frames
  • Batch size = 32 transitions
  • NN architecture: Conv2D(64, 3, 2), Dropout(0.2), Dense(32, relu), Dense(32, relu), Dense(num_actions, linear)
  • Steps per target network copy: 100
",40575,,40575,,8/28/2020 13:22,8/28/2020 13:22,What is the optimal exploration-exploitation trade-off in Q*bert?,,1,2,,,,CC BY-SA 4.0 23300,1,,,8/27/2020 20:29,,2,214,"

In the average reward setting, the quality of a policy is defined as: $$ r(\pi) = \lim_{h\to\infty}\frac{1}{h} \sum_{j=1}^{h}E[R_j] $$ When we reach the steady state distribution, we can write the above equation as follows: $$ r(\pi) = \lim_{t\to\infty}E[R_t | A \sim \pi] $$ We can use the incremental update method to find $r(\pi)$: $$ r(\pi) = \frac{1}{t} \sum_{j=1}^{t} R_j = \bar R_{t-1} + \beta (R_t - \bar R_{t-1})$$ where $ \bar R_{t-1}$ is the estimate of the average reward $r(\pi)$ at time step $t-1$. We use this incremental update rule in the SARSA algorithm:

Now, in the above algorithm, we can see that the policy will change with respect to time. But to calculate $r(\pi)$, the agent should follow the policy $\pi$ for a long period of time. Then how are we using $r(\pi)$ if the policy changes over time?

",28048,,2444,,4/12/2022 14:26,4/12/2022 14:26,How are we calculating the average reward ($r(\pi)$) if the policy changes over time?,,1,0,,,,CC BY-SA 4.0 23301,2,,23300,8/27/2020 21:17,,2,,"

You are correct: to evaluate a policy, we need to fix it.

  • We can temporarily fix it, just to evaluate it over a number of test cases. For a fair comparison, we should fix the start states and random seeds used for the transitions.
  • We can wait until convergence / until we are satisfied. The resulting policy would be what we implement in the "true", trained agent. This is important when exploration might be harmful in the "real world" domain where the agent will be operating.
  • We can also measure average reward of the "non-stationary" policy, and assume that, once the agent is doing well, this should be close enough to evaluating the fixed policy. This is not ideal, but on the other hand it is trivial to implement, and is often used to track the learning process. If you have a life-long learning agent, this might be the best you can do.
",40573,,,,,8/27/2020 21:17,,,,6,,,,CC BY-SA 4.0 23302,2,,23254,8/27/2020 23:33,,1,,"

There is no special calculation you can do to determine the optimal batch size for any situation, so you kinda have to do a bit of testing to determine what batch size will work best. But there are some common trends you can take into account to make your testing easier.


How to choose your batch size

According to the paper Accelerated Methods for Deep Reinforcement Learning, you get the best performance from DQNs (on average) with a batch size of 512. The problem with this is that it is much slower than the usual batch size of 32, and most of the time the performance improvement doesn't warrant it.

If you are just trying to test out your agents it is generally best to stick with a batch size of 32 or 64 so that you can train the agent quickly yet still get an idea of what it is capable of. But if getting the best performance is your top priority and waiting longer isn't a problem, then you should go for a batch size of 512 (higher can actually lead to worse performance) or something near that.

",38062,,,,,8/27/2020 23:33,,,,0,,,,CC BY-SA 4.0 23304,2,,18659,8/28/2020 3:11,,-2,,"

To begin from scratch, and in order to keep the approach simple, we have to analyze the input text (the clinical narration) for the following data:

  1. Is the input a word, a group of words, or a sentence?

  2. Is the input a meaningful sentence? By meaningful, I mean grammatically correct.

  3. Does the word, group of words, or sentence contain symptoms or health issues?

  4. Does the sentence contain data about a person’s age and gender?

  5. Does the sentence contain data about a person’s diet, medical history, work routine, travelling history or getting in contact with any ill person?

If there are any other attributes that one has to look for, then I would be keen to find out from the subject matter experts.

",34306,,,,,8/28/2020 3:11,,,,0,,,,CC BY-SA 4.0 23305,1,23306,,8/28/2020 4:26,,1,60,"

Like making a bed, washing dishes, taking out the garbage, etc., by training it on video of specific individuals doing those chores in their own unique environments?

I have researched what machine learning is capable of doing at this point in time, and it seems this may now be feasible when done on a customer-specific basis and enabled by an A.I.-enhanced, fully articulated robot along the lines of an enhanced InMoov. https://en.wikipedia.org/wiki/InMoov

If it's feasible, what are the AI algorithms I should be considering to train my robot to do these tasks? Isn't deep learning the most promising of these selections: https://www.ubuntupit.com/machine-learning-algorithms-for-both-newbies-and-professionals/?

",40288,,2444,,8/29/2020 0:57,8/29/2020 0:57,Is it feasible using today's technology to use an AI training algorithm to custom teach a robot to do common household cores?,,1,0,,,,CC BY-SA 4.0 23306,2,,23305,8/28/2020 5:03,,1,,"

I would suggest using a neural network with backpropagation. From what I know, they can be applied to many different circumstances and work well. For simpler and repetitive tasks, like moving an object, you can just use simpler regression methods.

",40622,,40622,,8/28/2020 5:21,8/28/2020 5:21,,,,0,,,,CC BY-SA 4.0 23307,1,23309,,8/28/2020 6:37,,0,40,"

I am new to machine learning and I am learning the concept of linear regression. Please help with answers to the queries below. I want to understand the effect on an existing independent variable (X1) if I add a new independent variable (X2) to my model. This new variable is highly correlated with the dependent variable (Y).

  • Will it have any effect on beta coefficient of X1?
  • Will relationship between X1 and Y become insignificant?
  • Can adjusted R-square value decrease?
",40626,,,,,8/28/2020 8:44,Effect of adding an Independent Variable in Multiple Linear Regression,,1,0,,,,CC BY-SA 4.0 23308,2,,18659,8/28/2020 7:21,,-1,,"

So for Medical Prognosis, there are some variables that commonly come up like Age, Sex, Ascites, Hepato, Spider, Status of the disease and many others but it depends on the disease. You'll commonly encounter these variables if you're doing regression or classification.

Also, if you're reading radiology reports to get the input for the model, then you also have to take care of jargon. The same symptoms can be written in various ways, but all point towards the same prognosis, i.e., there can be synonyms for labels. Try reading this to get more information on how we can do information extraction from radiology reports. This is the famous CheXpert paper.

",40434,,,,,8/28/2020 7:21,,,,0,,,,CC BY-SA 4.0 23309,2,,23307,8/28/2020 8:44,,0,,"
  • Yes, the addition of the independent variable ($X_2$) will have an impact on the beta coefficient of $X_1$ (see the sketch after this list).
  • Maybe, it depends on the relationship between $X_1$ and $Y$, and whether you are using any regularization or not.
  • If the independent variable $X_2$ is more correlated than $X_1$, then the value of $r^2$ should be higher (but I am not sure on this)
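
As a rough illustration of the first point, here is a minimal scikit-learn sketch (the data-generating numbers are arbitrary, purely for demonstration):

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
x1 = rng.rand(200)
x2 = 0.8 * x1 + 0.2 * rng.rand(200)      # X2 is correlated with X1
y = 3 * x1 + 5 * x2 + 0.1 * rng.randn(200)

only_x1 = LinearRegression().fit(x1.reshape(-1, 1), y)
both = LinearRegression().fit(np.column_stack([x1, x2]), y)

print(only_x1.coef_)   # coefficient of X1 alone absorbs part of X2's effect
print(both.coef_)      # coefficient of X1 changes once X2 is included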
",31016,,,,,8/28/2020 8:44,,,,0,,,,CC BY-SA 4.0 23310,2,,23299,8/28/2020 10:28,,1,,"

I can spot three, maybe four, things in your implementation that could be contributing to incomplete learning that you are observing.

More exploration in long term

I think you have correctly identified that exploration could be an issue. In off-policy learning (which Q-learning is an instance of), it is usual to set a minimum exploration rate. It is a hyperparameter that you need to manage. Set too high, the agent will never experience the best rewards as it will make too many mistakes. Set too low, the agent will not explore enough to find the correct alternative actions when the opportunity to learn them occurs.

I would suggest for you something like:

epsilon = max(min(1.0, 1000 / (frames + 1)), 0.01)

You can choose numbers other than 0.01, but I think that is a reasonable start for many Atari games. You could try higher, up to 0.1 in games which are more forgiving of mistakes.

Remove dropout

I am not sure why, but I always have problems with dropout in RL neural networks. Try removing the dropout layer.

More convolutional layers

Convolutional layers are very efficient generalisers for vision and grid-based problems. You won't really benefit much from having a single layer though. I would add another two, and increase the number of output channels.

Maybe state representation?

It is not clear from your description whether you are using a single colour frame for the state representation, or stacked greyscale frames for the last 3 inputs. It should be the latter, and if you want to more closely replicate the original DQN Atari paper, you should take the previous 4 frames as input.

In addition, you should be normalising the input into range $[0,1]$ or $[-1,1]$. The native image range $[0,255]$ is tricky for neural networks to process, and quite common for value functions to get stuck if you don't normalise.
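
For concreteness, here is a minimal preprocessing sketch of that idea (assuming Gym-style RGB frames and using OpenCV for resizing; function and variable names are my own):

import numpy as np
import cv2
from collections import deque

def preprocess(frame):
    # convert one RGB frame to a normalised 84x84 greyscale image
    grey = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
    small = cv2.resize(grey, (84, 84), interpolation=cv2.INTER_AREA)
    return small.astype(np.float32) / 255.0   # scale [0, 255] -> [0, 1]

# keep the last 4 preprocessed frames and stack them into the network input
frame_buffer = deque(maxlen=4)

def make_state(new_frame):
    frame_buffer.append(preprocess(new_frame))
    while len(frame_buffer) < 4:              # pad at the start of an episode
        frame_buffer.append(frame_buffer[-1])
    return np.stack(frame_buffer, axis=-1)    # shape (84, 84, 4)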

",1847,,,,,8/28/2020 10:28,,,,4,,,,CC BY-SA 4.0 23311,2,,23280,8/28/2020 14:49,,2,,"

If you're using a discount factor less than 1, you should be able to compute a maximum return (likewise, a minimum return) based on the max (min) reward you can earn at each timestep. However, this issue you bring up is usually cited as a difficulty with C51. I think people tend to simply use fixed values for the min/max return (or just make rough estimates). If you want to avoid this, I recommend looking into the QR-DQN algorithm which circumvents this issue altogether and is more theoretically sound.
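
For instance, if the per-step reward is bounded by $r_{\min} \le R_t \le r_{\max}$ and the discount factor is $\gamma < 1$, the discounted return is bounded by a geometric series: $$ \frac{r_{\min}}{1-\gamma} \;\le\; \sum_{t=0}^{\infty}\gamma^t R_t \;\le\; \frac{r_{\max}}{1-\gamma}, $$ so these values (or looser, rounded versions of them) can serve as the $V_{\min}$ and $V_{\max}$ of the C51 support.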

",37829,,,,,8/28/2020 14:49,,,,0,,,,CC BY-SA 4.0 23312,1,,,8/28/2020 15:19,,4,268,"

I was going through Sutton's book and, using sample-based learning for estimating the expectations, we have this formula

$$ \text{new estimate} = \text{old estimate} + \alpha(\text{target} - \text{old estimate}) $$

What I don't quite understand is why it's called the target: since it's a sample, it's not the actual target value, so why are we moving towards a wrong value?

",40049,,2444,,8/28/2020 19:13,8/28/2020 19:40,"Why is the target called ""target"" in Monte Carlo and TD learning if it is not the true target?",,2,0,,,,CC BY-SA 4.0 23314,2,,20993,8/28/2020 15:49,,1,,"

The original work on NEAT (NeuroEvolution of Augmenting Topologies) was by Ken Stanley in 2002 at The University of Texas at Austin. The web page for the project is here. I suggest you download and read the paper linked from that page. As for the selection of genome pairs, NEAT makes use of a speciation model, so the selection of such pairs is constrained to at least prefer pairs from the same 'species', on the assumption that the species has evolved such that its population is isolated under reproduction. The innovation that has been 'bred' into the species is thus preserved under reproduction. Selection by fitness alone is insufficient in such models. This differs from the simple GA, where pair selection is unconstrained.

",26382,,,,,8/28/2020 15:49,,,,0,,,,CC BY-SA 4.0 23315,2,,23241,8/28/2020 15:50,,1,,"

It depends on your overall model architecture (and problem specification). As I understand it, you take the observations of all agents together and feed them into one model, a central controller, which then predicts the action per available agent.

I believe that this varying number of applicable observations (depending on the number of currently present agents) is what you mean by changing state spaces?

One option that works in these kinds of settings is to use padding. So, a fixed number of individual observations gets predetermined before training, being set to the average number of agents expected to be present in the environment at any timestep.

Then, during training, local, individual observations of all agents (for the current time step) get stacked together, resulting in an overall observation being fed into central controller.

Then, the controller predicts (concurrently/in a single pass) one action (or multiple, depends on what you want it to predict per agent...) per agent.

In the case that the number of present agents currently (=in a given time step) is smaller than the predetermined number of expected agents, then padding is used to fill the empty observation slots by zero-pseudo-observations. The quality of the predicted actions is consequently only assessed for the present agents, neglecting predictions for absent agents, indicated by their empty observation slots. If the number of controlled agents happens to exceed the expected maximal number of agents present in the simulation, excess agents are selected at random and randomly controlled for the next time step. Then, this (randomized control of a random selection of some agents) continues until no excess agents are present in the simulation anymore. Alternatively, if available, other reasonable controllers might be used to temporarily steer excess agents.
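
As a rough sketch of the padding idea (numpy-based; the names and sizes are illustrative, not from the cited paper):

import numpy as np

MAX_AGENTS = 8      # predetermined number of observation slots
OBS_DIM = 10        # size of one agent's local observation

def build_joint_observation(local_obs_list):
    # Stack per-agent observations, zero-padding the empty slots.
    # local_obs_list: list of np.ndarray of shape (OBS_DIM,), one per present agent
    joint_obs = np.zeros((MAX_AGENTS, OBS_DIM), dtype=np.float32)
    mask = np.zeros(MAX_AGENTS, dtype=bool)          # True where a real agent is present
    for i, obs in enumerate(local_obs_list[:MAX_AGENTS]):
        joint_obs[i] = obs
        mask[i] = True
    return joint_obs, mask

# the mask can then be used to ignore the controller's predictions
# for the empty (padded) slots when computing the loss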

I have seen researchers doing this in the context of traffic control, where a variable number of cars was controlled by a central controller. See this paper (Section 3.2) for an example of the aforementioned technique.

Alternatively, there exist approaches where each agent gets a periodically updated local copy of some central controller and then contributes to jointly updating the central controller. This paper shortly summarizes a lot of different options and possibilities to choose from in terms of different RL algorithms, where different options might apply depending on the concrete task at hand. Also, the paper discusses different options with respect to algorithms implemented in the python Reinforcement Learning (RL) library RLlib, which might already provide a suitable implementation for a class of algorithm you want to work with.

Unfortunately, a lot of papers don't reveal explicitly whether their (partly) distributed RL training procedures are more suitable, i.e. converge stably to a good solution, for single-agent environments or equally suitable for multi-agent environments as well.

When it comes to introducing new agents to the simulation, this depends entirely on your concrete problem at hand. In the context of vehicle control, you could position a vehicle at the start of the simulated road, steer it (safely) a few time steps straight ahead (to initialize its observation space) and then hand over control to the RL controller. For other types of agents (e.g. soccer playing robots etc.) that might look entirely different. In that case, you might have them sitting or standing next to the field and then have them enter the play ground or so. So, that really depends on your task at hand.

",37982,,37982,,8/29/2020 18:28,8/29/2020 18:28,,,,10,,,,CC BY-SA 4.0 23316,2,,23288,8/28/2020 16:36,,1,,"

The proof you are given in the above post is not wrong. It's just that they skip some of the steps and directly write the final answer. Let me go through those steps:

I will simplify some of the things to avoid complication, but the general idea remains the same. For example, I will think of the reward as only dependent on the current state, $s$, and the current action, $a$. So, $r = r(s,a)$.

First, we will define the average reward as: $$r(\pi) = \sum_s \mu(s)\sum_a \pi(a|s)\sum_{s^{\prime}} P_{ss'}^{a} r $$ We can further simplify average reward as: $$r(\pi) = \sum_s \mu(s)\sum_a \pi(a|s)r(s,a) $$ My notation may be slightly different than the aforementioned slides since I'm only following Sutton's book on RL. Our objective function is: $$ J(\theta) = r(\pi) $$ We want to prove that: $$ \nabla_{\theta} J(\theta) = \nabla_{\theta}r(\pi) = \sum_s \mu(s) \sum_a \nabla_{\theta}\pi(a|s) Q(s,a)$$

Now let's start the proof: $$\nabla_{\theta}V(s) = \nabla_{\theta} \sum_{a} \pi(a|s) Q(s,a)$$ $$\nabla_{\theta}V(s) = \sum_{a} [Q(s,a) \nabla_{\theta} \pi(a|s) + \pi(a|s) \nabla_{\theta}Q(s,a)]$$ $$\nabla_{\theta}V(s) = \sum_{a} [Q(s,a) \nabla_{\theta} \pi(a|s) + \pi(a|s) \nabla_{\theta}[R(s,a) - r(\pi) + \sum_{s^{\prime}}P_{ss^{\prime}}^{a}V(s^{\prime})]]$$ $$\nabla_{\theta}V(s) = \sum_{a} [Q(s,a) \nabla_{\theta} \pi(a|s) + \pi(a|s) [- \nabla_{\theta}r(\pi) + \sum_{s^{\prime}}P_{ss^{\prime}}^{a}\nabla_{\theta}V(s^{\prime})]]$$ $$\nabla_{\theta}V(s) = \sum_{a} [Q(s,a) \nabla_{\theta} \pi(a|s) + \pi(a|s) \sum_{s^{\prime}}P_{ss^{\prime}}^{a}\nabla_{\theta}V(s^{\prime})] - \nabla_{\theta}r(\pi)\sum_{a}\pi(a|s)$$ Now we will rearrange this: $$\nabla_{\theta}r(\pi) = \sum_{a} [Q(s,a) \nabla_{\theta} \pi(a|s) + \pi(a|s) \sum_{s^{\prime}}P_{ss^{\prime}}^{a}\nabla_{\theta}V(s^{\prime})] - \nabla_{\theta}V(s)$$ Multiplying both sides by $\mu(s)$ and summing over $s$: $$\nabla_{\theta}r(\pi) \sum_{s}\mu(s)= \sum_{s}\mu(s) \sum_{a} Q(s,a) \nabla_{\theta} \pi(a|s) + \sum_{s}\mu(s) \sum_a \pi(a|s) \sum_{s^{\prime}}P_{ss^{\prime}}^{a}\nabla_{\theta}V(s^{\prime}) - \sum_{s}\mu(s) \nabla_{\theta}V(s)$$ $$\nabla_{\theta}r(\pi) = \sum_{s}\mu(s) \sum_{a} Q(s,a) \nabla_{\theta} \pi(a|s) + \sum_{s^{\prime}}\mu(s^{\prime})\nabla_{\theta}V(s^{\prime}) - \sum_{s}\mu(s) \nabla_{\theta}V(s)$$ Now we are there: $$\nabla_{\theta}r(\pi) = \sum_{s}\mu(s) \sum_{a} Q(s,a) \nabla_{\theta} \pi(a|s)$$ This is the policy gradient theorem for the average reward formulation (ref. Policy gradient).

",28048,,35585,,9/4/2020 0:11,9/4/2020 0:11,,,,15,,,,CC BY-SA 4.0 23317,1,23319,,8/28/2020 17:00,,3,157,"

I am trying to train an agent to explore an unknown two-dimensional map while avoiding circular obstacles (with varying radii). The agent has control over its steering angle and its speed. The steering angle and speed are normalized in a $[-1, 1]$ range, where the sign encodes direction (i.e. a speed of $-1$ means that it is going backwards at the maximum units/second).

I am familiar with similar problems where the agent must navigate to a waypoint, and in which case the reward is the successful arrival to the target position. But, in my case, I can't really reward the agent for that, since there is no direct 'goal'.

What I have tried

The agent is penalised when it hits an obstacle; however, I am not sure how to motivate the agent to move. Initially, I was thinking of having the agent always move forward, meaning that it only has control over the steering angle. But, I want the ability for the agent to control its speed and be able to reverse (since I'm trying to model a car).

What I have tried is to reward the agent for moving and to penalise it for remaining stationary. At every timestep, the agent is rewarded ${1}/{t_\text{max}}$ if the absolute value of the speed is above some epsilon, or penalised that same amount otherwise. But, as expected, this doesn't work. Rather than motivating the agent to move, it simply causes it to jitter back and forth. This makes sense, since 'technically' the most optimal strategy, if you want to avoid obstacles, is to remain stationary. If the agent can't do that, then the next best thing is to make small adjustments in position.

So my question: how can I add in an exploration incentive to my agent? I am using proximal policy optimization (PPO).

",40635,,2444,,10/7/2020 22:50,10/7/2020 22:50,How do I design the rewards and penalties for an agent whose goal it is to explore a map,,1,0,,,,CC BY-SA 4.0 23319,2,,23317,8/28/2020 17:39,,2,,"

Measure what you want to achieve as directly as possible, and reward that. Later you can add more sophisticated incentives for the type of motion etc, but the key to a good reward signal is that it measures the quality of a solution at a high level, without specifying how to achieve that solution.

If you want your simulated car to explore, you will want to give it a reward signal based on it encountering new unexplored areas. There are lots of reasonable choices here. I suspect a good one will depend on what sensors you can reasonably code for the car, and what you consider to count as exploration - e.g. is it a thorough search of an area, moving far from the original position, experiencing different "views"?

One likely component you will need to give your agent and incorporate into the state representation is a memory. In order to understand whether the agent is exploring, something will need to know whether the agent has experienced something before and how much. A very simple kind of memory would be to add counters to a grid map and allow the agent to know how many time steps it has spent in each position on the map. The reward signal can then be higher when the agent enters a point on the map that it has not been in recently. If you want a non-episodic or repeating tour of exploration you might decay the values over time, so that an area that has not been visited for a long time counts the same as a non-visited one.
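
A minimal sketch of that counter-grid idea (grid resolution, decay rate and reward scale are arbitrary choices of mine):

import numpy as np

GRID_SIZE = 50          # map discretised into GRID_SIZE x GRID_SIZE cells
DECAY = 0.999           # old visits slowly fade, so re-visiting much later counts again
visit_counts = np.zeros((GRID_SIZE, GRID_SIZE))

def exploration_reward(x, y, cell_size=1.0):
    # reward entering cells that have not been visited (recently)
    global visit_counts
    visit_counts *= DECAY                       # decay all counters each step
    i = int(np.clip(x / cell_size, 0, GRID_SIZE - 1))
    j = int(np.clip(y / cell_size, 0, GRID_SIZE - 1))
    reward = 1.0 / (1.0 + visit_counts[i, j])   # high for fresh cells, low for familiar ones
    visit_counts[i, j] += 1.0
    return reward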

A related concept that you might be able to borrow ideas from is curiosity. There have been some interesting attempts to encourage an agent to seek new/interesting states by modifying action selection. Curiosity-driven Exploration by Self-supervised Prediction is one such attempt, and might be of interest to you. In that case, the authors use an intrinsic model of curiosity that can solve some environments even when there is no external reward signal at all!

Alternatively, if you don't want to get involved in a technical solution, you could create a perhaps acceptable behaviour for your vehicle by setting a random goal position, then granting a reward and moving it to a new random location each time the car reaches it.

",1847,,1847,,8/28/2020 17:45,8/28/2020 17:45,,,,5,,,,CC BY-SA 4.0 23320,2,,23312,8/28/2020 17:46,,6,,"

It is our "current" target. We assume that the value we get now is at least a closer approximation to the "true" target.

We're not so much moving towards a wrong value as we are moving away from a more wrong value.

Of course, it is all based on random trials, so saying anything definite (such as: "we are guaranteed to improve at each step") is hard to show without working probabilistically. The expectation of the error of the value function (as compared to the true value function) will decrease; that is all we can say.

",40573,,,,,8/28/2020 17:46,,,,4,,,,CC BY-SA 4.0 23321,2,,23312,8/28/2020 18:10,,0,,"

It would be helpful for me if you specified the section and page number of Sutton's book. But, as far as I understand your question, I will try to explain this. Think of the TD update. The sample contains $(s_t,a_t,r_{t+1},s_{t+1})$. Using the incremental update, we can write: $$ v_{t}(s) = \frac{1}{t} \sum_{j=1}^{t}\left(r_{j+1} + \gamma v(s_{j+1})\right)$$ $$ v_{t}(s) = v_{t-1}(s) + \alpha (r_{t+1} + \gamma v_{t-1}(s_{t+1}) - v_{t-1}(s_t))$$ We are calling this $r_{t+1} + \gamma v_{t-1}(s_{t+1})$ the TD target. From the above equation, you can already see that $r_{t+1} + \gamma v_{t-1}(s_{t+1})$ is actually an unbiased estimate for $v(s)$. We are calling $r_{t+1} + \gamma v_{t-1}(s_{t+1})$ an unbiased estimate since $E[r_{t+1} + \gamma v_{t-1}(s_{t+1})] = v_t(s_t)$. That means the expectation over $r_{t+1} + \gamma v_{t-1}(s_{t+1})$ leads us to the true state-value function, $v_t(s)$.

The same explanation applies to the Monte Carlo update. I hope that this answers your question.

",28048,,28048,,8/28/2020 19:40,8/28/2020 19:40,,,,12,,,,CC BY-SA 4.0 23322,1,,,8/28/2020 19:25,,1,442,"

I was going through this course on reinforcement learning (the course has two lecture videos and corresponding slides) and I had a doubt. On slide 18 of this pdf, it states following condition for an algorithm to have regret sublinear in T (T being number of pulls of multi arm bandit).

C2 - Greedy in the Limit: Let exploit(T) denote the number of pulls that are greedy w.r.t. the empirical mean up to horizon $T$. For sub-linear regret, we need $$\lim_{T\rightarrow\infty}\frac{\mathbb{E}[exploit(T)]}{T}=1 $$

Here, $exploit(T)$ denote the total number of "exploit" rounds performed in the first $T$ pulls. Given that expectation is defined as "the weighted sum of the outcome values, where the weights correspond to the probabilities of realizing that value",

(Q1) How exactly, mathematically, do we define $\mathbb{E}[exploit(T)]$?

In the second video (at 24:44), the instructor said that $\mathbb{E}[exploit(T)]$ is the number of exploit steps.

(Q2) Then how does it equal a "weighted sum of outcome values"?

(Note that the instructor assumes that pulling an arm may give a reward, which corresponds to an outcome value of 1, or may give no reward, which corresponds to an outcome value of 0.)

Also, in slide 27, for GLIE-ifying the $\epsilon_T$-first strategy, he selects $\epsilon_T=\frac{1}{\sqrt{T}}$. Then, the instructor counts $\sqrt{T}$ exploratory pulls and $T-\sqrt{T}$ exploitory pulls. Then, to show that this satisfies condition C2, the instructor states $$\mathbb{E}[exploit(T)]\geq \frac{T-\sqrt{T}}{T}.$$

Here, $\frac{T-\sqrt{T}}{T}$ is the fraction of exploitory pulls.

(Q3) So, by the above equation, does the instructor mean that the number of exploitory pulls is greater than or equal to the fraction of exploitory pulls?

(Q4) How can we put the 2nd equation into the first equation and still prove that the limit in the first equation holds, that is, how is the following the case:

$$\lim_{T\rightarrow\infty}\frac{\frac{T-\sqrt{T}}{T}}{T}=1$$

I guess I am missing some basic concept of expectation here.

",40640,,36821,,2/21/2021 15:46,2/21/2021 15:46,Understanding GLIE conditions for epsilon greedy approach,,0,0,,,,CC BY-SA 4.0 23327,2,,13741,8/29/2020 14:00,,0,,"

LSTM can be tricky, I'll give my $0.02.

LSTM input layer defines the shape so it would be something like this.

If I am understanding your question correctly, your data can be framed as 184 samples with 2 time steps and 70 features?

So the start of the code might look like this.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM

model = Sequential()
# input_shape = (time steps, features) -> 2 time steps with 70 features each
model.add(LSTM(184, input_shape=(2, 70)))
",34095,,12121,,1/20/2023 16:13,1/20/2023 16:13,,,,0,,,,CC BY-SA 4.0 23328,1,23358,,8/29/2020 14:09,,3,185,"

I am training an agent to do object avoidance. The agent has control over its steering angle and its speed. The steering angle and speed are normalized in a $[−1,1]$ range, where the sign encodes direction (i.e. a speed of −1 means that it is going backwards at the maximum units/second).

My reward function penalises the agent for colliding with an obstacle and rewards it for moving away from its starting position. At a time $t$, the reward, $R_t$, is defined as $$ R_t= \begin{cases} r_{\text{collision}},&\text{if collides,}\\ \lambda_d\left(\|\mathbf{p}^{x,y}_t-\mathbf{p}_0^{x,y}\|_2-\|\mathbf{p}_{t-1}^{x,y}-\mathbf{p}_0^{x,y}\|_2 \right),&\text{otherwise,} \end{cases} $$ where $\lambda_d$ is a scaling factor and $\mathbf{p}_t$ gives the pose of the agent at a time $t$. The idea being that we should reward the agent for moving away from the initial position (and, in a sense, 'exploring' the map; I'm not sure if this is a good way of incentivizing exploration, but I digress).
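
For reference, this is roughly how that reward looks in code (variable names and constants are illustrative):

import numpy as np

LAMBDA_D = 10.0          # scaling factor
R_COLLISION = -100.0     # collision penalty

def compute_reward(collided, pos_t, pos_prev, pos_start):
    # pos_* are 2D numpy arrays holding the agent's (x, y) position
    if collided:
        return R_COLLISION
    dist_now = np.linalg.norm(pos_t - pos_start)        # distance from start at time t
    dist_before = np.linalg.norm(pos_prev - pos_start)  # distance from start at time t-1
    return LAMBDA_D * (dist_now - dist_before)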

My environment is an unknown two-dimensional map that contains circular obstacles (with varying radii). And the agent is equipped with a sensor that measures the distance to nearby obstacles (similar to a 2D LiDAR sensor). The figure below shows the environment along with the agent.

Since I'm trying to model a car, I want the agent to be able to go forward and reverse; however, when training, the agent's movement is very jerky. It quickly switches between going forward (positive speed) and reversing (negative speed). This is what I'm talking about.

One idea I had was to penalise the agent when it reverses. While that did significantly reduce the jittery behaviour, it also caused the agent to collide with obstacles on purpose. In fact, over time, the average episode length decreased. I think this is the agent's response to the reverse penalties. Negative rewards incentivize the agent to reach a terminal point as fast as possible. In our case, the only terminal point is obstacle collision.

So then I tried rewarding the agent for going forward instead of penalising it for reversing, but that did not seem to do much. Evidently, I don't think trying to correct the jerky behaviour directly through rewards is the proper approach. But I'm also not sure how I can do it any other way. Maybe I just need to rethink what my reward signal wants the agent to achieve?

How can I rework the reward function to have the agent move around the map, covering as much distance as possible, while also maintaining smooth movement?

",40635,,40635,,8/30/2020 15:00,8/31/2020 15:21,How can I fix jerky movement in a continuous action space,,1,0,,,,CC BY-SA 4.0 23329,1,,,8/29/2020 14:50,,1,32,"

I am trying to train an A3C algorithm, but I am getting the same output from the multinomial function.

Can I train the A3C with random actions, as in the code below?

Can someone with expertise comment?

while count < max_timesteps - 1:
    value, action_values, (hx, cx) = model((Variable(state.unsqueeze(0)), (hx, cx)))
    prob = F.softmax(action_values, dim=-1)
    log_prob = F.log_softmax(action_values, dim=-1)
    print(log_prob.shape)
    print("log_prob: ", log_prob)
    entropy = -(log_prob * prob).sum(1, keepdim=True)
    entropies.append(entropy)
    # instead of sampling from the policy (e.g. with prob.multinomial()),
    # the action here is picked from a random vector
    actn = np.random.randn(3)
    action = actn.argmax()
    log_prob = log_prob[0, action]
    # print("log_prob ", log_prob)
    # print("action ", action)
    state, reward, done = env.step(action)
    done = (done or count == max_timesteps - 2)
    reward = max(min(reward, 1), -1)
",40051,,,,,8/29/2020 14:50,is it ok to take random actions while training a3c as in below code,,0,0,,,,CC BY-SA 4.0 23330,1,23333,,8/29/2020 15:55,,4,111,"

I am trying to make a classifier.

I am new to AI (even if I know the definitions and a bit of the basics), and I also have no idea of how to implement it properly by myself, even if I know a bit of Python coding (in fact, I am fifteen years old! 🙄🙄), but my passion for this has made me ask this (probably silly) question.

Are there neural networks where nodes are randomly selected from among a set of nodes (in random orders and a random number of times)? I know this is from ML (or maybe deep learning, I suppose), but I have no idea how to recognize such a thing from the presently available algorithms. It will be great if you all could help me, because I am preparing to release an API for programming a model which I call the 'Insane Mind' on GitHub, and I want some help to know if my effort was fruitless.

And for reference, here's the code :

from math import *
from random import *
 
class MachineError(Exception):
    '''standard exception in the API'''
    def __init__(self, stmt):
        self.stmt = stmt
def sig(x):
    '''Sigmoid function'''
    return (exp(x) + 1)/exp(x)

class Graviton:
    def __init__(self, weight, marker):
        '''Basic unit in 'Insane Mind' algorithm
           -------------------------------------
           Graviton simply refers to a node in the algorithm.
           I call it graviton because of the fact that it applies a weight
           on the input to transform it, besides using the logistic function '''
        self.weight = weight # Weight factor of the graviton
        self.marker = marker # Marker to help in sorting
        self.input = 0 # Input to the graviton
        self.output = 0 # Output of the graviton
        self.derivative = 0 # Derivative of the output

    def process(self, input_to_machine):
        '''processes the input (a bit of this is copied from the backprop algorithm'''
        self.input = input_to_machine
        self.output = (sig(self.weight * self.input) - 1)/(self.marker + 1)
        self.derivative = (sig(self.input * self.weight) - 1) * self.input *self.output * (1- self.output) 
        return self.output
    
    def get_derivative_at_input(self):
        '''returns the derivative of the output'''
        return self.derivative

    def correct_self(self, learning_rate, error):
        '''edits the weight'''
        self.weight += -1 * error * learning_rate * self.get_derivative_at_input() * self.weight
        
class Insane_Mind:

    def __init__(self, number_of_nodes):
        '''initialiser for Insane_Mind class.
           arguments : number_of_nodes : the number of nodes you want in the model'''
        self.system = [Graviton(random(),i) for i in range(number_of_nodes)] # the actual system
        self.system_size = number_of_nodes # number of nodes , or 'system size'
        
    def  output_sys(self, input_to_sys):
        '''system output'''
        self.output = input_to_sys
        for i in range(self.system_size):
            self.output = self.system[randint(0,self.system_size - 1 )].process(self.output)
        return self.output
    
    def train(self, learning_rate, wanted):
        '''trains the system'''
        self.cloned = [] # an array to keep the sorted elements during the sorting process below
        order = [] # the array to make out the order of arranging the nodes
        temp = {} # a temporary dictionary to pick the nodes from
        for graviton in self.system:
            temp.update({str(graviton.derivative): graviton.marker})
        order = sorted(temp)
        i = 0
        error = wanted - self.output
        for value in order:
            self.cloned.append(self.system[temp[value]])
            self.cloned[i].correct_self(learning_rate, error)
            error *= self.cloned[i].derivative
            i += 1
        self.system = self.cloned

Sorry for not using that MachineError exception anywhere in my code (I will use it when I am able to deploy this API).

To tell more about this algorithm, it gives randomized outputs (as if guessing). The number of guesses varies from 1 (for a system with one node), to 2 (for two nodes), and so on, up to an infinite number of guesses for an infinite number of nodes.

Also, I want to try and find out how useful it can be (whether this is something that has never been discovered, and whether it is something that can find a good place in the world of ML or deep learning) and where it can be used.

Thanks in advance.

Criticisms (with a clear reason) are also accepted.

",40583,,1847,,8/29/2020 18:12,8/30/2020 10:33,Are there neural networks where nodes are randomly selected from among a set of nodes (in random orders and a random number of times)?,,1,0,,,,CC BY-SA 4.0 23331,1,23344,,8/29/2020 16:21,,6,154,"

Consider a neural network, e.g. as presented by Nielsen here. Abstractly, we just construct some function $f: \mathbb{R}^n \to [0,1]^m$ for some $n,m \in \mathbb{N}$ (i.e. the dimensions of the input and output space) that depends on a large set of parameters, $p_j$. We then just define the cost function $C$ and calculate $\nabla_p C$ and just map $p \to p - \epsilon \nabla_p C$ repeatedly.

The question is why do we choose $f$ to be what it is in standard neural networks, e.g. a bunch of linear combinations and sigmoids? One answer is that there is a theorem saying any suitably nice function can be approximated using neural networks. But the same is true of other types of functions $f$. The Stone-Weierstrass theorem gives that we could use polynomials in $n$ variables: $$f(x) = c^0_0 + (c^1_1 x_1 + c^1_2 x_2 + \cdots + c^1_n x_n) + (c^2_{11}x_1 x_1 + c^2_{12} x_1x_2 + \cdots + c^2_{1n} x_1 x_n + c^2_{21} x_2x_1 + c^2_{22} x_2x_2 + \cdots) + \cdots,$$

and still have a nice approximation theorem. Here the gradient would be even easier to calculate. Why not use polynomials?

",40653,,2444,,8/30/2020 2:34,1/21/2021 23:38,Why are neural networks preferred to other classification functions optimized by gradient decent,,1,2,,,,CC BY-SA 4.0 23332,1,,,8/29/2020 16:36,,4,1033,"

I've been looking into self-attention lately, and in the articles that I've been seeing, they all talk about "weights" in attention. My understanding is that the weights in self-attention are not the same as the weights in a neural network.

From this article, http://peterbloem.nl/blog/transformers, in the additional tricks section, it mentions,

The query is the dot product of the query weight matrix and the word vector, i.e., q = W(q)x; the key is the dot product of the key weight matrix and the word vector, k = W(k)x; and similarly, for the value, it is v = W(v)x. So my question is: where do the weight matrices come from?

",33579,,2444,,8/30/2020 0:58,5/23/2022 17:06,What is the weight matrix in self-attention?,,1,1,,,,CC BY-SA 4.0 23333,2,,23330,8/29/2020 18:29,,2,,"

It is difficult to prove a negative, but I do not think there is any classifier (neural network or otherwise) that fully matches to your idea.

I suspect that you will not be able to take the idea of random connections and loops at run time, and make a useful classifier out of it. That's not to say the idea is completely without merit; sometimes it is good to explore blue-sky ideas and just see what happens. However, I think it might be a frustrating exercise to build anything on top of your idea without some basic foundation work first. I recommend that you look into the theory and implementation of logistic regression as a starting point, which is a good stepping stone to understanding neural networks.

There are some neural network components and architectures that make use of random behaviour at the activation level:

  • Dropout. This is a method used during training which zeroes outputs from randomly selected neurons. It often gives an effective boost to neural network stability (acting to prevent overfitting to input data) and can improve accuracy of classifiers too due to behaving similarly to having multiple simpler classifiers.

  • Boltzmann machines, and restricted Boltzmann machines (RBMs) output 0 or 1 randomly from each "neuron" unit, with the probability decided by sum of inputs. They are used to create generative models, not classifiers though. Another difference is that the randomness is applied both during training and during inference, whilst dropout is most often applied to augment training. Early on in the days of deep learning, RBMs were used to pre-train layers in a deep neural network. This was effective, but other simpler methods were discovered later and are nowadays preferred in most cases.

  • A variant of dropout called Monte Carlo dropout is used at inference time. This can be used to measure uncertainty in a model's individual predictions, which is otherwise hard to obtain (a minimal code sketch follows this list).

  • Although not quite as freeform as your random connections on a per-neuron basis, if you applied dropout to a recurrent neural network, that might be quite close to your idea, because the existence of loops between neurons in each time step would be random. This could be applied in language modelling and classifiers for sequence data. The same motivations apply here as for dropout in simpler feed-forward classifiers - it can in theory make a classifier more robust against noise in the inputs and more accurate.
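
To make the Monte Carlo dropout idea above concrete, here is a minimal Keras sketch (the architecture and numbers are arbitrary, just for illustration):

import numpy as np
import tensorflow as tf

# a tiny classifier with a dropout layer
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(3, activation="softmax"),
])

x = np.random.rand(1, 10).astype("float32")

# Monte Carlo dropout: keep dropout active at inference (training=True)
# and average many stochastic forward passes to estimate uncertainty
predictions = np.stack([model(x, training=True).numpy() for _ in range(100)])
mean_prediction = predictions.mean(axis=0)   # the ensemble-like prediction
uncertainty = predictions.std(axis=0)        # spread indicates model uncertainty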

",1847,,1847,,8/30/2020 10:33,8/30/2020 10:33,,,,0,,,,CC BY-SA 4.0 23334,1,,,8/29/2020 21:56,,2,34,"

I don't get how the training of the RPN works. From the forward propagation, I have $W \times H \times k$ outputs from the RPN.

How is the training data labeled such that I can use the loss function and update the weights through backpropagation? Is the training data labeled in the same shape as the output, given that there are $W \times H \times k$ anchor boxes, so that we can use the loss function directly?

",40659,,2444,,8/30/2020 1:02,4/26/2021 20:09,How is the data labelled in order to train a region proposal network?,,0,1,,,,CC BY-SA 4.0 23335,1,,,8/29/2020 23:09,,2,3105,"

In general, how do I calculate the GPU memory need to run a deep learning network?

I'm asking this question because my training for some network configuration is getting out of memory.

If the TensorFlow only store the memory necessary to the tunable parameters, and if I have around 8 million, I supposed the RAM required will be:

RAM = 8,000,000 * 8 bytes (float64) / 1,000,000 (scaling to MB)

RAM = 64 MB, right?
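
In code, that back-of-the-envelope estimate is (a quick sketch, assuming float64 weights):

num_params = 8_000_000
bytes_per_param = 8                      # float64; use 4 for float32
param_memory_mb = num_params * bytes_per_param / 1_000_000
print(param_memory_mb)                   # 64.0 MB just for the weights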

Does TensorFlow require more memory to store the image at each layer?

By the way, these are my GPU Specifications:

  • Nvidia GeForce 1050 4GB

Networking topology

  • Unet
  • Input Shape (256,256,4)
Model: "functional_1"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 256, 256, 4) 0                                            
__________________________________________________________________________________________________
conv2d (Conv2D)                 (None, 256, 256, 64) 2368        input_1[0][0]                    
__________________________________________________________________________________________________
dropout (Dropout)               (None, 256, 256, 64) 0           conv2d[0][0]                     
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 256, 256, 64) 36928       dropout[0][0]                    
__________________________________________________________________________________________________
max_pooling2d (MaxPooling2D)    (None, 128, 128, 64) 0           conv2d_1[0][0]                   
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 128, 128, 128 73856       max_pooling2d[0][0]              
__________________________________________________________________________________________________
dropout_1 (Dropout)             (None, 128, 128, 128 0           conv2d_2[0][0]                   
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (None, 128, 128, 128 147584      dropout_1[0][0]                  
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D)  (None, 64, 64, 128)  0           conv2d_3[0][0]                   
__________________________________________________________________________________________________
conv2d_4 (Conv2D)               (None, 64, 64, 256)  295168      max_pooling2d_1[0][0]            
__________________________________________________________________________________________________
dropout_2 (Dropout)             (None, 64, 64, 256)  0           conv2d_4[0][0]                   
__________________________________________________________________________________________________
conv2d_5 (Conv2D)               (None, 64, 64, 256)  590080      dropout_2[0][0]                  
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D)  (None, 32, 32, 256)  0           conv2d_5[0][0]                   
__________________________________________________________________________________________________
conv2d_6 (Conv2D)               (None, 32, 32, 512)  1180160     max_pooling2d_2[0][0]            
__________________________________________________________________________________________________
dropout_3 (Dropout)             (None, 32, 32, 512)  0           conv2d_6[0][0]                   
__________________________________________________________________________________________________
conv2d_7 (Conv2D)               (None, 32, 32, 512)  2359808     dropout_3[0][0]                  
__________________________________________________________________________________________________
conv2d_transpose (Conv2DTranspo (None, 64, 64, 256)  524544      conv2d_7[0][0]                   
__________________________________________________________________________________________________
concatenate (Concatenate)       (None, 64, 64, 512)  0           conv2d_transpose[0][0]           
                                                                 conv2d_5[0][0]                   
__________________________________________________________________________________________________
conv2d_8 (Conv2D)               (None, 64, 64, 256)  1179904     concatenate[0][0]                
__________________________________________________________________________________________________
dropout_4 (Dropout)             (None, 64, 64, 256)  0           conv2d_8[0][0]                   
__________________________________________________________________________________________________
conv2d_9 (Conv2D)               (None, 64, 64, 256)  590080      dropout_4[0][0]                  
__________________________________________________________________________________________________
conv2d_transpose_1 (Conv2DTrans (None, 128, 128, 128 131200      conv2d_9[0][0]                   
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, 128, 128, 256 0           conv2d_transpose_1[0][0]         
                                                                 conv2d_3[0][0]                   
__________________________________________________________________________________________________
conv2d_10 (Conv2D)              (None, 128, 128, 128 295040      concatenate_1[0][0]              
__________________________________________________________________________________________________
dropout_5 (Dropout)             (None, 128, 128, 128 0           conv2d_10[0][0]                  
__________________________________________________________________________________________________
conv2d_11 (Conv2D)              (None, 128, 128, 128 147584      dropout_5[0][0]                  
__________________________________________________________________________________________________
conv2d_transpose_2 (Conv2DTrans (None, 256, 256, 64) 32832       conv2d_11[0][0]                  
__________________________________________________________________________________________________
concatenate_2 (Concatenate)     (None, 256, 256, 128 0           conv2d_transpose_2[0][0]         
                                                                 conv2d_1[0][0]                   
__________________________________________________________________________________________________
conv2d_12 (Conv2D)              (None, 256, 256, 64) 73792       concatenate_2[0][0]              
__________________________________________________________________________________________________
dropout_6 (Dropout)             (None, 256, 256, 64) 0           conv2d_12[0][0]                  
__________________________________________________________________________________________________
conv2d_13 (Conv2D)              (None, 256, 256, 64) 36928       dropout_6[0][0]                  
__________________________________________________________________________________________________
conv2d_14 (Conv2D)              (None, 256, 256, 1)  65          conv2d_13[0][0]                  
==================================================================================================
Total params: 7,697,921
Trainable params: 7,697,921
Non-trainable params: 0

This is the error given.

---------------------------------------------------------------------------
ResourceExhaustedError                    Traceback (most recent call last)
<ipython-input-17-d4852b86b8c1> in <module>
     23 # Train the model, doing validation at the end of each epoch.
     24 epochs = 30
---> 25 result_model = model.fit(train_gen, epochs=epochs, validation_data=val_gen, callbacks=callbacks)

~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\keras\engine\training.py in _method_wrapper(self, *args, **kwargs)
    106   def _method_wrapper(self, *args, **kwargs):
    107     if not self._in_multi_worker_mode():  # pylint: disable=protected-access
--> 108       return method(self, *args, **kwargs)
    109 
    110     # Running inside `run_distribute_coordinator` already.

~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
   1096                 batch_size=batch_size):
   1097               callbacks.on_train_batch_begin(step)
-> 1098               tmp_logs = train_function(iterator)
   1099               if data_handler.should_sync:
   1100                 context.async_wait()

~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\eager\def_function.py in __call__(self, *args, **kwds)
    778       else:
    779         compiler = "nonXla"
--> 780         result = self._call(*args, **kwds)
    781 
    782       new_tracing_count = self._get_tracing_count()

~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\eager\def_function.py in _call(self, *args, **kwds)
    838         # Lifting succeeded, so variables are initialized and we can run the
    839         # stateless function.
--> 840         return self._stateless_fn(*args, **kwds)
    841     else:
    842       canon_args, canon_kwds = \

~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\eager\function.py in __call__(self, *args, **kwargs)
   2827     with self._lock:
   2828       graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
-> 2829     return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
   2830 
   2831   @property

~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\eager\function.py in _filtered_call(self, args, kwargs, cancellation_manager)
   1846                            resource_variable_ops.BaseResourceVariable))],
   1847         captured_inputs=self.captured_inputs,
-> 1848         cancellation_manager=cancellation_manager)
   1849 
   1850   def _call_flat(self, args, captured_inputs, cancellation_manager=None):

~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\eager\function.py in _call_flat(self, args, captured_inputs, cancellation_manager)
   1922       # No tape is watching; skip to running the function.
   1923       return self._build_call_outputs(self._inference_function.call(
-> 1924           ctx, args, cancellation_manager=cancellation_manager))
   1925     forward_backward = self._select_forward_and_backward_functions(
   1926         args,

~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\eager\function.py in call(self, ctx, args, cancellation_manager)
    548               inputs=args,
    549               attrs=attrs,
--> 550               ctx=ctx)
    551         else:
    552           outputs = execute.execute_with_cancellation(

~\Anaconda3\envs\tf23\lib\site-packages\tensorflow\python\eager\execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     58     ctx.ensure_initialized()
     59     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 60                                         inputs, attrs, num_outputs)
     61   except core._NotOkStatusException as e:
     62     if name is not None:

ResourceExhaustedError:  OOM when allocating tensor with shape[8,64,256,256] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
     [[node gradient_tape/functional_1/conv2d_14/Conv2D/Conv2DBackpropInput (defined at <ipython-input-17-d4852b86b8c1>:25) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
 [Op:__inference_train_function_17207]

Function call stack:
train_function

Is there any type of mistake in the network definition? How could I improve the network to solve this problem?

",40662,,2444,,9/7/2020 12:37,3/11/2021 6:02,How to calculate the GPU memory need to run a deep learning network?,,3,3,,,,CC BY-SA 4.0 23336,2,,23296,8/30/2020 0:43,,5,,"

There is Google Research Football, which is an open-source platform to develop reinforcement learning algorithms to play a game similar to FIFA or PES, although the football simulation is not as realistic as the current versions of FIFA or PES. You can play this game against different RL agents (e.g. DQN or IMPALA) and, of course, you can even develop your own RL agents and play against them. Here is a video that illustrates the environment. Here is the code and instructions to use it.

As far as I know, there isn't yet an AI that plays simulated football at a human-level (i.e. as good as the best human players). For example, I can regularly (although not always) beat the legendary level-AI at FIFA, but I also don't know the details about this AI (which could also be rule-based).

",2444,,2444,,8/30/2020 13:18,8/30/2020 13:18,,,,1,,,,CC BY-SA 4.0 23338,1,,,8/30/2020 7:50,,1,126,"

I've been reading about non-local neural networks as explained in the original paper. My understanding is that they address the restricted receptive field of local filters. I see how they are different from convolutions and fully connected networks.

How do they relate to attention (specifically self-attention)? How do they integrate this attention?

",40671,,2444,,9/7/2020 12:38,9/7/2020 12:38,How do non-local neural networks relate to attention and self-attention?,,0,0,,,,CC BY-SA 4.0 23341,1,,,8/30/2020 13:59,,1,37,"

I was reading a paper Multi-Agent Reinforcement Learning for Adaptive User Association in Dynamic mmWave Networks and I was stuck understanding the deep neural network architecture that was used. The authors gave it in Fig. 3 (on top of page 6) and they state the following (on page 9):

This architecture comprises 2 multi-layers perceptron (MLP) of 32 hidden units, one RNN layer (a long short memory term - LSTM) layer with 64 memory cells followed by another 2 MLPs of 32 hidden units. The network then branches off in two MLPs of 16 hidden units to construct the duelling network.

According to Fig. 3, there is one MLP, one RNN, and one MLP. So why did the authors say 2 MLPs?

Assuming it is 2 MLPs, does this mean we have 2 hidden layers of 32 neurons each? So, at the end we will have:

one input layer - one hidden layer with 32 neurons - another hidden layer with 32 neurons - one RNN layer with 64 cells - one hidden layer with 32 neurons - another hidden layer with 32 neurons - one hidden layer with 16 neurons - another hidden layer with 16 neurons - one output layer.

",37642,,2444,,8/30/2020 14:44,8/30/2020 14:44,How to understand this NN architecture?,,0,0,,,,CC BY-SA 4.0 23344,2,,23331,8/30/2020 17:02,,3,,"

You can indeed fit a polynomial to your labelled data, which is known as polynomial regression (which can e.g. be done with the function numpy.polyfit). One apparent limitation of polynomial regression is that, in practice, you need to assume that your data follows some specific polynomial of some degree $n$, i.e. you assume that your data has the form of the polynomial that you choose, which may not be true.

When you use a neural network to solve a classification or regression problem, you also need to choose the activation functions, the number of neurons, how they are connected, etc., so you also need to limit the number and type of functions that you can learn with neural networks, i.e. the hypothesis space.

Now, it is not necessarily a bad thing to limit the hypothesis space. In fact, learning is generally an ill-posed problem, i.e. in simple terms, there could be multiple solutions or no solutions at all (and other problems), so, actually, you often need to limit the hypothesis space to find some useful solutions (e.g. solutions that generalise better to unseen data). Regularisations techniques are ways of constraining the learning problem, and the hypothesis space (i.e. the set of functions that your learning algorithm can choose from), and thus making the learning problem well-posed.

Neural networks are not preferred over polynomial regression because they are theoretically more powerful. In fact, both can approximate any continuous function [1], but these are just theoretical results, i.e. these results do not give you the magical formula to choose the most appropriate neural network or polynomial that best approximates the desired unknown function.

In practice, neural networks have been proven to effectively solve many tasks (e.g. translation of natural language, playing go or atari games, image classification, etc.), so I would say that this is the main reason they are widely studied and there is a lot of interest in them. However, neural networks typically require large datasets to approximate well the desired but unknown function, it can be computationally expensive to train or perform inference with them, and there are other limitations (see this), so neural networks are definitely not perfect tools, and there is the need to improve them to make them more efficient and useful in certain scenarios (e.g. scenarios where uncertainty estimation is required).

I am not really familiar with research on polynomial regression, but it is possible that this and other tools have been overlooked by the ML community. You may want to have a look at this paper, which states that NNs are essentially doing polynomial regression, though I have not read it, so I don't know the details about the main ideas and results in this paper.

",2444,,2444,,1/21/2021 23:38,1/21/2021 23:38,,,,0,,,,CC BY-SA 4.0 23348,1,23353,,8/30/2020 21:21,,1,66,"

I have a dataset with a number of houses; for each house, I have a description. For example, "The house is luxuriously renovated" or "The house is nicely renovated". My aim is to identify, for each house, whether it is luxuriously, well, or poorly renovated. I am new to NLP, so any tips on how to approach this problem would be much appreciated.

",40688,,2444,,8/31/2020 12:14,8/31/2020 12:14,How can I classify houses given a dataset of houses with descriptions?,,1,0,,,,CC BY-SA 4.0 23349,1,,,8/30/2020 21:33,,0,121,"

What are examples of problems where neural networks have been used and have achieved human-level or higher performance?

Each answer can contain one or more examples. Please, provide links to research papers or reliable articles that validate your claims.

",2444,,,,,11/17/2020 23:58,What are examples of problems where neural networks have achieved human-level or higher performance?,,2,0,,,,CC BY-SA 4.0 23350,2,,23349,8/30/2020 21:33,,2,,"

Here is an initial list of AI systems that used neural networks and have achieved human-level or superhuman performance. All of these systems are reinforcement learning systems that play videogames.

  • AlphaGo and AlphaGo Zero (an improved version of AlphaGo that does not use human knowledge but learns by playing against itself) have achieved superhuman performance in the game of go and, in the case of AlphaZero (a generalized version of AlphaGo Zero), also in the games of chess and shogi.
  • DQN has achieved human-level or superhuman performance in many Atari games
  • DeepStack has achieved human-level performance in poker
  • AlphaStar defeated a top professional player in the real-time strategy game StarCraft 2
  • OpenAI Five defeated world champions in the game of Dota
",2444,,2444,,11/17/2020 23:58,11/17/2020 23:58,,,,2,,,,CC BY-SA 4.0 23351,2,,23296,8/31/2020 8:21,,0,,"

There are many games where AI is involved, but fewer of them can be played against a human player. For example, in this paper, they used a three-dimensional multiplayer first-person video game, Quake III Arena, in Capture the Flag mode. Also, this paper and this paper cover many games where AI is involved, and they show which kinds of games a human player can play against. I also recommend Awesome-Game-AI.

",21181,,21181,,9/3/2020 15:23,9/3/2020 15:23,,,,0,,,,CC BY-SA 4.0 23352,2,,23243,8/31/2020 8:22,,0,,"

You can save the extracted features coming out of YOLOv4 and save it to pickle file and use them for later.

You can also find related information in this project made by Jason Brownlee

https://machinelearningmastery.com/develop-a-deep-learning-caption-generation-model-in-python/

",40565,,,,,8/31/2020 8:22,,,,0,,,,CC BY-SA 4.0 23353,2,,23348,8/31/2020 9:49,,0,,"

It all depends on what kind of annotations or other variables are present in your dataset. I see 2 possible scenarios here:

  • your dataset is made only of house descriptions, without any indication of their luxury level.

  • you have annotations regarding the luxury level, or other similar variables from which you can infer the luxury level (like the house price, for example).

In the first case, there's not much you can do except try to apply some unsupervised algorithms or transfer learning. Usually, in NLP, unsupervised techniques are used for tasks like topic or language modeling, both of which are not really helpful for your specific application, since they work at a really abstract level, trying to learn relationships between words or documents in corpora containing a huge variety of texts. The best you could try would be preprocessing the data to extract specific terms like entities (city names, furniture names, etc.) and adjectives from each description, and then try to cluster them into n clusters, where n is the number of classes you're interested in, by applying, for example, Latent Dirichlet Allocation. Even though everything is possible with the right time and patience, I would never follow this road, especially because it relies on a perfect preprocessing step, which already involves transfer learning, for the named entity recognition part for example. And even though libraries like SpaCy offer really good models to perform these tasks, once you have these kinds of annotations, a rule-based approach would probably be faster and easier to build than another unsupervised model; e.g. creating a simple dictionary containing names of luxurious furniture and adjectives that indicate whether the house is expensive would probably be good enough.
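For the unsupervised route, a minimal sketch with scikit-learn (the example descriptions and the choice of 3 topics are hypothetical) could look like this:

# Cluster house descriptions into a few topics with Latent Dirichlet Allocation.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

descriptions = [
    "The house is luxuriously renovated with marble floors",
    "The house is nicely renovated",
    "The house needs renovation and new furniture",
]
counts = CountVectorizer(stop_words="english").fit_transform(descriptions)
lda = LatentDirichletAllocation(n_components=3, random_state=0)  # 3 hypothetical luxury levels
topic_distributions = lda.fit_transform(counts)  # one topic distribution per description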

In the second case, the story changes completely, because if you have annotations you can rely on supervised learning. If you already have explicit annotations like "luxuriously renovated" or "nicely renovated", nothing stops you from training whatever architecture you feel more comfortable with on this classification task. Even simple architectures like CNNs, which are easy and fast to train, usually achieve good results in classification, and you definitely want to leverage some pre-trained embedding vectors like GloVe as an input feature (every deep learning framework, like TensorFlow or PyTorch, already implements the possibility to use them).
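As a minimal sketch of this supervised route (TensorFlow/Keras assumed; the GloVe matrix is replaced by random values here only so the snippet runs on its own, and all sizes are hypothetical):

import numpy as np
import tensorflow as tf

vocab_size, embed_dim, n_classes = 20000, 100, 3
embedding_matrix = np.random.normal(size=(vocab_size, embed_dim)).astype("float32")  # would hold GloVe vectors

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(
        vocab_size, embed_dim,
        embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
        trainable=False),                       # keep the pre-trained vectors frozen
    tf.keras.layers.Conv1D(128, 5, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])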

To conclude, if you don't have annotations, you might try LDA just to check if you're lucky, but if I were you I would start annotating data as quickly as I could. A good practice in this case is to also ask someone else to perform the annotations, not only to be faster in the creation of the dataset, but also to then calculate the inter-annotator agreement score, which gives an indication of the quality of the annotations (if the score is low, the dataset quality is poor and no model will be able to learn anything from it).

",34098,,,,,8/31/2020 9:49,,,,0,,,,CC BY-SA 4.0 23357,2,,23332,8/31/2020 14:29,,1,,"

The answer is actually really simple: they are all randomly initialised. So they are to all intents and purposes "normal" weights of a neural network.

This is also the reason why, in the original paper, the authors tested several settings with single and multiple attention heads. If these matrices were somehow "special" or predetermined, they would all serve the same purpose. Instead, because of their random initialisation, each attention head learns to contribute to solving a different task, as they show in Figures 3 and 4.

",34098,,,,,8/31/2020 14:29,,,,2,,,,CC BY-SA 4.0 23358,2,,23328,8/31/2020 15:15,,1,,"

I think you should try to reason in terms of the total "area" explored by the agent rather than "how far" it moves from the initial point, and you should also add some reward terms to push the agent to steer more often. I think the problem with your setting is more or less this: the agent goes as straight as it can because you're rewarding it for that; it starts sensing an obstacle, so it stops; there is no reward for steering, so the best strategy to get away from the obstacle without ending the episode is just to go backwards.

Considering that you have information about the grid points at any time, you could rewrite the reward function in terms of grid squares explored, by checking at each move whether the agent ends up in a new grid square:

$$ R_t= \begin{cases} r_{\text{collision}}\\ \lambda^d\left(\|\mathbf{p}^{x,y}_t-\mathbf{p}_0^{x,y}\|_2-\|\mathbf{p}_{t-1}^{x,y}-\mathbf{p}_0^{x,y}\|_2 \right) + r_{\text{new square explored}} \end{cases} $$
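A minimal sketch of that exploration term (the helper name and cell size are hypothetical) is to keep a set of visited cells and pay the bonus only the first time a cell is entered:

visited_cells = set()

def exploration_bonus(x, y, cell_size=1.0, bonus=1.0):
    # Map the continuous position to a grid cell and reward only new cells.
    cell = (int(x // cell_size), int(y // cell_size))
    if cell not in visited_cells:
        visited_cells.add(cell)
        return bonus          # the r_new_square_explored term above
    return 0.0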

Moreover, it would be useful to add some reward terms related to how the agent avoids obstacles, for example a penalisation when a sensor reading goes and stays under a certain threshold (to make the agent learn not to get and stay too close to an obstacle), but also a rewarding term for when an obstacle is detected and the agent manages to maintain a certain distance from it (even though, if not well tuned, this term could lead the agent to learn to just run in circles around a single obstacle; if tuned properly, I think it might help make the agent's movements smoother).

",34098,,34098,,8/31/2020 15:21,8/31/2020 15:21,,,,3,,,,CC BY-SA 4.0 23360,1,23363,,8/31/2020 16:30,,1,452,"

For an upcoming project, I'm trying to write a text classifier for the IMDb sentiment analysis dataset. This needs to vectorize words using an embedding layer and then reduce the dimensions of the output with global average pooling. This is proving to be very difficult for my low experience level, and I am struggling to wrap my head around the dimensionality involved, bearing in mind that I must avoid libraries such as TensorFlow that would make it a very basic exercise. I am hoping that I could make it easier by encoding each word in the reviews as a one-hot vector and passing it through a few regular dense layers. Would this work and yield decent results?

",38497,,,,,8/31/2020 19:49,Can I use one-hot vectors for text classification?,,1,0,,,,CC BY-SA 4.0 23361,1,,,8/31/2020 17:39,,1,30,"

I designed a DQN architecture for some problem. The problem has a parameter $m$ as the number of clients. In my situation, $m$ is large, $m\in\{100,200,\ldots,1000\}$. For this situation, the number of input ports of the DQN is a few thousand, $\{1000, 2000, \ldots, 10000\}$. For some fixed $m$, I would like to see the performance of deep Q-learning. So I have to train the DQN for every change that occurs in $m$, and the network has to handle thousands of input ports for each training run. Is this situation familiar in DQN, and, if not, how can I solve this issue?

",37642,,,,,8/31/2020 17:39,Is it feasible to train a DQN with thousands of input ports?,,0,3,,,,CC BY-SA 4.0 23362,1,23407,,8/31/2020 18:45,,6,312,"

I have watched Stanford's lectures about artificial intelligence, and I currently have one question: why don't we use autoencoders instead of GANs?

Basically, what a GAN does is receive a random vector and generate a new sample from it. So, if we train an autoencoder, for example, on a cats vs. dogs dataset, and then cut off the decoder part and then input a random noise vector, wouldn't it do the same job?

",36107,,2444,,9/2/2020 23:42,9/27/2020 19:21,Why don't we use auto-encoders instead of GANs?,,2,0,,,,CC BY-SA 4.0 23363,2,,23360,8/31/2020 19:49,,1,,"

One-hot encoding is a good strategy to apply with categorical variables that assume few possible values. The problem with text data is that you easily end up with corpora with a really large vocabulary. If I remember correctly, the IMDb dataset contains around 130,000 unique words, which means that you would have to create a network with an input matrix of size 130,000 x max_length, where max_length is the fixed maximum length allowed for each review. Apart from the huge size, this matrix would also be extremely sparse, and that's another big issue in using one-hot encoding with text.

For these reasons, I really doubt you would achieve any good results with a simple one-hot encoding. Embeddings were actually designed precisely to overcome all these issues: they have a fixed, reasonable size, they take continuous values (rather than sparse binary ones), which is desirable for deep neural networks, and they can be treated as "extra" trainable weights of a network.

If you really want to avoid embeddings, I would suggest you use (or implement, I don't think it will be that hard) a term frequency–inverse document frequency (tf-idf) vectoriser. It is closer to one-hot encoding in that it is based on the creation of a large document-term matrix, but at least the values are continuous and not dichotomous. Nevertheless, I would not expect high performance with tf-idf either, simply because this type of encoding works best with shallow models like Naive Bayes rather than deep models.
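For what it's worth, a from-scratch tf-idf vectoriser is short enough to write yourself; here is a minimal, unoptimised sketch (the helper name is hypothetical):

import math
from collections import Counter

def tfidf_vectorise(documents):
    tokenised = [doc.lower().split() for doc in documents]
    vocab = sorted({w for doc in tokenised for w in doc})
    index = {w: i for i, w in enumerate(vocab)}
    n_docs = len(tokenised)
    df = Counter(w for doc in tokenised for w in set(doc))  # document frequency per term
    vectors = []
    for doc in tokenised:
        counts = Counter(doc)
        vec = [0.0] * len(vocab)
        for w, c in counts.items():
            tf = c / len(doc)                               # term frequency in this document
            idf = math.log(n_docs / (1 + df[w])) + 1        # smoothed inverse document frequency
            vec[index[w]] = tf * idf
        vectors.append(vec)
    return vectors, vocab

vectors, vocab = tfidf_vectorise(["a great movie", "a terrible boring movie"])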

",34098,,,,,8/31/2020 19:49,,,,0,,,,CC BY-SA 4.0 23364,1,,,8/31/2020 20:35,,1,40,"

I have a binary classification problem. I have variables (features) var1, var2, var3, ..., var14.

Using these variables (aka features) in a logistic regression, I get their weights.

If I use the same set of variables in a neural network:

  • Should I get a different output?

    or

  • Should I get the same output?

I developed a ROC curve, and both lines overlap each other. I am not sure if I am missing something here.

",40706,,2444,,9/7/2020 10:58,9/7/2020 10:58,"Given the same features, do logistic regression and neural networks produce the same output?",,0,1,,,,CC BY-SA 4.0 23365,2,,7416,8/31/2020 21:08,,1,,"

I've published an article with the corresponding new method based on the generative grammars of first-order theories:

Thoughts on generative grammars and their use in automated theorem proving based on neural networks

This approach makes it possible not to rely on pre-existing data but to generate as much data as is needed for machine learning. In the article, you may find the necessary theory on logic, grammars, and neural networks. You'll also find examples of Python functions that literally generate proofs. I've added a grammar for propositional logic that can naturally be extended to the "real" cases of first-order theories (say, group theory or number theory).

",40703,,40703,,9/2/2020 21:36,9/2/2020 21:36,,,,1,,,,CC BY-SA 4.0 23367,1,23379,,9/1/2020 0:02,,2,54,"

I read the following from a book:

You can intuitively understand the dimensionality of your representation space as “how much freedom you’re allowing the model to have when learning internal representations.” Having more units (a higher-dimensional representation space) allows your model to learn more-complex representations, but it makes the model more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).

Why does using a higher representation space lead to performance increase on the training data but not on the test data?

Surely the representations/patterns learnt in the training data will be found too in the test data.

",26159,,2444,,9/1/2020 11:22,9/1/2020 12:01,Why does using a higher representation space lead to performance increase on the training data but not on the test data?,,1,3,,,,CC BY-SA 4.0 23368,1,,,9/1/2020 6:54,,1,140,"

I am trying to train a DDPG agent augmented with Hindsight Experience Replay (HER) to solve the KukaGymEnv environment. The actor and critic are simple neural networks with two hidden layers (as in the HER paper).

More precisely, the hyper-parameters I am using are

  • The actor's hidden layer's sizes: [256, 128] (using ReLU activations and a tanh activation after the last layer)
  • Critic's hidden layer's sizes: [256, 128] (Using ReLU activations)
  • Maximum Replay buffer size: 50000
  • Actor learning rate: 0.000005 (Adam Optimizer)
  • Critic learning rate: 0.00005 (Adam Optimizer)
  • Discount rate: 0.99
  • Polyak constant : 0.001
  • The transitions are sampled in batches of 32 from the replay buffer for training
  • Update rate: 1 (target networks are updated after each time step)
  • The action selection is stochastic with the noise being sampled from a normal distribution of mean 0 and standard deviation of 0.7

I trained the agent for 25 episodes with a maximum of 700 time-steps each and got the following reward plot:

The reward shoots up to a very high value of about 8000 for the second episode and steeply falls to -2000 in the very next time step, never to rise again. What could be the reason for this behavior and how can I get it to converge?

PS: One difference I observed between training this agent and training a simple DDPG agent is that, with simple DDPG, the episode would usually terminate at around 450 time steps, thus never reaching the maximum specified number of time steps. Here, however, no episode terminated before the specified maximum of 700 time steps. This might have something to do with the performance.

",38895,,2444,,11/20/2020 19:12,11/20/2020 19:12,Why would DDPG with Hindsight Experience Replay not converge?,,0,0,,,,CC BY-SA 4.0 23369,1,,,9/1/2020 8:03,,0,451,"

I have a data set with 36 rows and 9 columns. I am trying to make a model to predict the 9th column.

I have tried modeling the data with a range of models, using caret to perform cross-validation and hyperparameter tuning: 'lm', random forest (ranger) and GLMnet, with a range of different folds and hyperparameter settings, but the modeling has not been very successful.

Next, I have tried to use some of the neural network models. I tried 'monmlp'. During hyperparameter tuning, I could see that the RMSE drops to a plateau when using ~6 hidden units. The problems I observe using this model are:

  1. Predictions are almost equal to the training data
  2. When doing a "manual" cross-validation by removing a single data point and using the trained model to predict it, the model has no predictive power

I have tried a range of different numbers of hidden units, but I think the problem is that the model is overfitted despite using caret's cross-validation feature.

There are two points I would appreciate feedback on:

  1. Is there a way to prevent overfitting by choosing an optimal number of training iterations (optimal out-of-sample RMSE)? Can this be done using caret or some other package?
  2. Am I using the right model?

I am relatively inexperienced with ML, and choosing a good model is tough: when you look at the available packages, it is overwhelming:

https://topepo.github.io/caret/train-models-by-tag.html

",40709,,,,,10/5/2022 21:08,How to avoid over-fitting using early stopping when using R cross validation package caret,,2,0,,,,CC BY-SA 4.0 23370,1,,,9/1/2020 8:29,,2,135,"

How does PCA work when we reduce the original space to a 2 or higher-dimensional space? I understand the case when we reduce the dimensionality to $1$, but not this case.

$$\begin{array}{ll} \text{maximize} & \mathrm{Tr}\left( \mathbf{w}^T\mathbf{X}\mathbf{X}^T\mathbf{w} \right)\\ \text{subject to} & \mathbf{w}^T\mathbf{w} = 1\end{array}$$

",40710,,3171,,11/15/2020 13:35,11/15/2020 13:35,How does PCA work when we reduce the original space to 2 or higher-dimensional space?,,2,1,,,,CC BY-SA 4.0 23371,1,23374,,9/1/2020 8:55,,4,327,"

As discussed here, and as I have seen on other Latin-language forums too, everybody complains about how Google Translate fails to translate Latin. From my personal experience, it is not that bad with other languages, including Romance languages.

So, what makes Google Translate fail so badly at translating Latin? Is it because of Latin's syntax and grammar, or a lack of data?

",38344,,2444,,11/10/2020 10:55,11/10/2020 10:55,What makes Google Translate fail on the Latin language?,,2,0,,,,CC BY-SA 4.0 23372,2,,23370,9/1/2020 9:05,,2,,"

You might want to have a look at the Wikipedia article on PCA, where it says:

"The $k$th component can be found by subtracting the first $k − 1$ principal components from $\mathbf{X}$:"

$$\hat{\mathbf{X}}_k = \mathbf{X} - \sum_{s=1}^{k-1}\mathbf{X}\mathbf{w}_s\mathbf{w}_s^T$$

Then you repeat the process to find the next component:

$$\mathbf{w}_k = \arg\max \mathbf{w}^T\mathbf{\hat{X}}^T_k\mathbf{\hat{X}}_k\mathbf{w}$$ $$\text{s.t. } \mathbf{w}_k^T\mathbf{w}_k = 1$$
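To make the procedure concrete, here is a minimal NumPy sketch of this deflation idea (illustrative only; in practice a single eigendecomposition or SVD gives all components at once):

import numpy as np

def pca_by_deflation(X, k):
    X_hat = X - X.mean(axis=0)                 # conventional centring step
    components = []
    for _ in range(k):
        # The leading eigenvector of X_hat^T X_hat maximises w^T X_hat^T X_hat w with ||w|| = 1
        eigvals, eigvecs = np.linalg.eigh(X_hat.T @ X_hat)
        w = eigvecs[:, -1]
        components.append(w)
        # Subtract the projection onto w before searching for the next component
        X_hat = X_hat - X_hat @ np.outer(w, w)
    return np.array(components)

W = pca_by_deflation(np.random.randn(100, 5), 2)  # first two principal directions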

",37120,,,,,9/1/2020 9:05,,,,0,,,,CC BY-SA 4.0 23373,2,,23362,9/1/2020 9:07,,4,,"

In fact, autoencoders are used for generative tasks. Have a look at Tutorial on Variational Autoencoders (VAEs).

The coolest thing about VAE is that abstract features can be easily amplified or suppressed based on extracted vectors from the latent space. Let's imagine a model trained on MNIST to generate digits. If you take two images of the same digit which only vary in thickness, encode both of them and subtract the two vectors, the resulting vector will be a description of thickness in the latent space. Now you can generate an arbitrary digit and incrementally adjust its thickness based on that vector.
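A rough sketch of that latent-space arithmetic (everything here is a stand-in: encode and decode are dummy functions so the snippet runs on its own; in practice they would be the trained VAE encoder and decoder):

import numpy as np

latent_dim = 16
rng = np.random.default_rng(0)

def encode(image):                       # stand-in for the trained encoder
    return rng.normal(size=latent_dim)

def decode(z):                           # stand-in for the trained decoder
    return rng.normal(size=(28, 28))

thin_digit, thick_digit, other_digit = [rng.normal(size=(28, 28)) for _ in range(3)]

# Direction in latent space that (ideally) encodes stroke thickness
thickness_direction = encode(thick_digit) - encode(thin_digit)

z = encode(other_digit)
for alpha in np.linspace(0.0, 2.0, 5):   # amplify the thickness attribute step by step
    image = decode(z + alpha * thickness_direction)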

",38671,,,,,9/1/2020 9:07,,,,0,,,,CC BY-SA 4.0 23374,2,,23371,9/1/2020 9:48,,6,,"

I don't know what model Google is using for their translations, but it's highly likely that they're using one of today's SOTA deep learning models.

The latest NLP models are trained on data scraped from the web; e.g. OpenAI's GPT-2 was trained on a dataset of 8 million web pages, and Google's BERT was trained on the BookCorpus (800M words) and English Wikipedia (2,500M words).

Now think about the number of Latin web pages, and notice that there are over 6 million English Wikipedia articles but fewer than 135,000 in Latin (see here).

As you can see, massive amounts of data are crucial for neural machine translation, and I assume there is simply not enough out there for Latin. In addition, Latin is one of the most complex and complicated languages, which does not make the task any easier. Maybe Google and co. also focus less on a 'dead' language that is not spoken anymore and exists mostly for educational purposes.

",37120,,,,,9/1/2020 9:48,,,,3,,,,CC BY-SA 4.0 23375,1,23393,,9/1/2020 10:00,,0,42,"

As I am curious about music theory, I would like to know whether there is any network that analyses music scores, e.g. by labeling chords or doing a Roman numeral analysis.

Like an example below:

Source

It does not seem to be a difficult task.

Some other examples are given here [external link].

I am also curious whether this is a task that AI can accomplish at all.

",38344,,38344,,9/1/2020 10:05,9/2/2020 11:05,Is there any network/paper used to analyse music scores?,,1,4,,,,CC BY-SA 4.0 23376,2,,23290,9/1/2020 10:00,,2,,"

The problem is not that we need importance sampling because the learning is off-policy -- you are correct in that for one step off-policy algorithms such as $Q$-learning we don't need importance sampling, see e.g. here for an explanation why. The reason we need the importance sampling is due to the loss used to train the network.

In the original DQN paper, the loss is defined as $$L_i(\theta_a) = \mathbb{E}_{(s,a,r,s') \sim \mbox{U}(D)} \left[ \left( r + \gamma \max_{a'} Q(s',a' ; \theta_i^-) - Q(s,a;\theta_i) \right)^2 \right ]\;.$$ You can see here the expectation over the loss is taken according to a uniform distribution over the replayed buffer $D$. If we started randomly sampling non-uniformly, as is the case in PER, then the expectation wouldn't be satisfied and would introduce bias. Importance sampling is used to correct this bias.

Note that in the paper they mention that the bias isn't as much of an issue at the start of learning, and hence they use an exponent $\beta$ that is annealed towards 1, which only makes the importance sampling weights the 'correct' weights to use at the end of learning - this means that the estimate of the loss is asymptotically unbiased.
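For reference, the weights used in the PER paper (stated here from memory, so double-check against the paper itself): a transition $i$ is sampled with probability $P(i) = p_i^\alpha / \sum_k p_k^\alpha$, and the corresponding importance-sampling weight is $$ w_i = \left( \frac{1}{N} \cdot \frac{1}{P(i)} \right)^{\beta}, $$ normalised by $1/\max_i w_i$ for stability, with $\beta$ annealed from some $\beta_0 < 1$ towards $1$ over the course of training.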

",36821,,36821,,9/1/2020 19:35,9/1/2020 19:35,,,,0,,,,CC BY-SA 4.0 23377,1,,,9/1/2020 11:04,,0,175,"

This question is in relation to a previous doubt of mine :

Are there neural networks where nodes are randomly selected from among a set of nodes (in random orders and a random number of times)?

I have made a bit of progress from there, refurbished my code, and got things ready.

What I intend to make is 'Insane Mind', a model which forms random linear neural networks from a set of nodes at random times (I worked out the 'linear neural network' part from a bit of Google searching).

The basic process involved is :

  1. The system forms nodes of random weights. These nodes also apply the sigmoid function (the logistic function: $f(x) = \frac{1}{1 + e^{-x}}$), and I termed these 'Gravitons' (because of the usage of the word 'weights' in them - sorry if my terminology seems ambiguous...😅)
  2. The input enters the system via one of the gravitons.
  3. The node processes it and either passes the output to the next node or to itself.
  4. Step 3 is repeated as many times as there are gravitons in the system.
  5. The output of the final graviton is given as the output of the whole system.

One thing I'm sure of about this model is that it can transform an input vector into an output vector.

I am not sure whether this is ambiguous or similar to a previously discovered model. Plus, I'd like to know if this will be effective in any situation (I believe it will be of help in classification problems).

Note: I made this out of my imagination, which means it may be useless one way or the other, but it still seemed to work.

Here's the training algorithm I made for this model :

  1. In my Python implementation of this model, I had added a provision in the 'Graviton' class to store the derivative of the output of the graviton. Using this, the gravitons are ordered in the increasing order of the derivatives of their outputs.
  2. The first graviton is taken, and its weight is modified by the error in the output.
  3. The error is modified by the product of the graviton's output derivative and its weight after editing.
  4. Steps 2 through 3 are done for the other gravitons as well. The final error (given by the error variable ) will be the product of the derivatives, the edited weights and the error in the output.
  5. The set of gravitons thus formed is the next set subjected to this training.

For extra reference, here's the code:

  1. Insane_Mind.py:
from math import *
from random import *
 
class MachineError(Exception):
    '''standard exception in the API'''
    def __init__(self, stmt):
        self.stmt = stmt
        
def sig(x):
    '''Sigmoid (logistic) function'''
    try:
        return exp(x) / (exp(x) + 1)
    except OverflowError:
        # exp(x) overflows for large positive x, where the sigmoid saturates at 1;
        # return 0 in the negative branch instead of None
        return 1 if x > 0 else 0

class Graviton:
    def __init__(self, weight, marker):
        '''Basic unit in 'Insane Mind' algorithm'''
        self.weight = weight
        self.marker = marker + 1
        self.input = 0
        self.output = 0
        self.derivative = 0

    def process(self, input_to_machine):
        '''processes the input'''
        self.input = input_to_machine
        self.output = sig(self.weight * self.input)
        self.derivative = self.input * self.output * (1- self.output) 
        return self.output
    
    def get_derivative_at_input(self):
        '''returns the derivative of the output'''
        return self.derivative

    def correct_self(self, learning_rate, error):
        '''edits the weight'''
        self.weight += -1 * error * learning_rate * self.get_derivative_at_input() * self.weight
        
class Insane_Mind_Base:
    '''Insane_Mind base class - this is what we're gonna use to build the actual machine'''
    def __init__(self, number_of_nodes):
        '''initialiser for Insane_Mind_Base class.
           arguments : number_of_nodes : the number of nodes you want'''
        self.system = [Graviton(random(),i) for i in range(number_of_nodes)] # the actual system
        self.system_size = number_of_nodes # number of nodes , or 'system size'
        
    def  output_sys(self, input_to_sys):
        '''system output'''
        self.output = input_to_sys
        for i in range(self.system_size):
            self.output = self.system[randint(0,self.system_size - 1 )].process(self.output)
        return self.output
    
    def train(self, learning_rate, wanted):
        '''trains the system'''
        self.cloned = []
        order = []
        temp = {}
        for graviton in self.system:
            temp.update({str(graviton.derivative): self.system.index(graviton)})
        order = sorted(temp)
        i = 0
        error = wanted - self.output
        for value in order:
            self.cloned.append(self.system[temp[value]])
            self.cloned[i].correct_self(learning_rate, error)
            error *= self.cloned[i].derivative * self.cloned[i].weight
            i += 1
        self.system = self.cloned

    def details(self):
        '''gets the weights of each graviton'''
        for graviton in self.system:
            print("Node : {0}, weight : {1}".format(graviton.marker , graviton.weight))

class Insane_Mind:
    
    '''Actaul Insane_Mind class'''
    def __init__(self, number_of_gravitons):
        '''initialiser'''
        self.model = Insane_Mind_Base(number_of_gravitons)
        self.size = number_of_gravitons
        
    def get(self, input):
        '''processes the input'''
        return self.model.output_sys(input)
    
    def train_model(self, lrate, inputs, outputs, epoch):
        '''train the model'''
        if len(inputs) != len(outputs):
            raise MachineError("Unequal sizes for training input and output vectors")
        epoch = str(epoch)
        if epoch.lower() == 'sys_size':
            epoch = int(self.model.system_size)
        else:
            epoch = int(epoch)
        for k in range(epoch):
            for j in range(len(inputs)):
                    val = self.model.output_sys(inputs[j])
                    self.model.train(1/val if str(lrate).lower() == 'output' else lrate, outputs[j])
    
    def details(self):
        '''details of the machine'''
        self.model.details()

  2. Insane_Mind_Test.py:
from Insane_Mind import *
from statistics import *

input_data = [3,4,3,5,4,4,3,6,5,4] # list of forces using which the coin is tossed
output_data = [1,0,0,1,1,0,0,0,1,1] # head or tails in binary form (0 = tail (= not head), 1 = head)
wanteds = output_data.copy()
model = Insane_Mind(2) # Insane Mind model
print("Before Training:")
print("----------------")
model.details() # fetches you weights of the model

def normalize(x):
    cloned = x.copy()
    meanx = mean(x)
    stdevx = stdev(x)
    for i in range(len(x)):
        cloned[i] = (cloned[i] - meanx)/stdevx
    return cloned

def random_catch(range_of_catches, sample_length):
    # sample data generator. I named it random catch as part of using it in testing whether my model 
    # ' catches the correct guess'. :)
    return [randint(range_of_catches[0], range_of_catches[1]) for i in range(sample_length)]

input_data = normalize(input_data)
output_data = normalize(output_data)

model.train_model('output', input_data, output_data, 'sys_size')
# the argument 'output' for the argument 'lrate' (learning rate) was to specify that the learning rate at # each step is the inverse of the output, and the use of 'sys_size' for the number of times to be trained
# is used to tell the machine that the required number of epochs is equal to the size of the system or 
# the number of nodes in it.

print("After Training:")
print("----------------")
model.details() # fetches you weights of the model

predictions = [model.get(i) for i in input_data]

threshold = mean(predictions)
predictions = [1 if i >= threshold else 0 for i in predictions]

print("Predicted : {0}".format(predictions))
print("Actual:{0}".format(wanteds))
mse_array = [(wanteds[j] - predictions[j])**2 for j in range(len(input_data))]
print("Mean squared error:{0}".format(mean(mse_array)))

accuracy = 0
for i in range(len(predictions)):
    if predictions[i] == wanteds[i]:
        accuracy += 1

print("Accuracy:{0}({1} out of {2} predictions correct)".format(accuracy/len(wanteds), accuracy, len(predictions)))

print("______________________________________________")

print("Random catch test")
print("-----------------")

times = int(input("No. of tests required : "))
catches = int(input("No. of catches per test"))
mse = {}
for m in range(times):
    wanted = random_catch([0,1] , catches)
    forces = random_catch([1,10], catches)
    predictions = [model.get(k) for k in forces]
    threshold = mean(predictions)
    predictions = [1 if value >= threshold else 0 for value in predictions]
    mse_array = [(wanted[j] - predictions[j])**2 for j in range(len(predictions))]
    print("Mean squared error:{0}".format(mean(mse_array)))
    mse.update({(m + 1):mean(mse_array)})
    accuracy = 0
    for i in range(len(predictions)):
        if predictions[i] == wanted[i]:
            accuracy += 1
    print("Accuracy:{0}({1} out of {2} predictions correct)".format(accuracy/len(wanteds), accuracy, len(predictions)))
    

I tried running 'Insane_Mind_Test.py', and the results I got are :

The formula I used for MSE is (please correct me if I am wrong): $$ MSE = \frac{\sum_{i = 1}^n (x_i - x'_i)^2}{n}$$

where,

$$ x_i = \text{Intended output}$$ $$ x'_i = \text{Output predicted}$$ $$ n = \text{Number of outputs}$$

My main intention was to make a guessing system.

Note : Here, I had to think differently. I decided to classify the forces as those yielding a head and those that yield a tail (unlike what I say in the comments in the program).

Thanks for all help in advance.

Edit: Here's the training data :

Forces         Head(1) or not head(0)[rather call it tail] 
_______        ______________________
3                  1
4                  0
3                  0
5                  1
4                  1
4                  0
3                  0
6                  0
5                  1
4                  1
",40583,,40583,,9/1/2020 16:16,1/22/2023 3:10,"Is my ""Insane Mind"" design for a classifier novel or effective?",,1,8,0,,,CC BY-SA 4.0 23378,2,,5399,9/1/2020 11:19,,1,,"

There is a hardware-based reason for this. Matrix multiplication is one of the central computations in deep learning, and SIMD operations on CPUs process data in chunks whose sizes are powers of 2.

Here is a good reference about speeding up neural networks on CPUs by leveraging SIMD instructions:

Improving the speed of neural networks on CPUs

You will notice batch sizes that are powers of 2. This is a good paper to read about implementing neural networks using SIMD instructions.

",40714,,40714,,5/7/2021 15:56,5/7/2021 15:56,,,,1,,,,CC BY-SA 4.0 23379,2,,23367,9/1/2020 11:44,,1,,"

The answer to your question is that the capacity of your model (i.e. the number and type of functions that your model can compute) generally increases with the number of parameters. So, a bigger model can potentially approximate better the function represented by your training data, but, at the same time, it may not take into account the test data, a phenomenon known as over-fitting the training data (i.e. fitting the training data "too much").

In theory, you want to fit the training data perfectly, so over-fitting should not make sense, right? The problem is that, if we just fit all the (training) data, there is no way of empirically checking that our model will perform well on unseen data, i.e. will it generalize to data not seen during training? We split our data into training and test data because of this: we want to understand whether our model will perform well also on unseen data or not.

There are also some theoretical bounds that ensure you that, probabilistically and approximately, you can generalize: if you have more training data than a certain threshold, the probability that you perform badly is small. However, these theoretical bounds are often not taken into account in practice because, for example, we may not be able to collect more data to ensure that the bounds are satisfied.

Surely the representations/patterns learnt in the training data will be found too in the test data.

This is possibly the wrong assumption and the reason why you are confused. You may assume that both your training data and test data come from the same distribution $p(x, y)$, but it does not necessarily mean that they have the same patterns. For example, I could sample 13 numbers from a Gaussian $N(0, 1)$; the first 10 numbers could be very close to $0$ and the last $3$ could be close to $1$. If you split this data so that your training data contains different patterns than the test data, then it is not guaranteed that you will perform well also on the test data.

Finally, note that, in supervised learning, our ultimate goal when we fit models to labeled data is to learn a function (or a probability distribution over functions), where we often assume that both the training and test data are input-output pairs from our unknown target function, i.e. $y_i = f(x_i)$, where $(x_i, y_i) \in D$ (where $D$ is your labelled dataset), and $f$ is the unknown target function (i.e. the function we want to compute with our model), so, if our model performs well on the training data but not on the test data and we assume that both training and test data come from the same function $f$, there is no way that our model is computing our target function $f$ if it performs badly on the test data.

",2444,,2444,,9/1/2020 12:01,9/1/2020 12:01,,,,1,,,,CC BY-SA 4.0 23380,2,,23369,9/1/2020 13:03,,0,,"

You are handling a very small dataset. The only way to prevent overfitting then is to choose a very restrictive model search space. The simpler the better, and you should prefer models involving some regularization. Even tuning hyperparameters will be hard, so avoid families with many hyperparameters. Neural networks are definitely a no-go, IMHO.

Cross-validation is important in this case, among other things for tuning the regularization hyperparameter. You were right to perform the extra leave-one-out cross-validation. But remember to check the confidence intervals, which will be very broad.

I would suggest to use some prior business knowledge to further reduce the number of candidate predictor variables to 2-4. Then you can use a linear regression or a ridge regression (i.e. linear regression with L2 regularization, amount to be tuned as hyperparameter) using only these variables. If you have reasons to believe that the relation is very non-linear, you'll need to find another family.

Another possibility is to perform a data-driven feature selection (see https://topepo.github.io/caret/feature-selection-overview.html), but it is harder to properly implement and cross-validate. I wouldn't recommend it to a beginner. Remember that this step can also introduce overfit or instability.

",27606,,,,,9/1/2020 13:03,,,,0,,,,CC BY-SA 4.0 23381,1,23391,,9/1/2020 13:06,,0,28,"

I have IMU (Inertial Measurement Unit, 6-axis) sensor data. The sensor is attached to a car, and 7 different drivers drive along the same path. I want to extract features and classify the drivers. Which type of feature extractor do you suggest? I am planning to use PCA and autoencoders, but what do you think about using classical signal properties to classify drivers?

",28129,,,,,9/2/2020 10:48,Which type of feature extractor do you suggest to classify sensor data?,,1,0,0,,,CC BY-SA 4.0 23384,1,23388,,9/1/2020 18:46,,1,206,"

I wrote a Python program for a simple inventory control problem where decision epochs are equally divided (every morning) and there is no lead time for orders (the time between submitting an order until receiving the order). I use the Bellman equations, and solve them by policy iteration (Dynamic Programming).

Now I want to consider lead times (between 1 and 3 days, with equal probabilities). As I understand it, the problem should then be modelled as a semi-Markov decision process, in order to account for the sojourn time in each state. I am confused about the Bellman equations in this scenario: we don't know exactly when the order will be received, so is it necessary to discount the reward for day two or three?

",40719,,2444,,9/4/2020 18:13,9/4/2020 18:13,Bellman optimality equation in semi Markov decision process,,1,0,,,,CC BY-SA 4.0 23385,1,,,9/1/2020 18:59,,0,47,"

I am implementing a simple backpropagation neural network for classifying images. One set of images contains cars, and another set contains buildings (houses). So far, I have used a Sobel edge detector after converting the images to black and white. I need a way to remove the offset (in other words, normalise the input), i.e. where the car or the house is located in the image.

Will taking the discrete cosine transform remove the offset? (So the input to the neural network would be the coefficients of the discrete cosine transform.) To be clear, by offset I mean a pair of values (horizontally and vertically, in pixels) determining where the car or the building is in the 2D image relative to the origin.

",40467,,2444,,9/4/2020 15:55,9/4/2020 15:55,How to normalise image input to backpropogation algorithm?,,0,2,,,,CC BY-SA 4.0 23386,2,,23370,9/1/2020 20:08,,0,,"

You can also understand the logic from the point of view of constrained optimisation. Introduce a Lagrange function: $$ \mathcal{L} = \text{Tr} (w^{T} X X^{T} w) - \lambda w^{T} w $$ And take the derivative with respect to $w$: $$ \frac{\partial \mathcal{L}}{\partial w} = 2 (X X^{T} - \lambda) w $$ For the general case of dimension $\geqslant 1$, $w$ is a set of vectors $w = (w_1 w_2 \ldots w_n)$. This expression vanishes if, for some index $i$, $w_i$ is an eigenvector of $XX^{T}$ with the eigenvalue $\lambda_i$, and all other components are set to zero. In other words, the stationary points are the eigenvectors of $X X^{T}$.

The condition $w^T w = 1$ imposes the orthogonality condition on the eigenvectors. In fact, going back to the initial functional, one sees that $w_i^{T} X X^{T} w_j = \lambda_j w_i^{T} w_j = 0$ for $i \neq j$. Therefore, we finally have: $$ \mathcal{L} =\sum \lambda_i - \lambda $$ which is maximized, for any $k \geq 1$, by taking the $k$ largest eigenvalues.

",38846,,,user9947,9/1/2020 23:15,9/1/2020 23:15,,,,1,,,,CC BY-SA 4.0 23387,2,,23296,9/2/2020 8:23,,0,,"

I don't know about specific game titles, but in terms of research, the University of Malta has a strong team working on the application of machine learning to games. The key figure there used to be Georgios N. Yannakakis, who has published a lot of good papers and even wrote a book about content generation, smart game agents and, imho the most interesting, player modelling.

",38671,,,,,9/2/2020 8:23,,,,1,,,,CC BY-SA 4.0 23388,2,,23384,9/2/2020 9:53,,1,,"

The core problem here is state representation, not estimating return due to delayed response to actions on the original state representation (which is no longer complete for the new problem). If you fix that, then you can solve your problem as a normal MDP, and base calculations on single timesteps. This allows you to continue using dynamic programming to solve it, provided the state space remains small enough.

What needs to change is the state representation and state transitions. Instead of orders resulting in immediate change of stock levels, they become pending changes, and for each item you will have state representation for the amount of current stock, plus amount of stock in each lead time category. State transitions will modify expected lead time for each amount of pending stock as well as amount of current stock.

Your lead time categories will depend on whether the agent knows the lead time immediately after making an order:

  • If lead times are known, track the remaining time until items arrive: 1, 2 or 3 days. These categories will be assigned by the environment following the order, then the lead time will transition down on each day deterministically. A 1-day lead time will transition to in stock, a 2-day lead will transition to 1 day, etc.

  • If lead times are not known, but their probabilities are, track the time since the order was made. This will be 0, 1 or 2 days. Although you don't know when an order will arrive, you know the probabilities for the state transition - e.g. items at 0 days have a 1 in 3 chance of transitioning to "in stock" and a 2 in 3 chance of transitioning to 1 day. (A minimal sketch of this representation is given after this list.)
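A minimal sketch of that second representation (the exact data structure is just an illustration, not part of the original answer): the state holds current stock plus pending orders tagged with their age, and each day every pending order either arrives, with the conditional probability implied by a uniform 1-3 day lead time, or ages by one day.

import random

# State for one item: current stock plus pending orders as (quantity, days_since_ordered)
state = {"in_stock": 5, "pending": [(3, 0), (2, 1)]}

def step_day(state):
    """Advance one day; arrivals drawn with P(arrives today | not arrived yet)."""
    arrival_prob = {0: 1 / 3, 1: 1 / 2, 2: 1.0}
    in_stock, new_pending = state["in_stock"], []
    for quantity, age in state["pending"]:
        if random.random() < arrival_prob[age]:
            in_stock += quantity
        else:
            new_pending.append((quantity, age + 1))
    return {"in_stock": in_stock, "pending": new_pending}

state = step_day(state)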

This makes the state space larger, but is less complex than moving to the Semi MDP representation. For instance, doing it this way means that you can still work with single time step transitions and apply dynamic programming in a standard way.

In general, if the environment has a delayed response to actions, then the best way to maintain Markov trait is to add relevant history of actions taken to the state. The added state variables can either be a direct list of the relevant actions, or something that tracks the logical consequence of those actions.

",1847,,1847,,9/2/2020 12:27,9/2/2020 12:27,,,,8,,,,CC BY-SA 4.0 23391,2,,23381,9/2/2020 10:48,,1,,"

There could be multiple possible ways to extract the features. One would be to use RNNs for a temporal relationship as the input data is time-series.

",31016,,,,,9/2/2020 10:48,,,,0,,,,CC BY-SA 4.0 23393,2,,23375,9/2/2020 11:05,,1,,"

I have not come across music labeling algorithms but upon a google scholar search, I found a couple of papers that aim to do quite the same task.

In general, if you have a labeled dataset then you can take an approach of a general speech recognition model. It should work fine for music labeling too, but you might need to tweak certain parameters.

",31016,,,,,9/2/2020 11:05,,,,0,,,,CC BY-SA 4.0 23394,1,,,9/2/2020 13:57,,0,64,"

I'm trying to whiteboard the different mechanisms behind a convolutional neural network. I have one question regarding the dimension of my volume after using a max pooling layer. Let's suppose I have a volume of dimension (21, 21, #filters). If max pooling divides the height and width of my volume by 2, what will the dimension be after the max pooling layer? If odd numbers are a problem when using a max pooling layer, how do I fix it?

Thank you !

",40730,,40730,,9/3/2020 10:42,1/31/2021 14:05,"What is the dimension of my output of the form (2n + 1, 2n + 1, #filters) after a MaxPooling layer",,1,3,,,,CC BY-SA 4.0 23395,2,,23394,9/2/2020 14:27,,1,,"

The result from applying a max pooling layer with a stride that does not exactly fit to the input will be dependent on the implementation in your library.

Assuming stride 2, and pool size (2,2), in your case the most likely things are:

  • The result will round up, so you will have a feature map layer with dimensions (11, 11, num_filters) although the right edge pixels will be a max over 2 pixels in the input, and the right bottom corner will just be a copy of the right bottom corner in the input (counting from top left as $(0, 0)$)

  • It is an error condition for your library.

If it is not an error, then the max pooling should still perform the task it was intended to. If important features are often at the right or bottom edge, then they may generalise slightly less well, but you probably won't notice a measurable effect.
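As a quick check of the two behaviours (assuming PyTorch; other libraries expose a similar option), the ceil_mode flag controls whether the output size is rounded up or down for a 21x21 input:

import torch
import torch.nn as nn

x = torch.randn(1, 8, 21, 21)                                      # (batch, channels, height, width)
floor_pool = nn.MaxPool2d(kernel_size=2, stride=2)                 # rounds down
ceil_pool = nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=True)  # rounds up

print(floor_pool(x).shape)  # torch.Size([1, 8, 10, 10])
print(ceil_pool(x).shape)   # torch.Size([1, 8, 11, 11])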

You could experiment with different sizes of pooling, different strides, or padding the previous convolutional layer so that the max pooling fits exactly. You will have to do something like this if the library has errors when the pooling does not fit exactly. You can test your experiments using cross validation, to see if there is any measurable difference.

",1847,,1847,,9/3/2020 13:48,9/3/2020 13:48,,,,0,,,,CC BY-SA 4.0 23396,1,,,9/2/2020 14:52,,1,20,"

I see that domain adaptation and transfer learning has been widely adopted in image classification and semantic segmentation analysis. But it's still lacking in providing solutions to enterprise data, for example, solving problems related to business processes?

I want to know what characteristics of the data determine the applicability or non-applicability with respect to generating models for prediction where multiple domains are involved within an enterprise information database?

",40731,,2444,,9/4/2020 14:55,9/4/2020 14:55,Why is domain adaptation and generative modelling for knowledge graphs still not applied widely in enterprise data? What are the challenges?,,0,0,,,,CC BY-SA 4.0 23397,2,,23296,9/2/2020 15:32,,0,,"

Not sure if this fits your requirements of the AI playing with the player, but I still wanted to mention it because to me it is the quintessential AI-based game:

AIDungeon, which is a text based story-telling game, where you can do literally anything. It's using GPT-2/GPT-3(paid) and has blown my mind several times. You've probably heard of it, but in case you haven't give it a try, only takes a couple of minutes to see what it can do.

",40732,,,,,9/2/2020 15:32,,,,2,,,,CC BY-SA 4.0 23404,1,,,9/2/2020 22:20,,0,125,"

What I want to achieve is this: If my desired outputs are [1, 2, 3, 4] I would rather have my network produce this output:

[0.99, 2.01, 999, 4.01]

than say this:

[0.94, 1.88, 3.12, 4.1]

So I'd rather have a few very accurate outputs and the rest completely off, than have them all be decent but no more than that. My question is, is there a known way to do this? If not, would it make sense to remove the inputs that produce poor outputs, and redo the learning phase?

",38668,,,,,9/3/2020 19:25,Is there a way to make my neural network discard inputs with bad results from learning?,,2,0,,,,CC BY-SA 4.0 23407,2,,23362,9/2/2020 23:37,,4,,"

Auto-encoders are widely used and maybe even more used than GANs (in fact, auto-encoders are older than GANs, although the main general idea behind GANs is quite old). For example, auto-encoders are used in World Models, for drug design (e.g. see this paper) and many other tasks that involve data compression or generation.

So, if we train autoencoders, for example, on cats vs dogs dataset, and then cut off the decoder part and then input random noise vector, wouldn't it do the same job?

Yes, the encoder part of the auto-encoder produces a vector that represents the input in a compressed form. You may also be interested in denoising auto-encoders, but there are other variations, such as convolutional auto-encoders or variational auto-encoders.

",2444,,2444,,9/27/2020 19:21,9/27/2020 19:21,,,,2,,,,CC BY-SA 4.0 23408,2,,23404,9/3/2020 3:34,,0,,"

Use a genetic algorithm. Run, say, 25 neural networks at once and choose the most successful one. This method is similar to evolution, which is why it is very effective. I created a model like this with training data of a similar size to yours, and it reached an overall error rate of 0.06% in a second. Don't get rid of nodes. Instead, eliminate the bad networks. However, this doesn't produce extremely high error rates, if that is what you want.

",40622,,40622,,9/3/2020 4:06,9/3/2020 4:06,,,,3,,,,CC BY-SA 4.0 23409,1,23411,,9/3/2020 9:00,,1,69,"

I am very new to the field of AI so please bear with me. Say there is a dice with three sides, -1,0 and 1, and I want to predict which side it lands on (so only one output is needed I guess). The input variables are numerous but not that many, maybe 7-10.

These input variables are certain formulae that involve calculations to do with wind, time, angle, momentum etc, and each formula returns which side it thinks the dice will like roll. Let's say that intuitively, by looking at these variables, I can make a very good guess at which side the dice lands on. If for example 6 out of 7 input variables say it likely that the dice will land on 1 but the 7th input suggests that it will land on 0, I would guess it lands on 1. As a human, I'm essentially consulting these inputs as a kind of "brains trust", and I act as a judge to make the final decision based on the brains trust. Of course in that example, my logic as a judge was simply majority rules, but what if some other more complicated non-linear method of judging was needed?

I essentially want my neural network to take this role as a judge. I have read that feedforward nns have limitations regarding control flow and loops, so I'm not sure if that structure will be appropriate. I'm not sure if recurrent nn will be appropriate either as I don't care what the previous inputs were.

Thanks

",40744,,,,,9/3/2020 10:21,What's a good neural network for this problem?,,1,0,,,,CC BY-SA 4.0 23410,2,,23371,9/3/2020 9:33,,1,,"

Old Latin is different from classical Latin. In addition, the written language contains words that are not spoken, and the word order is often reversed relative to the intended meaning.

",40746,,,,,9/3/2020 9:33,,,,0,,,,CC BY-SA 4.0 23411,2,,23409,9/3/2020 10:21,,4,,"

A simple feed-forward neural network with at least one hidden layer would suffice for your problem, and can deal with arbitrary non-linear relationships between input and output. If you expect the relationships to be highly non-linear, then additional layers might be required, but from your description of the problem, I would be surprised if you needed more than a few layers, and a relatively small network.

However, I note that:

The input variables are numerous but not that many, maybe 7-10.

This gives you $3^{10} = 59049$ possible inputs. That's not much in terms of amount of data needed for ML statistical models. Assuming that even the best predictions are still probabilistic, then you may only need a million or so examples to create a reasonably accurate lookup table, not needing a neural network at all.

The strength of a neural network is to be able to generalise well from fewer examples than that. Of course, this is not perfect, but it would be able to do things such as notice that if inputs 1, 2 and 3 agree then that is always the most likely answer. If that turns out to be true (and not an accident of having low numbers of samples), then the NN could learn that useful pattern using far less data than a table-based approach.

I have read that feedforward nns have limitations regarding control flow and loops, so I'm not sure if that structure will be appropriate.

This is true, but does not impact your situation, because there is no control flow or loops involved. You have described a simple function. Whilst you or I might inspect the data and look backwards and forwards across it before coming to a decision, a neural network approximating a function does not need to do that, and in simple cases there is usually no benefit to doing so - a statistical summary of the correct mapping from input to output is more than sufficient and likely the best that can be done.

I'm not sure if recurrent nn will be appropriate either as I don't care what the previous inputs were.

As all your inputs represent the same kind of thing, you could implement this as an RNN with a single input, -1, 0 or +1, always feeding in the predictions by type in the same order. It might resemble how you are thinking about the problem as a human (at least a better analogy than the direct statistical match in a feed-forward network), especially if you implemented a variant of attention. However, I don't think there would be any benefit to that in terms of improved accuracy, and it would be a significant challenge to build if you are new to AI.

",1847,,,,,9/3/2020 10:21,,,,6,,,,CC BY-SA 4.0 23412,1,,,9/3/2020 15:37,,1,187,"

I have been studying computer vision for the past 3 months. I have come across the object identification problem, where, given an image, CV would identify various parts in the image.

If I give an image and the coordinates of a rectangle, can CV identify the names of the parts within that rectangle? For example, can I train a model to identify the parts in the image below (mountain and river, in this case)? The model should not identify other parts, like flowers, sky, etc., as they are outside the rectangle.

I tried searching but could not find similar problems. Can anyone give me a direction to solve this problem?

",40754,,2444,,9/4/2020 18:49,10/11/2022 18:05,Can we identify only the objects in specific parts of an image with computer vision?,,1,1,0,,,CC BY-SA 4.0 23413,2,,23377,9/3/2020 17:13,,0,,"

Well, here's what I ended up with of late :

My model is completely psychological (or philosophical - no idea which to choose) and hence is theoretically possible but needn't be so mathematically (I am not sure of that part, so I leave it to you to verify that in your upcoming answers).

Therefore, there must be some changes in the terminology as well as procedures:

  1. Instead of 'gravitons', the nodes can preferably be called 'thought centers', taking them to be the centers of the model's perception of what class the object may belong to.
  2. Editing weights randomly while training (NB: I decided to edit the training algorithm) and using them randomly during the transformation of the input to the output - which I call 'shuffle' (the number of times a weight is used needn't be taken into consideration - 'freedom of thought granted to AI!') - is something which makes the model really vague when it comes to the flow of data through it, but I still prefer it, as it is basically a guesswork machine, and the guess choices can be limited by using at most 2 'thought centers' (4 choices - if you go for a single node, it will always yield $\varphi(wx)$ for a constant input $x$ [$w$ = weight, $\varphi(x) = \frac{1}{1 + e^{-x}}$]).
  3. The code I posted here was a bit faulty, so I decided to rebuild it. The code will be posted on my GitHub profile, and if any of you would like to try it, you may download it from there (I am not good at making Python packages and I haven't made one till now; I'll be putting it up there in the master branch itself. I will tell you of the release as an edit in this answer). I built that code from scratch, integrated it into a class (Insane_Mind) and tried it on the iris dataset (not the testing part, but the training part, at this time of development).

Again, if anything seems ambiguous, please tell me.

",40583,,,,,9/3/2020 17:13,,,,17,,,,CC BY-SA 4.0 23415,1,,,9/3/2020 19:07,,2,161,"

I am attempting to solve a timetabling problem using deep Q learning. It could be thought of as a resource allocation problem to obtain some certificate of 'optimality'. However, how to define and access the action space is alluding me. Any help, thoughts, or direction towards the literature would be appreciated. Thanks!

The problem is entirely deterministic; the pair of the current state and action is isomorphic to the resulting state. The Q network is therefore being set up to approximate a Q value (a scalar) for the resulting state, i.e. for the current state and proposed action.

I have so far assumed that the action space should be randomly sampled during training to generate some approximation of the Q table. This seems highly inefficient.

I am open to reinterpretations of the action space. The problem involves a set of n individuals and at any given state a maximum of b can be 'active' and, of the remaining 'inactive' individuals, f can be made 'active' by an action. An action will need to involve making some reallocation to active individuals made up of those who are already active and the other f available people.

To give you a sense of the numbers that I will ultimately use, $n=17$, $b=7$, and $f$ will hover somewhere around 7-10 (but this depends on the allocations). At first this sounds tractable, but a (very) rough approximation of the cardinality of the set of actions is 17 choose 7 = 19448.

Does anyone know a more efficient way to encode this action space? If not, is there a more sensible way to sample it (as is my current plan) than uniformly extracting actions from the space? Also when sampling the space is it valid to enforce some cap on the number of samples drawn (say 500). Please feel free to ask for further clarification.

",40758,,,,,9/3/2020 19:07,Handling a Large Discrete Action Space in Deep Q Learning,,0,1,,,,CC BY-SA 4.0 23417,2,,23404,9/3/2020 19:25,,3,,"

I assume [1, 2, 3, 4] are the desired outputs for different examples in a regression task. It sounds like you need a different loss function. From your description, it seems you don't care how big the error is once it's bigger than some value. Try the Huber loss (in PyTorch and TensorFlow). Examples that are far from the expected value won't produce big gradients (:
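A minimal PyTorch example (assuming PyTorch; SmoothL1Loss is its Huber-style loss), showing that the huge error on the third output dominates MSE far more than the Huber-style loss:

import torch
import torch.nn as nn

predictions = torch.tensor([0.99, 2.01, 999.0, 4.01])
targets = torch.tensor([1.0, 2.0, 3.0, 4.0])

mse = nn.MSELoss()(predictions, targets)         # quadratic in the error, blows up on the outlier
huber = nn.SmoothL1Loss()(predictions, targets)  # linear beyond a threshold, much less affected
print(mse.item(), huber.item())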

",40597,,,,,9/3/2020 19:25,,,,4,,,,CC BY-SA 4.0 23418,1,,,9/3/2020 19:40,,4,159,"

Does anyone know of research involving the GPT models to learn not only regular texts, but also learn from physics books with the equations written in latex format?

My intuition is that the model might learn the rules relating equations and deductions, as they can learn statistically what correlates with what. I understand that the results can also be a little nonsensical, like the sometimes surreal paragraphs written by these models.

Have there been any attempts to do this?

",30433,,5763,,9/6/2020 1:27,9/6/2020 1:27,Can in principle GPT language models learn physics?,,0,2,,,,CC BY-SA 4.0 23420,1,,,9/4/2020 0:32,,1,37,"

I've been working on the Punctuation Restoration Problem for my Master's Thesis; however, being primarily a programmer at heart, I wish I could use some of my NLP skills to solve issues related to programming in general.

I know Microsoft does lots of research in NLP and I think after they acquired Github, they have an immense dataset to work with for any problems related to programming they want to tackle. Most recently I think they did a great job on their new python suggestion extension on VSCode.

So, could you suggest to me some issues you think are interesting research topics? This is something that I would like to work with, but I have no idea where to start yet.

",22621,,,,,9/4/2020 0:32,What are some programming related topics that can be solved using NLP?,,0,2,,,,CC BY-SA 4.0 23424,1,,,9/4/2020 14:08,,1,61,"

Training neural networks takes a while. My question is, how efficient is a neural network that is completely trained (assuming it's not a model that is constantly learning)?

I understand that this is a vague and simply difficult question to answer, so let me be more specific: Imagine we have a trained Deep Neural Net, and even to be more specific it's a GPT-3 model.

Now, we put the whole thing on a Raspberry Pi. No internet access. The whole process takes place locally.

  • Will it run at all? Will it have enough RAM?

  • Now let's say we give it some text to analyze. Then we ask it a question. Will it take milliseconds to answer? Or is it going to be in the seconds? Minutes?

What I'm trying to understand is: once a model is trained, is it fairly performant, because it's essentially just a bunch of very simple function calls on top of each other, or is it very heavy to execute (perhaps due to the sheer number of these simple function calls)?

Please correct any misunderstanding about how the whole process works if you spot any. Thank you.

",40771,,2444,,9/7/2020 10:28,9/7/2020 10:28,What is the efficiency of trained neural networks?,,0,5,,,,CC BY-SA 4.0 23425,2,,5318,9/4/2020 17:06,,1,,"

Let's start with understanding what over-fitting means. Your model is over-fitting if during training your training loss continues to decrease but (in the later epochs) your validation loss begins to increase. That means the model can not generalize well to images it has not previously encountered.

Naturally, you do not want this situation. What you want is a high training accuracy and a very low validation loss, which implies a high validation accuracy.

The first task is to ensure that your model gets a high training accuracy. Once that is accomplished, you can work on getting a low validation loss.

If your model is overfitting, there are several ways to mitigate the problem. First, start out with a simple model. If you have a lot of dense layers with a lot of neurons, reduce the hidden dense layers to a minimum. Typically, just leave the top dense layer used for final classification. Then see how the model trains. If it trains well, look at the validation loss and see if it is decreasing in the later epochs. If the model does not train well, add a dense layer followed by a dropout layer. Use the level of dropout to adjust for overfitting. If it still trains poorly, increase the number of neurons and train again. If that fails, add another dense hidden layer with fewer neurons than the previous layer, followed by another dropout layer.

Another method to combat overfitting is to add regularizers to the dense layers. Documentation for that is here.
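
As a minimal Keras sketch of the kind of classification head described above (the input shape, layer sizes, dropout rate and L2 factor are arbitrary placeholders):

from tensorflow.keras import layers, models, regularizers

model = models.Sequential([
    layers.Input(shape=(2048,)),                             # placeholder: features from the base model
    layers.Dense(256, activation='relu',
                 kernel_regularizer=regularizers.l2(0.01)),  # L2 regularizer on the dense layer
    layers.Dropout(0.5),                                     # adjust the dropout level to control overfitting
    layers.Dense(10, activation='softmax')                   # top dense layer for final classification
])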

",33976,,2444,,10/4/2020 20:25,10/4/2020 20:25,,,,0,,,,CC BY-SA 4.0 23427,2,,11375,9/4/2020 21:22,,4,,"

The same book Reinforcement learning: an introduction (2nd edition, 2018) by Sutton and Barto has a section, 1.7 Early History of Reinforcement Learning, that describes what optimal control is and how it is related to reinforcement learning. I will quote the most relevant part to answer your question, but you should read all that section to have a full understanding of the relationship between optimal control and reinforcement learning.

The term "optimal control" came into use in the late 1950s to describe the problem of designing a controller to minimize or maximize a measure of a dynamical system's behavior over time. One of the approaches to this problem was developed in the mid-1950s by Richard Bellman and others through extending a nineteenth-century theory of Hamilton and Jacobi. This approach uses the concepts of a dynamical system's state and of a value function, or "optimal return function", to define a functional equation, now often called the Bellman equation. The class of methods for solving optimal control problems by solving this equation came to be known as dynamic programming (Bellman, 1957a). Bellman (1957b) also introduced the discrete stochastic version of the optimal control problem known as Markov decision processes (MDPs). Ronald Howard (1960) devised the policy iteration method for MDPs. All of these are essential elements underlying the theory and algorithms of modern reinforcement learning.

To answer your specific questions.

In optimal control we have, controllers, sensors, actuators, plants, etc, as elements. Are these different names for similar elements in deep RL? For example, would an optimal control plant be called an environment in deep RL?

Yes. In reinforcement learning (see the first footnote of the cited book on page 48), the term control is often used as a synonym for action. Similarly, the term controller (or decision maker) is used as a synonym for agent (and sometimes also a synonym for policy, given that the policy usually defines and controls the agent, although the concept of the agent is more abstract and we could associate more than one policy with the same agent). The term environment is also used as a synonym for controlled system (or plant).

See also section 38.8 Notes (page 530) of the book Bandit Algorithms by Csaba Szepesvari and Tor Lattimore.

",2444,,2444,,2/11/2021 19:18,2/11/2021 19:18,,,,2,,,,CC BY-SA 4.0 23428,1,,,9/4/2020 21:47,,2,42,"

I have not found a lot of information on this, but I am wondering if there is a standard way to apply the outputs of a Bert model being used for sentiment analysis, and connect them back to the initial tokenized string of words, to gain an understanding of which words impacted the outcome of the sentiment most.

For example, the string "this coffee tastes bad" outputs a negative sentiment. Is it possible to analyze the output of the hidden layers to then tie those results back to each token to gain an understanding of which words in the sentence had the most influence on the negative sentiment?

The below chart is a result of my attempt to explore this, however I am not sure it makes sense and I do not think I am interpreting it correctly. I am basically taking the outputs of the last hidden layer, which in this case has shape (1, 7, 768), i.e. [CLS] + 5 word tokens + [SEP], looping through each token, summing up its 768 values and computing the average. The resulting totals are shown in the below graph.
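
For concreteness, here is a minimal sketch of that procedure (using the HuggingFace transformers library; a generic bert-base-uncased checkpoint stands in for my fine-tuned sentiment model):

import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("this coffee tastes bad", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

hidden = outputs.last_hidden_state                 # shape (1, num_tokens, 768)
per_token_score = hidden.mean(dim=-1)              # average of the 768 values for each token
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, score in zip(tokens, per_token_score[0]):
    print(token, float(score))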

Any thoughts on whether there is any meaning to this, or whether I am way off in my approach, would be appreciated. It might be my misunderstanding of the actual output values themselves.

Hopefully this is enough to give someone the idea of what I am trying to do and how each word can be connected to the positive or negative associations that contributed to the final classification.

",40782,,40782,,9/4/2020 23:50,9/4/2020 23:50,Bert for Sentiment Analysis - Connecting final output back to the input,,0,3,,,,CC BY-SA 4.0 23429,1,23430,,9/5/2020 7:56,,1,535,"

In Example 4.3: Gambler's Problem of Sutton and Barto's book, whose code is given here, the value function array is initialized as np.zeros(states), where states $\in[0,100]$, and the optimal value function returned after solving with value iteration is the same as the one given in the book. However, if we only change the initialization of the value function in the code, say to np.ones(states), then the returned optimal value function changes too. This means that value iteration converges in both cases, but to different "optimal" value functions, yet two different optimal value functions are impossible in an MDP. So why is the value iteration algorithm not converging to the optimal value function?

PS: If we change the initialization of the value function array to -1*np.random.rand(states), then the converged "optimal" value function also contains negative numbers, which should be impossible since rewards >= 0; hence value iteration fails to converge to the optimal value function.

",37611,,40583,,9/6/2020 11:45,9/6/2020 11:45,Value Iteration failing to converge to optimal value function in Sutton-Barto's Gambler problem,,1,2,,,,CC BY-SA 4.0 23430,2,,23429,9/5/2020 8:33,,3,,"

So, naturally, if you've observed something that contradicts the theoretical properties of Value Iteration, something's wrong, right?

Well, the code you've linked, as it is, is fine. It works as intended when all the values are initialized to zero. HOWEVER, my guess is that you're the one introducing an (admittedly very subtle) error. I think you're changing this:

state_value = np.zeros(GOAL + 1)
state_value[GOAL] = 1.0

for this:

state_value = np.ones(GOAL + 1)
state_value[GOAL] = 1.0

So, you see, this is wrong. And the reason why it's wrong is that both GOAL (which is 100 in the example) and 0 must have immutable, fixed values, because they're terminal states, and their values are not subject to estimation. The value for GOAL is 1.0, as you can see in the original code. If you want initial values other than 0, then you must do this:

state_value = np.ones(GOAL + 1)
state_value[GOAL] = 1.0
state_value[0] = 0

In the first case (changing the initial values to 1) what you were seeing was, essentially, an "I don't care policy". Whatever you do, you'll end with a value of 1. In the second case, with the random values, you saw the classic effects of "garbage in, garbage out".

",37359,,,,,9/5/2020 8:33,,,,0,,,,CC BY-SA 4.0 23433,1,,,9/5/2020 12:32,,0,138,"

I know we have developed some mathematical tools to understand deep neural networks, such as gradient descent for optimization and basic calculus. Recently, I encountered an arXiv paper that describes higher mathematics for neural networks, such as functional analysis. For example, I remember the universal approximation theorem being proved with the Hahn-Banach theorem, but I lost the link to that article, so I need to find similar papers or articles to develop my understanding of neural networks mathematically (like with functional analysis; in short, I need to learn more advanced math for research). Can you suggest some books, arXiv papers, articles or any other sources that describe the mathematics of deep neural networks?

",36107,,2444,,9/5/2020 21:26,9/5/2020 21:26,What are the mathematical prerequisites needed to understand research papers on neural networks?,,1,11,,9/5/2020 19:45,,CC BY-SA 4.0 23434,2,,23433,9/5/2020 15:24,,1,,"

Knowing you want to focus on the theory, I think a good choice is the Deep Learning Book by Ian Goodfellow et al., which is publicly available. It has three main parts. In the first one, the authors present the math and statistics tools needed to understand the following parts. In the second part, they explain the current state of the art in deep learning, and in the last part more advanced topics are introduced. The authors also provide references that point to extra resources for diving deeper into the theory.

On the other hand, I strongly recommend Google Scholar; there you can find plenty of articles and papers on the current state-of-the-art techniques in the field. You can also find papers related to what you mentioned about the Hahn-Banach theorem.

",40612,,,,,9/5/2020 15:24,,,,0,,,,CC BY-SA 4.0 23435,1,,,9/5/2020 16:10,,1,137,"

I have been reading: Reinforcement Learning: An Introduction by Sutton and Barto. I admit it's a good read for learning RL whereas it's more theoretical with detailed algorithms.

Now, I want something more programming oriented resource(s) maybe a course, book, etc. I have been exploring Kaggle, Open-source RL projects.

I need this to learn and grasp a deeper understanding of RL from the perspective of a developer i.e optimized way of writing code, explanation about using the latest RL libraries, cloud services, etc.

",40485,,2444,,4/13/2022 11:54,4/13/2022 11:54,What are some programming-oriented resources for reinforcement learning?,,2,3,,,,CC BY-SA 4.0 23438,2,,23435,9/5/2020 19:33,,2,,"

Arthur Juliani has some interesting Medium articles on reinforcement learning with TensorFlow backed up with code on GitHub.

  1. Part 0 — Q-Learning Agents
  2. Part 1 — Two-Armed Bandit
  3. Part 1.5 — Contextual Bandits
  4. Part 2 — Policy-Based Agents
  5. Part 3 — Model-Based RL
  6. Part 4 — Deep Q-Networks and Beyond
  7. Part 5 — Visualizing an Agent’s Thoughts and Actions
  8. Part 6 — Partial Observability and Deep Recurrent Q-Networks
  9. Part 7 — Action-Selection Strategies for Exploration
  10. Part 8 — Asynchronous Actor-Critic Agents (A3C)

As nbro pointed out Denny Britz has a good repository: https://github.com/dennybritz/reinforcement-learning

As you have seen with Sutton & Barto's book the code is mostly in Lisp. Shangtong Zhang has replicated the code in Python: https://github.com/ShangtongZhang/reinforcement-learning-an-introduction

Sudharsan Ravichandiran wrote the book "Hands-On Reinforcement Learning with Python", which uses OpenAI Gym and TensorFlow. You can find more information on the book along with its code repository at Hands-On Reinforcement Learning with Python

",5763,,5763,,9/6/2020 0:24,9/6/2020 0:24,,,,1,,,,CC BY-SA 4.0 23440,1,,,9/6/2020 3:45,,2,249,"

Above is the algorithm for Policy Iteration from Sutton's RL book. So, step 2 actually looks like value iteration, and then, at step 3 (policy improvement), if the policy isn't stable it goes back to step 2.

I don't really understand this: it seems like, if you do step 2 to within a small $\Delta$, then your estimate of the value function should be pretty close to optimal for each state.

So, why would you need to visit it again after policy improvement?

It seems like policy improvement only improves the policy function, but that doesn't affect the value function, so I'm not sure why you'd need to go back to step 2 if the policy isn't stable.

",30885,,2444,,9/7/2020 10:18,9/8/2020 8:04,Why do we need to go back to policy evaluation after policy improvement if the policy is not stable?,,1,0,,,,CC BY-SA 4.0 23441,1,,,9/6/2020 6:12,,1,133,"

Let's take a 32 x 32 x 3 NumPy array and convolve it with 10 filters of size 2 x 2 x 3 with stride 2 to produce feature maps of volume 16 x 16 x 10. The total number of operations is 16 * 16 * 10 * 2 * 2 * 2 * 3 = 61440. Now, let's take an input array of length 3072 (flattening the 32 * 32 * 3 array) and dot it with a weight matrix of size 500 x 3072. The total number of operations is 500 * 3072 * 2 = 3072000. Yet the convolution takes 4-5 times longer than np.dot(w, x), even though its number of operations is much smaller.

Here's my code for the convolution operation:

for i in range(16):
    for j in range(16):
        for k in range(10):
            v[i, j, k] = np.sum(x[2 * i:2 * i + 2, 2 * j:2 * j + 2] * kernels[k]) 

Is np.dot(w, x) optimized or something? Or are my calculations wrong? Sorry if this is a silly question...

",38343,,,,,9/6/2020 20:36,Why does CNN forward pass take longer compared to MLP forward pass?,,1,1,,9/7/2020 13:22,,CC BY-SA 4.0 23442,1,23445,,9/6/2020 6:21,,0,50,"

The topologies (or architectures) of the neural networks that I have seen so far are only 2-dimensional. So, are there neural networks whose topology is 3-dimensional (i.e. they have a width, height, and depth)?

",40583,,2444,,9/7/2020 13:55,9/7/2020 13:55,Are there neural networks with 3-dimensional topologies?,,1,0,,,,CC BY-SA 4.0 23443,2,,11643,9/6/2020 7:10,,2,,"

The Flatten layer has no learnable parameters in itself (the operation it performs is fully defined by construction); still, it has to propagate the gradient to the previous layers.

In general, the Flatten operation is well-posed: whatever the input shape is, you know what the output shape is.

When you backpropagate, you are supposed to do an "Unflatten", which maps a flattened tensor into a tensor of a given shape, and you know what that specific shape is from the forward pass, so it is also a well-posed operation.

More formally

Say you have Img1 as the input of your Flatten layer:

$$ \begin{pmatrix} f_{1,1}(x; w_{1,1}) & f_{1,2}(x; w_{1,2}) \\ f_{2,1}(x; w_{2,1}) & f_{2,2}(x; w_{2,2}) \end{pmatrix} $$

So, in the output you have

$$ \begin{pmatrix} f_{1,1}(x; w_{1,1}) & f_{1,2}(x; w_{1,2}) & f_{2,1}(x; w_{2,1}) & f_{2,2}(x; w_{2,2}) \end{pmatrix} $$

When you compute the gradient you have

$$ \frac{df_{i,j}(x; w_{i,j})}{dw_{i,j}} $$

and everything in the same position as in the forward pass, so the unflatten maps from the (1, 4) tensor to the (2, 2) tensor.
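
A minimal NumPy sketch of this idea (shapes chosen to match the (2, 2) example above):

import numpy as np

x = np.arange(4.0).reshape(2, 2)      # output of the previous layer, shape (2, 2)

# forward pass: Flatten just reshapes, it has no learnable parameters
flat = x.reshape(1, -1)               # shape (1, 4)

# backward pass: the incoming gradient has the flattened shape ...
grad_flat = np.array([[0.1, 0.2, 0.3, 0.4]])

# ... and the "Unflatten" maps it back to the shape remembered from the forward pass
grad_x = grad_flat.reshape(x.shape)   # shape (2, 2), each entry back in its original position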

",1963,,2444,,9/10/2020 21:38,9/10/2020 21:38,,,,0,,,,CC BY-SA 4.0 23444,2,,23440,9/6/2020 9:13,,3,,"

There is a difference between accurate value function estimates, and optimal value functions. An optimal value function is more specifically the value function of an optimal policy.

Value functions are always specific to some policy, which is why you will often see the subscript $\pi$ in e.g. $v_{\pi}(s)$ when there is a defined policy.

The policy evaluation step (step 2) in policy iteration converges to an accurate value function estimate for whatever the current policy is. In general this will not be an optimal value function, except on the last time that step 2 is used, and there is no change to the policy in the next stage policy improvement (step 3).

The policy improvement stage (step 3) can only usefully be run once for any value function estimate. The policy is updated to be greedy with respect to the value function from step 2 - that will always give the same results from the same value function estimate. If the value function is accurate then this new policy is guaranteed to be as good or better than the previous policy. Once step 3 is done, further improvements are only possible if the new policy is accurately evaluated.

Comparison with value iteration

The difference with value iteration is that it never accurately evaluates any interim policies. In value iteration, the implied policy changes every time the maximising action changes due to new value estimates. In the later stages, when the optimal policy has been found and is stable, then the value function will converge to the optimal value function. In value iteration, most of the interim value functions are not accurate, but when it becomes accurate it will also be the optimal value function.

",1847,,1847,,9/8/2020 8:04,9/8/2020 8:04,,,,0,,,,CC BY-SA 4.0 23445,2,,23442,9/6/2020 10:17,,1,,"

Yes. Convolutional neural networks are usually 3-dimensional. In fact, they usually deal with images (e.g. RGB images), which can already be 3-dimensional.
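
For instance, in Keras each filter of a 2D convolution already spans the full depth of the input volume, and Conv3D layers exist for data with a genuine third spatial dimension. A minimal sketch (the shapes below are arbitrary placeholders):

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),   # an RGB image is already a 3-dimensional volume
    layers.Conv2D(16, kernel_size=3),  # each of the 16 filters is itself 3-dimensional: 3 x 3 x 3
])

# for data with a genuine third spatial dimension (e.g. video frames or volumetric scans)
volumetric = models.Sequential([
    layers.Input(shape=(16, 32, 32, 1)),
    layers.Conv3D(8, kernel_size=3),
])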

",2444,,,,,9/6/2020 10:17,,,,0,,,,CC BY-SA 4.0 23446,1,,,9/6/2020 10:22,,1,301,"

I've read through the Alpha(Go)Zero paper and there is only one thing I don't understand.

The paper on page 1 states:

The MCTS search outputs probabilities π of playing each move. These search probabilities usually select much stronger moves than the raw move probabilities p of the neural network fθ(s);

My question: Why is this the case? Why is $\pi$ usually better than $p$? I think I can imagine why it's the case but I'm looking for more insight.

what $\pi$ and $p$ are:

Say we are in state $s_1$. We have a network that takes the state and produces $p_1$ (probabilities for actions) and $v_1$ (a value for the state). We then run MCTS from this state and extract a policy $\pi(a|s_1) = \frac{N(s_1,a)^{1/\tau}}{\sum_b N(s_1,b)^{1/\tau}}$. The paper is saying that $\pi(-|s_1)$ is usually better than $p_1$.

",40603,,40603,,9/8/2020 13:04,6/26/2022 17:04,"In Alpha(Go)Zero, why is the policy extracted from MCTS better than the network one?",,2,0,,,,CC BY-SA 4.0 23447,1,,,9/6/2020 10:35,,1,92,"

Inverse Reinforcement Learning based on GAIL and GAN-Guided Cost Learning (GAN-GCL) uses a discriminator to classify between expert demos and policy-generated samples. Adversarial IRL, built upon GAN-GCL, has its discriminator $D_{\theta, \phi}$ as a function of a state-only reward approximator $f_{\theta, \phi}$.

$$ D_{\theta, \phi}\left(s, a, s^{\prime}\right)=\frac{\exp \left\{f_{\theta, \phi}\left(s, a, s^{\prime}\right)\right\}}{\exp \left\{f_{\theta, \phi}\left(s, a, s^{\prime}\right)\right\}+\pi(a \mid s)}, $$

where $f_{\theta,\phi}$ is expressed as:

$$f_{\theta,\phi}(s, a, s') = g_{\theta}(s) + \gamma h_{\phi}(s') - h_{\phi}(s).$$

The optimal $g^*(s)$ tries to recover the optimal reward function $r^*(s)$, while $h(s)$ tries to recover the optimal value function $V^*(s)$, which makes $f_{\theta,\phi}$ interpretable as the advantage.

My question comes from the network architecture used for $h(s)$ in the original paper.

... we use a 2-layer ReLU network for the shaping term h. For the policy, we use a two-layer (32 units) ReLU gaussian policy.

What is meant by the quoted text in bold? My interpretation of that text (shown below) doesn't seem viable:

h = nn.Sequential(nn.ReLU(), nn.ReLU())
",40671,,2444,,9/7/2020 9:54,11/9/2020 15:43,Can entire neural networks be composed of only activation functions?,,1,1,,,,CC BY-SA 4.0 23449,1,23684,,9/6/2020 13:31,,1,223,"

As a software engineer, I am searching for an existing solution or, if none exists, willing to create one that will be able to process texts (e.g. news from online media) to extract/paraphrase dry facts from them, leaving all opinions, analysis, speculations, humor, etc., behind.

If no such solution exists, what would be a good way to start creating it (considering that I have zero experience in AI/machine learning)?

It would be no problem to manually create a set of examples (pairs of original news + dry facts extracted), but is that basically what it takes? I doubt so.

(This knowledge domain is already huge, so which parts of it need to be learned first and foremost to figure out how to achieve the goal?)

",40800,,2444,,1/15/2021 0:46,1/15/2021 0:46,How could facts be distinguished from opinions?,,1,0,,,,CC BY-SA 4.0 23450,1,,,9/6/2020 13:36,,1,32,"

I was watching a Youtube video in which the problem of trying to predict the last word in a sentence was posed. The sentence was "I took my cat for a" and the last word was "walk". The lecturer in this video stated that whilst sentences (the sequence) can be of varying lengths, if we take a really large fixed window we can model the whole sentence. In essence she said that we can convert any sentence into a fixed size vector and still preserve the order of the sentence (sequence). I was then wondering why do we need RNNs if we can just use FFNNs? Also does a fixed size vector really preserve sequential order information?

Thank You for any help!

",40801,,,,,9/6/2020 13:36,What's the difference between RNNs and Feed Forward Neural Networks if a fixed size vector can preserve sequential information?,,0,0,,,,CC BY-SA 4.0 23452,2,,23441,9/6/2020 20:36,,0,,"

From the NumPy Linear algebra documentation:

The NumPy linear algebra functions rely on BLAS and LAPACK to provide efficient low level implementations of standard linear algebra algorithms. Those libraries may be provided by NumPy itself using C versions of a subset of their reference implementations but, when possible, highly optimized libraries that take advantage of specialized processor functionality are preferred.

In your case, np.dot() is a matrix-vector multiplication which internally calls such a highly optimized BLAS routine. Those libraries are implemented in low-level languages like Fortran or C; they have existed for many years now and are still unbeaten in terms of speed.
Your naive implementation of the convolution operation in Python, on the other hand, is not optimized for execution speed and will be executed by the Python interpreter (which is much slower than the compiled C functions).

You could try to replace your manual convolution with SciPy's equivalent, scipy.signal.convolve2d(), to take advantage of optimized libraries as well and get a speed-up.
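
As a rough sketch of that idea applied to the loop from the question (assumptions: correlate2d is used since the loop actually computes a cross-correlation, and the stride of 2 is emulated by slicing the stride-1 result):

import numpy as np
from scipy.signal import correlate2d

x = np.random.rand(32, 32, 3)           # input volume
kernels = np.random.rand(10, 2, 2, 3)   # 10 filters of size 2x2x3

v = np.zeros((16, 16, 10))
for k in range(10):
    # stride-1 cross-correlation per channel, summed over the 3 channels ...
    full = sum(correlate2d(x[:, :, c], kernels[k, :, :, c], mode='valid') for c in range(3))
    # ... then keep every second row/column to emulate stride 2
    v[:, :, k] = full[::2, ::2]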

",37120,,,,,9/6/2020 20:36,,,,0,,,,CC BY-SA 4.0 23453,1,,,9/7/2020 6:22,,2,181,"

There is emerging effort for Third Wave Artificial Intelligence (Artificial General Intelligence) (http://hlc.doc.ic.ac.uk/3AI_HLC_2019.html and https://www.darpa.mil/work-with-us/ai-next-campaign) and it covers the open-ended life-long machine learning as well. Currently machine learning agents are being run on quite immutable and simple games like Atari and Go. But what about the efforts to build and run machine learning adaptable agents (or even teams of them) in the virtual worlds (like Second Life) which are complex, expanding and in which the interaction with human representatives happens? Are there efforts to do that?

I have found some articles from 2005-2009, but Google gives no recent literature on queries like Reinforcement Learning Second Life etc.

So - maybe there are some efforts to do this, but I can not just Google it.

My question is: are there references for machine learning agents for virtual worlds and, if not, what are the obstacles to building them? There seem to be few risks or costs in building them for virtual worlds.

https://meta-guide.com/embodiment/secondlife-npc-artificial-intelligence is some bibliography and it is lacking recent research, for example.

",8332,,2444,,6/30/2022 22:50,7/31/2022 10:03,Use of virtual worlds (e.g. Second Life) for training Artificial General Intelligence agents?,,1,4,,,,CC BY-SA 4.0 23454,1,,,9/7/2020 7:07,,2,1689,"

I have got numerous frames and I've detected all the faces in all the frames using Retinaface. However I need to track the faces of people over frames.

For this purpose, I assumed I could try finding the landmarks of the face using libraries like dlib and maybe compare these landmarks to check if they are in fact the face of the same person.

I would like to know if there are other methods or some useful resources I could refer for the same. Thanks a lot in advance.

",16881,,,,,9/7/2020 17:30,How to identify if 2 faces contain the same person?,,2,0,,,,CC BY-SA 4.0 23456,2,,23454,9/7/2020 8:05,,2,,"

The topic of your problem is person re-identification. You can check here.

",28129,,,,,9/7/2020 8:05,,,,1,,,,CC BY-SA 4.0 23457,2,,23435,9/7/2020 10:43,,1,,"

I suggest you to have a look at this repo. It contains state-of-the art algorithms, papers, frameworks, courses and some implementations. You can also check "Deep Reinforcement Learning Hands On" book examples written by Max Lapan here. This repo contains many programming and reinforcement learning examples with PyTorch framework.

",28129,,28129,,9/8/2020 8:50,9/8/2020 8:50,,,,2,,,,CC BY-SA 4.0 23459,1,23479,,9/7/2020 12:14,,0,146,"

I attempt to understand the formulation of dictionary learning for this paper:

  1. Depression Detection via Harvesting Social Media: A Multimodal Dictionary Learning Solution
  2. Multimodal Task-Driven Dictionary Learning for Image Classification

Both papers used the exact formulation in two different domains.

Based on my understanding, in common machine learning, we formulate our matrices, from vectors, as rows to be observations, columns to be predictors.

Given a matrix, $A$:

\begin{array}{lcccccc} & p_1 & p_2 & p_3 & p_4 & p_5 & \text { label } \\ o_1 & 1 & 2 & 3 & 4 & 1 & 1 \\ o_2 & 2 & 3 & 4 & 5 & 2 & 1 \\ o_3 & 3 & 4 & 5 & 6 & 2 & 0 \\ o_4 & 4 & 5 & 6 & 7 & 3 & 0 \end{array}

So, using a math notation and excluding the label, I can define this matrix, $A = [o_1, o_2, o_3, o_4] \in \mathbb{R}^{4 \times 5}$, as $A = [(1, 2, 3, 4, 1), (2, 3, 4, 5, 2), (3, 4, 5, 6, 2), (4, 5, 6, 7, 3)]$, and in numpy:

import numpy as np

A = np.array([[1, 2, 3, 4, 1],
              [2, 3, 4, 5, 2],
              [3, 4, 5, 6, 2],
              [4, 5, 6, 7, 3]])

A.shape
# (4, 5)

Am I right?

",27796,,2444,,2/5/2021 14:55,2/5/2021 14:55,Do the rows of the design matrix refer to the observations or predictors?,,1,0,,,,CC BY-SA 4.0 23461,1,23464,,9/7/2020 17:14,,0,625,"

For the purposes of training a Convolutional Neural Network Classifier, should image augmentation be done before or after resizing the training images?

To reduce file size and speed up training time, developers often resize training images to a set height and width using something like PIL (Python Imaging Library).

If the images are augmented (to increase training set size), should it be done before or after resizing the members of the set?

For simplicity's sake, it would probably be faster to augment the images after resizing, but I am wondering if any useful data is lost in this process. I assume it may depend on the method used to resize the images (cropping, scaling technique, etc.).

",40821,,,,,9/7/2020 19:26,Should image augmentation be applied before or after image resizing?,,1,1,,,,CC BY-SA 4.0 23462,2,,23454,9/7/2020 17:30,,1,,"

You can try using a Siamese network if you are willing to train the network on your own with the triplet loss (provided you have lots of face images).

Another approach would be one-shot learning using FaceNet (a transfer-learning approach). FaceNet uses a deep convolutional neural network (CNN). The network is trained such that the squared L2 distance between the embeddings corresponds to face similarity. The images used for training are scaled, transformed and tightly cropped around the face area.

Another important aspect of FaceNet is its loss function: it is already trained with the triplet loss. In this case, you could just feed it two face images and you would get a similarity score that you can threshold.
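
A minimal sketch of that verification step (get_embedding is a hypothetical wrapper around a pretrained FaceNet model, and the threshold value is a placeholder to be tuned):

import numpy as np

def same_person(face_a, face_b, get_embedding, threshold=1.0):
    # compare two face crops via the squared L2 distance between their embeddings
    emb_a = get_embedding(face_a)   # e.g. a 128-d embedding from a pretrained FaceNet model
    emb_b = get_embedding(face_b)
    distance = np.sum((emb_a - emb_b) ** 2)
    return distance < threshold     # small distance -> likely the same identity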

",40823,,,,,9/7/2020 17:30,,,,1,,,,CC BY-SA 4.0 23463,2,,14028,9/7/2020 17:35,,0,,"

You could try building a Siamese network and training it on a large set of faces.

The idea comes from signature verification: two identical networks are used, one taking the known signature for the person and the other taking a candidate signature. The outputs of both networks are combined and scored to indicate whether the candidate signature is real or a forgery. The deep CNNs are first trained to discriminate between examples of each class; the models are then re-purposed for verification, i.e. to predict whether new examples match a template for each class. Specifically, each network produces a feature vector for its input image, and the two vectors are then compared using the L1 distance and a sigmoid activation. The same approach works with faces.

",40823,,1641,,9/13/2020 10:23,9/13/2020 10:23,,,,4,,,,CC BY-SA 4.0 23464,2,,23461,9/7/2020 19:26,,0,,"

As described in the PIL documentation,

PIL uses filters to resize images. Those filters, which are explained here, mostly rely on numerical methods as far as I can see, so resizing approximates the image data. This means you are right about data loss. The question, then, is whether the data changes that much depending on whether resizing is done before or after augmentation.

In numerical approaches, having more values generally means a more faithful approximation.

So it might be beneficial to augment first and then resize.

But one should look for a more rigorous answer; mine is just a thought experiment, and you will get better, proven answers.

",38344,,,,,9/7/2020 19:26,,,,1,,,,CC BY-SA 4.0 23465,2,,11822,9/7/2020 19:33,,0,,"

As an addition to Oliver Mason's answer:

You can check here to see how people use machine learning tools to summarize texts, and also check some articles from here to understand the background, if you are curious.

",38344,,,,,9/7/2020 19:33,,,,0,,,,CC BY-SA 4.0 23467,1,,,9/7/2020 22:43,,3,123,"

I am learning about incremental learning and read that rehearsal learning is retraining with old data. In essence, isn't this the exact same thing as batch learning (with stochastic gradient descent)? You train a model by passing in batches of data and redo this with a set number of epochs.

If I'm understanding rehearsal learning correctly, you do the exact same thing but with "new" data. Thus, the only difference is inconsistencies in the epoch number across data batches.

",26900,,2444,,9/29/2020 19:47,10/20/2022 3:08,"Is batch learning with gradient descent equivalent to ""rehearsal"" in incremental learning?",,1,0,,,,CC BY-SA 4.0 23469,1,,,9/8/2020 1:57,,1,269,"

I understand that I can draw a state-space graph for any problem. However, here is the problem: I can't really figure out how to make production systems.

I am solving the FWGC (Farmer, Wolf, Goat, Cabbage) River Crossing Puzzle using a state-space search. So, my tasks are that:

  1. Represent the state-space graph (which I know how to do)

  2. Write production systems.

My questions: How do I write production systems?

The thing that confused me was the production system example in Rich's book (about the water jug problem), where he imagined all the possible states and wrote the next state for each of them.

Here in the FWGC problem, I see some problems while writing the production system.

For instance, for a given state, there are multiple possible next states, i.e. a farmer can take Goat, Cabbage, Wolf, or go alone to the other side (assuming that all states are safe, just for the sake of simplicity).

So, how would I represent the same state going to multiple next states in production systems?

What I have tried:

Then, I googled a pdf

https://www.cs.unm.edu/~luger/ai-final2/CH4_Depth-.%20Breadth-,%20and%20Best-first%20Search.pdf

Is that what I call the production system for this case?

But, here are my reasons why it should not be called a production system:

  1. There might be other possible states as well.

  2. It is showing only 1 solution.

So, how do I actually learn to create production rules (I know how to make the state-space representation, as I have read Nilsson's book, which was gold in this matter)? And what would be the production rules in this case?

",40828,,2444,,9/9/2020 16:18,9/9/2020 16:18,How do I write production systems?,,0,4,,,,CC BY-SA 4.0 23471,2,,14028,9/8/2020 5:43,,0,,"

For the Construction of Deep Learning Models

Backbone Deep Learning models which can be applied to a variety of deep learning tasks (including facial recognition) have been implemented in a range of libraries available in Python. I'm assuming by constructing your own algorithm you mean a novel implementation of the model structure. Taking the PyTorch framework as an example, some common pretrained models are available here:

https://github.com/pytorch/vision/tree/master/torchvision/models

To train a novel face recognition model you could follow the tutorial for object detection available here: https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html and make changes to the model.

In the tutorial they use model features from the library in the following section of code:

# load a pre-trained model for classification and return
# only the features
backbone = torchvision.models.mobilenet_v2(pretrained=True).features

For the simplest example torchvision.models.AlexNet.features look like this:

self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )

Adding or subtracting layers from this backbone feature extractor would result in a new "algorithm" for object detection. If you want to know exactly what mathematical operation each of these layers is performing you can look at the PyTorch documentation. For example, in the case of nn.Relu layer: https://pytorch.org/docs/stable/generated/torch.nn.ReLU.html

Applies the rectified linear unit function element-wise:

$$ ReLU(x)=(x)^{+}=max(0,x)$$

",40833,,,,,9/8/2020 5:43,,,,0,,,,CC BY-SA 4.0 23472,1,23476,,9/8/2020 7:45,,3,578,"

After having chosen the number of layers for a convolutional neural network, we must also choose the number of filters/channels for each convolutional layer.

The intuition behind the filter's spatial dimension is the number of pixels in the image that must be considered to perform the recognition/detection task.

However, I still can't find the intuition behind the number of filters. The numbers 128 and 256 are often used in the literature, but why?

",40839,,2444,,11/6/2020 1:56,1/14/2022 9:37,What is the intuition behind the number of filters/channels for each convolutional layer?,,1,0,,,,CC BY-SA 4.0 23473,1,,,9/8/2020 7:53,,1,15,"

I have multiple FFTs taken from a sample at different pressures. Through different analyses, I can see that the resonant frequencies shift in the spectrum from one pressure to the next.

Using conventional peak tracking has been difficult, as the peaks increase/decrease in magnitude within the FFT as well as shift in the spectrum.

Is it possible for a neural network to 'detect'/'pick out' these frequency values?

Any help or guidance is appreciated :)

Thanks!

",40840,,,,,9/8/2020 7:53,Neural Network for locating shifting resonant frequencies,,0,1,,,,CC BY-SA 4.0 23475,2,,23335,9/8/2020 9:22,,1,,"

In fact, I do not know how to calculate the GPU memory needed to run a neural network, but I have a solution for GPU memory-allocation problems when using the TensorFlow framework.

import tensorflow as tf
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    # Restrict TensorFlow to only allocate 2GB * 2 of memory on the first GPU
    try:
        tf.config.experimental.set_virtual_device_configuration(
            gpus[0],
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2048 * 2)])
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        # Virtual devices must be set before GPUs have been initialized
        print(e)

You can set a memory limit on the GPU, which sometimes solves memory-allocation problems. As shown above, you can set the "memory_limit" parameter as your configuration requires.

Also, be careful about using the matching framework: if you want to use the above code to set the memory limit, you have to build your neural network with the Keras implementation bundled with TensorFlow, e.g.

from tensorflow.python.keras.models import Sequential
",28129,,,,,9/8/2020 9:22,,,,0,,,,CC BY-SA 4.0 23476,2,,23472,9/8/2020 10:43,,2,,"

The channel sizes 32, 128, etc. are used because of memory and efficiency. There is nothing holy about these numbers.

The intuition behind choosing the number of channels is as follows. The initial layers extract low-level features: they consist of edge detectors, etc. There aren't many such features, so we won't gain much by adding a lot of filters (of course, if we use 3x3 filters on an RGB image, we could have $2^{27}$ different filters even if the weights could only take the values 0 and 1; however, most of them would be quite similar/meaningless for our job). Using a lot of filters might even lead to overfitting.

The latter layers are responsible for detecting more nuanced features, like elbows/nose shape from the lower level features extracted previously. So, we might do better if we increase the number of channels. Also, note that the resultant layers become more and more sparse as we go deeper.

Though it might differ in applications like image super-resolution, in general, the number of channels stays the same or increases as we go deeper.

A nice experiment would be to try and increase the number of channels until you get no more benefit from it. I believe there was a paper that did exactly this (please cite it if someone remembers). You could even try to visualise the filters at this stage and see if the filters are similar or not.

",40843,,2444,,1/14/2022 9:37,1/14/2022 9:37,,,,3,,,,CC BY-SA 4.0 23479,2,,23459,9/8/2020 12:01,,1,,"

Based on my understanding, in common machine learning, we formulate our matrices, from vectors, as rows to be observations, columns to be predictors.

The rows (or, in general, the first dimension of your tensor) are typically the observations. For example, in TensorFlow, the first dimension of the input tensor typically refers to the batch size, i.e. the number of observations. If you are using Pandas (a Python library to manipulate data), the rows are typically the observations and the columns are the predictors.

However, in general, it does not really matter which convention you use, provided that you use one of the conventions consistently in your implementation (i.e. you choose one of the conventions and you stick with it throughout all your code, to avoid complexity), and make it clear in the documentation. So, you can have a matrix where either the rows or columns are observations and, consequently, the columns or, respectively, rows are the features (aka predictors or independent variables).

Anyway, it is probably a good idea to be consistent with existing literature and implementations/libraries, so you should probably use the rows for the observations.

",2444,,2444,,9/8/2020 12:11,9/8/2020 12:11,,,,0,,,,CC BY-SA 4.0 23480,1,,,9/8/2020 12:14,,2,781,"

I am wondering how much I should extend my training set with data augmentation. Is there somewhere a pre-defined number I can go with?

Suppose I have 10000 images: can I go as far as 10x or 20x, to get 100000 or 200000 images, respectively? I am wondering how this will impact model training. I am using a Mask R-CNN.

",18147,,2444,,9/9/2020 23:26,9/9/2020 23:26,How much should we augment our training data?,,0,4,,,,CC BY-SA 4.0 23481,1,,,9/8/2020 14:00,,1,44,"

I was working on a project involving the search for biosignatures (signs of life) on exoplanets and the probability of such a planet harboring life. In this case, we know that Earth is the only planet confirmed to have life on it. So, for planets confirmed to have life, we have only one example (Earth) of the parameters: atmospheric conditions, radius, temperature, and distance from the star.

Is there any way to use NNs to predict the probability of an exoplanet harboring life if we have the data of all these parameters for that planet?

",40849,,11539,,9/11/2020 12:33,9/11/2020 12:33,Is there any way where you can train a Neural Network with only one data point in the dataset?,,0,4,,,,CC BY-SA 4.0 23482,1,,,9/8/2020 14:38,,5,3174,"

I am comparing different CNN architectures for edge implementation. Some papers describing architectures refer to mult-adds, like the MobileNet V1 paper, where it is claimed that this net has 569M mult-adds, and others refer to floating-point operations (FLOPs), like the CondenseNet paper claims 274M FLOPs.

Are these comparable? Is 1 multiply-add equivalent to 2 floating-point operations? Any direction will be greatly appreciated.

",40753,,2444,,9/8/2020 22:19,3/3/2022 17:02,Are mult-adds and FLOPs equivalent?,,1,3,,,,CC BY-SA 4.0 23483,1,,,9/8/2020 18:16,,0,47,"

I am building a recommendation system that recommends relevant articles to the user. I am doing this using simple similarity-based techniques (with the Jaccard similarity) using as features the page title, the tags, and the article content.

Now my problem is I have different "adult articles" and some are articles that expire (for example, an article about a movie in Jan 2019 would not be relevant in Dec 2019).

I want to keep these adult articles separate, as a person who is reading about history does not want to be led to an adult article, and I also do not want to recommend articles that have expired or would not be relevant in the present moment.

Should I just improve the quality of my features or tags? Or is there any other way to achieve this?

",10118,,2444,,9/10/2020 22:04,9/10/2020 22:04,How can I build a recommendation system that takes into account some constraints or the context?,,0,5,,,,CC BY-SA 4.0 23485,2,,23446,9/8/2020 19:32,,2,,"

The most important word for answering your question from that quote from the paper is probably the word "usually": These search probabilities usually select much stronger moves than the raw move probabilities $p$ of the neural network. It's not always going to be true, but more often than not / most of the time / "on average", we intuitively expect it to be true. In theory there could be pathological cases, especially with low MCTS iteration counts and/or a poorly-trained network, where it may not be true. But even if we just find it to be true most of the time, that can be good enough for the algorithm to work well in practice.

Recall that in the Selection phase of MCTS, Alpha(Go) Zero always selects the action $a$ that maximises the following expression for a state $s$ when traversing the tree (see supplementary material of Alpha Zero paper):

$$Q(s, a) + C(s) \frac{P(s, a) \sqrt{N(s)}}{1 + N(s, a)}$$

The $Q(s, a)$ values here are the value estimates resulting from the MCTS search; these are the results produced by our tree search, and these generally become more and more accurate the longer our tree search runs. If we run our tree search for an infinite amount of time, we expect these to converge to the true minimax values.

These same $Q(s, a)$ values are the things that can cause the final distribution of visit counts $\pi$ to shift away from the prior neural network distribution $P$. Suppose that we have a hypothetical case where for all the different actions $a$, the $Q(s, a)$ values end up staying identical. In this (unlikely) hypothetical case, the visit counts would continue getting distributed proportionally to $P(s, a)$, causing the $\pi$ distribution to stay equal to the $P$ one (barring minor potential differences due to $\pi$ being derived from discrete, integer visit counts which may not be able to exactly replicate the real-valued probabilities in $P$). Only if the $Q(s, a)$ values resulting from a (hopefully smart!) tree search algorithm give us reason to shift away from the prior distribution $P$ do we actually really start shifting away from it.
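
As a minimal sketch of that selection rule (plain NumPy; the constant c_puct below stands in for $C(s)$, and the toy numbers are arbitrary):

import numpy as np

def select_action(Q, P, N, c_puct=1.5):
    # pick the child maximising Q(s,a) + C(s) * P(s,a) * sqrt(N(s)) / (1 + N(s,a))
    N_s = np.sum(N)                          # visit count of the parent state
    U = c_puct * P * np.sqrt(N_s) / (1.0 + N)
    return int(np.argmax(Q + U))

# toy example: equal priors P, but the search found one action to have a much better Q
Q = np.array([0.1, 0.6, 0.1])
P = np.array([1/3, 1/3, 1/3])
N = np.array([10, 30, 10])
print(select_action(Q, P, N))                # keeps visiting the high-Q action, shifting pi away from P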

",1641,,,,,9/8/2020 19:32,,,,2,,,,CC BY-SA 4.0 23487,2,,23482,9/9/2020 4:33,,3,,"

According to this source, one MAC (multiply-accumulate) is roughly equal to two FLOPs. My guess/understanding would be that the distinction is made because neural nets spend compute overwhelmingly on multiply-accumulate operations, and thus optimizations and statistics over MAC operations are more meaningful than FLOPs.

",6779,,6779,,10/4/2021 15:35,10/4/2021 15:35,,,,1,,,,CC BY-SA 4.0 23491,1,,,9/9/2020 10:04,,1,447,"

In this article, the term "learned emulator" is used.

Recently, scientists have started creating "learned emulators" using AI neural network approaches, but have not yet fully explored the advantages and potential pitfalls of these surrogates.

What is a "learned emulator"? I believe it is related to neural networks. Where can I read more?

",29801,,2444,,1/7/2021 0:32,1/7/2021 0:32,"What is a ""learned emulator""?",,1,0,,,,CC BY-SA 4.0 23492,2,,23491,9/9/2020 10:50,,1,,"

In a typical situation, for the emulation of physical environments, you need to define all physical rules and forces. In the "learned emulators", they use some machine learning techniques to learn those rules by supervising and interacting (instead of formalizing all of them). In this case, they do not need any exact formulation of the physical environment to emulate it.

An instance of this simulation can be this article "Realistic Atomistic Structure of Amorphous Silicon from Machine-Learning-Driven Molecular Dynamics", i.e., a machine-learning-driven simulation instead of exact formulation of the environment.

",4446,,4446,,9/9/2020 13:15,9/9/2020 13:15,,,,0,,,,CC BY-SA 4.0 23493,1,,,9/9/2020 12:12,,0,194,"

Geometric interpretation of Logistic Regression and Linear regression is considered here.

I was going through logistic regression and linear regression. In the optimization equation of both, the following term is used. $$W^{T}.X$$

W is a vector which holds weights of the hyper-plane.

I realized following about the dimensions of the fitted hyper-plane. Want to confirm it.

Let, d = Number of features for both Logistic Regression and Linear Regression.

Logistic Regression case: Fitted hyper-plane is d-dimensional.

Linear Regression case: Fitted hyper-plane is (d + 1) dimensions.

Example

d = 2

feature 1 : weight, feature 2 : height

Logistic Regression: It's a 2-class classification. y: {obese, normal}

Linear Regression: y: blood pressure (real value)

Here,

  • Logistic Regression will fit a 2-D line.
  • Linear Regression will fit a 3-D plane.

Please confirm if this understanding is correct and same happens even in higher dimensions.

",40856,,,,,10/1/2022 3:03,Hyper-plane in logistic regression vs linear regression for same number of features,,1,0,,,,CC BY-SA 4.0 23497,1,23509,,9/9/2020 21:36,,0,56,"

Let's suppose we have calculated the gradient and it came out to be $f(WX)(1-f(W X))X$, where $f()$ is the sigmoid function, $W$ of order $2\times2$ is the weight matrix, and $X$ is an input vector of order $2\times 1$. For ease, let $f(WX)(1-f(W X))=\Bigg[ \begin{array}{c} 0.3 \\ 0.8 \\ \end{array}\Bigg]$ and $X=\Bigg[ \begin{array}{c} 1 \\ 0 \\ \end{array}\Bigg]$.

When we multiply these vectors, we multiply them as $f(WX)(1-f(W X))\times X^T$, i.e. $\Bigg[ \begin{array}{c} 0.3 \\ 0.8 \\ \end{array}\Bigg]\times[1 \quad 0]$. I do this because I know that we need this gradient to update a $2\times 2$ weight matrix; hence, the gradient should have size $2\times 2$. But I don't know the law/rule behind this: if I were just given the values and had no knowledge that we need the result to update the weight matrix, then I might have done something like $[0.3 \quad 0.8]\times\Bigg[ \begin{array}{c} 1 \\ 0 \\ \end{array}\Bigg]$, which returns a scalar.

For a long chain of such operations (multiple derivatives when applying the chain rule, resulting in many vectors), how do we know whether the multiplication of two vectors should return a matrix or a scalar (outer or inner product)?
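
To make the two alternatives concrete (plain NumPy, using the numbers above):

import numpy as np

delta = np.array([[0.3], [0.8]])   # f(WX)(1 - f(WX)), shape (2, 1)
x = np.array([[1.0], [0.0]])       # input X, shape (2, 1)

grad_W = delta @ x.T               # outer product, shape (2, 2) -> matches the shape of W
scalar = delta.T @ x               # inner product, shape (1, 1) -> a single number
print(grad_W)
print(scalar)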

",38423,,,,,9/10/2020 21:13,What are the rules behind vector product in gradient?,,1,0,,,,CC BY-SA 4.0 23498,1,23501,,9/10/2020 8:06,,3,251,"

I am modelling a ride-hailing system where passenger requests continuously arrive into the system. An RL model is developed to learn how to match those requests with drivers efficiently.

Basically, the system can run indefinitely as long as requests keep arriving (an infinite-horizon reality). However, in order for RL training to be carried out, the episode length has to be restricted to some finite duration, say $[0,T]$ (finite-horizon training).

My question is how to implement the learned policy based on finite horizon $[0,T]$ to the real system with infinite horizon $[0,\infty]$?

I expect there would be a conflict of objectives. The value function near $T$ is partially cut off in a finite horizon, so it would become an underestimate and affect policy performance in an infinite-horizon implementation. For this reason, I doubt the applicability of the learned policy.

",32145,,32145,,9/10/2020 8:12,9/10/2020 13:00,How to implement RL policies learned on a finite horizon?,,1,0,,,,CC BY-SA 4.0 23500,2,,21849,9/10/2020 12:22,,1,,"

As mentioned in the comments your assumption about independence is wrong. Here's why. To prove independence we need to show the following holds:

$$P(X=x, Y=y) = P(X=x)P(Y=y)$$

in the case of RL this becomes:

$$P(X=a, Y=a') = P(X=a)P(Y=a')$$

The left hand side has the value:

$$P(X=a, Y=a') = b(A_t = a| S_t = s) p(s'|a,s) b(A_{t+1} = a'| S_{t+1} = s')$$

while the right hand side has the value:

$$P(X=a)P(Y=a') = b(A_t = a| S_t = s)b(A_{t+1} = a'| S_{t+1} = s')$$

And hence not independent.

Now let us look at why the following expression holds:

Eq.5.14: $\mathbb{E}[\rho_{t:T-1}R_{t+k}] = \mathbb{E}[\rho_{t:t+k-1}R_{t+k}]$

I will not derive the exact expressions, but I hope you can follow the reasoning I provide. By the rules of probability, we know that the joint probability sums to 1 over all values, i.e.:

$$\sum_{X_1..X_n} P(X_1=a_1, X_2=a_2,...X_n = a_n) = 1$$

I have already shown above that the trajectory is not independent. So $R_{t+k}$ will depend on the trajectory $S_{t:t+k-1}$, where $S_{t:t+k-1}$ is a particular trajectory. At the end of this trajectory we get a reward $R_{t+k}$, and thus $R_{t+k}$ is exclusively a function of $S_{t:t+k-1}$, i.e. $R_{t+k} = f(S_{t:t+k-1})$. The trajectory after this, $S_{t+k:T-1}$, is irrelevant since it will always sum up to 1: once you have reached a particular state at time step $t+k-1$, you are now conditioning on it, $P(S_{t+k:T-1}|S_{t:t+k-1})$, and taking the expected value over all trajectories possible from there on, i.e. $\sum_{S_{t+k:T-1}} P(S_{t+k:T-1}|S_{t:t+k-1})$, which is 1 by the rules of probability. Thus, what you are really doing is:

$$P(S_{t:t+k-1})R_{t+k}(\sum_{S_{t+k:T-1}} P(S_{t+k:T-1}|S_{t:t+k-1}))$$

and hence the remaining trajectory has no contribution.

Another way of thinking about this is that you are taking trajectories up to time step $t+k-1$ weighted by the rewards $R_{t+k}$, and hence they cannot sum up to 1, while the rest of the trajectory after $t+k-1$ will sum up to 1.

I hope this qualitative description suffices. You can do the maths, but you must be careful with the notations and the assumptions you make.

Also all the equations are correct, I hope you can indirectly see it from my reasoning.

",,user9947,,user9947,9/10/2020 12:56,9/10/2020 12:56,,,,3,,,,CC BY-SA 4.0 23501,2,,23498,9/10/2020 12:53,,2,,"

A normal way to deal with training on infinite horizon (aka "continuing" or "non-episodic") problems is to use TD learning or other bootstrapping methods (of which Q-learning in DQN is one example), and to treat the cutoff at $T$ for pseudo-episodes as a training artefact.

If the state at time $T$ was really a terminal state, the TD target would be just $r$ because $q(s^T,\cdot) = 0$ by definition, but that doesn't apply in your case.

So always use the bootstrapped TD target - e.g. $r + \gamma \text{max}_{a'} \hat{q}(s',a',\theta)$ for single step TD target with a Q estimate having $\theta$ as learned params - and don't treat the horizon data any differently.
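
A minimal sketch of that rule (plain Python; next_q_values and the terminal flag are placeholders, and terminal should be True only for genuine terminal states, never for the artificial cutoff at $T$):

import numpy as np

def td_target(r, next_q_values, gamma, terminal):
    # genuine terminal state: q(s_T, .) = 0 by definition, so the target is just r
    if terminal:
        return r
    # otherwise always bootstrap, including at the pseudo-episode horizon T
    return r + gamma * np.max(next_q_values)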

If you do this, then your concerns about under-estimates should not be an issue. The main issue is that your pseudo-episodes do need to be long enough to observe the long-term impact of multiple requests (setting $T$ so low that the system does not reach any kind of equilibrium would be a problem).

You could also use an average reward setting and differential value functions. It is slightly better from a theoretical standpoint, but Q-learning and DQN is fine if you don't want to be bothered with that. The same basic rule applies - ignore "episode end" for constructing TD targets because it is just a training artefact. Also ensure you set $T$ high enough that the long term impacts of a policy are observable.

If your starting state is special (e.g. cars all in fixed places, and no requests in progress) this could also be an issue, because the real world system will rarely be in that state but you will have many episode starts with it. If you cannot start the system in a reasonable random initial state for your problem, you may also want to discard data from episode starts and allow a run-in time before using data for training from the pseudo-episodes.

",1847,,1847,,9/10/2020 13:00,9/10/2020 13:00,,,,7,,,,CC BY-SA 4.0 23502,1,,,9/10/2020 14:29,,0,36,"

I'm using a style-transfer deep learning approach that uses VGG (a neural network). It works well with small images (512x512 pixels); however, it produces distorted results when the input images are large (size > 1500px). The author of the approach suggested dividing the large input image into portions and performing style transfer on portion 1 and then on portion 2, finally concatenating the two portions to obtain the final large result image, because VGG was made for small images... The problem with this method is that the resulting image will have some inconsistent regions around the areas where the portions were "glued" together. How can I correct these areas? Is there an alternative approach to this dividing method? Thanks

",40901,,,,,9/10/2020 14:29,Strategy to input and get large images in VGG neural networks,,0,2,,,,CC BY-SA 4.0 23504,1,,,9/10/2020 18:46,,1,467,"

I wonder if we can use Natural Language Processing (NLP) to process programming code: Given a piece of code, can we

  1. Translate it to human language to understand what it does? The input could be a function definition (normally lacking documentation) in Python, and the output could be the documentation for that function.
  2. Compile or translate it to another programming language? For example, compile Python code to C or machine code, or translate C code to Python code?
",40884,,,,,10/10/2020 20:01,Can we use NLP to understand/parse/compile programming code?,,1,0,,,,CC BY-SA 4.0 23505,2,,23504,9/10/2020 19:17,,1,,"

Yes, and that is the ambition of the Decoder project (H2020-funded in Europe), which analyzes, with NLP techniques, the comments in e.g. C or C++ source code (of open source projects, and perhaps their git logs).

I even happen to be in the photo.

My dream is to try similar things, in a few years, in RefPerSys. You could join that project, BTW.

",3335,,3335,,9/10/2020 19:22,9/10/2020 19:22,,,,1,,,,CC BY-SA 4.0 23506,1,,,9/10/2020 20:08,,1,770,"

I want to create a CNN in Python, specifically, only with NumPy, if possible. To optimize the time of convolution (actually correlation) in the network, I want to try FFT-based convolution. The data that needs to be convolved (correlated) is a 4D image tensor with shape [batch_size, width, height, channels] and a 4D filter tensor [filter_width, filter_height, in_channel, out_channel]. I read a lot of articles about FFT-based convolution, but they don't do it my way. Thus, I need your help.

How could I FFT-convolve a 4D image and a 4D filter with stride?

",38736,,2444,,9/10/2020 21:34,5/8/2021 17:02,How could I convolve a 4D image and a 4D filter with stride?,,2,5,,,,CC BY-SA 4.0 23507,1,23520,,9/10/2020 20:20,,8,3351,"

I am looking for a book about machine learning that would suit my physics background. I am more or less familiar with classical and complex analysis, theory of probability, calculus of variations, matrix algebra, etc. However, I have not studied topology, measure theory, group theory, and other more advanced topics. I try to find a book that is written neither for beginners, nor for mathematicians.

Recently, I have read the great book "Statistical inference" written by Casella and Berger. They write in the introduction that "The purpose of this book is to build theoretical statistics (as different from mathematical statistics) from the first principles of probability theory". So, I am looking for some "theoretical books" about machine learning.

There are many online courses and brilliant books out there that focus on the practical side of applying machine learning models and using the appropriate libraries. It seems to me that there are no problems with them, but I would like to find a book on theory.

By now I have skimmed through the following books

  • Pattern Recognition And Machine Learning

    It looks very nice. The only point of concern is that the book was published in 2006. So, I am not sure about the relevance of the chapters considering neural nets, since this field is developing rather fast.

  • The elements of statistical learning

    This book also seems very good. It covers most of the topics as well as the first book. However, I am feeling that its style is different and I do not know which book will suit me better.

  • Artificial Intelligence. A Modern Approach

    This one covers more recent topics, such as natural language processing. As far as I understand, it represents the view of a computer scientist on machine learning.

  • Machine Learning A Probabilistic Perspective

    Maybe it has a slight bias towards probability theory, which is stated in the title. However, the book looks fascinating as well.

I think that the first or the second book should suit me, but I do not know what decision to make.

I am sure that I have overlooked some books.

Are there some other ML books that focus on theory?

",40905,,2444,,1/16/2021 19:37,1/24/2021 21:12,What are other examples of theoretical machine learning books?,,2,0,,,,CC BY-SA 4.0 23508,2,,23507,9/10/2020 21:07,,1,,"

Pattern Recognition And Machine Learning is a great theoretical book. I don't know anything better on standard ML. I read several pages from it myself, and all my research colleagues suggest looking there if you are not sure about some concepts. The two problems with it are that it's huge and that it doesn't cover most of the deep learning models known today.

So, in addition, I'd suggest you look at Deep Learning by Ian Goodfellow et al.

Your concerns about not studying topology, measure theory and group theory are groundless. These sections of math aren't prerequisites in any way, they aren't even discussed anywhere I know.

Actually, ML theory is more like probability theory and statistics. Especially statistical learning theory (which is nothing more than probability theory and statistics). I haven't read any books on SLT, so have a look at this answer.

",40906,,2444,,1/16/2021 19:34,1/16/2021 19:34,,,,1,,,,CC BY-SA 4.0 23509,2,,23497,9/10/2020 21:13,,0,,"

It helps to think of each output dimension separately.

You have $X$, which is a $(2 \times 1)$ vector, and $W_1$ is a $(1 \times 2)$ vector. Their product is a scalar, of which we then take the sigmoid to get our output $Y_1$.

The gradient of this w.r.t. $W_1$ will be $f(WX) (1 - f(WX)) X^T$, which has the appropriate dimensions.

Then, you just stack these for all your outputs, giving you the shape you got.
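
A quick NumPy check of those shapes (my own toy numbers, not from the question):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    X = np.random.rand(2, 1)    # (2 x 1) input
    W1 = np.random.rand(1, 2)   # (1 x 2) weights of one output unit

    y1 = sigmoid(W1 @ X)             # scalar output
    grad_W1 = y1 * (1 - y1) * X.T    # (1 x 2), same shape as W1
    print(grad_W1.shape)             # (1, 2)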

",40573,,,,,9/10/2020 21:13,,,,0,,,,CC BY-SA 4.0 23511,2,,23506,9/10/2020 21:14,,1,,"

I think this should work for you: scipy.signal.correlate | SciPy

I used it myself while I was writing a CNN in numpy.
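
A minimal sketch of the idea for a single channel (my own example; extending it to the 4D case means looping or vectorising over the batch, input and output channels, and stride is applied by subsampling the dense result afterwards):

    import numpy as np
    from scipy.signal import correlate

    image = np.random.rand(28, 28)   # one input channel
    kernel = np.random.rand(3, 3)    # one 2D filter

    # 'valid' cross-correlation, computed via FFT
    dense = correlate(image, kernel, mode='valid', method='fft')

    stride = 2
    strided = dense[::stride, ::stride]
    print(strided.shape)  # (13, 13)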

",40906,,32410,,5/8/2021 17:02,5/8/2021 17:02,,,,4,,,,CC BY-SA 4.0 23513,2,,23493,9/10/2020 21:57,,0,,"

Mm, I understand what you want to say, but I think you're slightly off with the terminology. If you consider a d-dimensional vector space, people usually call a hyperplane (d - 1)-dimensional, because it's a subspace of lower dimensionality. So for your example d = 2, the logistic regression would fit a 1-dimensional line (which would separate 2 classes) and the linear regression would fit a 2-dimensional plane (for every combination of 2 features the prediction would lie on some plane in 3-D space). So the general rule would be: logistic regression -> (d-1)-dimensional hyperplane in a d-dimensional vector space, linear regression -> d-dimensional hyperplane in a (d+1)-dimensional vector space.
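
To make this concrete (my own illustration, keeping the $d = 2$ example): the logistic regression decision boundary is $\{x \in \mathbb{R}^2 : w^\top x + b = 0\}$, a 1-dimensional line inside the 2-dimensional feature space, while the linear regression prediction surface is $\{(x, w^\top x + b) : x \in \mathbb{R}^2\}$, a 2-dimensional plane inside the 3-dimensional space of (features, prediction).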

",40906,,,,,9/10/2020 21:57,,,,0,,,,CC BY-SA 4.0 23517,1,23519,,9/11/2020 6:22,,0,55,"

I am using pretraining code from https://github.com/NVIDIA/DeepLearningExamples

Pretrain parameters:

 15:47:02,534: INFO tensorflow 140678508230464   init_checkpoint: bertbase3layer-extract-from-google
 15:47:02,534: INFO tensorflow 140678508230464   optimizer_type: lamb
 15:47:02,534: INFO tensorflow 140678508230464   max_seq_length: 64
 15:47:02,534: INFO tensorflow 140678508230464   max_predictions_per_seq: 5
 15:47:02,534: INFO tensorflow 140678508230464   do_train: True
 15:47:02,535: INFO tensorflow 140678508230464   do_eval: False
 15:47:02,535: INFO tensorflow 140678508230464   train_batch_size: 32
 15:47:02,535: INFO tensorflow 140678508230464   eval_batch_size: 8
 15:47:02,535: INFO tensorflow 140678508230464   learning_rate: 5e-05
 15:47:02,535: INFO tensorflow 140678508230464   num_train_steps: 10000000
 15:47:02,535: INFO tensorflow 140678508230464   num_warmup_steps: 10000
 15:47:02,535: INFO tensorflow 140678508230464   save_checkpoints_steps: 1000
 15:47:02,535: INFO tensorflow 140678508230464   display_loss_steps: 10
 15:47:02,535: INFO tensorflow 140678508230464   iterations_per_loop: 1000
 15:47:02,535: INFO tensorflow 140678508230464   max_eval_steps: 100
 15:47:02,535: INFO tensorflow 140678508230464   num_accumulation_steps: 1
 15:47:02,535: INFO tensorflow 140678508230464   allreduce_post_accumulation: False
 15:47:02,535: INFO tensorflow 140678508230464   verbose_logging: False
 15:47:02,535: INFO tensorflow 140678508230464   horovod: True
 15:47:02,536: INFO tensorflow 140678508230464   report_loss: True
 15:47:02,536: INFO tensorflow 140678508230464   manual_fp16: False
 15:47:02,536: INFO tensorflow 140678508230464   amp: False
 15:47:02,536: INFO tensorflow 140678508230464   use_xla: True
 15:47:02,536: INFO tensorflow 140678508230464   init_loss_scale: 4294967296
 15:47:02,536: INFO tensorflow 140678508230464   ?: False
 15:47:02,536: INFO tensorflow 140678508230464   help: False
 15:47:02,536: INFO tensorflow 140678508230464   helpshort: False
 15:47:02,536: INFO tensorflow 140678508230464   helpfull: False
 15:47:02,536: INFO tensorflow 140678508230464   helpxml: False
 15:47:02,536: INFO tensorflow 140678508230464 **************************

Pretrain loss: (I remove nsp_loss)

{'throughput_train': 1196.9646684552622, 'mlm_loss': 0.9837073683738708, 'nsp_loss': 0.0, 'total_loss': 0.9837073683738708, 'avg_loss_step': 1.200513333082199, 'learning_rate': '0.00038143058'}
{'throughput_train': 1230.5063662500734, 'mlm_loss': 1.3001925945281982, 'nsp_loss': 0.0, 'total_loss': 1.3001925945281982, 'avg_loss_step': 1.299936044216156, 'learning_rate': '0.00038143038'}
{'throughput_train': 1236.4348949169155, 'mlm_loss': 1.473339319229126, 'nsp_loss': 0.0, 'total_loss': 1.473339319229126, 'avg_loss_step': 1.2444063007831574, 'learning_rate': '0.00038143017'}
{'throughput_train': 1221.2668264552692, 'mlm_loss': 0.9924975633621216, 'nsp_loss': 0.0, 'total_loss': 0.9924975633621216, 'avg_loss_step': 1.1603020071983337, 'learning_rate': '0.00038142994'}

Fine-tune code:

self.train_op = tf.train.AdamOptimizer(0.00001).minimize(self.loss, global_step=self.global_step)

Fine-tune accuracy: (restore from my ckpt pretrained from https://github.com/NVIDIA/DeepLearningExamples)

epoch 1:
training step 895429, loss 4.98, acc 0.079
dev loss 4.853, acc 0.092

epoch 2:
training step 895429, loss 4.97, acc 0.080
dev loss 4.823, acc 0.092

epoch 3:
training step 895429, loss 4.96, acc 0.081
dev loss 4.849, acc 0.092

epoch 4:
training step 895429, loss 4.95, acc 0.082
dev loss 4.843, acc 0.092

Without restoring the pretrained ckpt:

epoch 1:
training step 10429, loss 2.48, acc 0.606
dev loss 1.604, acc 0.8036

Restoring Google's BERT-Base pretrained ckpt, or restoring from a ckpt pretrained with https://github.com/guotong1988/BERT-GPU:

epoch 1:
training loss 1.89, acc 0.761
dev loss 1.351, acc 0.869
",40765,,40765,,9/14/2020 2:40,9/14/2020 11:02,"BERT: After pretraining 880000 step, why fine-tune not work?",,1,1,,9/14/2020 11:02,,CC BY-SA 4.0 23518,1,23544,,9/11/2020 8:23,,1,72,"

I am working on a deep Q-learning project. My project is different from normal deep Q-learning. The outputs of my neural network must be positive because I need their values to importance-sample actions. I know that I can't use ReLU as the activation function of my neural network. So the only suitable functions I know of are the sigmoid, the softmax and the exponential function. I tried working with the sigmoid and the softmax, but they generate wrong results and the loss function diverges. There are two terminal states in my model. Their rewards are 1 and 0. All other states don't have any immediate rewards.

",35633,,,,,10/7/2021 18:06,What are some suitable positive functions as activations of neural networks?,,1,0,,,,CC BY-SA 4.0 23519,2,,23517,9/11/2020 9:47,,0,,"

change

bert_output = bert_model.get_pooled_output()

to

# i.e. average the token-level outputs over the sequence (skipping the first, [CLS], token) instead of using the pooled [CLS] output
bert_output = tf.reduce_mean(bert_model.get_sequence_output()[:,1:,:],1)
",40765,,,,,9/11/2020 9:47,,,,0,,,,CC BY-SA 4.0 23520,2,,23507,9/11/2020 10:21,,4,,"

Some of the books that you mention are often used as reference books in introductory courses to machine learning or artificial intelligence.

For example, if I remember correctly, in my introductory course to machine learning, the professor suggested the book Pattern Recognition And Machine Learning (2006) by Bishop, although we never used it during the lessons. This is a good book, but, in my opinion, it covers many topics, such as variational inference or sampling methods, that are not suited for an introductory course.

The book Artificial Intelligence. A Modern Approach, by Norvig and Russell, definitely does not focus on machine learning, but it covers many other aspects of artificial intelligence, such as search, planning, knowledge representation, machine learning, robotics, natural language processing or computer vision. This is probably the book that you should read and use if you want to have an extensive overview of the AI field. Although I never fully read it, I often used it as a reference, as I use the other mentioned book. For instance, during my bachelor's and, more specifically, an introductory course to artificial intelligence, we had used this book as the reference book, but note that there are other books that provide an extensive overview of the AI field.

The other two books are not as famous as these two, but they are probably also good books, although their focus may be different.

There are at least three other books that I think you should also be aware of, given that they also cover the actual theory of learning, aka (computational) learning theory, before diving into more specific topics, such as kernel methods.

You can find more books on learning theory here.

",2444,,2444,,1/24/2021 21:12,1/24/2021 21:12,,,,0,,,,CC BY-SA 4.0 23521,2,,21849,9/11/2020 10:41,,0,,"

First Part

We can reduce variance in off-policy importance sampling, even in the absence of discounting ($\gamma = 1$). Notice that the off-policy estimators are made up of terms like $$\rho_{t:T-1}G_t = \rho_{t:T-1} (R_{t+1} + \gamma R_{t+2} + \dots+ \gamma^{T-t-1}R_{T})$$

and consider the second term, imagining $\gamma = 1$: $$\rho_{t:T-1}R_{t+2} = \frac{\pi(A_t|S_t) \pi(A_{t+1}|S_{t+1})\dots\pi(A_{T-1}|S_{T-1})}{b(A_t|S_t) b(A_{t+1}|S_{t+1})\dots b(A_{T-1}|S_{T-1})} R_{t+2}$$ In the above equation, the terms $\pi(A_t|S_t)$, $\pi(A_{t+1}|S_{t+1})$ and $R_{t+2}$ are correlated; all the other terms are independent of each other.

Notice the very important property of expectation: $E[ab] = E[a] E[b]$ if $a$ and $b$ are independent random variables.

Now: $$ E[\frac{\pi(A_t|S_t) \pi(A_{t+1}|S_{t+1})\dots\pi(A_{T-1}|S_{T-1})}{b(A_t|S_t) b(A_{t+1}|S_{t+1})\dots b(A_{T-1}|S_{T-1})} R_{t+2}]$$ $$ = E[\frac{\pi(A_t|S_t) \pi(A_{t+1}|S_{t+1})}{b(A_t|S_t) b(A_{t+1}|S_{t+1})} R_{t+2}] E[\frac{\pi(A_{t+2}|S_{t+2})}{b(A_{t+2}|S_{t+2})}] \dots E[\frac{\pi(A_{T-1}|S_{T-1})}{b(A_{T-1}|S_{T-1})}]$$ $$ = E[\frac{\pi(A_t|S_t) \pi(A_{t+1}|S_{t+1})}{b(A_t|S_t) b(A_{t+1}|S_{t+1})} R_{t+2}] \sum_a b(a|s_{t+2}) \frac{\pi(a|s_{t+2})}{b(a|s_{t+2})}\dots\sum_a b(a|s_{T-1}) \frac{\pi(a|s_{T-1})}{b(a|s_{T-1})} $$ $$ = E[\frac{\pi(A_t|S_t) \pi(A_{t+1}|S_{t+1})}{b(A_t|S_t) b(A_{t+1}|S_{t+1})} R_{t+2}] \sum_a \pi(a|s_{t+2})\dots\sum_a \pi(a|s_{T-1})$$
$$ = E[\frac{\pi(A_t|S_t) \pi(A_{t+1}|S_{t+1})}{b(A_t|S_t) b(A_{t+1}|S_{t+1})} R_{t+2}] \cdot 1 \cdot 1 $$ $$ = E[\frac{\pi(A_t|S_t) \pi(A_{t+1}|S_{t+1})}{b(A_t|S_t) b(A_{t+1}|S_{t+1})} R_{t+2}] $$ Therefore $$ E[\rho_{t:T-1}R_{t+2}] = E[\rho_{t:t+1} R_{t+2}].$$ If we repeat this analysis for the $k$th term, we will get: $$E[\rho_{t:T-1}R_{t+k}] = E[\rho_{t:t+k-1} R_{t+k}].$$ It follows that the expectation of our original term can be written as: $$E[\rho_{t:T-1}G_{t}] = E[\tilde{G_{t}}]$$ where $$\tilde{G}_t \doteq \rho_{t:t}R_{t+1} + \gamma \rho_{t:t+1}R_{t+2} + \gamma^{2} \rho_{t:t+2}R_{t+3} + \dots + \gamma^{T-t-1} \rho_{t:T-1}R_{T}.$$ We call this idea per-reward importance sampling. It follows immediately that there is an alternative importance sampling estimate, with the same unbiased expectation as the ordinary importance sampling estimate: $$V(s) \doteq \frac{\sum_{t\in\mathcal{T}(s)} \tilde{G}_t}{|\mathcal{T}(s)|},$$ which we might expect to sometimes be of lower variance.

Second Part

The reward $R_{k+1}$ depends on the previous $\pi(a_1|s_1)$ up to $\pi(a_{k-1}|s_{k-1})$. So, you can't separate them and treat them as independent variables as you did in the aforementioned example.

",28048,,28048,,9/11/2020 10:54,9/11/2020 10:54,,,,0,,,,CC BY-SA 4.0 23523,1,,,9/11/2020 12:34,,1,96,"

Each person probably uses an app that tracks his/her position periodically and sends it to our servers. What I want is to use these data to train a model to predict the rush hours of each bus-stop on the map, so we can send extra buses to handle the predicted cumulation before it happens.

I have no experience in AI or machine learning. So, which model should I use to do this?

",40917,,2444,,9/11/2020 14:28,10/5/2022 10:06,How to train a model to predict the number of people at a certain bus stop before they cumulate in large numbers?,,2,0,,,,CC BY-SA 4.0 23524,1,,,9/11/2020 12:39,,0,68,"

I am trying to make an NN (probably with dense layers) to map a specific input to a specific output (basically sequence-to-sequence). I want the model to learn the relation between the sequences and predict the output for any other input I give it.

I have 2 files - one with the inputs and another with all the corresponding outputs and would probably use a bunch of Dense Layers with word embeddings to vectorize it into higher dimensions. However, I cannot find any good resources out there for that.

Does anyone know how to accomplish such an NN? Which architectures are best for pattern matching? Examples, links, and other resources would be very welcome. I was considering using RNNs, but found them not very good at pattern-matching tasks, so I ditched them. I would still consider them if someone can provide a plausible explanation...

",36322,,,,,12/29/2022 23:02,How to use a NN for seq2seq tasks?,,1,1,,,,CC BY-SA 4.0 23527,1,,,9/11/2020 20:56,,0,152,"

As we all know, zero-shot learning involves a model predicting classes that it has not seen. But we are given all the attributes each class might have.

Is it fair to assume that we are "aware" of all the class labels a dataset might have (including the test set)?

",40928,,,,,10/13/2020 14:03,Zero shot learning available labels in testing set,,1,1,,,,CC BY-SA 4.0 23528,1,23532,,9/12/2020 1:26,,1,73,"

This is a bit of a weird question.

I am hoping to create an online reference since I have some downtime. I know some about statistics but very little about computer science. As a result, the reference guide I am hoping to create will be very statistics oriented - even though I wish that it could be a reference for someone who wants to start from scratch and work their way to AI.

While I would love to be involved with AI, from what I have read about ML and AI, it seems like AI does not involve much statistics. (A lot of statistical theory is based on normality assumptions and analytical math, and ML seems to bypass that by not requiring strong assumptions or analytical results.) CS seems to be more relevant.

And so my question is, since my guide will mostly cover statistics, how relevant would it be for someone who wants to get into AI? If it's not relevant, then I guess I'll just make my guide for someone who wants to get into stats/data science, as opposed to someone who wants to be an AI researcher.

I guess another way to phrase my question is, as an AI researcher, when you "google" stuff, wikipedia things, or go to your notes, what subjects are you looking at and what exactly are you googling? Are you getting a refresher on how to code back propagation? Or are you getting a refresher on the pros and cons of L1 vs. L2? Do you ever look at how to implement a boosting tree or NN using a pre-existing package?

Basically, I know that what I can provide will be relevant to HS/college stats and data science students. But what really want to do is create something useful for aspiring/current AI researchers. The former is realistic, the latter is a dream. I want to see if my dream is realistic.

Thanks!

",29801,,29801,,9/12/2020 1:31,9/12/2020 12:46,"As an AI researcher, what subjects do you find yourself referring to most often?",,1,5,,9/12/2020 12:48,,CC BY-SA 4.0 23529,1,,,9/12/2020 2:54,,3,235,"

Genetic algorithms are used to solve many optimization tasks.

If I have a dataset, can I evolve it with a genetic algorithm to create an evolved version of the same dataset?

We could consider each feature of the initial dataset as a chromosome (or individual), which is then combined with other chromosomes (features) to find more features. Is this possible? Has this been done?

I would like to edit the details with an example so that it is easier to understand.

Example: In practice, cyber-security attacks evolve over time, since attackers find new ways to breach a system. The main drawback of an intrusion detection model is that it needs to be retrained every time the attacks evolve. So I was hoping that a genetic algorithm could be used on the present benchmark datasets (like NSL-KDD) to come up with a futuristic type of dataset, maybe after X generations, and to check whether a model is able to classify that generated dataset as well.

",40929,,40929,,9/13/2020 14:43,9/13/2020 14:43,Can we use genetic algorithms to evolve datasets?,,2,3,,,,CC BY-SA 4.0 23530,2,,23447,9/12/2020 5:57,,1,,"

The Pytorch docs define a fully connected ReLU network as:

torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)

Neural networks are made of neurons. Activation functions only help determine which of these neurons to fire up, meaning they have no learnable nodes themselves through which we can back-propagate gradients. A network with no learnable parameters is therefore not a neural net. So neural nets can't be composed of activation functions only.

What is meant by the quoted text in bold, because my interpretation of that text, (shown below) doesn't seem viable

Yes, what's given here is not a network that can approximate the said function $h(s)$. A two-layer ReLU network would resemble:

   h = nn.Sequential(
       nn.Linear(d_IN, H), nn.ReLU(),
       nn.Linear(H, H), nn.ReLU(),
       nn.Linear(H, d_OUT),
   )

Another way to see it is that a network must have an input and output layer, and optional hidden layers. It's not possible to use an activation function as an input layer, because then you'll have no way of configuring the number of features to represent your input data. In this context, a ReLU can't represent the features of the observation input s.

To show that activation functions have no learnable nodes in them, and that this interpretation

h = nn.Sequential(nn.ReLU(), nn.ReLU())

is not what the authors are driving at, here is a script that counts the number of parameters in a network.


import torch.nn as nn
import numpy as np

activation = nn.ReLU


def count_params(module):
    return np.sum([np.prod(x.shape) for x in module.parameters()])


one_linear = nn.Sequential(nn.Linear(32, 10), nn.Linear(10, 1))
linear_act = nn.Sequential(nn.Linear(32, 10), activation(), nn.Linear(10, 1))
act_only = nn.Sequential(activation(), activation())

t_lin = count_params(linear_act)
lin = count_params(one_linear)
act = count_params(act_only)

print(f'Linear only: {lin}, Linear + Activation: {t_lin},' +
        f'Activation only: {act}')


[Out]: Linear only: 341, Linear + Activation: 341, Activation only: 0.0

The activation-function-only module has zero learnable parameters. Likewise, an activation function adds no parameters to the fully connected layers.

Update: Links to implementations

To confirm this answer's interpretation is correct, here are links to GAIL and GAN-GCL example implementations

  1. GAIL : discriminator prediction (Forward call), discriminator architecture (The ReLU net):
  2. GAN-GCL : discriminator prediction, discriminator architecture:
",40671,,40671,,11/9/2020 15:43,11/9/2020 15:43,,,,0,,,,CC BY-SA 4.0 23532,2,,23528,9/12/2020 8:16,,1,,"

I think AI researchers mostly google new papers, because there's just a crazy amount of them published now. Sometimes people forget a new concept which was introduced in a paper they read several months ago, and they google that concept. Sometimes I forget some loss functions (and the intuition behind them) used in a specific area like computer vision, natural language processing or audio processing, like dice loss or contrastive losses. So these are more advanced things than the pros and cons of L1 and L2 losses and how to code backprop. Usually I find the answers on https://distill.pub/ or https://towardsdatascience.com/. Have a look at those; I think they represent correctly the current interests and topics for refreshment in the AI research community. From my experience I can say not much statistics is used in contemporary AI research (unless you're doing research in statistical learning theory). Sometimes I google some statistical tests to prove the results of my experiments are statistically significant, and I think that's it.

",40906,,40906,,9/12/2020 10:34,9/12/2020 10:34,,,,2,,,,CC BY-SA 4.0 23535,2,,23524,9/12/2020 13:50,,0,,"

I think you meant pattern recognition instead of pattern matching in your question, because pattern matching has nothing to do with NNs as far as I know. RNNs are the easiest architecture you could use for this task. Have a look at this post: http://karpathy.github.io/2015/05/21/rnn-effectiveness/ It's long, but it's very well written and explains why RNNs work well with sequential data. In brief, RNNs accumulate information about what was fed into them previously and store it in the hidden state vector. That information is a summary of a sequence, which helps them to predict the output sequence. RNNs can be applied to variable-length input and catch dependencies among different parts of the input (as opposed to dense layers, which can be applied to a sequence only point-wise). Here's how exactly you can apply recurrent layers (https://www.tensorflow.org/guide/keras/rnn, https://pytorch.org/docs/stable/generated/torch.nn.RNN.html); see also the sketch below. There are more advanced architectures like LSTMs. Also, you can try transformers (http://jalammar.github.io/illustrated-transformer/), which use the attention mechanism (https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html).
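
A minimal sketch of applying a recurrent layer in PyTorch (the sizes here are made up, just to show the shapes involved):

    import torch
    import torch.nn as nn

    x = torch.randn(8, 20, 32)   # 8 sequences, 20 steps, 32-dim embeddings

    rnn = nn.RNN(input_size=32, hidden_size=64, batch_first=True)
    outputs, h_n = rnn(x)        # outputs: (8, 20, 64); h_n: summary hidden state
    head = nn.Linear(64, 10)     # e.g. project to a 10-token output vocabulary
    logits = head(outputs)       # per-step predictions for a seq2seq-style task
    print(logits.shape)          # torch.Size([8, 20, 10])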

",40906,,,,,9/12/2020 13:50,,,,2,,,,CC BY-SA 4.0 23544,2,,23518,9/12/2020 16:08,,0,,"

First of all: an activation function is usually placed after a linear operation, and you can have a lot of them (maybe different ones) in your NN. That's why it would be better to say $\bf an$ activation function of the neural net and not $\bf the$ activation function. So if you meant by activation function the last operation, which is going to make your outputs non-negative, then you're right, ReLU isn't a good choice. Usually, when people need to output some positive values, they take the exponent as the last operation. I used it several times and everything was just fine.
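
For illustration, a sketch of a network whose last operation is an exponent, so the outputs are strictly positive (my own toy sizes, not the asker's exact network):

    import torch
    import torch.nn as nn

    class PositiveNet(nn.Module):
        def __init__(self, state_dim, n_actions, hidden=64):
            super().__init__()
            self.body = nn.Sequential(
                nn.Linear(state_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, n_actions),
            )

        def forward(self, s):
            # the exponential as the last operation guarantees positive outputs
            return torch.exp(self.body(s))

    net = PositiveNet(state_dim=4, n_actions=2)
    print(net(torch.randn(1, 4)))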

",40906,,,,,9/12/2020 16:08,,,,0,,,,CC BY-SA 4.0 23547,1,23548,,9/12/2020 19:01,,2,486,"

I read an article about captioning videos and I want to use solution number 4 (extract features with a CNN, pass the sequence to a separate RNN) in my own project.

But to me, it seems really strange that in this method we use the Inception model without any retraining or something like that. Every project has different requirements, and even if you use a pretrained model instead of your own, you should do some training.

And I wonder how to do this. For example, I created a project where I use a network with CNN layers and then LSTM and Dense layers. And in every epoch, there is a feed-forward pass and backpropagation through the whole network, all layers. But what if you have a CNN network to extract features and an LSTM network that takes sequences as inputs? How do you train the CNN network if there is no defined output? This network should only extract features, but the network doesn't know which features. So the question is: how do you train the CNN to extract relevant features and then pass these features to the LSTM?

",,user40943,32410,,4/24/2021 10:11,4/24/2021 10:11,Extract features with CNN and pass as sequence to RNN,,2,0,,,,CC BY-SA 4.0 23548,2,,23547,9/12/2020 19:27,,1,,"

The approach of not training the whole net, but just the latter part of it (everything starting with the LSTM in our case), can actually work. The idea is that the Inception was already pretrained on a very large dataset (ImageNet, for instance), and it's capable of extracting some useful information from it. Actually, there are different domains of images in ImageNet, and the Inception net needed to capture a vast variety of input information to classify images well. The idea is that the pretrained Inception is already capable of extracting almost everything that could possibly be useful (unless your images are something completely different from ImageNet, but that's a rare case). Then you adapt the LSTM layers and the fully connected layers to correctly process that information. Maybe you aren't going to get the perfect score with this approach, and maybe it's better to train the whole large net, including the Inception part, on the new data to lower the distributional shift - and that's what people usually do in fact - but it takes more time to train, and if you don't have enough data you won't be able to achieve results that are significantly better than those with a frozen CNN part.
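
As a rough sketch of this setup in Keras (frozen Inception as the feature extractor, only the LSTM and dense layers are trained; the sequence length and output size here are arbitrary assumptions):

    import tensorflow as tf
    from tensorflow.keras import layers, models

    cnn = tf.keras.applications.InceptionV3(include_top=False, pooling='avg')
    cnn.trainable = False  # freeze the pretrained feature extractor

    frames = layers.Input(shape=(10, 299, 299, 3))   # 10 frames per sample
    features = layers.TimeDistributed(cnn)(frames)   # per-frame features
    x = layers.LSTM(256)(features)                   # trained from scratch
    outputs = layers.Dense(1000, activation='softmax')(x)

    model = models.Model(frames, outputs)
    model.compile(optimizer='adam', loss='categorical_crossentropy')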

",40906,,,,,9/12/2020 19:27,,,,7,,,,CC BY-SA 4.0 23550,1,,,9/12/2020 20:52,,0,134,"

I am training an A3C with stacked LSTM.

During initial training, my model was giving a decent positive reward. However, after many episodes, its reward just goes to zero and stays there for a long time. Is it because of the LSTM?

Is it normal?

Should I expect it to work after the training is over or just terminate the training and increase the density of my network?

",40051,,2444,,9/12/2020 21:26,10/3/2022 7:08,Why would the reward of A3C with LSTM suddenly drop off after many episodes?,,1,0,,,,CC BY-SA 4.0 23551,1,,,9/12/2020 21:14,,3,100,"

Sorry if this is too noob question, I'm just a beginner.

I have a data set with companies' info. There are 2 kinds of features: financial (revenue and so on) and general info (like the number of employees and date of registration)

I have to predict the probability of default. And the data has gaps: about half of the companies have no financial data at all. But the general features are 100% filled.

What is the best practice for such a situation?

Will be great if you can give some example links to read.

",40944,,2444,,9/12/2020 23:21,9/12/2020 23:21,How to perform prediction when some features have missing values?,,1,1,,,,CC BY-SA 4.0 23552,2,,12020,9/12/2020 21:28,,0,,"

There is a lot of related research out there.

You can look at contextual bandit problems, which is the basis for "Monte Carlo Tree Search". Here some clever bookkeeping is used to make sure that branches that looked bad but haven't been explored recently will still get explored eventually. This results in the UCT algorithm, which then got used in combination with deep learning in AlphaGo, but you can use it without deep learning too, if that makes more sense for your particular problem.

The exploration / exploitation trade-off is crucial for this kind of problem. Your original estimates of the value of a position will be very uninformed, so they should not be used to prune the search tree too aggressively. This is exactly what UCT does, with provable theoretical guarantees.

",40573,,,,,9/12/2020 21:28,,,,0,,,,CC BY-SA 4.0 23553,2,,23551,9/12/2020 21:58,,3,,"

You should look into "missing values". This is an entire research field in itself.

First, you need to identify the type of missing values:

  1. They can be missing purely at random.
  2. Whether they are missing or not is itself a useful feature, and should be treated as a class of its own.

(Those two are the best case scenarios.)

  3. Whether they are missing or not depends on the underlying (unknown) value. For example, a thermometer might fail occasionally if the temperatures get too high. In your case, certain types of companies might be more likely to not share their information.
  4. Information might be missing specifically to mislead you, the data analyst. This is the worst possible scenario, and there is not much you can do.

So, what do you do about it? A few typical options:

  1. Throw out all the rows with missing data: we do not have enough information about these companies.
  2. Throw out all the columns with missing data: this field is not reliably measurable and we shouldn't use it.
  3. Try to guess the missing values. This can be done if the amount of missing data is small. Either you train a predictive model based on the non-missing data, or you fill in the median for that type of row, or you fill in the value of the "closest" matching row. This can be dangerous. (See the sketch after this list.)
  4. Some algorithms are OK with missing data. Check the documentation for your models and algorithms to see how they deal with missing values.
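
A minimal sketch of option 3 with scikit-learn's SimpleImputer (median imputation on toy numbers):

    import numpy as np
    from sklearn.impute import SimpleImputer

    # toy data: two financial features, np.nan marks a missing value
    X = np.array([[1200.0, np.nan],
                  [np.nan,   35.0],
                  [ 900.0,   40.0]])

    imputer = SimpleImputer(strategy='median')
    X_filled = imputer.fit_transform(X)
    print(X_filled)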
",40573,,,,,9/12/2020 21:58,,,,5,,,,CC BY-SA 4.0 23554,2,,23529,9/12/2020 22:21,,2,,"

This question raises a lot more questions. It seems like a solution looking for a problem, instead of the other way round.

  • How do you measure the fitness of a feature?
  • What would one of the "evolved datasets" mean? What does it represent?
  • What would your overall purpose be? If you just wish to generate simulated datasets, there are easier ways to do this, with more control over the various aspects of the resulting datasets.

If you want to compute a new set of features to "better" describe a given dataset, there are many approaches to this, such as PCA, ISOMAP, self-organizing maps, ... If this is the kind of thing you're thinking about, I would recommend starting there.

",40573,,,,,9/12/2020 22:21,,,,1,,,,CC BY-SA 4.0 23556,2,,23550,9/13/2020 1:03,,0,,"

The thing you're describing is not impossible for an RL model, but it's rare. It's a known issue that some RL algorithms work or don't work depending on the random seed. I implemented the same model once to play KungFuMaster-v0. It was during a university RL course, and the code seemed fine (actually, 2 people including the teacher looked at it very carefully and didn't find any bugs). I remember running it 10 times in a row, and one time out of ten it showed that nasty behavior. There were teacher's notes in the task: if the reward suddenly drops to 0 and stays there for a long time, check your code, there's a high probability of a bug. So I'd say, if your net works just fine 9 times out of 10, there are probably no bugs; otherwise, if I were you, I'd carefully check the code.

",40906,,,,,9/13/2020 1:03,,,,0,,,,CC BY-SA 4.0 23557,1,23566,,9/13/2020 1:19,,3,132,"

Suppose that we want to generate a sentence made of words according to language $L$: $$ W_1 W_2 \ldots W_n $$

Question: What is the perfect language model?

I ask about perfect because I want to know the concept fundamentally at its fullest extent. I am not interested in knowing heuristics or shortcuts that reduce the complexity of its implementation.


1. My thoughts so far

1.1. Sequential

One possible way to think about it is moving from left to right. So, first, we try to find out the value of $W_1$. To do so, we choose the specific word $w$ from the space of words $\mathcal{W}$ that's used by the language $L$. Basically: $$ w_1 = \underset{w \in \mathcal{W}}{\text{arg max }} \Pr(W_1 = w) $$

Then, we move forward to find the value of the next word $W_2$ as follows $$ w_2 = \underset{w \in \mathcal{W}}{\text{arg max }} \Pr(W_2 = w | W_1 = w_1) $$

Likewise for $W_3, \ldots, W_n$: $$ w_3 = \underset{w \in \mathcal{W}}{\text{arg max }} \Pr(W_3 = w | W_1 = w_1, W_2=w_2) $$ $$ \vdots $$ $$ w_n = \underset{w \in \mathcal{W}}{\text{arg max }} \Pr(W_n = w | W_1 = w_1, W_2=w_2, \ldots W_{n-1}=w_{n-1}) $$

But is this really perfect? I personally doubt it. I think that, while language is usually read and written from a given direction (e.g. left to right), it is not always done so, and in many cases language is read/written in a funny order, as we often do. E.g. even when I wrote this question, I jumped back and forth, then went to edit it (as I'm doing now). So I clearly didn't write it from left to right! Similarly, you, the reader; you won't really read it in a single pass from left to right, will you? You will probably read it in some funny order and go back and forth for a while until you conclude an understanding. So I personally really doubt that the sequential formalism is perfect.

1.2. Joint

Here we find all the $n$ words jointly. Of course ridiculously expensive computationally (if implemented), but our goal here is to only know what is the problem at its fullest.

Basically, we get the $n$ words as follows:

$$ (w_1, w_2, \ldots, w_n) = \underset{(w_1,w_2,\ldots,w_n) \in \mathcal{W}^n}{\text{arg max }} \Pr(W_1 = w_1, W_2=w_2, \ldots W_n=w_n) $$

This is a perfect representation of a language model in my opinion, because its answer is guaranteed to be correct. But there is this annoying aspect, which is that its word-candidate space is needlessly large!

E.g. this formalism is basically saying that the following is a candidate word sequence: $(., Hello, world, !)$, even though we know that in (say) English a sentence cannot start with a dot ".".

1.3. Joint but slightly smarter

This is very similar to 1.2 Joint, except that it deletes the single bag of all words $\mathcal{W}$, and instead introduces several bags $\mathcal{W}_1, \mathcal{W}_2, \ldots, \mathcal{W}_n$, which work as follows:

  • $\mathcal{W}_1$ is a bag that contains words that can only appear as 1st words.
  • $\mathcal{W}_2$ is a bag that contains words that can only appear as 2nd words.
  • $\vdots$
  • $\mathcal{W}_n$ is a bag that contains words that can only appear as $n$th words.

This way, we will avoid the stupid candidates that 1.2. Joint evaluated by following this: $$ (w_1, w_2, \ldots, w_n) = \underset{w_1 \in \mathcal{W}_1,\, w_2 \in \mathcal{W}_2,\, \ldots,\, w_n \in \mathcal{W}_n}{\text{arg max }} \Pr(W_1 = w_1, W_2=w_2, \ldots W_n=w_n) $$

This will also guarantee being a perfect representation of a language model, yet its candidate space is smaller than the one in 1.2. Joint.

1.4. Joint but fully smart

Here is where I'm stuck!

Question rephrase (in case it helps): Is there any formalism that gives the perfect correctness of 1.2. and 1.3., while also being fully smart in that its candidate space is smallest?

",2361,,2361,,9/14/2020 14:21,9/14/2020 14:21,"Fundamentally, what is a perfect language model?",,1,10,,,,CC BY-SA 4.0 23558,2,,23529,9/13/2020 10:59,,2,,"

The paper Evolutionary Dataset Optimisation: learning algorithm quality through evolution (2019), by Henry Wilde et al., proposes a method to generate datasets with a genetic algorithm. Their goal is to generate data for which a particular algorithm performs well, in terms of a certain metric, so that to get more insights about this algorithm and why it performs well. The individuals of the population are datasets (so not features of the dataset!), which can be combined with a crossover operator or mutated. The details are explained in section 2 (page 4) and they also provide nice diagrams that summarise their descriptions.

The authors evaluate their approach on k-means (section 3, page 12) and they use the k-means objective function as the fitness function of the genetic algorithm.

They also developed a library edo that is freely available, so you can start to play with their approach.

",2444,,2444,,9/13/2020 11:07,9/13/2020 11:07,,,,2,,,,CC BY-SA 4.0 23559,1,,,9/13/2020 11:56,,2,61,"

I'm about to write a non-player character (NPC). I wonder how much the AI should know about the game's world. So, my question isn't about the amount of training data the AI has to collect. I'm interested in how much the AI is allowed to know about what's going on in the game's world. For example, can (shall) it have knowledge about the build queue of the player?

To provide more details: while a human plays a game against another human, not all information of what the opponent is doing is available (e.g. the queue of the units your opponent is building). This could give you an advantage (so that you can prepare for a rush, when he's building many cheap units). Theoretically, an NPC could access and make use of that knowledge and, in addition, spare resources for scouting/spying/exploring.

But is this the way of constructing an NPC AI? Or should this data also be restricted? I have never done anything like this before.

I don't know where else to ask or what more information I could provide. So, if something in my question is unclear or unfit, please let me know what exactly.

",40950,,2444,,9/13/2020 13:00,11/17/2020 6:03,How much can/should the non-player character know about the game's world?,,1,1,,,,CC BY-SA 4.0 23560,1,,,9/13/2020 12:45,,1,51,"

I am trying to create a language generation model to generate very short sentences/words, like a rapper name generator. The sentences in my dataset are anywhere between 1 word and 15 words (3-155 characters). So far, I have tried LSTMs with 1-3 layers and inputs as subwords and characters. The results so far are not that great: I am getting ~0.5 cross-entropy loss and ~50% accuracy.

My inputs are like a sliding window with prepadding, (eg. (for a batch) Inputs = [[0,0,0,1], [0,0,1,2]...[n-4,..n-1]], outputs=[[0,0,1,2], ...[n-3,n-2,n-1,n]]) where 0 is padding, 1 is the start token and n is the end token. Outputs are 1 hot encoded.

The model is an embedding layer, a few LSTM and dropout layers, followed by a time-distributed dense layer and then a dense layer.

My doubt is: is accuracy the right metric? I am using it because, at the end, I am making a classification for 4 output values. Another one is: would a transformer be suitable for this, since I want to generate small sentences (which are nouns), and models like GPT/BERT are more suited to capturing dependencies between long sentences?

",27875,,27875,,9/13/2020 15:27,9/13/2020 15:47,Appropriate metric and approach for natural language generation for small sentences,,1,3,,,,CC BY-SA 4.0 23561,1,,,9/13/2020 12:57,,19,3321,"

As we all know, "Hello World" is usually the first program that any programmer learns/implements in any language/framework.

As Aurélien Géron mentioned in his book that MNIST is often called the Hello World of Machine Learning, is there any "Hello World" problem of Reinforcement Learning?

A few candidates that I could think of are the multi-armed bandit problem and the CartPole environment.

",40485,,2444,,9/13/2020 13:08,9/15/2020 6:32,"What is the ""Hello World"" problem of Reinforcement Learning?",,2,1,,,,CC BY-SA 4.0 23562,2,,23527,9/13/2020 13:18,,1,,"

The formal definition of zero-shot learning is that given labeled training instances $D_{tr}$ belonging to the seen classes $S$, the aim is to learn a classifier $f^u(·):X→U$ that can classify testing instances $X_{te}$ (i.e., to predict $Y_{te}$) belonging to the unseen classes $U$.

The general idea of zero-shot learning is to transfer the knowledge contained in the training instances $D_{tr}$ to the task of testing instance classification. So it is considered a transfer learning method, and, more specifically, heterogeneous transfer learning with different label spaces.

However, since no labeled instances belonging to the unseen classes are available, to solve the zero-shot learning problem, some auxiliary information is necessary. Such auxiliary information should contain information about all of the unseen classes. This auxiliary information usually contains some semantic information about the unseen classes, and its representation belongs to a space that is often referred as $semantic$ $space$. In the semantic space, each class has a corresponding vector representation, which is referred to as the class prototype or $prototype$ for short.

So, you should be aware of classes of your problem, in order to construct the semantic space and their class prototypes. I also suggest you read the survey paper "A survey of zero-shot learning: settings, methods and applications".

",36055,,36055,,9/13/2020 13:42,9/13/2020 13:42,,,,0,,,,CC BY-SA 4.0 23563,2,,23561,9/13/2020 13:28,,10,,"

MNIST (along with CIFAR) may be the "Hello World" of supervised learning for image classification, but it is definitely not the "Hello World" of all machine learning techniques, given that RL is also part of ML and MNIST is definitely not the "Hello World" of RL.

I don't think there is a single "Hello World" problem for RL. However, if you are looking for simple problems (or environments) that are usually used as baselines to assess the quality of RL agents, then I would say that the simple grid worlds where you need to move from one place to the other, the CartPole, MountainCar, Pendulum or other environments listed here are often used.
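
For reference, a minimal interaction loop with one of these environments through the (pre-0.26) OpenAI Gym API, using random actions only:

    import gym

    env = gym.make("CartPole-v1")
    obs = env.reset()
    done, total_reward = False, 0.0
    while not done:
        action = env.action_space.sample()          # random policy, no learning
        obs, reward, done, info = env.step(action)
        total_reward += reward
    print(total_reward)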

The environment that you choose to train and test your RL agent depends on your goals. For example, if you designed an algorithm that is supposed to deal with continuous action spaces, then an environment where you can take only a discrete number of actions may not be a good option.

The mentioned environments are very simple (i.e. toy problems). In my opinion, we need more serious environments that can show the applicability of RL to other areas other than (relatively simple) games.

",2444,,2444,,9/13/2020 20:26,9/13/2020 20:26,,,,0,,,,CC BY-SA 4.0 23565,2,,23560,9/13/2020 15:07,,1,,"

I wouldn't say accuracy on next-word prediction is a good global metric. It depends on the length of the sentences. It's always difficult to predict the first word, because you don't have any context, and with at least one word of context it's easier. As long as your error rate is averaged over all predicted words, the accuracy could be higher if the sentences are longer. So the value of 0.5 doesn't tell much. The fact that you improved accuracy by 0.1, for instance, on the same dataset means one method is better than another. Also, people measure perplexity, and it's more sensitive to small changes of the loss than accuracy. That's why I suggest you measure perplexity. This metric also depends on the lengths of sentences, which is why you should only compare 2 models trained on the same dataset. As for GPT and the Transformer, they were invented to solve one of the issues of LSTMs (they're not capable of memorizing a very large context), which is why transformers would be better for long sentences, but they would also be better for short ones, I think, due to their attention mechanism, which has many useful properties.

",40906,,40906,,9/13/2020 15:47,9/13/2020 15:47,,,,2,,,,CC BY-SA 4.0 23566,2,,23557,9/13/2020 18:37,,1,,"

One of your hypotheses is very close to the truth: it's 1.2.

So, a language model measures the probability of a given sentence in a language $L$. The sentences can have any length, and the sum of the probabilities of all the sentences in the language $L$ is 1. It's very difficult to compute, thus people use some simplifications, for example assuming that if words are located far enough from each other, then the occurrence of the current word doesn't depend on a word that occurred far away in the past. Each sentence is a sequence $w_1, \dots, w_n$, and a language model computes the probability of the sequence $p([w_1, \dots w_n])$ (it's not a joint distribution yet). It can be decomposed into a joint distribution with some special tokens added, $p(BOS, w_1, \dots w_n, EOS)$. BOS is the beginning-of-sentence token and EOS is the end-of-sentence token.

This joint distribution can then be decomposed using the chain rule: $p(BOS, w_1, \dots w_n, EOS) = p(BOS) p(w_1 | BOS) \Big[ \prod\limits_{i=2}^n p(w_i | BOS, w_1, \dots, w_{i-1}) \Big] p(EOS | BOS, w_1, \dots, w_n)$. There are 2 types of probabilities that are usually modelled differently: the prior probability $p(BOS)$, which is always equal to 1, because you always have BOS as the first token in the augmented sequence; and the conditional probabilities, which can be computed as follows: $p(w_i | BOS, w_1, \dots, w_{i-1}) = \frac{c(BOS, w_1, \dots, w_{i-1}, w_i)}{\sum_{w \in W} c(BOS, w_1, \dots, w_{i-1}, w)}$, where $c$ is a counter function that measures how many times a given sequence occurred in the dataset you specified to train your model. You can notice it's a maximum likelihood estimate of the unknown conditional probabilities.

Obviously, if you're using a certain dataset, you compute a model of that dataset, not of the language, but that's the way to approximate the true probabilities of sentences in a language. The EOS token is needed to distinguish the probability of a not-yet-finished sequence from that of one which has finished, because if you take those counters from above and forget to add EOS at the end of all sentences in your dataset, you'll get probabilities that don't sum to 1 (which is bad).
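
A toy sketch of these counts for a bigram model (conditioning only on the previous word is my simplification here, not the full-history model described above):

    from collections import Counter

    corpus = [["hello", "world"], ["hello", "there"]]  # tiny toy dataset

    bigrams, unigrams = Counter(), Counter()
    for sent in corpus:
        tokens = ["<BOS>"] + sent + ["<EOS>"]
        unigrams.update(tokens[:-1])
        bigrams.update(zip(tokens[:-1], tokens[1:]))

    def p(word, prev):
        # maximum likelihood estimate of p(word | prev)
        return bigrams[(prev, word)] / unigrams[prev]

    print(p("world", "hello"))  # 0.5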

",40906,,,,,9/13/2020 18:37,,,,9,,,,CC BY-SA 4.0 23567,1,24529,,9/13/2020 20:35,,4,640,"

In general, is continuous learning possible with a deep convolutional neural network, without changing its topology?

In my case, I want to use a convolutional neural network as a classifier of heartbeat types. The ECG signal is split, and a color image is created using feature extraction. These photos (the inputs) are fed into a deep CNN, but they must be labeled by someone first.

Are there ways to implement continuous learning in a deep neural network for image recognition? Does such an implementation make sense if the labels have to be specially prepared in advance?

",37928,,2444,,11/10/2020 10:39,11/10/2020 12:48,"Is continuous learning possible with a deep convolutional neural network, without changing its topology?",,1,2,,,,CC BY-SA 4.0 23568,1,,,9/13/2020 20:58,,1,123,"

I was watching the video Constraint Satisfaction: the AC-3 algorithm, and I tried to solve this question:

Given the variables A, B, C and D, with domain {1, 2, 3, 4} in each of them and restrictions A> B, B = C and D ≠ A, use the AC algorithm.

But the teacher told me that my answer below is wrong!

He gave me a tip: Domain D will not be changed!

Below is my answer, step by step. If someone can help me find the error, I appreciate it!

To solve this exercise, it is first necessary to organize the data in order to separate what is the domain, agenda and arc.

Soon after, we will analyze the first item on the agenda “A> B” with domain A, in order to eliminate unnecessary elements from the domain.

Analyze domain B with the agenda item “B <A”

Analyze domain B with the agenda item “B = C” and add the constraint “A> B”

Analyze domain D with the agenda item “D ≠ A” and add the constraint “B <A”

Analyze domain A with the agenda item “A ≠ D”

Analyze domain A with the agenda item “A> B”

Analyze domain B with the agenda item “B = C”

Analyze domain B with the agenda item “B <A”

Result

",40555,,2444,,6/26/2022 9:26,6/26/2022 9:26,"What's wrong with my answer to this constraint satisfaction problem, which needs to be solved the AC-3 algorithm?",,1,0,,,,CC BY-SA 4.0 23570,1,,,9/14/2020 2:24,,2,344,"

I am confused about the training part in AttnGan.

If you observe page 3, there are two types of losses for the generator network: one involving the Deep Attentional Multimodal Similarity Model (DAMSM) loss $(L_{DAMSM})$, and the others for the individual generators $(L_{G_i})$ for $i= 1, 2, 3$.

My doubt is: if each generator has its own loss function that is useful in training, what is the purpose in using $L_G$, i.e., with DAMSM loss function? Is my assumption wrong?

",18758,,18758,,4/5/2021 6:37,4/5/2021 6:37,What is the purpose of the DAMSM loss for the generators in AttnGAN?,,0,0,,,,CC BY-SA 4.0 23571,1,34467,,9/14/2020 3:29,,1,151,"

I am constructing a convolutional variational autoencoder for images, starting out with mnist digits. Typically I would specify convolutional layers in the following way:

input_img = layers.Input(shape=(28,28,1))
conv1 = keras.layers.Conv2D(32, (3,3), strides=2, padding='same', activation='relu')(input_img)
conv2 = keras.layers.Conv2D(64, (3,3), strides=2, padding='same', activation='relu')(conv1) 
...

However, I would also like to construct a convolutional filter/kernel that is fixed BUT dependent on some content related to the input, which we can call an auxiliary label. This could be a class label or some other piece of relevant information corresponding to the input. For example, for MNIST I can use the class label as auxiliary information and map the digit to a (3,3) kernel and essentially generate a distinct kernel for each digit. This specific filter/kernel is not learned through the network so it is fixed, but it is class dependent. This filter will then be concatenated with the traditional convolutional filters shown above.

input_img = layers.Input(shape=(28,28,1))
conv1 = keras.layers.Conv2D(32, (3,3), strides=2, padding='same', activation='relu')(input_img)

# TODO: add a filter/kernel that is fixed (not learned by model) but is class label specific
# Not sure how to implement this?
# auxiliary_conv = keras.layers.Conv2D(1, (3,3), strides=2, padding='same', activation='relu')(input_img)

I know there are kernel initializers to specify initial weights https://keras.io/api/layers/initializers/, but I'm not sure if this is relevant and if so, how to make this work with a class specific initialization.

In summary, I want a portion of the model's weights to be input content dependent so that some of the trained model's weights vary based on the auxiliary information such as class label, instead of being completely fixed regardless of the input. Is this even possible to achieve in Keras/Tensorflow? I would appreciate any suggestions or examples to get started with implementation.

",40781,,,,,2/8/2022 17:44,How to construct input dependent convolutional filter?,,2,1,,,,CC BY-SA 4.0 23572,1,,,9/14/2020 3:52,,1,50,"

Let's say I have a primary dataset whose secondary dataset has hundreds of records to match and group, like a one-to-many relationship.

I'm new to this world of AI, but my problem is that many child groups contain the same elements, or even different combinations that result in the parent data. In this case, more than avoiding duplication, I want to get those duplications and in some way add up the data.

This is an example of what secondary data can look like and what I want to get from grouping it.

Parent data

  ID        FIELD1       FIELD2 FIELD3  FIELD4      FIELD5
  90148001  BLABLA       40     0       35896.89479 35896.89479

Child data

  ID        FIELD1       FIELD2 FIELD3  FIELD4      FIELD5
* 90148001  BLABLA       1      1770    1769.572665 1769.572665
* 90148001  DESCRIPTION2 1      13146   13146.45284 13146.45284
* 90148001  BLABLA       1      2176    2176.435074 2176.435074
* 90148001  BLABLA       1      2306    2305.716285 2305.716285
* 90148001  BLABLA       1      2531    2531.271196 2531.271196
* 90148001  BLABLA       1      1147    1146.803622 1146.803622
* 90148001  BLABLA       1      1991    1990.613246 1990.613246
* 90148001  BLABLA       1      3641    3641.394446 3641.394446
* 90148001  BLABLA       1      2471    2470.8253   2470.8253
* 90148001  BLABLA       1      2247    2246.984815 2246.984815
* 90148001  BLABLA       1      2471    2470.8253   2470.8253

Could a neural network be able to process, aggregate, and group those quantities?

",25333,,25333,,9/14/2020 15:34,9/14/2020 15:34,Could the neural network automatically calculate and get different one-to-many quantities relative to their parent quantity?,,0,6,,,,CC BY-SA 4.0 23573,1,23578,,9/14/2020 6:29,,6,381,"

As a followup to this question, I'm interested in what the typical "Hello World" problem (first easy example problem) is for unsupervised learning.

A quick Google search didn't find any obvious answers for me.

",40968,,,,,9/14/2020 9:48,What is the “Hello World” problem of Unsupervised Learning?,,1,2,,,,CC BY-SA 4.0 23574,1,23583,,9/14/2020 7:18,,1,90,"

Taken from section 2.1 in the article:

We consider the standard reinforcement learning formalism consisting of an agent interacting with an environment. To simplify the exposition we assume that the environment is fully observable. An environment is described by a set of states $S$, a set of actions $A$, a distribution of initial states $p(s_0)$, a reward function $r : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$, transition probabilities $p(s_{t+1} \mid s_t, a_t)$, and a discount factor $\gamma \in [0, 1]$.*

How should one interpret the maths behind it?

",40971,,2444,,12/2/2020 0:26,12/2/2020 0:26,"What does $r : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ mean in the article Hindsight Experience Replay, section 2.1?",,1,0,,,,CC BY-SA 4.0 23575,1,,,9/14/2020 7:20,,2,72,"

Model-based RL attempts to learn a function $f(s_{t+1}|s_t, a_t)$ representing the environment transitions, otherwise known as a model of the system. I see that linear functions are still being used in model-based RL, such as in robotic manipulation to learn system dynamics, and they can work quite effectively. (Here, I mean in learning the model, not as an optimization method for the controller selecting the best actions.)

In model-based RL, are there situations where learning a linear model (such as one based on a Lyapunov function) would be better suited than using a neural network, or are those example problems simply framed to use linear models when addressed with model-based RL?

",40671,,,,,6/11/2021 15:03,Are linear approximators better suited to some tasks compared to complex neural net functions?,,1,1,,,,CC BY-SA 4.0 23577,2,,23561,9/14/2020 8:56,,2,,"

While there's no simple Hello World problem of RL, if your aim is to understand the basic working of Reinforcement Learning and see it at play while using as few moving parts as possible, a simple suggestion would be using Tabular Q-Learning in a toy environment (like your suggested Cart-Pole Env).

Here's the reasoning behind this suggestion

Let's say we interpret labelling MNIST as the "Hello World" of Supervised Learning to mean something that shows the basic steps of doing Supervised Learning: create a model, load the data, then train.

If that interpretation is not far off, we can say a simple introductory problem for Reinforcement Learning (RL) should focus on easily demonstrating a working Markov Decision Process (MDP), which is the backbone of the RL decision-making process. As such, this minimal working example would involve observing the world and selecting an action, as shown in this loop:

This picture is missing two important steps in an RL algorithm learning loop:

  1. Estimating the rewards or Fitting the model
  2. Improving how you select actions. (Updating your policy)

How we decide to update the policy, or fit the model, is what makes the difference between RL algorithms most of the time.

So a suggested first problem would be one that helps you see the MDP in action, while keeping steps 1 and 2 simple enough that you understand how the agent learns. Tabular Q-Learning seems clear enough for this because it uses a Q-table, represented as a 2D array, to do the two steps. This should not be taken to mean that Q-learning is the "Hello World" RL algorithm just because it is relatively easy to understand :)

You will be unable to use its tabular version anywhere other than in a toy environment though, typically Frozen-Lake or CartPole. An improvement would be using a neural network instead of a table to estimate Q values.
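
As a rough illustration, a minimal tabular Q-learning loop might look like the sketch below. It assumes the classic OpenAI Gym API and hyperparameters of my own choosing, so treat it as a starting point rather than a reference implementation:

import gym
import numpy as np

env = gym.make("FrozenLake-v0")
Q = np.zeros((env.observation_space.n, env.action_space.n))  # the Q-table (2D array)
alpha, gamma, eps = 0.1, 0.99, 0.1  # learning rate, discount, exploration rate

for episode in range(5000):
    s = env.reset()
    done = False
    while not done:
        # Select an action (epsilon-greedy exploration-exploitation).
        a = env.action_space.sample() if np.random.rand() < eps else np.argmax(Q[s])
        s_next, r, done, _ = env.step(a)
        # Update the policy: move Q(s, a) towards the bootstrapped target.
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) * (not done) - Q[s, a])
        s = s_next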

Here are a few useful resources:

  1. Q-Learning with Tables (Guide)
  2. Q-learning jupyter notebook (Code ~25 lines)
  3. Q-Learning with Frozen-Lake and Taxi (Code)
  4. Reinforcement Learning with Q-Learning (Guide)

A multi-armed bandit would also be great in introducing you to exploration-exploitation trade-off (which Q-learning does too), though it wouldn't be considered a full RL algorithm since it has no context.

",40671,,40671,,9/15/2020 6:32,9/15/2020 6:32,,,,3,,,,CC BY-SA 4.0 23578,2,,23573,9/14/2020 9:48,,5,,"

I disagree with the premise that MNIST is the "hello world" of supervised learning. It is definitely, though, the "hello world" of image classification, which is a very specific sub-field of supervised learning.

I'd consider the Iris dataset a better candidate for the "hello world" of supervised learning, with other close candidates such as the Wine, Wisconsin breast cancer or Pima Indians datasets. However, as an even simpler alternative, a lot of people prefer generating their own 2-dimensional datasets so that they can more intuitively understand what the different algorithms are doing. An example of this is TensorFlow playground.

Equivalently, in unsupervised learning there are a lot of different tasks. I personally think that clustering is probably the task that is easiest for people to understand and, as such, the most common intro to unsupervised learning. Here, as well, there are two options:

  • Using an already established dataset, e.g. Iris (without the labels).
  • Generating your own synthetic 2-dimensional data, to better understand how the algorithms work. An example is this; a minimal code sketch follows this list.
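
As a minimal sketch of that second option (the dataset parameters and the number of clusters are arbitrary choices):

from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

# Generate a synthetic 2-dimensional dataset with 3 well-separated groups.
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.8, random_state=0)

# Cluster it and visualize the result.
labels = KMeans(n_clusters=3, random_state=0).fit_predict(X)
plt.scatter(X[:, 0], X[:, 1], c=labels)
plt.show()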
",26652,,,,,9/14/2020 9:48,,,,0,,,,CC BY-SA 4.0 23579,1,,,9/14/2020 10:12,,0,60,"

I'm writing a General Game Playing (GGP) AI in python. I'd like to test it on some GGP games. So are there any python implementations of GGP games?

At http://games.ggp.org/base I found games written in Game Description Language (GDL). How can I use them in Python, if it is possible to do so?

",40975,,40975,,9/14/2020 10:26,9/14/2020 10:26,Are there any python implementations of GGP games or how to use game logic written in GDL in python?,,0,6,,,,CC BY-SA 4.0 23581,2,,22581,9/14/2020 12:23,,2,,"

The main point in GPT-3, and already in GPT-2, was the observation that performance steadily increases with increasing model size (as seen in Figure 1.2 of your linked paper). So it seems that, while all the progress made in NLP was definitely useful, it is also important to simply scale up the model size.

This may not seem like a surprising point, but it actually kind of is. Normally, performance would saturate, or at least the gains would taper off, but this is not the case! So the main innovation may not be that big, and it is somewhat brute-force, but the point still stands: bigger models are better.

Another point to mention is the way they did the training. Such a large model needs some tricks to actually be trained (and quickly at that). You also want to make use of multiple GPUs for parallel training. This means they also had to develop new infrastructure for training.

Why exactly it is hailed as a huge innovation may only be down to some Twitter demonstrations; there are no real sources on this as far as I know, especially because the model is not openly available.

",38328,,,,,9/14/2020 12:23,,,,0,,,,CC BY-SA 4.0 23583,2,,23574,9/14/2020 13:14,,3,,"

This answer assumes that you only have a problem with this notation from the article:

$r : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$

This is a standard notation, used in many disciplines, for defining a function and its input and output domains. It is a bit like the method signature for the function - it does not fully define it, but does enough to show how it can interact with other expressions.

All functions can be thought of as maps between the input domain and output domain. You provide an input value, and it returns an output value. The values can be arbitrary mathematical objects. To show what kind of objects the inputs and outputs are allowed to be, the notation for sets is used.

Importantly the symbol $\mathbb{R}$ at the end does not refer to the set of possible rewards in the environment (although it is a reward function, and that will be its output), but the set of all real numbers, because a reward is always a real number*.

As a concrete example, if you had the function $f(x) = x^2 - 2x + 7$ defined for a real number $x$, then its equivalent notation might be $f : \mathbb{R} \rightarrow \mathbb{R}$. If you allowed $x$ to be complex then it would be $f : \mathbb{C} \rightarrow \mathbb{C}$, because $\mathbb{C}$ is the standard symbol for the set of all complex numbers.

So now we can break down the notation $r : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$

$r$

The function is called $r$

$:$

It has an input domain of . . .

$\mathcal{S} \times \mathcal{A}$

The cartesian product of the set of all possible states $\mathcal{S}$ and the set of possible actions $\mathcal{A}$.

That is much the same as saying the function has a signature $r(s, a)$ where $s \in \mathcal{S}$ and $a \in \mathcal{A}$

$\rightarrow$

It has an output domain of . . .

$\mathbb{R}$

any single real number.
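
To make this concrete in code, such a signature simply corresponds to a function that takes a (state, action) pair and returns one real number. A tiny illustrative sketch (the grid-world states and reward values here are completely made up):

def r(state: tuple, action: str) -> float:
    # Reward function with signature S x A -> R: 1.0 at the goal cell, -0.1 elsewhere.
    goal = (3, 3)
    return 1.0 if state == goal else -0.1

print(r((3, 3), "left"))   # 1.0
print(r((0, 0), "up"))     # -0.1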


* This choice (of declaring the more general $\mathbb{R}$ instead of the specific $\mathcal{R}$) is made partly because operators like $+$ and $\times$ are well defined for real numbers. This is a useful thing to assert about the behaviour of the reward function output when defining how value functions work, for instance. Of course you could be more specific, defining $\mathcal{R}$ as some subset of $\mathbb{R}$; that would be a correct and more precise definition, but it is not needed for general theory in reinforcement learning. The less precise definition is fine for nearly all purposes.

",1847,,1847,,9/14/2020 16:02,9/14/2020 16:02,,,,0,,,,CC BY-SA 4.0 23584,2,,23575,9/14/2020 13:21,,1,,"

This is just a case of supervised learning. You are trying to predict $s_{t+1}$ given $s_t$ and $a_t$, so the answer to your question depends on how complex your state dynamics are.

For example, if the state space is really complex, e.g. if your state space is an image and you want to predict the next image given the current image and an action, then linear methods are unlikely to work well.
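
To illustrate the point, fitting a model $f(s_t, a_t) \approx s_{t+1}$ with a linear method is just a regression problem. A minimal sketch (the dynamics below are invented purely for illustration):

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
S = rng.normal(size=(1000, 3))          # states s_t
A = rng.normal(size=(1000, 1))          # actions a_t
# Invented near-linear dynamics: s_{t+1} = A_d s_t + B_d a_t + noise
S_next = S @ np.diag([0.9, 0.8, 0.95]) + 0.1 * A + 0.01 * rng.normal(size=(1000, 3))

model = LinearRegression().fit(np.hstack([S, A]), S_next)
print(model.score(np.hstack([S, A]), S_next))  # close to 1.0 because the dynamics are (almost) linear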

",36821,,,,,9/14/2020 13:21,,,,2,,,,CC BY-SA 4.0 23585,1,,,9/14/2020 13:25,,2,100,"

I'm working on a project compiling various versions of the Bible into a dataset. For the most part, versions separate verses discretely. In some versions, however, verses are combined: instead of verse 16, the marker will say 16-18. Given that I have a lot of other versions that separate them discretely (about 30 versions that could act as a training set), I wonder if I can train an NLP model to separate those combined verses into discrete verses. I'm fairly new at deep learning, having done a few toy projects, and I wonder how to think about this problem. What kind of problem is it? I think it might be similar to auto-punctuation problems, and it seems the options there are seq2seq and classifier. This makes more sense to me as a classification problem, but maybe my inexperience is what drives me in that direction. Can people suggest ways to think about this problem and resources I might use?

In answer to questions in the comment, I am dealing only with text, not images. An example might be like this:

Genesis 2, New Revised Standard Version:

5 when no plant of the field was yet in the earth and no herb of the field had yet sprung up—for the Lord God had not caused it to rain upon the earth, and there was no one to till the ground; 6 but a stream would rise from the earth, and water the whole face of the ground— 7 then the Lord God formed man from the dust of the ground, and breathed into his nostrils the breath of life; and the man became a living being.

Genesis 2, The message version:

5-7 At the time God made Earth and Heaven, before any grasses or shrubs had sprouted from the ground—God hadn’t yet sent rain on Earth, nor was there anyone around to work the ground (the whole Earth was watered by underground springs)—God formed Man out of dirt from the ground and blew into his nostrils the breath of life. The Man came alive—a living soul!

The goal then would be to divide the message version into discrete verses in the way that the NRSV is. Certainly, a part of the guide would be that a verse always ends in some kind of punctuation, though while necessary it is not sufficient to assign a distinct verse.

",40982,,40982,,9/15/2020 12:36,9/15/2020 12:36,NLP Bible verse division problem: Whats the best model/method?,,0,3,,,,CC BY-SA 4.0 23590,1,23591,,9/15/2020 1:22,,1,113,"

I have roughly 30,000 images of two categories, which are 'crops' and 'weeds.' An example of what I have can be found below:

The goal is to use my training images to detect weeds among crops, given an orthomosaic GIS image of a given field. I guess you could say that I'm trying to detect certain objects in the field.

As I'm new to deep learning, how would one go about generating training labels for this task? Can I just label the entire photo as a 'weed' using some type of text file, or do I actually have to draw bounding boxes (around weeds) on each image that will be used for training? If so, is there an easier way than going through all 30,000 of my images?

I'm very new to this, so any specific details would really help a lot!

",32750,,,,,9/24/2020 6:08,How do I label images for deep learning classification?,,2,0,,,,CC BY-SA 4.0 23591,2,,23590,9/15/2020 1:41,,2,,"

If each photo is intended to show either weeds or crops, you should give it one label. If your task is different, i.e. you also try to localize the weeds or crops in the image, then you need to label accordingly. My understanding is that you are trying to do the first case; therefore, there should be one label for each image.

",40957,,,,,9/15/2020 1:41,,,,3,,,,CC BY-SA 4.0 23592,2,,23590,9/15/2020 1:45,,2,,"

This is really a semantic segmentation problem if OP wants to pinpoint the weeds.

If OP wants such a segmentation, they will need to hand-segment every single picture.

",32390,,,,,9/15/2020 1:45,,,,1,,,,CC BY-SA 4.0 23593,2,,23523,9/15/2020 1:53,,0,,"

Maybe you can use a recurrent neural network on saved data to train a predictive model based on past data.

",40957,,,,,9/15/2020 1:53,,,,1,,,,CC BY-SA 4.0 23594,2,,23523,9/15/2020 2:36,,0,,"

H2O's AutoML is the thing that you are looking for; believe me, it will make your life super easy.

So how it generally works is:

  • You have data and want to make sense out of it by making a prediction/classification or a whole array of other things; in your case, prediction.
  • Suppose there are 50 prediction algorithms out there; it's highly unlikely that an individual knows all of them. That's where AutoML comes into the picture.
  • You give AutoML some part of your processed data and tell it to find the best algorithm that it thinks you should use on this data for this type of prediction. See the usage of the AutoML API in the docs and videos on YouTube; a minimal usage sketch is also given after this list.
  • It then gives you a list of the algorithms that AutoML thinks are best, based on the loss function that you specify. There are many other parameters that you can specify in the AutoML API.
  • Pick the top 1-2 and tune the hyperparameters of the algorithm.
  • That's it.
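
As mentioned in the list, a minimal usage sketch with H2O's Python API looks roughly like this (the file path and column name are placeholders; see the official H2O AutoML docs for the full set of options):

import h2o
from h2o.automl import H2OAutoML

h2o.init()
train = h2o.import_file("your_training_data.csv")   # placeholder path
y = "target"                                         # placeholder response column
x = [c for c in train.columns if c != y]

aml = H2OAutoML(max_models=20, seed=1)
aml.train(x=x, y=y, training_frame=train)

print(aml.leaderboard)   # ranked list of models; aml.leader is the best one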

Even then, if you are not happy with the performance of your system, try out ensemble learning before you jump to the power of neural nets, which comes with lots of complexity and performance issues.

",40485,,40485,,9/15/2020 2:44,9/15/2020 2:44,,,,0,,,,CC BY-SA 4.0 23595,2,,23571,9/15/2020 4:18,,0,,"

I'm not a TensorFlow expert, but I may be able to offer some conceptual advice. Since you do not care to learn the filter, but instead want to fix a discrete set of possible values for a discrete set of cases, you can use plain tensor operations (i.e. convolutions) rather than neural network layer operations. Essentially, in framework-agnostic pseudocode this would look like:

# layers with learned parameters
output1 = layers1(input)

# apply unlearned but changeable layer convolutions
kernel_val = kernel_val_selection_function(output1)
output2 = convolve_2D(output1,kernel_val)

# more layers with learned parameters
output3 = layers3(output2)

...

The function graph will treat kernel_val as a constant for purposes of backpropagation, so as long as your convolution operations are done within the framework used to create the function graph (i.e. tensorflow) you shouldn't have any problems with backprop.
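
For instance, here is one way this could look in TensorFlow, where a fixed bank of kernels is indexed by a per-example class id (the kernel values, shapes and selection rule are illustrative assumptions, not a prescribed recipe):

import numpy as np
import tensorflow as tf

# Two fixed 3x3 single-channel kernels, one per class (values are made up).
k0 = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=np.float32)  # vertical edges
k1 = k0.T                                                              # horizontal edges
kernel_bank = tf.constant(np.stack([k0, k1])[..., None, None])         # shape (2, 3, 3, 1, 1)

def class_conditional_conv(images, class_ids):
    # Convolve each image with the fixed kernel selected by its class id.
    def conv_one(args):
        image, cid = args
        kernel = kernel_bank[cid]                   # treated as a constant by backprop
        return tf.nn.conv2d(image[None], kernel, strides=1, padding="SAME")[0]
    return tf.map_fn(conv_one, (images, class_ids), fn_output_signature=tf.float32)

images = tf.random.normal([2, 8, 8, 1])             # batch of two 8x8 grayscale images
labels = tf.constant([0, 1])
print(class_conditional_conv(images, labels).shape) # (2, 8, 8, 1)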

",33839,,,,,9/15/2020 4:18,,,,0,,,,CC BY-SA 4.0 23596,2,,23349,9/15/2020 4:52,,1,,"

AlphaDogfight - a programme from the Defense Advanced Research Projects Agency (DARPA) that pitted computers using F-16 flight simulators against one another, and later went on to defeat the Air Force's top F-16 fighter pilots.

Check out this and this news and events by DARPA.

",40485,,40485,,9/15/2020 5:02,9/15/2020 5:02,,,,0,,,,CC BY-SA 4.0 23597,2,,23506,9/15/2020 5:13,,0,,"

I think what you need to use is a 3D convolution operation. Your data is 3D: width, height, and num_channels, similar to colour images with RGB channels. However, since you are trying to consider the correlation amongst channels, 2D convolution will not work for you. You can use 3D convolution, which is available in deep learning tools such as TensorFlow.

",40957,,,,,9/15/2020 5:13,,,,3,,,,CC BY-SA 4.0 23599,1,23600,,9/15/2020 7:26,,0,56,"

I am using a dataset from Google which contains 127,000 data points on simulated concentrations of the atmospheres of exoplanets which can sustain life. So, the output label of all these data points is 1, i.e., the probability of life existing there is 1. If I train my neural network on this data, and test it on data points with concentrations other than these, can I expect to get probability values at the output? I'm asking because the model has never seen a false-labelled value.

",40849,,,,,9/15/2020 8:39,Can a neural network be trained on a dataset containing only values for true output for a classification problem?,,1,0,,,,CC BY-SA 4.0 23600,2,,23599,9/15/2020 8:39,,0,,"

Yes, you can, and the answer is one-class classification. A well-written resource to understand it is this.
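
A minimal sketch of the idea with scikit-learn's one-class SVM (the feature matrices below are synthetic stand-ins for your concentration data):

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(loc=0.0, scale=1.0, size=(1000, 5))     # only "life = 1" examples

clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_train)

X_new = rng.normal(loc=3.0, scale=1.0, size=(10, 5))          # unseen concentrations
print(clf.predict(X_new))             # +1 = looks like the training data, -1 = outlier
print(clf.decision_function(X_new))   # signed score you can treat as a confidence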

",40485,,,,,9/15/2020 8:39,,,,1,,,,CC BY-SA 4.0 23601,2,,22900,9/15/2020 8:54,,6,,"

Markov decision problems are usually defined with a reward function $r:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}$, and in these cases the rewards are expected to be scalar real values. This makes reinforcement learning (RL) easier, for example when defining a policy $\pi(s,a)=\arg\max_a Q(s,a)$, it is clear what is the maximum of the Q-factors in state $s$.

As you might have also realized, in practice however, problems often have multiple objectives that we wish to optimize at the same time. This is called multiobjective optimization and the related RL field is multiobjective reinforcement learning (MORL). If you have access to the paper Liu, Xu, Hu: Multiobjective Reinforcement Learning: A Comprehensive Overview (2015) you might be interested in reading it. (Edit: as Peter noted in his answer, the original version of this paper was found to be a plagiarism of various other works. Please refer to his answer for better resources.)

The above-mentioned paper categorizes methods for dealing with multiple rewards into two categories:

  • single objective strategy, where multiple rewards are somehow aggregated into one scalar value. This can be done by giving weights to rewards, making some of the objectives a constraint and optimizing the others, ranking the objectives and optimizing them in order, etc. (Note: in my experience, a weighted sum of rewards is not a good objective as it might combine two completely unrelated objectives in a very forced way.)
  • Pareto strategy, where the goal is to find Pareto-optimal strategies or a Pareto front. In this case we keep the rewards a vector and may compute a composite Q-factor, e.g.: $\bar{Q}(s,a)=[Q_1(s,a), \ldots, Q_N(s,a)]$ and may have to modify the $\arg\max_a$ function to select the maximum in a Pareto sense. A minimal code sketch of both strategies is given after this list.
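
As referenced in the list, here is a minimal sketch of the two strategies in code (the weight vector and Q-values are made up for illustration):

import numpy as np

def scalarize(q_vec, weights):
    # Single-objective strategy: collapse a reward/Q vector into one scalar.
    return float(np.dot(q_vec, weights))

def pareto_dominates(q_a, q_b):
    # Pareto strategy: q_a dominates q_b if it is >= everywhere and > somewhere.
    q_a, q_b = np.asarray(q_a), np.asarray(q_b)
    return bool(np.all(q_a >= q_b) and np.any(q_a > q_b))

q_action_1 = [1.0, 0.2]   # e.g. [progress, comfort]
q_action_2 = [0.8, 0.1]
print(scalarize(q_action_1, weights=[0.7, 0.3]))   # 0.76
print(pareto_dominates(q_action_1, q_action_2))    # True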

Finally, I believe it is important to remind you that all these methods really depend on the use-case and on what you really want to achieve and that there is no one solution that fits all. Even after finding an appropriate method you might find yourself spending time tweaking hyper-parameters just so that your RL agent would do what you would like it to do in one specific scenario and do something else in a slightly different scenario. (Eg. taking over on a highway vs. taking over on a country road).

",8448,,8448,,1/15/2021 15:45,1/15/2021 15:45,,,,0,,,,CC BY-SA 4.0 23602,1,23606,,9/15/2020 10:19,,1,157,"

My simple understanding of AI is that it is based on a mathematical model of a problem. If I understood correctly, the model is a polynomial equation and its weights are calculated by training the model with data sets.

I am interested in seeing a few example polynomial equations (trained models) which are used in certain problem areas. I tried to search for it, but so far could not find any simple answers.

Can anyone list a few examples here?

",41003,,2444,,9/15/2020 13:38,9/15/2020 14:23,What are some examples of functions that machine learning models compute?,,1,4,,,,CC BY-SA 4.0 23604,1,23608,,9/15/2020 11:06,,3,148,"

Equation 7.3 of Sutton Barto book: $$\text{Equation: } max_s|\mathbb{E}_\pi[G_{t:t+n}|S_t = s] - v_\pi| \le \gamma^nmax_s|V_{t+n-1}(s) - v_\pi(s)| $$ $$\text{where }G_{t:t+n} = R_{t+1} + \gamma R_{t+2} + .....+\gamma^{n-1} R_{t+n} + \gamma^nV_{t+n-1}(S_{t+n})$$ Here $V_{t+n-1}(S_{t+n})$ is the estimate of $V_\pi(S_{t+n})$

But the left-hand side of the above equation should be zero as, for any state $s$, $G_{t:t+n}$ is an unbiased estimate of $v_\pi(s)$, hence $\mathbb{E}_\pi[G_{t:t+n}|S_t = s] = v_\pi(s)$.

",37611,,2444,,9/20/2020 10:05,9/20/2020 10:05,What is wrong with equation 7.3 in Sutton & Barto's book?,,1,0,,,,CC BY-SA 4.0 23605,1,23624,,9/15/2020 11:59,,3,100,"

I've been reading through the research literature for image processing, computer vision, and convolutional neural networks. For image classification and object recognition, I know that convolutional neural networks deliver state-of-the-art performance when large amounts of data are available. Furthermore, I know that Hinton et al. created "capsule networks" to try and overcome some of the fundamental limitations of CNN architecture (such as them not being rotationally invariant). However, my understanding is that capsule networks have been a failure (so far), and most people expect them to go nowhere. And CNNs have progressively been improved in various ways (Bayesian optimisation for hyper parameter tuning, new convolution kernels, etc.). It seems to me that, at the moment, and for the foreseeable future, CNNs are the best architecture available for image-related stuff.

But, as I said, CNNs, like other Deep Learning architectures, require large amounts of data. So my question is as follows:

What are the research areas/topics for improving CNNs in the sense of making them work more effectively (that is, have greater performance) with less data (working with small datasets)?

I know that there is various research looking at approaches to increasing data (such as data augmentation, generative networks, etc.), but I am primarily interested in fundamental modifications to CNNs themselves, rather than purely focusing on changes to the data itself.

And to expand upon my question, using my above definition of "performance", I am interested in these two categories:

  1. "Computational methods" for increasing CNN performance. This would be the non-mathematical stuff that I've read about, such as just increasing the number of layers and making the CNN deeper/wider (and I think another one had to do with just making the size of the convolution kernel smaller, so that it looks at smaller pieces of the image at any one time, or something like that?).

  2. "Mathematical methods" for increasing CNN performance. This would be the cutting-edge mathematical/statistical stuff that I've read about: things like algorithms (such as Bayesian optimization); I've come across a lot of geometric stuff; and I guess the cutting-edge convolution kernels created by the image processing people would also fall under this category.

Obviously, this "list" is not exhaustive, and it's probably incorrect; I'm a novice to this research, so I'm trying to find my way around.

I am interested in studying both of the above categories, but I will primarily be working from the mathematical/statistical side. And I want to work on research that is still practical and can be put to use in industry for improved performance (even if it might still be "advanced"/complex for most people in industry) – not the highly theoretical stuff.

Related (but unanswered): Are there any good research papers on image identification with limited data?

",16521,,,,,9/16/2020 14:41,Research paths/areas for improving the performance of CNNs when faced with limited data,,1,3,,,,CC BY-SA 4.0 23606,2,,23602,9/15/2020 13:33,,2,,"

If I understood correctly, the model is a polynomial equation

No, it's not true that all machine learning (ML) models compute (or represent) a polynomial function. For example, a sigmoid is not a polynomial, but, for example, in a neural network, you can combine many sigmoids to build complicated functions that may not necessarily be polynomials.

We usually distinguish between linear (straight-lines) and non-linear functions (rather than polynomials and non-polynomials). In some cases, it is straightforward to visualize the function that your model computes: for example, in the case of linear regression, once you learned the coefficients (i.e. the slope and y-intercept), you can plot the learned straight-line function. In other cases, for example, in the case of neural networks, it is not fully clear how to visualize the function that your model computes, given that it is the composition of many non-linear functions (typically, ReLUs, sigmoids or hyperbolic tangents).

If you are interested in solving problems with polynomials, take a look at polynomial regression.

and its weights are calculated by training the model with data sets.

Yes, in machine learning, we want to find a function that "fits the given data", and the specific meaning of "fitting the data" depends on the specific machine learning technique.

For simplicity, let's focus on supervised learning, a machine learning technique where we are given a labelled dataset, i.e. a dataset of pairs $D = \{(x_1, y_1), \dots, (x_N, y_N)\}$, where we assume that $f(x_i) = y_i$, for some typically unknown function $f$, and $y_i$s are the labels (the outputs of $f$) and $x_i$ the inputs of $f$. The goal is to find function $g_{\theta}$ that approximates well $f$. I will soon describe what the subscript $\theta$ represents.

For simplicity, let's assume that $f$ is a linear function (i.e. a straight-line). So, we can define a linear model $g_{\theta}$ that we can use to find a function that approximates well $f$. Here is the linear model

$$g_{\theta}(x) = ax + b,$$

where

  • $g_{\theta}(x)$ is the output
  • $x$ is the input
  • $ax + b$ is the linear function (a straight-line)
  • $a$ is the slope (a parameter, aka weight)
  • $b$ is the $y$-intercept (another parameter)
  • $\theta = \{ a, b \}$ (the set of parameters of the linear model)

Why is this a model? I call this a model because, depending on the specific values of the parameters $\theta$, we have different specific functions. So, I am using the term "model" as a synonym for a set of functions, which, in this case, are limited by the definition $ax + b$ and the specific values that $a$ and $b$ (i.e. $\theta$) can take.

So, what do we do with this linear model? We want to find a specific set of parameters $\hat{\theta}$ (note that I use the $\hat{ }$ to emphasize that this is a specific configuration of the variable $\theta$) that corresponds to a linear function (a straight-line) that approximates $f$ well. In other words, we need to find the parameters $\hat{\theta}$, such that $g_\hat{\theta} \approx f$, where $\approx$ means "approximately computes".

How do we do that? We typically don't know $f$, but we know (or assumed) that $f(x_i) = y_i$, so the labeled dataset $D$ contains information about our unknown function $f$. So, the idea is that we can use the dataset $D$ to find a specific set of parameters $\hat{\theta}$ that corresponds to some function that approximates $f$ according to the information in $D$.

This process of finding $\hat{\theta}$ based on $D$ is often denoted as "fitting the model to the data". There are different ways of fitting the model to the data, which differ in the way they compute some notion of distance between the information in $D$ and $g_{\hat{\theta}}$. I will not explain them here because this answer is already quite long. If you want to know more about it, you should take a book about the topic and read it.

What are some examples of functions that machine learning models compute?

I don't have specific examples, but you can easily try to fit a linear regression model to some labelled data, then plot the function that you found. You could use the Python library sklearn to do that.
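
For instance, a minimal sketch of that suggestion (the data is synthetic, generated from the made-up function $f(x) = 2x - 1$ plus noise):

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x = rng.uniform(-5, 5, size=(100, 1))
y = 2.0 * x[:, 0] - 1.0 + rng.normal(scale=0.5, size=100)

model = LinearRegression().fit(x, y)
a, b = model.coef_[0], model.intercept_
print(f"learned function: g(x) = {a:.2f} * x + {b:.2f}")   # should be close to 2x - 1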

",2444,,2444,,9/15/2020 14:23,9/15/2020 14:23,,,,0,,,,CC BY-SA 4.0 23607,1,,,9/15/2020 13:36,,1,100,"

State-of-the-art Deep Learning techniques/algorithms have been implemented in everything from low-level languages like Objective-C and C++ to high-level languages like Python, JS, etc., with the help of huge libraries like TensorFlow, PyTorch, Scikit-Learn, etc.

Now there is Swift, Google's bet on differentiable programming: they are making Swift differentiable-programming ready (see this manifesto), and they are building TensorFlow from the ground up in Swift (S4TF).

So, how will differentiable programming, and a programming language supporting it, potentially help the development towards AGI?

",40485,,2444,,9/16/2020 16:28,9/16/2020 16:28,How differentiable programming and programming language supporting it will potentially help the development towards AGI?,,0,0,,,,CC BY-SA 4.0 23608,2,,23604,9/15/2020 14:54,,3,,"

In general, $\mathbb{E}_\pi[G_{t:t+n}|S_t = s] \neq v_\pi(s)$. $v_\pi(s)$ is defined as $\mathbb{E}_\pi[\sum_{k=0}^{\infty} \gamma^k R_{t+k+1} | S_t = s]$, so you should be able to see why the two are not equal when the LHS is an expectation of the $n$th step return. They would only be equal as $n \rightarrow \infty$.
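
To spell this step out by combining the two definitions quoted in the question: $$\mathbb{E}_\pi[G_{t:t+n} \mid S_t = s] = \mathbb{E}_\pi\left[\sum_{k=1}^{n}\gamma^{k-1}R_{t+k} + \gamma^n V_{t+n-1}(S_{t+n}) \,\Big|\, S_t = s\right],$$ while, by the $n$-step Bellman expansion, $$v_\pi(s) = \mathbb{E}_\pi\left[\sum_{k=1}^{n}\gamma^{k-1}R_{t+k} + \gamma^n v_\pi(S_{t+n}) \,\Big|\, S_t = s\right].$$ The difference between the two is $\gamma^n\,\mathbb{E}_\pi[V_{t+n-1}(S_{t+n}) - v_\pi(S_{t+n}) \mid S_t = s]$, which is generally nonzero because $V_{t+n-1}$ is only an estimate of $v_\pi$; this is also exactly where the $\gamma^n$ factor in the bound of equation 7.3 comes from.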

",36821,,,,,9/15/2020 14:54,,,,0,,,,CC BY-SA 4.0 23609,2,,23369,9/15/2020 15:43,,0,,"

Thanks for your answer. I have tried to assess other models using cross-validation and could see that the complex neural network models did not perform very well on out-of-sample data. What tricked me was that I looked at the predicted data from training on the full data set (after optimization of the hyperparameters). This predicted data always looked extremely well correlated with the actual data. I did not look at the magnitude of the RMSE after hyperparameter tuning. When I then did an extra "leave-one-out" validation, I could get random results.

In the end I used caret's resampling method, which compares the RMSE of different models, and found out that svmPoly (support vector machine) or random forest had the best out-of-sample performance - better than lm and GLMnet.

",40709,,,,,9/15/2020 15:43,,,,0,,,,CC BY-SA 4.0 23611,1,,,9/15/2020 16:16,,7,353,"

BERT encodes a piece of text such that each token (usually words) in the input text maps to a vector in the encoding of the text. However, this makes the length of the encoding vary as a function of the input length of the text, which makes it more cumbersome to use as input to downstream neural networks that take only fixed-size inputs.

Are there any transformer-based neural network architectures that can encode a piece of text into a fixed-size feature vector more suitable for downstream tasks?

Edit: To illustrate my question, I’m wondering whether there is some framework that allows the input to be either a sentence, a paragraph, an article, or a book, and produces an output encoding in the same, fixed-size format for all of them.

",9220,,9220,,2/11/2022 21:29,11/8/2022 23:05,Are there transformer-based architectures that can produce fixed-length vector encodings given arbitrary-length text documents?,,1,3,,,,CC BY-SA 4.0 23614,1,,,9/15/2020 21:45,,1,162,"

In GPT-2, the large achievement was being able to generate coherent text over a long-form while maintaining context. This was very impressive but for GPT-2 to do new language tasks, it had to be explicitly fine-tuned for the new task.

In GPT-3 (From my understanding), this is no longer the case. It can perform a larger array of language tasks from translation, open domain conversation, summarization, etc., with only a few examples. No explicit fine-tuning is needed.

The actual theory behind GPT-3 is fairly simple, which would not suggest any level of ability other than what would be found in common narrow intelligence systems.

However, looking past the media hype and the news coverage, GPT-3 is not explicitly programmed to "know" how to do these wider arrays of tasks. In fact, with limited examples, it can perform many language tasks quite well and "learn on the fly" so to speak. To me, this does seem to align fairly well with what most people would consider strong AI, but in a narrow context, which is language tasks.

Thoughts? Is GPT-3 an early example of strong AI but in a narrower context?

",22840,,2444,,12/12/2021 19:08,12/12/2021 19:08,Is GPT-3 an early example of strong AI in a narrow setting?,,1,0,,,,CC BY-SA 4.0 23615,2,,23614,9/16/2020 2:41,,3,,"

GPT-3 is based on in-context learning. It's common wisdom that one can hope bigger models will yield better in-context capabilities. And indeed, this holds true in the case of GPT-3 175B, or "GPT-3".

Nevertheless, GPT-3 is more powerful than its predecessors. Still, in some of the tasks, GPT-3 failed miserably. This might be due to the choice of using an autoregressive LM, instead of incorporating bidirectional information (similarly to BERT).

While in-context learning is more straightforward with autoregressive LMs, bidirectional models are known to be better at downstream tasks after fine-tuning.

In the end, training a bidirectional model at the scale of GPT-3 or trying to make bidirectional models work with few-shot learning is a promising direction for future research.

Check out this, this and the paper on Scaling Laws for Neural Language Models.

",40485,,,,,9/16/2020 2:41,,,,1,,,,CC BY-SA 4.0 23618,1,,,9/16/2020 11:28,,4,170,"

That is, if AGI were an existing technology, how much would it be valued at?

Obviously it would depend on its efficiency, if it requires more than all the existing hardware to run it, it would be impossible to market.

This question is more about getting a general picture of the economy surrounding this technology.

Assuming a specific definition of AGI and that we implemented that AGI, what is its potential economical value?

Current investments in this research field are also useful data.

",41025,,41025,,9/16/2020 13:13,10/21/2020 22:56,What is the current artificial general intelligence technology valuation?,,2,7,,,,CC BY-SA 4.0 23619,1,,,9/16/2020 12:58,,1,151,"

Background I have tried to fit a logistic regression model - written using a forward / back propagation approach (as part of Andrew Ng's deep learning course) - to a very non-linear data set (see picture below). Of course, it totally fails; in Andrew Ng's course, the failure of logistic regression to fit to this motivates developing a neural net - which works quite nicely. But my question concerns what my logistic model is doing and why.

The problem My logistic regression model's cost increases, even after massively reducing the learning rate. But at the same time my accuracy (slowly) increases. I simply cannot see why.

To confuse matters even more - if I resort to a negative learning rate (essentially trying to force the calibration to higher cost values) the cost then decreases for a time until the accuracy hits 50%. After this point, the cost then inexorably increases - but the accuracy stays equal to 50%. The solution so found is to set all points to either red or blue (a reasonable fit given logistic regression simply cannot work on this data).

My questions and thoughts on answers I have reproduced the Python code below - hopefully it's clear. My questions are:

  1. Is there a mistake in the model that explains why negative learning rates seem to work better?
  2. On the topic of why the cost increases even as accuracy asymptotes to 50%: is the issue that once the model has discovered the "all points equal to either red or blue" solution the parameters "w" and "b" just get larger and larger (in absolute terms) - driving all of the predictions closer to 1 (or conversely if it predicts all points are 0)?

To explain this second question a bit more: imagine red points are defined by y = 1. Suppose parameters w, b are chosen such that the probability for every point equals 0.9. Then the model predicts all points are red - which is correct for half the points. The model can then improve half the predictions by driving w and b up (so that sigmoid ( w*x + b) --> 1). But of course, this makes half the predictions (the blue points) more and more wrong - which causes the cost function for those points - log(1 - prob) - to diverge. I don't truly see why gradient descent would do this but it's all I can think of for the peculiar behaviour of the algorithm.

Hope this all makes sense. Hit me up if not.

import numpy as np
import matplotlib.pyplot as plt


# function to create a flower-like arrangement of 1s and 0s
def load_planar_dataset():
    np.random.seed(1)
    m = 400 # number of examples
    N = int(m/2) # number of points per class
    D = 2 # dimensionality / i.e. work in 2d plane - so X is a set of (x,y) coordinate points
    X = np.zeros((m,D)) # data matrix where each row is a single example
    Y = np.zeros((m,1), dtype='uint8') # labels vector (0 for red, 1 for blue)
    a = 4 # maximum ray of the flower

    for j in range(2):
        ix = range(N*j,N*(j+1))
        t = np.linspace(j*3.12,(j+1)*3.12,N) + np.random.randn(N)*0.2 # theta / random element mixes up some of the petals so you get mostly blue with some red petals and vice-versa
        r = a*np.sin(4*t) + np.random.randn(N)*0.2 # radius / again random element alters  shape of flower slightly
        X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
        Y[ix] = j
        
    X = X.T # transpose so columns = training example as per standard in lectures
    Y = Y.T

    return X, Y

# function to plot the above data plus a modelled decision boundary - works by applying model to grid of points and colouring accordingly
def plot_decision_boundary(model, X, y):
    # Set min and max values and give it some padding
    x_min, x_max = X[0, :].min() - 1, X[0, :].max() + 1
    y_min, y_max = X[1, :].min() - 1, X[1, :].max() + 1
    h = 0.01
    # Generate a grid of points with distance h between them
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    # Predict the function value for the whole grid
    Z = model(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)
    # Plot the contour and training examples
    plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
    plt.ylabel('x2')
    plt.xlabel('x1')
    plt.scatter(X[0, :], X[1, :], c=y, cmap=plt.cm.Spectral)


# sigmoid function as per sandard linear regression
def sigmoid(z):
    """
    Compute the sigmoid of z

    Arguments:
    z -- A scalar or numpy array of any size.

    Return:
    s -- sigmoid(z)
    """

    s = 1. / (1. + np.exp(-z))
    
    return s


# 
def propagate(w, b, X, Y):
    """
    Implement the cost function and its gradient for the propagation explained above

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)

    Return:
    cost -- negative log-likelihood cost for logistic regression
    dw -- gradient of the loss with respect to w, thus same shape as w
    db -- gradient of the loss with respect to b, thus same shape as b
    """

    m = X.shape[1];

    # forward prop
    Z = np.dot(w.T, X) + b; 
    A = sigmoid(Z); # activation = the prediction of the model


    # compute cost
    cost =  - 1. / m * np.sum( (Y * np.log(A) + (1. - Y) * np.log(1. - A)  ) )

    #back prop for gradient descent

    da = - Y / A + (1. - Y) / (1. - A)  
    dz = da * A * (1. - A) # = - Y (1-A) + (1. - Y) A =  A - Y  
    dw = 1. / m * np.dot( X, dz.T )
    db = 1. / m * np.sum(dz)

    grads = {"dw": dw,
                "db": db}

    return grads, cost


def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
    """
    This function optimizes w and b by running a gradient descent algorithm
    
    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of shape (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
    num_iterations -- number of iterations of the optimization loop
    learning_rate -- learning rate of the gradient descent update rule
    print_cost -- True to print the loss every 100 steps
    
    Returns:
    params -- dictionary containing the weights w and bias b
    grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
    costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
    
    """
    costs = []

    for i in range(num_iterations):
        
        # cost /gradient calculation
        grads, cost = propagate(w, b, X, Y)

        #retrieve derivatives
        dw = grads["dw"]
        db = grads["db"]

        # update values according to gradient descent algorithm
        w = w - learning_rate * dw
        b = b - learning_rate * db

        # record the costs
        if i % 100 == 0:
            costs.append(cost)

            # Print the cost every 100 training iterations
            if print_cost:
                print("Cost after iteration %i: %f" %(i, cost))



    params = {  "w": w,
                "b": b}

    grads = {   "dw": dw,
                "db": db}

    return params, grads, costs


def predict(w, b, X):
    '''
    Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
    
    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)
    
    Returns:
    Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
    '''
    Z = np.dot(w.T, X) + b
    A = sigmoid(Z)


    Y_prediction = (A >= 0.5).astype(int)

    return Y_prediction




np.random.seed(1) # set a seed so that the results are consistent

X, Y = load_planar_dataset()

# Visualize the data:

plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral); # s = size of points; cmap are nicer colours
plt.show()

shape_X = X.shape
shape_Y = Y.shape
m = shape_Y[1]  # training set size
n = shape_X[0] # number of features (2)


# initialise parameters
w = np.random.rand(n, 1)
b = 0

# print accuracy of initial parameters by comparing prediction to 
print("train accuracy: {} %".format(100 - np.mean(np.abs(predict(w, b, X) - Y)) * 100))


# fit model and print out costs every 100 iterations of the forward / back prop
parameters, grads, costs = optimize(w, b, X, Y, num_iterations = 10000, learning_rate = 0.000005, print_cost = True)


# return the prediction
Y_prediction = predict(parameters["w"], parameters["b"], X)

# print accuracy of fitted model
print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction - Y)) * 100))


# print parameters for interest
print( parameters["w"] , parameters["b"] )

# plot decision boundary
plot_decision_boundary(lambda x: predict(parameters["w"], parameters["b"], x.T), X, Y)
plt.show()

",41027,,,,,9/16/2020 15:11,Back propagation approach to logistic regression: why is cost diverging but accuracy increasing?,,1,0,,,,CC BY-SA 4.0 23620,1,,,9/16/2020 13:06,,4,318,"

How much is currently invested in artificial general intelligence research and development worldwide?

Feel free to add company or VC names, but this is not the point. The point is to get an idea of the economics around artificial general intelligence.

",41025,,2444,,5/21/2021 22:06,3/21/2022 4:36,How much is currently invested in artificial general intelligence research and development?,,2,0,,,,CC BY-SA 4.0 23622,2,,23620,9/16/2020 13:49,,5,,"

In the last years, there have been big investments in AI technologies. For an overview, maybe take a look at this article Artificial Intelligence: Investment Trends and Selected Industry Uses (2019).

A few companies that have the long-term goal of creating an AGI, although, currently, they mainly do research on specific problems (e.g. video games) or AI techniques (e.g. reinforcement learning), have received many funds. I will only list a few (maybe the most well-known ones) of these companies below, but there are probably many other companies that have this long-term goal and have been funded by other companies or people.

DeepMind

In their site, they write

Like the Hubble telescope that helps us see deeper into space, we aim to build advanced AI - sometimes known as Artificial General Intelligence (AGI) - to expand our knowledge and find new answers. By solving this, we believe we could help people solve thousands of problems.

DeepMind was acquired by Google in 2014 for about $500 million, given its success in playing games with superhuman performance, which is a promising step towards the development of more AI techniques and maybe AGI.

The Wikipedia article on DeepMind contains some information about people or companies that have invested in DeepMind, which includes companies Horizons Ventures and Founders Fund, and people Scott Banister, Peter Thiel, Elon Musk, and Jaan Tallinn, although I cannot give you the exact numbers in terms of capital. In any case, serious investments have been done in DeepMind, which is definitely one of the promising companies that could develop good insights into the development of AGI systems.

OpenAI

Another company that has a similar goal and is doing research on similar topics (such as reinforcement learning or natural language processing) is OpenAI, which also has the long-term goal of creating AGI systems, as they write in their website

OpenAI’s mission is to ensure that artificial general intelligence (AGI) — by which we mean highly autonomous systems that outperform humans at most economically valuable work — benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.

Their investors include Microsoft, Reid Hoffman's charitable foundation, and Khosla Ventures.

Vicarious

They write in their site

Artificial general intelligence is the finish line on our journey toward progressively more capable robots. Our approach leverages deep expertise in neuroscience and is shaped by a decade of research.

They are apparently backed by more than 150 million dollars from people like Jeff Bezos, Elon Musk and Mark Zuckerberg and companies like Samsung.

",2444,,2444,,9/16/2020 15:26,9/16/2020 15:26,,,,2,,,,CC BY-SA 4.0 23624,2,,23605,9/16/2020 14:41,,2,,"

Some research areas that come to mind which can be useful when faced with a limited amount of data:

  • Regularization: Comprises different methods to prevent the network from overfitting, to make it perform better on the validation data but not necessarily on the training data. In general, the less training data you have, the stronger you want to regularize. A minimal code sketch is given after this list. Common types include:

    • Injecting noise in the network, e.g., dropout.

    • Adding regularization terms to the training loss, e.g., L1 and L2 regularization of the weights, but also confident output distributions can be penalized.

    • Reducing the number of parameters in the network to make it unable to fit the training data completely and thus unable to overfit badly. Interestingly, increasing the number of parameters for large models can also improve the validation performance.

    • Early stopping of training. For example, if one part of the training set is set aside and not used to update the weights, training can be stopped when the observed loss on this part of the training set is observed to start to increase.

  • Generating new training data:

    • Data augmentation: Ways to augment existing training examples without removing the semantics, e.g., slight rotations, crops, translations (shifts) of images.

    • Data interpolation, e.g., manifold mixup.

    • Using synthetic data, e.g., frames from video games or other CGI.

  • Transfer learning: When you take a neural network that has already been trained on another, much larger dataset of the same modality (images, sounds, etc.) as your dataset and fine-tune it on your data.

  • Multitask learning: Instead of training your network to perform one task, you give it multiple output heads and train it to perform many tasks at once, given that you have the labels for the additional tasks. While it may seem that this is more difficult for the network, the extra tasks have a regularizing effect.

  • Semi-supervised learning: If you have much more unlabeled data than labeled data, you can combine supervised learning with unsupervised learning. Much like with multitask learning, the extra task introduced by the unsupervised learning also has a regularizing effect.
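
As referenced in the first bullet, here is a minimal Keras-style sketch of a few of these regularizers (the architecture, coefficients and callback settings are arbitrary illustrative choices):

import tensorflow as tf
from tensorflow.keras import layers, regularizers

model = tf.keras.Sequential([
    layers.Dense(128, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),   # L2 penalty on the weights
    layers.Dropout(0.5),                                       # inject noise during training
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Early stopping: halt training when the held-out validation loss stops improving.
early_stop = tf.keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True)
# model.fit(x_train, y_train, validation_split=0.2, epochs=100, callbacks=[early_stop])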

Other interesting methods can be found in systems that perform one-shot learning, which inherently implies very little training data. These systems often use slightly modified network architectures. For example, facial recognition systems can learn to recognize a face from only a single photo, and usually use a triplet loss (or similar) on a vector encoding of the face, instead of the cross-entropy loss of the output of a softmax layer normally used for image classification.

Zero-shot learning also exists (e.g., zero-shot machine translation), but this is a completely different type of problem setup and requires multiple data modalities.

",9220,,,,,9/16/2020 14:41,,,,2,,,,CC BY-SA 4.0 23625,1,23628,,9/16/2020 15:09,,7,585,"

I understand that this is the update for the parameters of a policy in REINFORCE:

$$ \Delta \theta_{t}=\alpha \nabla_{\theta} \log \pi_{\theta}\left(a_{t} \mid s_{t}\right) v_{t}, $$ where $v_t$ is usually the discounted future reward and $\pi_{\theta}\left(a_{t} \mid s_{t}\right)$ is the probability of taken the action that the agent took at time $t$. (Tell me if something is wrong here)

However, I don't understand how to implement this with a neural network.

Let's say that probs = policy.feedforward(state) returns the probabilities of taking each action, like [0.6, 0.4]. action = choose_action_from(probs) will return the index of the probability chosen. For example, if it chose 0.6, the action would be 0.

When it is time to update the parameters of the policy network, what should we do? Should we do something like the following?

gradient = policy.backpropagate(total_discounted_reward*log(probs[action])
policy.weights += gradient

And I only backpropagate this through one output neuron?

Which loss function should I use in this case? What would the labels be?

If you need more explanation, I have this question on SO.

",41026,,2444,,12/19/2021 22:06,12/19/2021 22:06,"Which loss function should I use in REINFORCE, and what are the labels?",,1,7,,,,CC BY-SA 4.0 23626,2,,23619,9/16/2020 15:11,,1,,"

Just realised the issue - super subtle (at least for a Python novice like me) - I implemented numerical gradient checking (as I should have done from the start) and saw the gradient descent was working incorrectly.

The dataset Y was created with the datatype "uint8". In the line of the back-propagation where I calculate the derivative with respect to "a" ("da"), the statement "-Y" was returning 255 for those values of Y that were 1. This is the expected wraparound behaviour of unsigned integers: uint8 cannot represent negative numbers, so -1 wraps around to 255.

By replacing "uint8" with "int8" in the definition of Y - or else forcing the correct behaviour by writing "-1.*Y" in the calculation of "da" (which I think casts "Y" as an integer rather than an unsigned integer) I managed to produce the correct behaviour.

This in turn means the gradient descent is doing what it should and the logistic regression converges to a stable value of the cost. Accuracy then tracks cost as well.

",41027,,,,,9/16/2020 15:11,,,,0,,,,CC BY-SA 4.0 23627,1,23642,,9/16/2020 16:45,,1,58,"

I am training an algorithm to identify weeds within crops using the YOLOv5 algorithm. This algorithm will be used in the future to identify weeds in images collected by unmanned aircraft (drones), after creating an orthomosaic image. Using the open-source LabelImg software, I am labeling images for object detection that were collected with both UAV and hand-held digital cameras. Using both platforms, I collected many images of weeds that will need to be identified.

My question is this: Does it make sense to collect training samples from the hand-held digital camera, since it will be of much higher resolution than the UAV imagery (and thus not used for future imagery collections after the model is trained)? My initial thought is that it would be best to only use the UAV imagery, since it will be the most similar to what will be collected in the future. However, I do not want to throw out the hand-held digital imagery if it could help in the image classification process.

",32750,,,,,9/18/2020 12:57,Does it make sense to train images (for object detection algorithms) with cameras that will not be used to collect future data?,,1,0,,,,CC BY-SA 4.0 23628,2,,23625,9/16/2020 17:23,,3,,"

The loss function you are looking for is cross-entropy loss, with the 'label' being the action you took at the time point you are updating for, and with the loss for that time point scaled by the return $v_t$.
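
A minimal numpy sketch of that loss for a single time step, assuming the policy ends in a softmax layer (the probabilities and return below are made up):

import numpy as np

probs = np.array([0.6, 0.4])       # output of policy.feedforward(state)
action = 0                          # index of the action that was sampled
G = 2.5                             # total discounted return from this time step

one_hot = np.eye(len(probs))[action]
loss = -G * np.log(probs[action])   # cross-entropy with the taken action as the label, scaled by the return

# For a softmax output layer, the gradient of this loss w.r.t. the logits is simply:
dlogits = G * (probs - one_hot)     # backpropagate this through the rest of the network
print(loss, dlogits)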

",36821,,,,,9/16/2020 17:23,,,,6,,,,CC BY-SA 4.0 23629,1,,,9/16/2020 23:12,,1,27,"

Straying from the current trends in deep learning, there is an, arguably, interesting idea of neuronal ensembles possibly providing an alternative to the current "layered feature detectors" framework for neural network construction, by being considered a basic computational unit instead of one feature-detecting neuron. This idea certainly has at least some presence in neuroscience circles, but I found it hard to find studies attempting to obtain a working ensemble-based computational model which would try to solve any of the existing computer vision/NLP tasks or anything of the sort. This may just be due to me looking in the wrong places, but in any case, I would appreciate any references to papers exploring the building of neural network architectures that involve neuronal ensembles.

Just to be clear, I would be interested in any papers on computational modelling of ensembles even if they are not trying to solve any particular ML task, but it would be better if the topic of the research is more closely aligned with computer science instead of neurobiology, even if the CS connection is of a more exotic kind; for example, a paper trying to see whether you can store different concepts and their relations in an ensemble-based network is more desirable than a paper trying to accurately model individual neuron and synaptic plasticity dynamics and observing that ensembles emerge if you scale the system. But again, I would be glad to get references to research in both of these example topics and many more.

",2672,,2672,,9/18/2020 14:39,9/18/2020 14:39,Literature on computational modelling involving neuronal ensemblies,,0,0,,,,CC BY-SA 4.0 23630,1,23639,,9/17/2020 0:44,,2,142,"

If I am attempting to train a CNN on some image data to perform image classification, but some of the images have pieces of text overlaying them (for the purpose of description to humans), then is it better for the CNN if I remove the text? And if so, how do I remove the text? Furthermore, is it a good idea to use both the images with the text overlaying them and the images with the text removed for training, since it might act as a form of data augmentation?

",16521,,2444,,9/20/2020 10:02,9/20/2020 10:02,Should I remove the text overlaying some images in the dataset before training the CNN?,,1,0,,,,CC BY-SA 4.0 23633,2,,23222,9/17/2020 11:33,,1,,"

AI in healthcare is already playing a big role in diagnosing diseases, assisting patients, and helping medical staff with supplying various things or performing actions.

Medical imaging helps AI to diagnose diseases without the help of a radiologist. To develop an AI model that uses medical images to diagnose various diseases, training datasets are needed so that the machine learning algorithm can learn to detect them with accuracy.

But, just like AI, medical imaging is not improving fast enough, which sometimes makes it difficult or more time-consuming to diagnose diseases. Actually, to develop such highly sensitive models, high-quality training data is required, and there is a lack of such annotated data to train the models.

The more training data is fed into the model, the more accurately it will learn the detection process, resulting in faster diagnosis. And to improve the quality and quantity of data, a more dedicated and faster labeling process is required.

Hence, data annotation companies now use AI-assisted labeling processes to annotate medical images at a faster speed and with better accuracy. Compared to manual annotation, AI companies can get annotated data many times faster for their machine learning algorithms.

Once training data starts becoming available in large quantities, the medical imaging analysis process through AI will also improve. The AI-assisted data annotation process can only help to produce medical imaging datasets in large quantities for better predictions in the healthcare sector.

",32316,,32316,,9/21/2022 6:06,9/21/2022 6:06,,,,0,,,,CC BY-SA 4.0 23637,1,,,9/17/2020 17:45,,1,64,"

When I run a meta-heuristic, like a genetic algorithm or simulated annealing, I want to have a termination criterion that stops the algorithm when there is no longer any significant fitness improvement.

What are good methods for that?

I tried something like

$$improvement=\frac{fit(Solution_{new})}{fit(Solution_{old})}$$

and

$$improvement={fit(Solution_{new})}-{fit(Solution_{old})}$$

Neither option seems good, because, as the old solutions get better, newer solutions, even if they are good, don't improve much compared to the old ones.

",27777,,2444,,9/18/2020 12:59,10/8/2022 18:01,What are most commons methods to measure improvement rate in a meta-heuristic?,,1,0,,,,CC BY-SA 4.0 23638,1,,,9/17/2020 21:02,,3,49,"

I have a deep learning network that outputs grayscale image reconstructions. In addition to good reconstruction performance (measured through mean squared error or some other measure like PSNR), I want to encourage these outputs to be sparse through a regularization term in the loss function.

One way to do this is to add an L1 regularization term that penalizes the sum of the absolute values of the pixel intensities. While this is a good start, is there any penalization that takes adjacency and spatial contiguity into account? It doesn't have to be a commonly used constraint/regularization term; even potential concepts or papers that go in this direction would be extremely helpful. In natural images, sparse pixels tend to form regions or patches as opposed to being dispersed or scattered. Are there ways to encourage regions of contiguous pixels to be sparse as opposed to individual pixels?

",40781,,,,,9/17/2020 21:02,Enforcing sparsity constraints that make use of spatial contiguity,,0,4,,,,CC BY-SA 4.0 23639,2,,23630,9/18/2020 4:54,,1,,"

Removing the overlaid text might increase accuracy, but you'd need to train a different model to do this, and that is an entirely different task, as it is no longer classification, but generation. There are easier ways to augment your data and probably get similar benefits to your accuracy. However, if you would still like to do this, there are a lot of examples you can find by simply searching "watermark removal machine learning" on Google. Here's an example I found.

Overall, a CNN will be able to look past the overlaid text without issue and perform classification like it would without the overlaid text. There is the possibility that it actually learns relationships between the overlaid text and the expected output, but that depends on the data, and it is likely a harder task than simply identifying features.

The only issue you might run into is if the real data this model will be used on is different from the training data provided, e.g. if the real-world images do not contain overlaid text describing what the image is.

",26726,,,,,9/18/2020 4:54,,,,3,,,,CC BY-SA 4.0 23640,1,,,9/18/2020 9:18,,1,90,"

I'm planning an RL project and I have to decide which RL framework to use, if any at all. The project has a highly custom environment, and testing different algorithms will be required to obtain optimal results. Furthermore, it will use a custom neural network, not implemented in the popular TensorFlow/PyTorch ML frameworks. Therefore, the framework should allow for customization of the approximation function (1) and the environment (2). The problem is that, to my current knowledge, most frameworks only allow working with built-in environments. Does anybody know of a framework that meets the two conditions (1) and (2)? Or does anybody know of a review that contains information about frameworks in the context of those conditions?

",31324,,,,,3/12/2022 21:06,What framework for a project with a custom environment?,,0,2,,3/15/2022 16:56,,CC BY-SA 4.0 23641,2,,23637,9/18/2020 12:55,,0,,"

You can use one of your suggested methods to calculate the relative improvement, but you also need to define a threshold value $\epsilon$ that determines when a relative improvement is negligible, so that you can terminate the algorithm. To be more concrete, you could terminate the genetic algorithm when, for example, the following condition is met

$$ |f(x_{t+1}) - f(x_{t})| < \epsilon, \tag{1}\label{1} $$

where

  • $f$ is the fitness function
  • $t$ is the iteration number of your iterative algorithm
  • $x_t$ is a solution at iteration $t$
  • $|\cdot|$ is the absolute value
  • $\epsilon$ is the threshold value (a hyper-parameter of your algorithm), which is typically a small number (e.g. $10^{-6}$), but this also depends on the magnitude of the fitness of the solutions

This stopping criterion in \ref{1} (sometimes known as the absolute error) is not the only possible one. For example, you can stop the genetic algorithm when it has run for a certain maximum number of iterations. The stopping criterion (or criteria, i.e. you can use more than one stopping criteria) that you choose probably depends on what you want to achieve and this is not just specific to genetic algorithms, but, in general, this applies to any iterative numerical algorithm.
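
For illustration, here is a minimal sketch of how these two criteria (the absolute-error condition in \ref{1} and a maximum iteration budget) could be combined in a generic GA loop. The functions evolve_population and best_fitness are hypothetical placeholders for your own implementation, not part of any library.

    def run_ga(initial_population, evolve_population, best_fitness,
               epsilon=1e-6, max_iterations=1000):
        """Stop when the best fitness no longer improves by more than epsilon,
        or when the iteration budget is exhausted."""
        population = initial_population
        previous_best = best_fitness(population)
        for t in range(max_iterations):
            population = evolve_population(population)   # selection, crossover, mutation
            current_best = best_fitness(population)
            if abs(current_best - previous_best) < epsilon:   # condition (1)
                break
            previous_best = current_best
        return population

In practice, you may want to require the condition to hold for several consecutive generations, since a single generation without improvement does not necessarily mean the search has converged.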

The MathWorks article How the Genetic Algorithm Works enumerates several stopping criteria for genetic algorithms.

If you want to know more about the topic, maybe take a look at this paper On Stopping Criteria for Genetic Algorithms (2004) by Martín Safe et al.

",2444,,2444,,9/18/2020 13:07,9/18/2020 13:07,,,,0,,,,CC BY-SA 4.0 23642,2,,23627,9/18/2020 12:57,,1,,"

I think this can only be used for pretraining/some kind of transfer learning. This would be useful if the ratio of real training data to pretraining data is really low. You could then pretrain on digital data, and fine-tune on your UAV data.

How useful this really is, I can't say; it depends on how close the digital data is to the UAV data. If it is significantly different, you are training on a different distribution and sample space than your UAV images, which is pointless.

",38328,,,,,9/18/2020 12:57,,,,1,,,,CC BY-SA 4.0 23647,2,,10620,9/18/2020 13:58,,1,,"

Given that model-based RL algorithms do not necessarily estimate or compute the transition model or reward function, in the case these are unknown, how can they be computed or estimated (so that they can be used by the model-based algorithms)?

A generally reliable approach to creating learned models from interacting with the environment, then using those models internally for planning or explicitly model-based learning, is still something of a holy grail in RL. An agent that can do this across multiple domains might be considered a significant step in autonomous AI. Sutton & Barto write in Reinforcement Learning: An Introduction (Chapter 17.5):

More work is needed before planning with learned models can be effective. For example, the learning of the model needs to be selective because the scope of a model strongly affects planning efficiency. If a model focuses on the key consequences of the most important options, then planning can be efficient and rapid, but if a model includes details of unimportant consequences of options that are unlikely to be selected, then planning may be almost useless. Environment models should be constructed judiciously with regard to both their states and dynamics with the goal of optimizing the planning process. The various parts of the model should be continually monitored as to the degree to which they contribute to, or detract from, planning efficiency. The field has not yet addressed this complex of issues or designed model-learning methods that take into account their implications.

[Emphasis mine]

This was written in 2019, so as far as I know still stands as a summary of state-of-the-art. There is ongoing research into this - for instance, the paper Model-Based Reinforcement Learning via Meta-Policy Optimization considers using multiple learned models to assess reliability. I have seen a similar recent paper which also assesses the reliability of the learned model and chooses how much it should trust it over a simpler model-free prediction, but cannot recall the name or find it currently.

One very simple form of a learned model is to memorise transitions that have been experienced already. This is functionally very similar to the experience replay table used in DQN. The classic RL algorithm for this kind of model is Dyna-Q, where the data stored about known transitions is used to perform background planning. In its simplest form, the algorithm is almost indistinguishable from experience replay in DQN. However, this memorised set of transition records is a learned model, and is used as such in Dyna-Q.
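
To make this concrete, here is a minimal tabular Dyna-Q sketch. It assumes a small deterministic environment object with reset() and step(action) returning (next_state, reward, done); these names are assumptions for the sketch, not a specific library's API.

    import random
    from collections import defaultdict

    def dyna_q(env, n_actions, episodes=500, planning_steps=10,
               alpha=0.1, gamma=0.99, epsilon=0.1):
        Q = defaultdict(float)     # action values, Q[(state, action)]
        model = {}                 # learned model: (state, action) -> (reward, next_state, done)

        def best_value(s):
            return max(Q[(s, a)] for a in range(n_actions))

        def q_update(s, a, r, s2, done):
            target = r if done else r + gamma * best_value(s2)
            Q[(s, a)] += alpha * (target - Q[(s, a)])

        for _ in range(episodes):
            s, done = env.reset(), False
            while not done:
                if random.random() < epsilon:
                    a = random.randrange(n_actions)                     # explore
                else:
                    a = max(range(n_actions), key=lambda b: Q[(s, b)])  # exploit
                s2, r, done = env.step(a)
                q_update(s, a, r, s2, done)        # direct RL from real experience
                model[(s, a)] = (r, s2, done)      # model learning: memorise the transition
                for _ in range(planning_steps):    # planning: replay from the learned model
                    (ps, pa), (pr, ps2, pdone) = random.choice(list(model.items()))
                    q_update(ps, pa, pr, ps2, pdone)
                s = s2
        return Q

The model dictionary here is the learned model: it is exact for deterministic environments, which is also its main limitation.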

The basic Dyna-Q approach creates a tabular model. It does not generalise to predicting outcomes from previously unseen state, action pairs. However, this is relatively easy to fix - simply feed experience so far as training data into a function approximator and you can create a learned model of the environment that attempts to generalise to new states. This idea has been around for a long time. Unfortunately, it has problems - planning accuracy is strongly influenced by the accuracy of the model. This applies for both background planning and looking forward from the current state. Approximate models like this to date typically perform worse than simple replay-based approaches.

This general approach - learn the model statistically from observations - can be refined and may work well if there is any decent prior knowledge that restricts the model. For example, if you want to model a physical system that is influenced by current air pressure and local gravity, you could have free parameters for those unknowns starting with some standardised guesses, and then refine the model of dynamics when observations are made, with strong constraints about the form it will take.

Similarly, in games of chance with hidden state, you may be able to model the unknowns within a broader well-understood model, and use e.g. Bayesian inference to add constraints and best guesses. This is typically what you would do for a POMDP with a "belief state".

Both of the domain-specific approaches in the last two paragraphs can be made to work better than model-free algorithms alone, but they require deep understanding/analysis of the problem being solved by the researcher to set up a parametric model that is both flexible enough to match the environment being learned, but also constrained enough that it cannot become too inaccurate.

",1847,,2444,,1/24/2022 11:29,1/24/2022 11:29,,,,2,,,,CC BY-SA 4.0 23648,2,,23547,9/18/2020 14:40,,0,,"

You could also just use a task-agnostic CNN as an encoder to extract features, like in (1), then take the output of the last global pooling layer and feed that as input to the LSTM layer or any other downstream task. Add another small neural network (a projection head) after the CNN, and then use a contrastive loss on the output of this projection head to improve the model.

(1) Big Self-Supervised Models are Strong Semi-Supervised Learners (Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, Geoffrey Hinton ) Link: https://arxiv.org/abs/2006.10029

",40434,,,,,9/18/2020 14:40,,,,0,,,,CC BY-SA 4.0 23650,1,,,9/18/2020 17:33,,1,108,"

I've been researching the topic of Cognitive Load Measurement through pupil dilation measurement. All solutions to pupil dilation measurement require some kind of special hardware setup. I was wondering if it would be possible to use AI on a regular webcam recording and do those measurements later. If yes, I'd love some pointers to resources on what I need to know to be able to implement it.

",41069,,,,,9/18/2020 17:33,"Would it be possible to use AI to measure pupil dilation diameters and fluctuation, on video films on a regular webcam?",,0,3,,,,CC BY-SA 4.0 23651,2,,12023,9/19/2020 3:58,,1,,"

Image Caption Generation is an interesting problem to work on. I think your question was to know if there are any open-source libraries with built-in functions for Image Captioning. You can build Image Caption Generation models using Frameworks like Tensorflow, PyTorch, and Trax.

I'd also recommend you to read the following papers:

  1. Show and Tell: A Neural Image Caption Generator. Link
  2. Transfer learning from language models to image caption generators: Better models may not transfer better. Link
  3. Image Captioning with Unseen Objects. Link

Also, here are a couple of blog posts you can read:

",40434,,,,,9/19/2020 3:58,,,,0,,,,CC BY-SA 4.0 23652,1,,,9/19/2020 6:23,,1,14,"

I am a newbie to the machine learning field, working on a personal project in which I am trying to use 6-degree-of-freedom Inertial Measurement Units (IMUs) that measure the acceleration along 3 axes (x-y-z) and the angular velocity around the same 3 axes (x-y-z). One sensor generates a set of 6 raw variables: Acc_x, Acc_y, Acc_z, Gyro_x, Gyro_y, Gyro_z.

Initially, I have 2 of those sensors attached to the arm (one to the part above the elbow and one to the part below the elbow). Together they output a dataset of 12 raw variables that represent a specific movement of the arm, which I save as a CSV file. This is the point where I get overwhelmed: I don't know how to process this kind of data and extract the features to differentiate the gestures.

My dataset of the first movement I recorded looks like this:

I denoted 1 for the first sensor above the elbow and 2 for the sensor below the elbow.

Looking forward to hearing the opinions from the experts and seniors on this.

Thank you in advance.

Let me know if my question is inappropriate or lacks information, as it is my first time.

",41076,,,,,9/19/2020 6:23,Multiple Inertia sensors system based for gestures recognition,,0,0,,,,CC BY-SA 4.0 23653,1,,,9/19/2020 12:23,,1,199,"

For imbalanced datasets (either in the context of computer vision or NLP), from what I learned, it is good to use a weighted log loss. However, in competitions, the people who are in top positions are not using weighted loss functions, but treating the classification problem as a regression problem and using MSE as the loss function. I want to know which one I should use for imbalanced datasets. Or should I maybe combine both?

The weighted loss I am talking about is the following (imports and comments added; y_train is the multi-hot label matrix of the training set):

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import backend as K

    # per-class weights from the label frequencies of the training set:
    # positive examples are weighted by the frequency of negatives, and vice versa
    neg_weights = []
    pos_weights = []
    for i in range(5):  # range(num_classes)
        neg_weights.append(np.sum(y_train[:, i], axis=0) / y_train.shape[0])
        pos_weights.append(np.sum(1 - y_train[:, i], axis=0) / y_train.shape[0])

    def customloss(y_true, y_pred):
        y_true = tf.cast(y_true, dtype=y_pred.dtype)
        loss_pos = 0.0
        loss_neg = 0.0
        for i in range(5):
            # weighted binary cross-entropy, accumulated over the 5 classes
            loss_pos += -1 * K.mean(pos_weights[i] * y_true[:, i] * K.log(y_pred[:, i] + 1e-8))
            loss_neg += -1 * K.mean(neg_weights[i] * (1 - y_true[:, i]) * K.log(1 - y_pred[:, i] + 1e-8))
        return loss_pos + loss_neg

The competition I was talking about is https://www.kaggle.com/c/aptos2019-blindness-detection/discussion/109594

",38737,,38737,,9/20/2020 11:25,9/20/2020 11:25,Which loss function to choose for imbalanced datasets?,,0,1,,,,CC BY-SA 4.0 23654,1,23663,,9/19/2020 12:34,,2,102,"

My doubt is the following:


Suppose we have an MLP. In an MLP, as per the backprop algorithm (back-propagation algorithm), the correction applied to each weight is:

$$ \Delta w_{ij} = -\eta\frac{\partial E}{\partial w_{ij}}$$ ($\eta$ = learning rate, $E$ = error in the output, $w_{ij}$ = weight associated with the $i^{\text{th}}$ neuron in the $j^{\text{th}}$ layer)

Now, if we put an extra factor in the correction as:

$$ \Delta w_{ij} = -k\eta \frac{\partial E}{\partial w_{ij}}$$ ($k$ denotes the number of iterations at the time of correction)

How much will that factor affect the learning of the network? Will it affect the convergence of the network such that it takes longer to fit the data?

NB : I am only asking this as a doubt. I haven't tried any ML projects recently, so this is not related to anything I am doing.

",40583,,,,,9/20/2020 3:21,How much can an inclusion of the number of iterations have on the training of an MLP?,,1,0,,,,CC BY-SA 4.0 23656,1,,,9/19/2020 20:24,,0,209,"

I cannot find reliable sources, but someone says it is 40 moves and someone else says it is 50+ moves. I read their papers: they use a value function (a neural network) and a policy function to prune the tree, so that more layers can be searched while spending less time on fewer distinct positions.

My question is: is the search depth a fixed preset parameter? If so, approximately what was it back in 2016 (AlphaGo) and 2018 (AlphaGo Zero)?

",38299,,2444,,9/19/2020 20:46,9/22/2020 16:50,What is the search depth of AlphaGo and AlphaGo Zero?,,1,0,,,,CC BY-SA 4.0 23659,2,,20523,9/19/2020 20:54,,0,,"

It depends on your image size and the amount of compression you want! Deep learning algorithms are usually not that fast, which is why they are run on GPUs and why we have highly optimized frameworks like TensorFlow! Some things I can say for sure are:

  1. Compressing video using autoencoders means compressing each frame one by one! However, video codecs usually also compute the difference of every frame with the previous frame. This means that compressing video is much more time-consuming than compressing just a single image.

  2. The encoder is only half of the autoencoder, so compression (running the encoder) is faster than training the whole autoencoder.

  3. Use a GPU! It really makes a big difference!

  4. Try Google Colab! You can choose between CPU and GPU and then make a decision.

",35757,,35757,,9/22/2020 6:30,9/22/2020 6:30,,,,0,,,,CC BY-SA 4.0 23660,2,,9333,9/19/2020 21:12,,0,,"

You can also check this paper, which discusses the use of LSTM and GRU with Active Learning and word embeddings (word2vec).

",36055,,,,,9/19/2020 21:12,,,,0,,,,CC BY-SA 4.0 23661,2,,23112,9/19/2020 21:27,,0,,"

You are right!

1- The number of hidden layers shouldn't be too high! With gradient descent, when the number of layers is too large, the gradient's effect on the first layers becomes too small (vanishing gradients)! This is why the ResNet model was introduced.

2- The number of hidden layers shouldn't be too small either, otherwise the network cannot extract good features. It has been shown that, in CNNs, the first layers extract very simple elements like lines and curves, while the last layers extract more complex features.

3- The number of hidden units is a hyper-parameter, and usually you should find it by testing or based on your background knowledge.

But what can you do in practice? Besides testing different parameters by hand and comparing their results, there are some other options! One option is grid search; you can check this tutorial: https://towardsdatascience.com/grid-search-for-model-tuning-3319b259367e
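
As a small illustration of grid search (a scikit-learn sketch with a made-up dataset; the parameter grid is arbitrary), you could do something like:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    param_grid = {
        "hidden_layer_sizes": [(16,), (64,), (64, 64)],  # candidate architectures
        "alpha": [1e-4, 1e-2],                           # L2 regularisation strength
    }
    search = GridSearchCV(MLPClassifier(max_iter=1000, random_state=0),
                          param_grid, cv=3)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)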

",35757,,,,,9/19/2020 21:27,,,,0,,,,CC BY-SA 4.0 23662,1,,,9/20/2020 0:48,,2,64,"

Why is there no upper confidence bound algorithm for linear stochastic bandits that uses lasso regression in the case that the regression parameters are sparse in the features?

In particular, I don't understand what makes lasso regression hard to use in a UCB-type algorithm, whereas there is a lot of work on ridge-regression-based UCB algorithms; see e.g. Yadkori et al.

I looked up some works, e.g. Bastani and Bayati, and Kim and Paik, but none of them use a UCB-type algorithm; instead, they propose forced or probabilistic sampling to satisfy the compatibility condition (see Lemma EC.6. of Bastani and Bayati).

",27277,,27277,,9/20/2020 1:27,9/20/2020 1:27,Is there a UCB type algorithm for linear stochastic bandit with lasso regression?,,0,0,,,,CC BY-SA 4.0 23663,2,,23654,9/20/2020 3:21,,2,,"

If anything, you want the learning rate to decrease as the number of iterations increases.

When you're looking for a good spot and you're clueless, take large steps. When you've found a pretty good spot, take small steps, so you don't end up far away.

In other fields of machine learning, there are studies of how the learning rate should scale. For example, in traditional reinforcement learning methods, if $\alpha_i$ is the learning rate at step $i$, then we want to have the following two criteria, to make sure we get convergence to the optimal policy:

  1. $\sum_{i=0}^{\infty} \alpha_i = \infty$. This makes sure that, no matter how bad our initial experience was, we can eventually forget it and replace it with better information.
  2. $\sum_{i=0}^{\infty} \alpha_i^2 < \infty$. This guarantees eventual convergence.

A typical choice here is $\alpha_i = \frac{1}{1+i}$, which fits both criteria.

I am unaware of similar criteria for MLPs, but if you're going to modify the step sizes, I would follow a similar approach. Make the step sizes decrease, but not too fast.
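
As a small sketch of what such a decreasing schedule could look like in practice (PyTorch here; the toy model and base learning rate are arbitrary, and the schedule is just the $\frac{1}{1+i}$ example above):

    import torch

    model = torch.nn.Linear(10, 1)                       # arbitrary toy model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    # multiply the base learning rate by 1/(1+i) at epoch i
    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda i: 1.0 / (1.0 + i))

    for epoch in range(20):
        # ... forward pass, loss.backward() and optimizer.step() would go here ...
        optimizer.step()       # placeholder step so the example runs
        scheduler.step()
        print(epoch, scheduler.get_last_lr())

Any monotonically decreasing schedule with a similar shape would serve the same purpose.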

",40573,,,,,9/20/2020 3:21,,,,7,,,,CC BY-SA 4.0 23666,1,,,9/20/2020 10:21,,2,96,"

I am trying to create a simple Deep Q-Network with 2d convolutional layers.

I can't figure out what I am doing wrong, and the only thing I can see that doesn't seem right is when I get the model prediction for a state after the optimizer step it doesn’t seem to get closer to the target.

I am using pixels from pong in OpenAI's gym with single-channel 90x90 images, a batch size of 32, and replay memory.

As an example, if I try with a batch size of 1 and run self(states) again right after the optimizer step, the output is as follows:

current_q_values -> -0.16351485  0.29163417  0.11192469 -0.08969332  0.11081569  0.37215832
q_target ->         -0.16351485  0.5336551   0.11192469 -0.08969332  0.11081569  0.37215832
self(states) ->     -0.8427617   0.6415581   0.44988257 -0.43897176  0.8693738   0.40007943

Does this look like what would be expected for a single step?

The network with loss and optimizer:

    self.in_layer = Conv2d(channels, 32, 8)
    self.hidden_conv_1 = Conv2d(32, 64, 4)
    self.hidden_conv_2 = Conv2d(64, 128, 3)
    self.hidden_fc1 = Linear(128 * 78 * 78, 64)
    self.hidden_fc2 = Linear(64, 32)
    self.output = Linear(32, action_space)

    self.loss = torch.nn.MSELoss()
    self.optimizer = torch.optim.Adam(
        self.parameters(), lr=learning_rate) # lr is 0.001

def forward(self, state):
    in_out = fn.relu(self.in_layer(state))
    in_out = fn.relu(self.hidden_conv_1(in_out))
    in_out = fn.relu(self.hidden_conv_2(in_out))
    in_out = in_out.view(-1, 128 * 78 * 78)
    in_out = fn.relu(self.hidden_fc1(in_out))
    in_out = fn.relu(self.hidden_fc2(in_out))
    return self.output(in_out)

Then the learning block:

        self.optimizer.zero_grad()

        sample = self.sample(self.batch_size)
        states = torch.stack([i[0] for i in sample])
        actions = torch.tensor([i[1] for i in sample], device=device)
        rewards = torch.tensor([i[2] for i in sample], dtype=torch.float32, device=device)
        next_states = torch.stack([i[3] for i in sample])
        dones = torch.tensor([i[4] for i in sample], dtype=torch.uint8, device=device)

        current_q_vals = self(states)
        next_q_vals = self(next_states)
        q_target = current_q_vals.clone()
        q_target[torch.arange(states.size()[0]), actions] = rewards + (self.gamma * next_q_vals.max(dim=1)[0]) * (~dones).float()

        loss = fn.smooth_l1_loss(current_q_vals, q_target)
        loss.backward()

        self.optimizer.step()
",41097,,2444,,3/18/2021 10:59,3/28/2021 18:32,DQN not learning and step not stepping towards target,,3,0,,,,CC BY-SA 4.0 23667,1,,,9/20/2020 14:18,,0,1426,"

I know that if you use a ReLU activation function at a node in the neural network, the output of that node will be non-negative. I am wondering if it is possible to have a negative output in the final layer, provided that you do not use any activation functions in the final layer, and all the activation functions in the previous hidden layers are ReLUs?

",20358,,,,,9/21/2020 18:19,"Is it possible to have a negative output using only ReLU activation functions, but not in the final layer?",,2,0,,,,CC BY-SA 4.0 23668,2,,23667,9/20/2020 14:45,,2,,"

Yes, if there's no activation function in the last layer, the weights could simply be negative there, so the network would multiply a positive value with a negative weight, therefore outputting a negative value.

There is still an activation function, but it is the identity.
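
A tiny numerical sketch of this (the weights are made up, just to illustrate):

    import numpy as np

    relu = lambda z: np.maximum(z, 0)

    x = np.array([1.0, 2.0])
    W1 = np.array([[0.5, -0.3], [0.2, 0.8]])   # hidden layer weights
    W2 = np.array([0.4, -1.0])                 # output layer weights (note the negative weight)

    h = relu(W1 @ x)   # hidden activations are always >= 0, here [0.0, 1.8]
    y = W2 @ h         # no activation in the last layer: the negative weight makes y = -1.8
    print(h, y)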

",38328,,,,,9/20/2020 14:45,,,,0,,,,CC BY-SA 4.0 23669,1,24074,,9/20/2020 14:53,,2,278,"

The understanding I have is that they somehow adjust the objective to make it easier to meet, without changing the reward function.

... the observed proxy reward function is the approximate solution to a reward design problem

(source: Inverse Reward Design)

But I have trouble getting how they fit the overall reward objective and got confused by some examples of them. I had the idea of them being small reward functions (as in the case of solving for sparse rewards) eventually leading to the main goal. But the statement below, from this post, made me question that.

Typical examples of proxy reward functions include “partial credit” for behaviors that look promising; artificially high discount rates and careful reward shaping;...

  1. What are they, and how would one go about identifying and integrating proxy rewards in an RL problem?

  2. In the examples above, how would high discount rates form a proxy reward?

I'm also curious about how they are used as a source of multiple rewards

",40671,,2444,,10/8/2020 11:48,11/13/2020 17:03,What are proxy reward functions?,,1,6,,,,CC BY-SA 4.0 23670,1,,,9/20/2020 15:07,,2,514,"

I have been studying auto-encoders and variational auto-encoders. I would like to know how many variants of VAEs there are today.

If there are many variants, can they be used for feature extraction for complex reinforcement learning tasks like self-driving cars?

",41103,,2444,,9/20/2020 17:53,12/22/2020 18:01,How many types of variational auto-encoders are there?,,1,0,,,,CC BY-SA 4.0 23671,2,,23618,9/20/2020 15:12,,1,,"

I will try to make some sense of this question.

Artificial general intelligence (AGI) is the hypothetical[1] intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI can also be referred to as strong AI,[2][3][4] full AI,[5] or general intelligent action.[6] Some academic sources reserve the term "strong AI" for machines that can experience consciousness.

These are the first sentences on AGI on wikipedia (link), and the softest limit there is

[learn] any intellectual task that a human being can.

Even taking only this, it would mean that any AGI has infinite economic value. As soon as there is something that can learn any human task and has the speed of current GPUs/CPUs, it could potentially immediately replace every human in every task. There are certainly enough computers with CPUs and GPUs out there.

This question is still a little flawed because you not only have to constrain the definition of AGI but also how it would actually be implemented.

",38328,,,,,9/20/2020 15:12,,,,2,,,,CC BY-SA 4.0 23672,1,23689,,9/20/2020 16:30,,3,314,"

There are many types of CNN architectures: LeNet, AlexNet, VGG, GoogLeNet, ResNet, etc. Can we apply transfer learning between any two different CNN architectures? For instance, can we apply transfer learning from AlexNet to GoogLeNet, etc.? Or even just from a "conventional" CNN to one of these other architectures, or the other way around? Is this possible in general?

EDIT: My understanding is that all machine learning models have the ability to perform transfer learning. If this is true, then I guess the question is, as I said, whether we can transfer between two different CNN architectures – for instance, what was learned by a conventional CNN to a different CNN architecture.

",16521,,16521,,9/21/2020 15:27,4/18/2021 20:53,Can we apply transfer learning between any two different CNN architectures?,,2,6,,,,CC BY-SA 4.0 23676,1,23679,,9/21/2020 7:38,,4,776,"

While exploration is an integral part of reinforcement learning (RL), it does not pertain to supervised learning (SL) since the latter is already provided with the data set from the start.

That said, can't hyperparameter optimization (HO) in SL be considered as exploration? The more I think about this the more I'm confused as to what exploration really means. If it means exploring the environment in RL and exploring the model configurations via HO in SL, isn't its end goal "mathematically" identical in both cases?

",30959,,2444,,9/21/2020 10:53,10/8/2020 23:12,"What is the meaning of ""exploration"" in reinforcement and supervised learning?",,2,0,,,,CC BY-SA 4.0 23678,2,,23656,9/21/2020 10:28,,1,,"

For easier visualization, I recommend this video: https://twitter.com/i/status/1257053365424578565

The more detailed article about GO algorithms: https://deepmind.com/blog/article/alphago-zero-starting-scratch.

With its breadth of $250$ possible moves each turn (Go is played on a $19$ by $19$ board, compared to the much smaller $8$ by $8$ chess board) and a typical game depth of $150$ moves, there are about $250^{150}$, or $10^{360}$, possible games.

After $2$ moves in Go, there are about $130000$ possible combinations.

See also: decision tree pruning.

",32352,,2444,,9/22/2020 16:50,9/22/2020 16:50,,,,2,,,,CC BY-SA 4.0 23679,2,,23676,9/21/2020 10:41,,5,,"

In reinforcement learning, exploration has a specific meaning, which is in contrast with the meaning of exploitation, hence the so-called exploration-exploitation dilemma (or trade-off). You explore when you decide to visit states that you have not yet visited or to take actions you have not yet taken. On the other hand, you exploit when you decide to take actions that you have already taken and you know how much reward you can get. It's like in life: maybe you like cereals $A$, but you never tried cereals $B$, which could be tastier. What are you going to do: continue to eat cereals $A$ (exploitation) or maybe try once $B$ (exploration)? Maybe cereals $B$ are as tasty as $A$, but, in the long run, $B$ are healthier than $A$.

More concretely, recall that, in RL, the goal is to collect as much reward as you can. Let's suppose that you are in state $s$ and, in the past, when you were in that state $s$, you had already taken the action $a_1$, but not the other actions $a_2, a_3$ and $a_4$. The last time you took action $a_1$, you received a reward of $1$, which is a good thing, but what if you take action $a_2, a_3$ or $a_4$? Maybe you will get a higher reward, for example, $10$, which is better. So, you need to decide whether to choose again action $a_1$ (i.e. whether to exploit your current knowledge) or try another action that may lead to a higher (or smaller) reward (i.e. you explore the environment). The problem with exploration is that you don't know what's going to happen, i.e. you are risking if you already get a nice amount of reward if you take an action already taken, but sometimes exploration is the best thing to do, given that maybe the actions you have taken so far have not led to any good reward.
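
A very common and simple way of handling this trade-off in practice is $\epsilon$-greedy action selection. Here is a minimal sketch (the Q-value estimates are made up, just to illustrate):

    import random

    def epsilon_greedy(q_values, epsilon=0.1):
        """With probability epsilon explore (random action), otherwise exploit (best-known action)."""
        if random.random() < epsilon:
            return random.randrange(len(q_values))                       # explore
        return max(range(len(q_values)), key=lambda a: q_values[a])      # exploit

    q_estimates = [1.0, 0.0, 0.0, 0.0]   # action a1 looks best so far, a2-a4 are untried
    action = epsilon_greedy(q_estimates)

With a small $\epsilon$ you mostly exploit, but you still occasionally explore actions such as $a_2$, $a_3$ or $a_4$.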

In hyper-parameter optimization, you do not need to collect any reward, unless you formulate your problem as a reinforcement learning problem (which is possible). The goal is to find the best set of hyper-parameters (e.g. the number of layers and neurons in each layer of the neural network) that performs well, typically, on the validation dataset. Once you have found a set of hyper-parameters, you usually do not talk about exploiting it, in the sense that you will not continually receive any type of reward if you use that set of hyper-parameters, unless you conceptually decide that this is the case, i.e., whenever you use that set of hyper-parameters you are exploiting that model to get good performance on the test sets that you have. You could also say that when you are searching for new sets of hyper-parameters you are exploring the search space, but, again, the distinction between exploration and exploitation, in this case, is typically not made, but you can well talk about it.

It makes sense to talk about the exploitation-exploration trade-off when there is stochasticity involved, but, in the case of hyper-parameter optimization, there may not be such stochasticity: it's usually a deterministic search, which you can, if you like, call exploration.

",2444,,2444,,9/21/2020 11:02,9/21/2020 11:02,,,,5,,,,CC BY-SA 4.0 23682,1,,,9/21/2020 16:02,,0,40,"

I used OCR to extract text from an image, but there are some spelling mistakes in it :

The text is as follows :

'gaRBOMATED WATER\n\nSFMEETENED CARBONATED 6\nBSREDERTS: CARBONATED WATER,\nSUGAR. ACIOITY REGULATOR (338),\n\nCFFENE. CONTAINS PERMITTED NATURAL\nCOLOUR (1506) AMD ADDED FLAVOURS QUcTURAL,\nSATIRE: OENTICAL AND ARTIFICIAL PLIVOUREE\n\nCOLA\nl 1187.3 PIRANGUT, TAL. MULSHI,\nGBST. PUME 612111, MAHARASHTRA.\nHELPLINE: 1800- 180-2653\ntet indishetptine@cocs-cola.com\nAUTHORITY OF THE COCA-COLA\n‘COCA-COLA PLAZA, ATLANTA, GA 36313, USA\nme DATE OF MANUFACTURE. BATCH NO. &\nLP CNL. OF ae TAXES}:\nSE BOTTOM OF CAN.\n\nTST Fone Sor MOTHS FROM\nWe, RE WHEN STORED ft.\n\nY PLACE.\nChe coca conn\nnee\n\n| BRA License uo:\n‘ eS wo:\n\n \n\x0c'

I would like to know if there are some NLP models/libraries that I can use to correct spelling mistakes (like correcting gaRBOMATED to CARBONATED).

",16881,,,,,9/21/2020 16:02,What are some good models to use for spelling corrections?,,0,3,,,,CC BY-SA 4.0 23683,2,,23221,9/21/2020 16:04,,20,,"

What is a transformer?

The original transformer, proposed in the paper Attention is all you need (2017), is an encoder-decoder-based neural network that is mainly characterized by the use of the so-called attention (i.e. a mechanism that determines the importance of words to other words in a sentence or which words are more likely to come together) and the non-use of recurrent connections (or recurrent neural networks) to solve tasks that involve sequences (or sentences), even though RNN-based systems were becoming the standard practice to solve natural language processing (NLP) or understanding (NLU) tasks. Hence the name of the paper "Attention is all you need", i.e. you only need attention and you don't need recurrent connections to solve NLP tasks.

Both the encoder-decoder architecture and the attention mechanism are not novel proposals. In fact, previous neural network architectures to solve many NLP tasks, such as machine translation, had already used these mechanisms (for example, take a look at this paper). The novelty of the transformer and this cited paper is that it shows that we can simply use attention to solve tasks that involve sequences (such as machine translation) and we do not need recurrent connections, which is an advantage, given that recurrent connections can hinder the parallelization of the training process.

The original transformer architecture is depicted in figure 1 of the cited paper. Both the encoder and decoder are composed of

  • attention modules
  • feed-forward (or fully connected) layers
  • residual (or skip) connections
  • normalization layers
  • dropout
  • label smoothing
  • embedding layers
  • positional encoding

The decoder part is also composed of a linear layer followed by a softmax to solve the specific NLP task (for example, predict the next word in a sentence).

What is BERT?

BERT stands for Bidirectional Encoder Representations from Transformers, so, as the name suggests, it is a way of learning representations of a language that uses a transformer, specifically, the encoder part of the transformer.

What is the difference between the transformer and BERT?

  • BERT is a language model, i.e. it represents the statistical relationships of the words in a language, i.e. which words are more likely to come after another word and stuff like that. Hence the part Representations in its name, Bidirectional Encoder Representations from Transformers.

    BERT can be trained in an unsupervised way for representation learning, and then we can fine-tune BERT on the so-called downstream tasks in a supervised fashion (i.e. transfer learning). There are pre-trained versions of BERT that can be already fine-tuned (e.g. this one) and used to solve your specific supervised learning task. You can play with this TensorFlow tutorial to use a pre-trained BERT model.

    On the other hand, the original transformer was not originally conceived to be a language model, but to solve sequence transduction tasks (i.e. converting one sequence to another, such as machine translation) without recurrent connections (or convolutions) but only attention.

  • BERT is only an encoder, while the original transformer is composed of an encoder and decoder. Given that BERT uses an encoder that is very similar to the original encoder of the transformer, we can say that BERT is a transformer-based model. So, BERT does not use recurrent connections, but only attention and feed-forward layers. There are other transformer-based neural networks that use only the decoder part of the transformer, for example, the GPT model.

  • BERT uses different hyper-parameters than the ones used in Attention is all you need to achieve the best performance. For example, it uses 12 and 16 "attention heads" (please, read the transformer paper to know more about these "attention heads") rather than 8 (although in the original transformer paper the authors experimented with a different number of heads).

  • BERT also uses segment embeddings, while the original transformer only uses word embeddings and positional encodings.

There are probably other small differences that I missed, but, after having read the paper Attention is all you need and quickly read some parts of the BERT paper, these seem to be the main differences.

When to use BERT and the transformer?

Although I never used them, I would say that you want to use BERT whenever you want to solve an NLP task in a supervised fashion, but your labeled training dataset is not big enough to achieve good performance. In that case, you start with a pre-trained BERT model, then fine-tune it with your small labeled dataset. You probably need to add specific layers to BERT to solve your task.
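
As a rough sketch of that workflow (assuming a recent version of the Hugging Face transformers library; this is just one possible way to do it, and the example sentence and label are made up):

    import torch
    from transformers import BertTokenizer, BertForSequenceClassification

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

    # a single labeled example from your (small) downstream dataset
    inputs = tokenizer("The movie was surprisingly good.", return_tensors="pt")
    labels = torch.tensor([1])

    outputs = model(**inputs, labels=labels)   # fine-tune by back-propagating outputs.loss
    outputs.loss.backward()

In a real fine-tuning run you would, of course, loop over batches of your labeled dataset and call an optimizer step after each backward pass.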

",2444,,2444,,9/21/2020 21:29,9/21/2020 21:29,,,,6,,,,CC BY-SA 4.0 23684,2,,23449,9/21/2020 17:37,,5,,"

I will be starting my PhD in natural language processing in a few days and this is very similar to my proposed topic. It's an open problem that ties NLP and AI into philosophy of science and epistemology and is, I think, extremely interesting. I say all this to drive home the point that this is not a simple problem.

Two major theoretical concerns come to my mind:

  1. What is a "fact"? Is it a universal truth, if there is such a thing? Or is it a generally accepted theory, and if so how do you measure acceptance? That is, accepted by whom, where, when?

  2. Are there any linguistic markers of opinions vs. facts? Only in rare cases, such as when the speaker prefaces their statement with something like "I believe". In most cases, I think, opinions will be stated linguistically similarly to facts. For example, compare "Cats are felines." (a "fact") with "Cats are aliens." (an opinion some may hold). They have the exact same syntactic structure. The difference here is deeply semantic, and probably relates to the speaker's intention. I'd venture that often people state their opinions with the intention of communicating a "fact".

Some more practical concerns are:

  1. Information extraction (also called relationship extraction, text mining, etc.), which for the most part assumes that the "facts" given in the labeled datasets are correct, is far from a solved problem. E.g. the state of the art model developed for a task released in 2010 has an F1 of only 76! What you propose adds significant uncertainty to these types of tasks.

  2. I suspect that even if you were able to compile a dataset of facts and opinions with corresponding labels you would encounter a number of modeling problems. Given the linguistic similarity between the statements of facts and opinions, I'd guess that your model will simply memorize the dataset, making it generalize poorly to your test set. Either that or it would pick up on random, hidden correlations in the data to solve the problem (neural nets are really good at this), perhaps generalizing to the test set, but failing to apply to any other data.

  3. Fact vs. opinion is something that is embedded in a cultural milieu, so a model would, I think, need access to some proxy for what is culturally accepted in order to make this distinction, perhaps a via knowledge base. This may be feasible for limited, highly curated domains (e.g. biomedicine), but there is currently nothing suitable for a general-purpose fact finder.

tldr: No, it is not enough to simply create a dataset of facts vs. opinions. This problem poses major theoretical concerns related to epistemology, linguistics, and cognitive science. Additionally, there are more mundane (but non-trivial!) modeling issues to consider. @Sceptre is right that it will be impossible to start this without knowledge of AI/ML/NLP, especially a rather deep knowledge of what current AI systems are really capable of.

",37972,,,,,9/21/2020 17:37,,,,2,,,,CC BY-SA 4.0 23685,2,,23676,9/21/2020 17:40,,-3,,"

Just to add to the answer above.

In fact, if the rewards that you get in RL are not stochastic, then you simply take the step in your parameter space that has guaranteed you the best reward so far (after the evaluation of all other states). So, for example, if the action up is the best one so far, nothing motivates you to try another one.

When you are doing naïve HO, it can be seen as an exploration of the space. The environment is not stochastic, but the reward (the loss decrease) that you will get is not known by the agent beforehand. That's enough to make the exploration step mandatory. So, let's say the combination (up, up, down) has got you the best loss so far; you still need to actually try other combinations to know whether it is the best of all. In that sense, you are exploring too.

So when are you not exploring? If the next step in your HO is given by an optimization step, let's say by a function $f$, then you are not exploring anymore. You are progressing toward the objective given by $f$.

Thus, you have to make sure that $f$ correctly gives you the best combination of parameters, i.e., mathematically, that $f$ converges to a global optimum.

So, grid search could be viewed as exploration, while Bayesian-optimization-based HO not so much.

",41123,,-1,,10/8/2020 23:12,10/8/2020 23:12,,,,3,,,,CC BY-SA 4.0 23686,2,,16014,9/21/2020 18:08,,1,,"

Nothing is written in stone here, but, as a rule of thumb, linear activations are not very common. A linear activation function in a hidden layer lets consecutive layers collapse into a single linear layer. A linear activation can be used in the last layer if the outputs are not scaled. (This is the most common use I have seen.)

",41126,,40573,,9/25/2020 2:45,9/25/2020 2:45,,,,0,,,,CC BY-SA 4.0 23687,1,,,9/21/2020 18:08,,2,52,"

One of the main arguments against $n$-gram models is that, as $n$ increases, there is no way to compute $P(w_n|w_1,\cdots,w_{n-1})$ from training data (since the chance of visiting $w_n,...,w_1$ is practically zero).

I am wondering why we cannot estimate $P(w_n|w_1,\cdots,w_{n-1})$ using the following approach:

Let $P_i(u|v)$ be the probability of having sequences where word $u$ comes exactly $i$ words after word $v$ (This is easy to compute).
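
For illustration, counting these distance-$i$ co-occurrences from a tokenised corpus is straightforward; a rough sketch (the corpus is assumed to be a list of token lists):

    from collections import Counter, defaultdict

    def skip_counts(corpus, max_distance):
        """counts[i][(v, u)] = number of times u occurs exactly i words after v."""
        counts = defaultdict(Counter)
        for sentence in corpus:
            for j, v in enumerate(sentence):
                for i in range(1, max_distance + 1):
                    if j + i < len(sentence):
                        counts[i][(v, sentence[j + i])] += 1
        return counts

    # P_i(u|v) is then counts[i][(v, u)] divided by the total count of v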

Then we can estimate $P(w_n|w_1,\cdots,w_{n-1})$ as a function of $P_i(u|v)$. I could not find any reference to such an approach in the literature. The most similar approaches are the smoothing/backoff methods.

Is there any reason why no-one used this approach? Or if one can share some previous work about this approach.

P.S.1. The disadvantage of this approach, compared with the standard $n$-gram model, is its running time.

P.S.2. We could use bucketing idea: Instead of computing/storing/using $P_i$, for every $i$, we can compute/store/use $PB_{i}=P_{2^i}$ . Then $P_i(u|v) \approx PB_{\log i}(u|v)$.

",41124,,41124,,4/23/2021 17:22,4/23/2021 17:22,Estimating an $n$-Gram model using on bigrams,,0,2,,,,CC BY-SA 4.0 23688,2,,23667,9/21/2020 18:19,,0,,"

I guess you are using the NN for regression. In the most common applications, a scaling of the outputs is implemented. This is recommended, especially if you have more than one output with different scales. Otherwise, you will reward the neural network for correcting the error of one variable over the other. If you still want to avoid scaling the outputs: yes, you can use the identity function in the output layer, or a linear function (the same, with a different slope). The weights and biases of some connections will become negative, and the hidden neurons are going to work as always.

",41126,,,,,9/21/2020 18:19,,,,0,,,,CC BY-SA 4.0 23689,2,,23672,9/21/2020 20:06,,1,,"

No, transfer learning cannot be applied "between" different architectures, as transfer learning is the practice of taking a neural network that has already been trained on one task and retraining it on another task with the same input modality, which means that only the weights (and other trainable parameters) of the network change during transfer learning but not the architecture.

In my understanding, transfer learning is also only really effective in deep learning, but I could be wrong, considering that this Google search seems to yield some results.

You might otherwise be thinking of knowledge distillation, which is a related but different concept, where an already trained network acts as a teacher and teaches another network (a student network) with possibly a different architecture (or a machine learning model not based on neural networks at all) the correct outputs for a bunch of input examples.

",9220,,,,,9/21/2020 20:06,,,,1,,,,CC BY-SA 4.0 23690,1,23717,,9/21/2020 23:10,,5,256,"

This is a simple question. I know the weights in a neural network can be initialized in many different ways like: random uniform distribution, normal distribution, and Xavier initialization. But what is the weight initialization trying to achieve?

Is it trying to allow the gradients to be large so it can quickly converge? Is it trying to make sure there is no symmetry in the gradients? Is it trying to make the outputs as random as possible to learn more from the loss function? Is it only trying to prevent exploding and vanishing gradients? Is it more about speed or finding a global maximum? What would the perfect weights (without being learned parameters) for a problem achieve? What makes them perfect? What are the properties in an initialization that makes the network learn faster?

",41026,,41026,,9/22/2020 16:50,9/24/2020 3:55,What is the goal of weight initialization in neural networks?,,2,5,,,,CC BY-SA 4.0 23691,2,,23690,9/21/2020 23:15,,1,,"

The most important thing we achieve is indeed making sure the weights are not all equal. If they were, every layer would behave as if it were a single cell.

We typically want weights that are near zero (so unimportant connections will not accidentally dominate) but non-zero.

The different types of initialization all have different motivations, including those mentioned in the question.

If you're curious what the motivation for each one is, I would recommend you check the documentation and try to find the original papers where they were first introduced.

",40573,,,,,9/21/2020 23:15,,,,1,,,,CC BY-SA 4.0 23692,1,,,9/21/2020 23:23,,1,591,"

I have a deep learning configuration in which I obtain good results on the validation set but even better results in the training set. From my understanding this means that there is overfitting to some extent. What does this mean in practice? Does it mean that my model is not good and that I should not use it? If I decrease the gap between the validation and training accuracy (decreasing the overfitting) but at the same time decrease the validation accuracy, which of the two models is better?

Below are some images to illustrate the two situations outlined previously:

",41131,,,,,9/22/2020 1:49,How much overfitting is acceptable?,,1,2,,,,CC BY-SA 4.0 23693,2,,23692,9/22/2020 1:49,,1,,"

Validation results will almost never be as good as training results; that's just natural. As long as they are not too different, you should be fine. What "too different" means depends on the particular data set and model you're using.

If you plot the curves for varying parameter values, when the training error keeps going down but the validation error starts going up again, that's when you know there is overfitting. In your second graph, after 14 epochs, we might see the start of overfitting. If you continue this until 20 epochs or so, it should be even more clear. I would guess that 12 is probably a good value for the number of epochs for that problem.

In the first graph, we don't see that happening yet. The model might not be well-suited (the gap between training and validation results is a bit larger) but that can also be because of too little data, or other factors. Perhaps that's just the best you can do; there might be noise in the data or something.

",40573,,,,,9/22/2020 1:49,,,,0,,,,CC BY-SA 4.0 23698,1,,,9/22/2020 11:25,,0,266,"

What is the time complexity for training a single-hidden layer auto-encoder, for 1 epoch?

You can assume that there are $n$ training examples, $m$ features, and $k$ neurons in the hidden layer, and that we use gradient descent and back-propagation to train the auto-encoder.

",41141,,2444,,9/22/2020 12:09,9/22/2020 12:09,What is the time complexity for training a single-hidden layer auto-encoder?,,0,6,,,,CC BY-SA 4.0 23699,1,,,9/22/2020 11:42,,1,53,"

Convolutions can be expressed as a matrix-multiplication (see e.g. this post) and as an element-wise multiplication using the Fourier domain (https://en.wikipedia.org/wiki/Convolution_theorem).

Attention utilizes matrix multiplications, and is as such $O(n^2)$. So, my question is, is it possible to exploit the Fourier domain for attention mechanisms by turning the matrix multiplication of attention into a large convolution between the query and the key matrices?

",33058,,2444,,9/23/2020 11:47,9/23/2020 11:47,Is it possible to express attention as a Fourier convolution?,,0,0,,,,CC BY-SA 4.0 23700,1,,,9/22/2020 13:25,,2,65,"

I have an environment that is computationally heavy (it takes several seconds to get a reward and the next state). This limits reinforcement learning, due to poor sampling of the problem. Is there any strategy that could be used to address the problem? (E.g., if I can run the environment in parallel, then I could use a multi-agent approach.)

",31324,,31324,,9/22/2020 13:31,11/7/2022 21:02,What are the strategies for computationally heavy environments or long-time waiting environments?,,1,2,,,,CC BY-SA 4.0 23703,1,,,9/22/2020 19:02,,1,83,"

When it comes to using Transformers for image captioning, is there any reason to use masking?

I currently have a resnet101 encoder and am trying to use the features as the input for a transformer model in order to generate a caption for the image. Is there any need to use masking? And what would I mask if I did need to?

Any help would be much appreciated

Thanks in advance.

",40561,,,,,12/1/2020 14:59,How to implement or avoid masking for transformer?,,1,0,,,,CC BY-SA 4.0 23706,2,,22963,9/22/2020 22:10,,0,,"

Layer freezing means that the weights of certain layers of a trained model are not changed when the model is reused on a subsequent downstream task: they remain frozen. Basically, when backpropagation is performed during training, these layer weights aren't updated.
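
A minimal PyTorch sketch of freezing a pre-trained backbone (using torchvision's ResNet-18 just as an example; the 10-class output layer is made up):

    import torch
    import torchvision.models as models

    model = models.resnet18(pretrained=True)

    # freeze every layer: backpropagation will not update these weights
    for param in model.parameters():
        param.requires_grad = False

    # replace the final layer for the downstream task; only this layer stays trainable
    model.fc = torch.nn.Linear(model.fc.in_features, 10)

    optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()))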

",17348,,,,,9/22/2020 22:10,,,,0,,,,CC BY-SA 4.0 23708,1,,,9/23/2020 9:39,,1,409,"

Recently, I have come up with a VGG16 model for my binary classification task. I have relatively simple signal images.

Therefore (maybe?) other deeper models like ResNet18 and InceptionV3 were not as good. As is known, VGG uses 3x3 filters to convolve the images and produce feature maps. I have tried several hyper-parameters to get the desired performance. However, there are still some things I need to do. I was thinking of replacing the 3x3 conv filters with 3x1 followed by 1x3 filters to reduce the compute. I think it will definitely do so, considering the multiplications (9 operations for 3x3 and 6 for 3x1 followed by 1x3).
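
As a quick sanity check of the saving in parameters (a Keras sketch with made-up channel sizes, just to illustrate the arithmetic):

    import tensorflow as tf

    inputs = tf.keras.Input(shape=(90, 90, 64))

    full = tf.keras.layers.Conv2D(64, (3, 3), padding="same")(inputs)
    factorized = tf.keras.layers.Conv2D(64, (1, 3), padding="same")(
        tf.keras.layers.Conv2D(64, (3, 1), padding="same")(inputs))

    print(tf.keras.Model(inputs, full).count_params())        # 3*3*64*64 + 64   = 36928
    print(tf.keras.Model(inputs, factorized).count_params())  # 2*(3*64*64 + 64) = 24704

The factorized version has roughly a third fewer parameters in this configuration, although fewer parameters does not automatically mean faster wall-clock time or better accuracy.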

Then I came to think: If I replace all the 3x3 filters with separable filters, will I get any performance improvement?

What are the benefits of replacing 3x3 filters with separable ones?

Thanks

",31870,,,,,11/18/2021 14:04,Does replacing 3x3 filters with 3x1 and 1x3 filters improve the performance?,,2,2,,,,CC BY-SA 4.0 23711,1,,,9/23/2020 16:12,,0,105,"

I've started to work on time series. I was wondering what would be the best data normalization and pre-processing technique for non-linear models, specifically, neural networks.

One I can think of is min-max normalization

$$z = \frac{x - min(x)}{max(x) - min(x)}$$
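
A minimal sketch of applying this to a time series (scikit-learn here, with a synthetic series; the important detail is fitting the scaler on the training portion only, to avoid leaking future information):

    import numpy as np
    from sklearn.preprocessing import MinMaxScaler

    series = np.sin(np.linspace(0, 20, 200)) + np.random.normal(0, 0.1, 200)
    train, test = series[:150].reshape(-1, 1), series[150:].reshape(-1, 1)

    scaler = MinMaxScaler()
    train_scaled = scaler.fit_transform(train)   # min/max estimated from the training split only
    test_scaled = scaler.transform(test)         # the same transform applied to the test split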

",41110,,2444,,10/3/2020 10:16,1/9/2023 12:07,What would be a typical pre-processing and data normalization pipeline for time series data (for non-linear models such as neural networks)?,,2,2,,,,CC BY-SA 4.0 23713,1,,,9/23/2020 21:48,,3,26,"

Consider the following problem statement:

You are given $n$ actions. You can perform any of them. Each action gives you a success with some probability. The challenge is to perform a given finite number of actions so as to get the maximum number of successes.

Here, we can perform actions and slowly estimate the probability of success of each action. I have no doubts about this problem.

Now consider the following variant of the problem:

You are given $n$ actions. You can perform any of them. Each action gives you a success with some probability. In addition, you are given a set of $n$ probabilities, but you are not told which probability is associated with which action. The challenge is to utilise this additional information to perform a given finite number of actions so as to get the maximum number of successes.

My doubt in this problem is: how can we map probabilities to actions? I can perform a large enough number of actions to gather empirical probabilities and then try to associate the given probabilities with the actions having the closest empirical probabilities. But is there any algorithm for such a problem in the literature?

",41169,,,,,9/23/2020 21:48,Mapping given probabilities to empirical probabilities,,0,1,,,,CC BY-SA 4.0 23714,1,,,9/23/2020 22:23,,0,43,"

Ridgid body simulation is a well known field with well established methods. It's still fairly computationally expensive to simulate things.

I am interested in approaches to training deep learning networks to predict rigid body dynamics and interactions to reduce the computational load associated with simulations.

Has this been done before and what approaches have been used?

",32390,,,,,9/23/2020 22:23,Deep learning based physics engine,,0,3,,,,CC BY-SA 4.0 23716,2,,23207,9/24/2020 3:35,,1,,"

We usually divide the dataset into multiple subsets, namely training, validation, and test sets. During training, we validate the model against the validation set, and during testing, we use the test dataset to obtain metrics for the model. We should make sure the subsets are drawn from the same distribution. Once you've tested the model against the test subset, there's not much more we can do.

You can also increase your dataset by using multiple data sources, if the problem statement allows you to.

",40434,,,,,9/24/2020 3:35,,,,0,,,,CC BY-SA 4.0 23717,2,,23690,9/24/2020 3:38,,3,,"
  • Is it trying to make sure there is no symmetry in the gradients?

The aim of weight initialization is to make sure that we don't converge to a trivial solution. That's why we have different kinds of initialization depending on the dataset type. So, yes, it is trying to avoid symmetry.

  • Is it trying to allow the gradients to be large so it can quickly converge?

The time it takes to converge is, I think, a property of the optimizer and not of the weight initialization. Of course, the manner in which we initialize our weights matters, but I think optimization algorithms contribute more towards convergence.

  • What are the properties in an initialization that makes the network learn faster?

Glorot and Bengio believed that Xavier weight initialization would maintain the variance of activations and back-propagated gradients all the way up or down the layers of a network. Incidentally, when they trained deeper networks that used ReLUs, it was found that a 30-layer CNN using Xavier initialization stalled completely and didn’t learn at all. Thus, it depends on the particular problem at hand.
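
For reference, the Glorot/Xavier scheme just scales the random weights by the layer's fan-in and fan-out; a minimal NumPy sketch (the layer sizes are made up):

    import numpy as np

    def glorot_uniform(fan_in, fan_out, rng=None):
        # Glorot & Bengio (2010): keep activation/gradient variance roughly constant across layers
        rng = rng or np.random.default_rng()
        limit = np.sqrt(6.0 / (fan_in + fan_out))
        return rng.uniform(-limit, limit, size=(fan_in, fan_out))

    W = glorot_uniform(256, 128)   # weight matrix for a 256 -> 128 layer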

",40434,,40434,,9/24/2020 3:55,9/24/2020 3:55,,,,2,,,,CC BY-SA 4.0 23718,1,,,9/24/2020 3:52,,0,193,"

While researching why we need non-linear activation functions, all the explanations I found revolve around neural networks being able to separate values that aren't linearly separable. So I wonder: if we have a neural network whose task is something else, say predicting an output value of a time series, is it still important to have an activation function that is non-linear?

",38668,,,,,9/24/2020 4:08,Do we need non-linear activation function in neural networks whose task isn't classification?,,1,0,,,,CC BY-SA 4.0 23719,2,,23718,9/24/2020 4:08,,2,,"

Yes, definitely.

In the simplest example, predicting an output value for a time series is classification. You take in the previous time steps and classify what is the most likely next value. You could do this with an RNN (Recurrent Neural Network), for example.

If the activation functions are all linear, the neural network is just a glorified linear regression. Think of it like this: a neural network is trying to approximate a complicated function in $n$-dimensional space. It does this by combining operations on a series of known functions, to get a resultant function that hopefully mimics the desired function. The issue with combining linear functions is that the only thing you'll ever get at the end is a linear function.

As a concrete example, try to approximate the function $y = x^3 + x^2 -x -1$ by adding a series of linear functions together. You'll find pretty quickly that this is useless. However, if you use a non-linear function, such as a ReLU (Rectified Linear Unit), you can quite easily approximate this function. See this implementation on desmos.
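
If you prefer code to Desmos, here is a rough scikit-learn sketch (my own, assuming scikit-learn is installed) comparing a linear-only network with a ReLU network on that same cubic:

import numpy as np
from sklearn.neural_network import MLPRegressor

x = np.linspace(-2, 2, 400).reshape(-1, 1)
y = (x ** 3 + x ** 2 - x - 1).ravel()

# 'identity' activation: the whole network collapses to a single linear map
linear_net = MLPRegressor(hidden_layer_sizes=(50,), activation='identity',
                          max_iter=5000, random_state=0).fit(x, y)
# 'relu' activation: a piecewise-linear approximation of the cubic
relu_net = MLPRegressor(hidden_layer_sizes=(50,), activation='relu',
                        max_iter=5000, random_state=0).fit(x, y)

print('linear-only fit error:', np.mean((linear_net.predict(x) - y) ** 2))
print('relu fit error:       ', np.mean((relu_net.predict(x) - y) ** 2))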

If a problem has any sort of complexity to it, the function it follows is likely incredibly complicated, and it is futile to approximate it using linear equations.

",26726,,,,,9/24/2020 4:08,,,,4,,,,CC BY-SA 4.0 23720,2,,23708,9/24/2020 4:16,,-2,,"

First of all, keep in mind that maths operations aren't the only thing that contributes to performance. Memory bandwidth can also be a factor.

And most importantly, we want to cover as much area as we can in the lowest possible number of operations. So in the 3x3 kernel case, we can capture 9 cells in one shot, but with a 3x1 followed by a 1x3 we have to compute 6 times to capture those 9 cells, which suggests that the 3x3 kernel is far more efficient than the two sequential kernels.

So, the answer to your question is no: it will not improve performance; instead, it will increase the computational overhead for your system.

",41110,,,,,9/24/2020 4:16,,,,1,,,,CC BY-SA 4.0 23723,1,23855,,9/24/2020 8:50,,1,67,"

I would like to classify the subject of a conversation. I could classify each message of the conversation, but I would lose some information because of related messages.

I also need to do it gradually and not at the end of the conversation.

I have searched around recurrent neural networks and connectionist classification, but I'm not sure they really address my issue.

",41176,,2444,,9/24/2020 15:53,10/1/2020 14:58,Is it possible to classify the subject of a conversation?,,2,4,,,,CC BY-SA 4.0 23724,1,,,9/24/2020 11:08,,1,31,"

Suppose I have a dataset with hand images. A hand completely opened is labeled as 0 and a hand completely closed (fist) is labeled as 1. I also have a bunch of unlabeled images of hands which, if properly labeled, would have values between 0 and 1, because they are not completely opened and not completely closed.

Extra info I have is the ordering between all the pairs of unlabeled images. For example, given image A and B, I can tell you which image should be predicted with higher value, but I cannot tell you what exactly is the value. The unlabeled dataset is collected by recording a video of a closing hand from completely opened to completely closed.

What are some machine learning techniques that I can use to give the label or predict the values of hands not completely closed and not completely opened? I expect it to be based on ordering or ranking system. If it doesn't even require the ordering label (A > B?) then that would be a very smart algorithm.

I want the values between 0 and 1. If there's a code for it that would be a plus. Thank you.

",20819,,,,,9/24/2020 11:08,How do you make a regression model from a binary labeled dataset?,,0,0,,,,CC BY-SA 4.0 23726,1,,,9/24/2020 11:40,,1,82,"

In a densely connected network, every unit gets all the input features (X), so it has one parameter for every feature, and every unit tweaks its parameters for loss optimization. What if we used only one unit, and that one unit had all the parameters it can tweak for loss optimization? Is there a reason or benefit of using multiple units in every layer except the output layer?

",41178,,2444,,9/25/2020 13:43,9/25/2020 15:54,Why one unit in the layers of neural network is not enough?,,1,0,,,,CC BY-SA 4.0 23728,1,,,9/24/2020 12:57,,2,60,"

For example, if I use some iterative solvers to find a solution to a non-linear least squares problem, is that already considered machine learning?

",32111,,4446,,9/24/2020 13:41,9/24/2020 15:02,What's the threshold to call something 'machine learning'?,,1,3,,,,CC BY-SA 4.0 23729,2,,23723,9/24/2020 13:13,,1,,"

This is a difficult problem.

First, how do you define 'subject'? Do you have a (closed) list of labels you want to assign? What about subjects that overlap, or don't occur in your list? What even is a subject? This is a non-trivial issue.

Second, and this is even harder, how do you want to recognise subjects? A simple solution could be using a list of associated keywords, but this is problematic as many words have multiple meanings, and words are not really a good indicator of a conversation topic in the first place.

Instead of jumping to an implementation method, be clear about how you want to tackle these two items first. Start by annotating a conversation transcript by hand. You will then get a feeling for the problems and possible solutions. After you have done this, you can think about how to get a machine to do it efficiently.

UPDATE: For a scheme to annotate the functions of lines within a conversation have a look at Francis & Hunston (1992) Analysing Everyday Conversation. In Coulthard, M. (ed.) "Advances in Spoken Discourse Analysis". London: Routledge. pp.123-161. This is more oriented towards linguistics, but might give you some ideas on how to proceed.

",2193,,2193,,9/25/2020 16:03,9/25/2020 16:03,,,,3,,,,CC BY-SA 4.0 23730,2,,23728,9/24/2020 13:34,,2,,"

T. Mitchell defines machine learning in "Machine Learning" book as

a computer program is said to learn from experience $E$ with respect to some class of tasks $T$ and performance measure $P$, if its performance at tasks in $T$, as measured by $P$, improves with experience $E$

Hence, based on the above definition, we can't call every iterative method a machine learning method. In your specific example, it is just a non-linear solver, such as Newton's method for finding roots.

However, you should notice that methods that are not specific to machine learning can be used within the learning process. For example, you might need some numerical method to compute the measure $P$ (in the above definition). But we can't say that the specified method is a machine learning method.

",4446,,4446,,9/24/2020 15:02,9/24/2020 15:02,,,,2,,,,CC BY-SA 4.0 23732,1,23733,,9/24/2020 15:58,,0,147,"

What are trap functions in genetic algorithms? Suppose you ran a GA with a trap function and examined the population midway through the run. Can someone explain what you would expect the population to look like?

",41181,,2444,,12/8/2020 16:08,12/8/2020 16:08,What are trap functions in genetic algorithms?,,1,0,,12/8/2020 16:10,,CC BY-SA 4.0 23733,2,,23732,9/24/2020 16:19,,2,,"

Traps are functions that are designed to have a very obvious gradient that leads to basically the second-best solution, with the best one being very far removed from that, often the complement of the second-best.

Take the 1-max problem. You have N bits, and the fitness of each individual is just the number of 1 bits in the string. If you run a GA on that, you'd expect to see an initial random distribution of 1s and 0s across the population fairly quickly start to converge to strings of all 1s. Now let's add one more clause to the fitness criteria. Instead of $f(\text{all zeros})=0$, let $f(\text{all zeros})=N+1$. This gives you a trap function. The "trap" is the solution of all 1s. Everything in the search space points to this being the best solution, but it's actually maximally far away from the best solution.
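
As a (hypothetical) sketch, such a trap fitness function for an N-bit string could look like this:

def trap_fitness(bits):
    # 1-max with a trap: the all-zeros string is the (isolated) global optimum,
    # every other string rewards adding more 1 bits.
    n_ones = sum(bits)
    if n_ones == 0:
        return len(bits) + 1    # f(all zeros) = N + 1
    return n_ones               # gradient pointing at the all-ones "trap"

print(trap_fitness([0] * 8))    # 9 -> global optimum
print(trap_fitness([1] * 8))    # 8 -> second best, and where the GA is driven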

In terms of dynamics, now the optimal solution is the string of all 0s. But unless you happen to very quickly stumble across that string, you likely never will. That is, there's some slight chance that your initial population included 00001111 and 11110000 and you crossed them over in the middle and got the optimum. Or maybe you had 00001000 and you got lucky and mutated that one bit to a zero. But if you don't do that immediately, you're screwed, because the algorithm is going to pretty relentlessly drive all the zero bits out of the population. No matter what it does, flipping a 0 to a 1 is always better unless it would otherwise lead to the single string of all 0s, so there's constant pressure to find more and more 1 bits. A few generations in, you might have 11110110, but you're never realistically going to randomly mutate all six of those 1s to 0s. And you have to get it all in one shot. Any subset of fewer than six of those bits being flipped will have worse fitness than you started with, and be selected against.

",3365,,,,,9/24/2020 16:19,,,,0,,,,CC BY-SA 4.0 23734,1,,,9/24/2020 18:46,,1,54,"

What I want to do comes from an Internet challenge: transform any given image into the Polish flag using the available filters and the crop tool in the iPhone camera app. Here's an example.

There aren't nearly enough of these videos to train a neural network using a labeled dataset, and (while I haven't ruled it out) I don't think automatically inserting a Polish flag into an image and then adding random filters to it to create my own dataset would work out.

My thinking is that I would feed a neural network the image and it would output a value for each filter & cropping coordinates. Then, I could easily calculate the loss by comparing the resulting picture to a picture of the Polish flag. The obvious problem here is that you don't know how each of the neurons in the last layer affects the loss, so you can't perform back-propagation.

Is my best bet to mathematically calculate the loss (by this I mean as opposed to using high level libraries, which would be difficult but I'm sure it's possible) so I can find the partial derivative of each last layer neuron with respect to the loss function and then backpropagate? Would this even work? Are there any alternatives that you recommend?

",41183,,11539,,9/29/2020 16:55,9/29/2020 16:55,Is this ML task possible?,,1,0,,,,CC BY-SA 4.0 23736,2,,23734,9/24/2020 20:05,,1,,"

I think the best thing to use here is a form of "structured prediction". Our "target" is a sequence of operations. The framework of structured prediction allows us to chain together as many filters as we want.

With a neural network of fixed architecture, you would have to make sure you have enough space for all the filters you might need.

",40573,,,,,9/24/2020 20:05,,,,2,,,,CC BY-SA 4.0 23737,1,,,9/24/2020 22:15,,1,125,"

I am trying to comprehend how the Gradient Descent works.

I understand we have a cost function which is defined in terms of the following parameters,

$J(w_{1},w_{2},\dots, w_{n}, b)$

the derivative would tell us which direction to adjust the parameters.

i.e. $\dfrac{dJ(w_{1},w_{2},\dots, w_{n}, b)}{dw_{1}}$ is the rate of change of the cost w.r.t. $w_{1}$

The lecturer kept saying this is very valuable, as we are asking the question: how should I change $w$ to improve the cost?

But then the lecturer presented $w_{1}$, $w_{2}$, ... as scalar values. How can we differentiate a scalar value?

I am fundamentally missing what is happening.

Can anyone please guide me to any blog post, a book that I should read to understand better?

",41187,,41187,,9/24/2020 23:03,9/24/2020 23:03,How parameter adjustment works in Gradient Descent?,,1,0,,,,CC BY-SA 4.0 23738,1,,,9/24/2020 22:39,,3,35,"

Is there a way to quantify the amount of information lost in the lossy part of an autoencoder, where the original input is compressed to a representation with fewer degrees of freedom?

I was thinking maybe to use somehow the mutual information either in the image or frequency domain.

$$ \mathrm{I}(X ; Y)=\sum_{y \in \mathcal{Y}} \sum_{x \in \mathcal{X}} p_{(X, Y)}(x, y) \log \left(\frac{p_{(X, Y)}(x, y)}{p_{X}(x) p_{Y}(y)}\right) $$

where $p_{(X,Y)}$ is the joint probability density function, which is deduced somehow empirically from a set of $N$ inputs and outputs of the network.

Maybe it's not even an interesting question since the loss function evaluates exactly that?

",12975,,,,,9/24/2020 22:39,How to quantify the amount of information lost by the decoder NN in an AE?,,0,1,,,,CC BY-SA 4.0 23739,2,,23737,9/24/2020 22:59,,3,,"

Imagine we have the curve $f(x) = x^2$, and we want to find the minimum of this function. The derivative of $f$ with respect to $x$ is $2x$. Now, gradient descent works by updating our current estimate of the minimum, say $c_t$, by the following iterative process $$c_{t+1} = c_t - \alpha \times \nabla_xf(x=c_t),$$ where $\alpha$ is some constant that controls how big a step we take in the direction of the negative gradient.

Intuitively, this should make sense. Imagine our current estimate of the minimum is $c_t = -1$. The update would then give us $c_{t+1} = -1 - \alpha \times -2 = -1 + 2\alpha > -1$. As you can see, the update has shifted our estimate in the direction of the minimum. If our estimate were $+1$, then you can probably see that the update would again have shifted us in the direction of the minimum.

Now, what happens in machine learning is that we have a loss function $L$ that we typically want to find the minimum of in terms of the parameters of our model. By applying gradient descent to the loss function, as we did above with $f(x)$, we iteratively apply the update rule, which will eventually lead us to the minimum of our loss function with respect to the weights. The process is exactly the same as above, except it is likely to happen in higher dimensions, where the derivative of $f$ becomes a vector of partial derivatives. Note that, whilst the $w_i$'s are scalar values, we are not differentiating these values; rather, we are differentiating the loss function with respect to these scalars.

I would recommend you to try this out with a simple linear model of maybe 2 or 3 parameters. After a quick Google I found this article that may be useful.
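
For instance, here is a minimal sketch of the update rule from above applied to $f(x) = x^2$ (my own illustration):

def grad_descent(grad, c0=-1.0, alpha=0.1, steps=50):
    c = c0
    for _ in range(steps):
        c = c - alpha * grad(c)    # c_{t+1} = c_t - alpha * f'(c_t)
    return c

# f(x) = x^2, so f'(x) = 2x; the minimum is at x = 0
print(grad_descent(lambda x: 2 * x))   # approaches 0.0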

",36821,,,,,9/24/2020 22:59,,,,0,,,,CC BY-SA 4.0 23740,1,23748,,9/25/2020 3:40,,7,2674,"

In all examples I've ever seen, the learning rate of an optimisation method is always less than $1$. However, I've never found an explanation as to why this is. In addition to that, there are some cases where having a learning rate bigger than 1 is beneficial, such as in the case of super-convergence.

Why is the learning rate generally less than 1? Specifically, when performing an update on a parameter, why is the gradient generally multiplied by a factor less than 1 (absolutely)?

",26726,,2444,,11/29/2020 12:25,11/29/2020 12:25,Why is the learning rate generally beneath 1?,,1,0,,,,CC BY-SA 4.0 23741,1,23742,,9/25/2020 6:22,,0,299,"

I have read a lecture note of Prof. Andrew Ng. There was something about data normalization, like how we can flatten an image of (64x64x3) into a (64x64x3)x1 vector. After that, there is a pictorial representation of the flattening.

As per the picture, the height, width and depth of the picture are 64, 64 and 3. I think nx is a row vector which is then transposed to a column vector. If there are 3 pictures, I think nx contains {64,64,3,64,64,3,64,64,3}. Am I right?

To use a 64x64x3 image as an input to our neuron, we need to flatten the image into a (64x64x3)x1 vector. And to make Wᵀx + b output a single value z, we need W to be a (64x64x3)x1 vector: (dimension of input)x(dimension of output), and b to be a single value. With N number of images, we can make a matrix X of shape (64x64x3)xN. WᵀX + b outputs Z of shape 1xN containing z’s for every single sample, and by passing Z through a sigmoid function we get final ŷ of shape 1xN that contains predictions for every single sample. We do not have to explicitly create a b of 1xN with the same value copied N times, thanks to Python broadcasting.

As per my understanding, Wᵀ = nx and x= nxᵀ.

Is it Wᵀ= [64,64,3,64,64,3,64,64,3] and x = [64,64,3,64,64,3,64,64,3]ᵀ?

In that case, their product will be a symmetric matrix.

Is there any significance to a symmetric matrix here?

I just got confused about everything while flattening the image. If anyone has any idea, please share it with me.

Thank you in advance.

",18384,,18384,,9/25/2020 7:02,9/25/2020 7:02,Flatten image using Neural network and matrix transpose,,1,0,,,,CC BY-SA 4.0 23742,2,,23741,9/25/2020 6:37,,1,,"

Yes, if you have 3 images (and by images I assume you mean samples), the flattened input will be of the shape $12288*3$ ($64*64*3=12288$). The size of $W$, however, does not change, and nor does the size of $b$, as these are parameters and are independent of the number of samples passed through the network.
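
For concreteness, here is a small numpy sketch of the shapes involved (my own illustration, assuming 3 random sample images):

import numpy as np

N = 3                                    # number of sample images
images = np.random.rand(N, 64, 64, 3)    # N images of shape 64x64x3
X = images.reshape(N, -1).T              # flattened: X has shape (12288, N)

W = np.random.rand(64 * 64 * 3, 1)       # parameters, independent of N
b = 0.0
Z = W.T @ X + b                          # shape (1, N): one z per sample
print(X.shape, W.shape, Z.shape)         # (12288, 3) (12288, 1) (1, 3)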

ETA: I only answered the "Am I right?" part of your question because that's the only part of your question that's actually a question. I don't know what you're trying to ask in the second half of your question.

",26726,,,,,9/25/2020 6:37,,,,1,,,,CC BY-SA 4.0 23744,1,,,9/25/2020 8:31,,1,21,"

Could someone help me understand in detail each step of this fuzzy diagram? I am lost.

",40555,,2444,,9/25/2020 10:05,9/25/2020 10:05,Can someone explain and help to understand this fuzzy diagram?,,0,1,,,,CC BY-SA 4.0 23745,1,23794,,9/25/2020 10:12,,2,371,"

Consider this slide from a Stanford lecture on reinforcement learning. It states that a model is

the agent's representation of how the world changes in response to the agent's action.

I've been experimenting with Q-learning for simple problems such as OpenAI's FrozenLake and Mountain Car, which both are amenable to the Q-learning framework (the latter upon discretization). I consider the topologies of the lake and the mountain to be the "worlds" (aka. environments) in the two cases, respectively.

Q-learning is said to be "model-free". Given the two examples above, is it because neither the lake's topology nor that of the mountain are changed by the actions taken?

",30959,,,,,9/27/2020 13:35,"How does one know that a problem is ""model-free"" in reinforcement learning?",,2,4,,,,CC BY-SA 4.0 23746,1,23751,,9/25/2020 12:28,,2,276,"

Say I trained a Neural Network (not RNN or CNN) to classify a particular data set.

So I train using a specific data set & then I test using another and get an accuracy of 95% which is good enough.

I then deploy this model in a production level environment where it will then be processing real world data.

My question is: will this trained NN be constantly learning even in a production scenario? I can't figure out how it would, because, say, it processes a dataset such as this: [ [1,2,3] ] and gets an output of [ 0, 0.999, 0 ].

In a training scenario, it will compare the predicted output to the actual output and back-propagate, but in a real-world scenario it will not know the actual value.

So how does a trained model learn in a real world scenario?

I am still very much a beginner in this field and I am not sure if the technology used is going to affect the answer to this question, but I am hoping to use Eclipse Deeplearning4J to create a NN. That being said the answer does not need to be restricted to this technology in particular as I am hoping more for the theory behind it and how it works.

",41196,,2444,,9/25/2020 13:56,9/25/2020 13:56,How does a neural network that has been trained keep learning while in a real world scenario,,1,0,,,,CC BY-SA 4.0 23748,2,,23740,9/25/2020 13:22,,11,,"

If the learning rate is greater than or equal to $1$ the Robbins-Monro condition $$\sum _{{t=0}}^{{\infty }}a_{t}^{2}<\infty\label{1}\tag{1},$$

where $a_t$ is the learning rate at iteration $t$, does not hold (a number greater than or equal to $1$ stays at least that large when squared, so the sum of squares diverges), so stochastic gradient descent is not generally guaranteed to converge to a minimum [1] (although the condition \ref{1} is a sum from $t=0$ to $t=\infty$, in practice we only iterate for a finite number of iterations). Moreover, note that, if the learning rate is bigger than $1$, you are essentially giving more weight to the gradient of the loss function than to the current value of the parameters (you give weight $1$ to the parameters).

This is probably the main reason why the learning rate is usually in the range $(0, 1)$ and there are methods to decay the learning rate, which can be beneficial (and there are several explanations of why this is the case [2]).
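
To illustrate the overshooting intuition with a toy example (my own sketch, not from the cited references): on $f(x) = x^2$, whose gradient is $2x$, any constant learning rate above $1$ makes the iterates grow in magnitude, while a small rate converges.

def run(lr, x0=1.0, steps=30):
    x = x0
    for _ in range(steps):
        x = x - lr * 2 * x     # gradient of x^2 at x is 2x
    return x

print(run(0.1))    # shrinks towards the minimum at 0
print(run(1.1))    # each step multiplies x by (1 - 2.2) = -1.2, so |x| blows up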

",2444,,2444,,9/25/2020 13:28,9/25/2020 13:28,,,,0,,,,CC BY-SA 4.0 23749,2,,23745,9/25/2020 13:29,,1,,"

A reinforcement learning algorithm is considered model-based if it uses estimates of the environment's dynamics to help it learn. For instance, in the Tabular Dyna-Q algorithm, every time you visit a state-action tuple you store in a look-up table the reward received and the next state transitioned to, and after every execution of an action you loop $n$ times to further back up your $Q$ table using these stored model values from the look-up table. I will attach a copy of the pseudo-code for the algorithm at the bottom of this post.

Algorithms like vanilla $Q$-learning are model free because they don't require a model of the environment to learn.
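
Below is a rough Python-style sketch of the Tabular Dyna-Q loop (a paraphrase of the standard pseudo-code; env, env.actions, env.reset, env.step and n_planning are hypothetical names, not a specific library's API):

import random
from collections import defaultdict

def dyna_q(env, episodes=100, alpha=0.1, gamma=0.95, eps=0.1, n_planning=10):
    Q = defaultdict(float)       # Q[(state, action)]
    model = {}                   # model[(state, action)] = (reward, next_state)
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.choice(env.actions)
            else:
                a = max(env.actions, key=lambda a_: Q[(s, a_)])
            s2, r, done = env.step(a)
            # (a) direct RL: this update alone is ordinary Q-learning
            td_target = r + gamma * max(Q[(s2, a_)] for a_ in env.actions)
            Q[(s, a)] += alpha * (td_target - Q[(s, a)])
            # (b) model learning: remember what the environment did
            model[(s, a)] = (r, s2)
            # (c) planning: n extra backups using the learned model
            for _ in range(n_planning):
                (ps, pa), (pr, ps2) = random.choice(list(model.items()))
                td = pr + gamma * max(Q[(ps2, a_)] for a_ in env.actions)
                Q[(ps, pa)] += alpha * (td - Q[(ps, pa)])
            s = s2
    return Q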

",36821,,,,,9/25/2020 13:29,,,,2,,,,CC BY-SA 4.0 23750,2,,23726,9/25/2020 13:32,,1,,"

So if I understand correctly, you're proposing to use a neural net with $N$ input units (let's say the data is in $\mathbf{R}^N$), 1 hidden unit, and whatever output layer is necessary.

Let's say we try to do this. Then each unit of the output layer is responsible for computing its output based on a single scalar input. So it's as if you're saying the problem can be compressed to another problem that is 1-dimensional. Now let's say you're doing regression. By modeling your NN this way, you're saying there's a function that (approximately) maps a scalar (a one-dimensional object) to the target output. Suppose your output layer is a linear layer (which is usually the case in regression); this is saying that the dimension of the output space is at most 1. This is almost never the case, especially with complex data.

Now suppose you had several hidden units (call this number $M$) and the weights are initialized properly. Then it's likely that your hidden representation will have dimension larger than one, perhaps even $M$, which allows for a richer output space as $M$ increases.

You can make similar arguments for the classification setting, but the output nonlinearity just makes it a little less clear.

",37829,,37829,,9/25/2020 15:54,9/25/2020 15:54,,,,2,,,,CC BY-SA 4.0 23751,2,,23746,9/25/2020 13:51,,2,,"

You are right. If you don't continuously train the neural network after you have deployed it, there is no way it can continuously learn or be updated with more information. You need to program the neural network to learn even after it has been deployed. There is no such thing as a neural network that decides what it does without a human deciding first what it needs to do: this is a very common misconception (probably caused by the media and science fiction movies). It's also true that you need to label your data if you intend to train the neural network in a supervised fashion, but there are other ways to train neural networks (e.g. by reinforcement learning), depending also on the problem you want to solve.

If you want to develop neural networks that can learn continually, you probably want to look into continual learning techniques for neural networks. Another term that you may be looking for is online machine learning.

",2444,,2444,,9/25/2020 13:56,9/25/2020 13:56,,,,2,,,,CC BY-SA 4.0 23754,1,,,9/25/2020 14:17,,1,50,"

Consider a Bayesian multivariate normal distribution classifier with a distinct covariance matrix for each class, each isotropic, i.e. with equal values over the entire diagonal and zero otherwise: $\mathbf{\Sigma}_i=\sigma_i^2\mathbf{I},~\forall i$.

How can I compute the equation for estimating the parameter $\sigma_{i}$ by the maximum likelihood method? Here $\sigma_{i,j}$ is the covariance between $x_i$ and $x_j$. So $\sigma_i$ is just the variance of $x_i$.

Attempt:

Suppose $\mathcal{X}_i = \{x^t_i\}^N_{t=1}$ i.i.d, $x_i^t$ is in the class $C_i$ and $x_i^t \sim \mathcal{N}(\mu, \sigma^2)$.

Do I have to find the log-likelihood under $p(x) = \frac{1}{\sqrt{2 \pi} \sigma} \exp \left[ -\frac{(x-\mu)^2}{2 \sigma^2}\right]$, find the derivative and put it equal to $0$ to find the maximum?

EDIT

Suppose my data points are $m$-dimensional, and I have $K$ classes.

",41202,,41202,,9/25/2020 20:37,9/25/2020 20:37,Estimating $\sigma_i$ according to maximum likelihood method,,0,3,,,,CC BY-SA 4.0 23756,1,23813,,9/25/2020 15:20,,6,218,"

For many problems in computer science, there is a formal, mathematical problem defition.
Something like: Given ..., the problem is to ...

How can the Object Detection problem (i.e. detecting objects on an image) be formally defined?

Given a set of pixels, the task is to decide

  1. which pixels belong to an object at all,
  2. which pixels belong to the same object.

How can this be put into a formula?

",23496,,,,,9/29/2020 15:23,Formal definition of the Object Detection problem,,1,4,,,,CC BY-SA 4.0 23762,1,,,9/25/2020 19:57,,0,35,"

Pretty much the title. I'm no expert, but from what I know, if you add up enough sine functions with proper amplitudes and frequencies, you can get any function you want as a result. With that knowledge, wouldn't it make sense to have a neuron's activation function be a sine function?

",38668,,,,,9/25/2020 19:57,Why is sine activation function not used frequently since we know from fourier transforms that sine functions can combine to fit any function?,,0,2,,,,CC BY-SA 4.0 23763,1,,,9/25/2020 21:20,,1,20,"

I am designing and researching algorithms which I call of a wavefront nature. These are image analysis algorithms in which every pixel may change many times during processing. I have heard this name before, but it seems it is not used widely. Are there other "wavefront algorithms"?

",41207,,,,,9/25/2020 21:20,What is a wavefront algorithm?,,0,0,,,,CC BY-SA 4.0 23767,1,,,9/26/2020 2:26,,1,19,"

In the paper "DeepMDP: Learning Continuous Latent Space Models for Representation Learning", Gelada et al. state at the beginning of section 2.4

The degree to which a value function of $\bar{\mathcal M}$, $\bar{V}^{\bar\pi}$ approximates the value function $V^\bar\pi$ of ${\mathcal M}$ will depend on the Lipschitz norm of $\bar V^\bar\pi$ .

where $\mathcal M$ is the Markov Decision Process(MDP) defined in the original state space $\mathcal S$ and $\bar{\mathcal M}$ is the MDP defined in the corresponding latent space $\bar{\mathcal S}$. $\bar\pi$ is a policy defined on the latent space, which can be applied to $\mathcal M$ by first mapping $s\in \mathcal S$ to $\bar{\mathcal S}$.

My question is how they draw the connection between "The degree to which a value function of $\bar{\mathcal M}$, $\bar{V}^{\bar\pi}$ approximates the value function $V^\bar\pi$ of ${\mathcal M}$" and " the Lipschitz norm of $\bar V^\bar\pi$"?

",8689,,,,,9/26/2020 2:26,Relation between a value function of an MDP and a value function of the corresponding latent MDP,,0,0,,,,CC BY-SA 4.0 23769,1,,,9/26/2020 8:41,,2,55,"

As far as I know, this is how the latter two algorithms work...

Lloyd's algorithm

  1. Choose the number of clusters.
  2. Choose a distance metric (typically squared euclidean).
  3. Randomly assign each observation to a cluster and compute the cluster centroids.
  4. Iterate below until convergence (i.e. until cluster centroids stop changing):
  • Assign each observation point to the cluster whose centroid is closest.
  • Update cluster centroids only after a complete pass through all observations.

Macqueen's Algorithm

  1. Choose the number of clusters.
  2. Choose a distance metric (typically squared euclidean).
  3. Randomly assign each observation to a cluster and compute the cluster centroids.
  4. Perform a complete pass of below (i.e. go through all observations):
  • Assign an observation to a cluster whose centroid is closest.
  • Immediately update the centroids for the two affected clusters (i.e. for the cluster that lost an observation and for the cluster that gained it).
  5. Update centroids after a complete pass.

How does the Hartigan & Wong algorithm compare to these two above? I read this paper in an effort to understand, but it's still not clear to me. The first three steps are the same as in Lloyd's and Macqueen's algorithms (as described above), but then what does the algorithm do? Does it update the centroids as often as Macqueen's algorithm does, or as often as Lloyd's algorithm does? At what point does it take into consideration the within-cluster sum of squares, and how does it fit into the algorithm?

I'm generally confused when it comes to this algorithm and would very much appreciate a step-wise explanation as to what's going on.

",41219,,,,,9/26/2020 8:41,How does Hartigan & Wong algorithm compare to Lloyd's and Macqueen's algorithm in K-means clustering?,,0,0,,,,CC BY-SA 4.0 23772,1,23916,,9/26/2020 11:40,,3,129,"

I am still somewhat a novice in the ML world, but I had a strange idea about CNNs and wanted to ask if this would be a valid way to check the robustness of a general CNN that classifies certain images.

Let's say that I make a CNN that takes in many different images of sports players performing a certain action (basketball shot, football kick, freestyle in swimming, flip in gymnastics, etc). Firstly, would it be possible for such a CNN to distinguish between such varied images and classify them accurately? And if so, can it be a good idea to compare this "larger" CNN to multiple "smaller" more specialized ones that take in images from one particular sport?

In other words, I want to know that if I have a "larger" CNN that gives me an output like "football being kicked", is there a way to then double-check that output with a smaller CNN that only focuses on football moves? In essence, could we create a system where once you obtain an output from a general CNN, it automatically classifies the same image through a more specialized CNN, and then if the results are of similar accuracy, you know for sure that CNN works?

Kind of like having a smaller CNN as a "ground-truth" for the bigger one? In my head it kind of goes like this:

# a rough sketch: large_net and the small nets (and their predict outputs) are hypothetical trained models
large_sport, large_conf = large_net.predict(image)   # e.g. ('football kick', 95.56)

def double_check(sport, image):
    if sport == 'football':
        return small_net_for_football.predict(image)
    elif sport == 'swimming':
        return small_net_for_swimming.predict(image)
    elif sport == 'baseball':
        return small_net_for_baseball.predict(image)
    # and so on....

small_sport, small_conf = double_check('football', image)   # e.g. ('football kick', 97.32)

robustness_check = abs(large_conf - small_conf)
print(f'Your system is accurate within a good range of {robustness_check:.2f}%')
# e.g. 'Your system is accurate within a good range of 1.76%'
     

I hope this makes sense, and that this question does not cause any of you to cringe. Would appreciate any feedback on this!

",41222,,32410,,4/26/2021 12:23,4/26/2021 12:23,Comparing a large/general CNN to a smaller more specialized one?,,1,1,,,,CC BY-SA 4.0 23773,1,23797,,9/26/2020 15:45,,4,422,"

Consider a multi-armed bandit(MAB). There are $k$ arms, with reward distributions $R_i$ where $1 \leq i \leq k$. Let $\mu_i$ denote the mean of the $i^{th}$ distribution.

If we run the multi-armed bandit experiment for $T$ rounds, the "pseudo regret" is defined as $$\text{Regret}_T = \sum_{t=1}^T (\mu^* - \mu_{i_t}),$$ where $\mu^*$ denotes the highest mean among all the $k$ distributions and $i_t$ is the arm pulled at round $t$.

Why is regret defined like this? From what I understand, at time-step $t$, the actual reward received is $r_t \sim R_{i_t}$ and not $\mu_{i_t}$ - so shouldn't that be a part of the expression for regret instead?

",35585,,2444,,2/11/2021 0:03,2/11/2021 0:03,Why is regret so defined in MABs?,,2,2,,,,CC BY-SA 4.0 23774,1,23784,,9/26/2020 16:55,,5,1290,"

Here is a linear regression model

$$y = mx + b,$$

where $b$ is known as $y$-intercept, but also known as the bias [1], $m$ is the slope, and $x$ is the feature vector.

As I understand it, in machine learning, there is also a bias that can cause the model to underfit.

So, is there a connection between the bias term $b$ in a linear regression model and the bias that can lead to under-fitting in machine learning?

",41226,,2444,,9/8/2021 15:31,9/8/2021 15:31,Is there a connection between the bias term in a linear regression model and the bias that can lead to under-fitting?,,1,0,,,,CC BY-SA 4.0 23775,1,23881,,9/26/2020 16:56,,1,219,"

Is it legal to license and sell the output of a neural network that was trained on data that you don't own the license to? For example, suppose you trained WaveNet on a collection of popular music. Could you then sell the audio that the WaveNet produces? There are copyright restrictions on using samples to produce music, but the output of a generative neural network might not include any exact replicas from the training data, so it's not clear to me whether those laws apply.

",41227,,2444,,10/3/2020 14:58,10/3/2020 14:58,Is it legal to license and sell the output of a neural network that was trained on data that you don't own the license to?,,1,3,,,,CC BY-SA 4.0 23777,1,23788,,9/26/2020 17:57,,3,169,"

A model can be classified as parametric or non-parametric. How are models classified as parametric and non-parametric models? What is the difference between the two approaches?

",41228,,2444,,9/26/2020 21:36,11/1/2021 16:47,What is the difference between parametric and non-parametric models?,,1,0,,,,CC BY-SA 4.0 23778,1,23790,,9/26/2020 18:58,,5,724,"

Boosting refers to a family of algorithms which converts weak learners to strong learners. How does it happen?

",41228,,2444,,9/26/2020 21:28,9/29/2020 2:31,How do weak learners become strong in boosting?,,3,3,,,,CC BY-SA 4.0 23781,2,,23778,9/26/2020 20:09,,1,,"

You take a bunch of weak learners, each of them trained on a subset of the data.

You then just get all of them to make a prediction, and you learn how much you can trust each one, resulting in a weighted vote or other type of combination of the individual predictions.

",40573,,,,,9/26/2020 20:09,,,,0,,,,CC BY-SA 4.0 23782,1,,,9/26/2020 20:18,,1,59,"

From statistical mechanics, the Boltzmann distribution over a system's energy states arises from the assumption that many replicas of the system are exchanging energy with each other. The distribution of these replicas in each energy level is the maximum entropy distribution subject to the constraint that their total energy is fixed, and that any one assignment of energy levels to each replica, a "microstate", satisfying this constraint, is equally probable.

From machine learning, the so-called Energy-based model defines a Hamiltonian (energy function) to its various configurations, and uses Boltzmann's distribution to convert an "energy" to a probability over these configurations. Thus, an EBM can model a probability distribution over some data domain.

Is there some viewpoint by which one can interpret the EBM as a "system" exchanging energy with many other replicas of that system? What semantic interpretation of EBMs connects them to the Boltzmann distribution's assumptions?

",41231,,,,,9/26/2020 20:18,How are Energy Based models really connected to Statistical Mechanics?,,0,0,,,,CC BY-SA 4.0 23784,2,,23774,9/26/2020 22:22,,6,,"

In machine learning, the term bias can refer to at least 2 related concepts

  1. A (learnable) parameter of a model, such as a linear regression model, which allows you to learn a shifted function. For example, in the case of a linear regression model $y = f(x) = mx + b$, the bias $b$ allows you to shift the straight line up and down: without the bias, you would only be able to control the slope $m$ of the straight line (see the short sketch after this list). Similarly, in a neural network, you can have a neuron that performs a linear combination of the inputs and then uses a bias term to shift the straight line, and you could also use the bias after having applied the activation function, but this will have a different effect.

  2. Anything that guides the learning algorithm (e.g. gradient descent with back-propagation) towards a specific set of solutions. For example, if you use regularization, you are biasing the learning algorithm to choose, typically, smoother or simpler functions. The bias term in the linear regression model is also a way of biasing the learning algorithm: you assume that the straight-line function does not necessarily go through zero, and this assumption affects the type of functions that you can learn (and this is why these two concepts of bias are related!). So, there are many ways of biasing a learning algorithm.
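
The sketch below (a hypothetical illustration using scikit-learn's LinearRegression and its fit_intercept flag) shows point 1 in action: without the bias term, the fitted line is forced through the origin and the slope gets distorted.

import numpy as np
from sklearn.linear_model import LinearRegression

x = np.arange(10).reshape(-1, 1)
y = 2 * x.ravel() + 5                             # true line: slope 2, intercept (bias) 5

with_bias = LinearRegression(fit_intercept=True).fit(x, y)
no_bias = LinearRegression(fit_intercept=False).fit(x, y)

print(with_bias.coef_, with_bias.intercept_)      # ~[2.] and ~5.0
print(no_bias.coef_, no_bias.intercept_)          # distorted slope, intercept forced to 0.0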

The bias does not always lead you to the best solutions and can actually lead to the wrong solutions, but it is often useful in many ways, for example, it can speed up learning, as you restrict the number of functions that can be learned and searched. The bias (as described in point 2) is often discussed in the context of the bias-variance trade-off, and both the bias and the variance are related to the concepts of generalization, under-fitting, and over-fitting. The linked Wikipedia article explains these concepts quite well and provides an example of how the bias and variance are decomposed, so you should probably read that article for more details.

",2444,,2444,,9/27/2020 13:47,9/27/2020 13:47,,,,0,,,,CC BY-SA 4.0 23786,1,,,9/26/2020 23:02,,3,73,"

This is a bit of a soft question, not sure if it's on topic, please let me know how I can improve it if it doesn't meet the criteria for the site.

GPT models are unsupervised in nature and are (from my understanding) given a prompt and then they either answer the question or continue the sentence/paragraph. They also seem to be the most advanced models for producing natural language, capable of giving outputs with correct syntax and (to my eye at least) indistinguishable from something written by a human (sometimes at least!).

However, if I have a problem where I have an input (it could be anything, but let's call it an image or video) and a description of the image or video as the output, I could in theory train a model with convolutional filters to identify the object and describe the image (assuming any test data is within the bounds of the training data). However, when I've seen models like this in the past, the language is either quite simple or 'feels' like it's been produced by a machine.

Is there a way to either train a GPT model as a supervised learning model with inputs (of some non language type) and outputs (of sentences/paragraphs); or a similar type of machine learning model that can be used for this task?

A few notes:

I have seen the deep learning image captioning methods - these are what I mention above. I'm more looking for something that can take an input-output pair where the output is text and the input is any form.

",10505,,1847,,9/27/2020 7:20,10/15/2022 14:07,Is there a complement to GPT/2/3 that can be trained using supervised learning methods?,,1,0,,,,CC BY-SA 4.0 23787,2,,14359,9/27/2020 1:31,,1,,"

The standard tool for working with XML files is XSLT. You may not need AI to solve this problem. But... you have to learn how to program with XSLT ;) On Windows you can use MSXML if you work from C++, or the msxsl.exe command-line tool; C# has internal support for XSLT. That is what I know about. There are also non-MS tools.

",41207,,41207,,9/27/2020 2:42,9/27/2020 2:42,,,,0,,,,CC BY-SA 4.0 23788,2,,23777,9/27/2020 2:27,,6,,"

Parametric Methods

A parametric approach (regression, linear Support Vector Machines) has a fixed number of parameters and makes a lot of assumptions about the data. This is because such models are used when the data distribution is assumed to be known.

Non-Parametric Methods

A non-parametric approach (k-Nearest Neighbours, decision trees) makes no presumptions about the data distribution. The model tries to "explore" the distribution and thus has a flexible number of parameters.

Comparison

Comparatively speaking, parametric approaches are computationally faster and have more statistical power than non-parametric methods.

",40434,,2444,,11/1/2021 16:47,11/1/2021 16:47,,,,0,,,,CC BY-SA 4.0 23790,2,,23778,9/27/2020 3:00,,6,,"

As @desertnaut mentioned in the comment

No weak learner becomes strong; it is the ensemble of the weak learners that turns out to be strong

Boosting is an ensemble method that integrates multiple models (called weak learners) to produce a stronger model (a strong learner).

Basically, boosting trains weak learners sequentially, each one trying to correct its predecessor. For boosting, we need to specify a weak model (e.g. regression, shallow decision trees, etc.), and then each successive weak learner tries to learn something more from the data by correcting its predecessor's mistakes.

AdaBoost is a boosting algorithm where a decision tree with a single split is used as the weak learner. There are also gradient boosting and XGBoost.
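
For instance, here is a minimal scikit-learn sketch (my own illustration, assuming scikit-learn is available; AdaBoostClassifier's default base learner is exactly such a one-split decision tree):

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 50 sequentially re-weighted decision stumps combined into one strong classifier
model = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
print(model.score(X_te, y_te))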

",41226,,41226,,9/29/2020 2:31,9/29/2020 2:31,,,,2,,,,CC BY-SA 4.0 23791,2,,23778,9/27/2020 3:02,,1,,"

In boosting, we improve the overall metrics of the model by sequentially building weak models, each one building upon the weaknesses of the previous models.

We start out by applying a basic, non-specific algorithm to the problem, which returns a weak prediction function based on an arbitrary starting point (like sparse weights or assigning equal weights/attention to all samples). We improve upon this in the following iterations by giving more weight to the samples with a higher error rate. After going through many iterations, we combine the weak learners to create a single strong prediction function with better metrics.


Some popular boosting algorithms:

  • AdaBoost
  • Gradient Tree Boosting
  • XGBoost
",40434,,,,,9/27/2020 3:02,,,,1,,,,CC BY-SA 4.0 23792,1,,,9/27/2020 10:05,,1,104,"

For weeks I've been working on this toy game of Rock-Paper-Scissors. I want a PPO agent to learn to beat a computer opponent whose logic is defined in the code below.

In short, this computer opponent, named abbey, uses a strategy that tracks all pairs of consecutive plays of the agent, and plays the counter to its guess of the agent's most likely next play, given the agent's last play.

I designed the agent (using a gym env) to have an internal state keeping track of the counts of all of its pairs of consecutive plays in a 3x3 matrix. I then normalized each row of the matrix to form the agent's observation, representing the probabilities of the second play given the previous one. So the agent gets the same knowledge as what abbey knows.

Then I copied a PPO algorithm from an RL book, which works well with CartPole, and made some minor changes which are commented in the code below.

But the algorithm does not converge even a little, and abbey beats the agent about 60% of the time, from the first run to the last.

I suspect the state and observation space I designed is the reason why it does not converge. My only idea is that the agent should perhaps find something in the history of its own successes and failures, and find its way out from there.

Can you give me some advice on designing the state space? Thank you very much.

### define a Rock-Paper-Scissor opponent

abbey_state = []
play_order=[{
              "RR": 0,
              "RP": 0,
              "RS": 0,
              "PR": 0,
              "PP": 0,
              "PS": 0,
              "SR": 0,
              "SP": 0,
              "SS": 0,
          }]
def abbey(prev_opponent_play,
          re_init=False):
    if not prev_opponent_play:
        prev_opponent_play = 'R'
    global abbey_state, play_order
    if re_init:
        abbey_state = []
        play_order=[{
              "RR": 0,
              "RP": 0,
              "RS": 0,
              "PR": 0,
              "PP": 0,
              "PS": 0,
              "SR": 0,
              "SP": 0,
              "SS": 0,
          }]
    abbey_state.append(prev_opponent_play)
    last_two = "".join(abbey_state[-2:])
    if len(last_two) == 2:
        play_order[0][last_two] += 1
    potential_plays = [
        prev_opponent_play + "R",
        prev_opponent_play + "P",
        prev_opponent_play + "S",
    ]
    sub_order = {
        k: play_order[0][k]
        for k in potential_plays if k in play_order[0]
    }
    prediction = max(sub_order, key=sub_order.get)[-1:]
    ideal_response = {'P': 'S', 'R': 'P', 'S': 'R'}
    return ideal_response[prediction]


### define the gym env
import gym
from gym import spaces
from collections import defaultdict
import numpy as np

ACTIONS = ["R", "P", "S"]
games = 1000

class RockPaperScissorsEnv(gym.Env):
  metadata = {'render.modes': ['human']}

  def __init__(self):
    super(RockPaperScissorsEnv, self).__init__()
    self.action_space = spaces.Discrete(3)
    self.observation_space = spaces.Box(low=0.0, high=1.0,
                                        shape=(3,3), dtype=float)
    self.reset()

  def step(self, actions):
    assert actions == 0 or actions == 1 or actions == 2
    opponent_play = self.opponent_play()

    self.prev_plays[self.prev_actions * 3 + actions] += 1
    reward = self.calc_reward(actions, opponent_play)
    terminal = False

    self.calc_state(self.timestep, opponent_play, actions)
    self.prev_actions = actions
    self.prev_opponent_play = opponent_play
    self.timestep += 1
    return self.get_ob(), reward, terminal, None

  def reset(self):
    self.opponent = abbey
    self.timestep = 0
    self.prev_opponent_play = 0
    self.prev_actions = 0
    self.prev_plays = defaultdict(int)
    self.init_state = np.zeros((3,3), dtype=int)
    # the internal state
    self.state = np.copy(self.init_state)
    self.results = {"win": 0, "lose": 0, "tie": 0}
    return self.get_ob()

  def render(self, mode='human'):
    pass

  def close (self):
    pass

  def calc_reward(self, actions, play):
    if self.timestep % games == games - 1:
      pass
    if actions == play:
      self.results['tie'] += 1
      return 0
    elif actions == 0 and play == 1:
      self.results['lose'] += 1
      return -0.3
    elif actions == 1 and play == 2:
      self.results['lose'] += 1
      return -0.3
    elif actions == 2 and play == 0:
      self.results['lose'] += 1
      return -0.3
    elif (actions == 1 and play == 0) or (actions == 2 and play == 1) or (actions == 0 and play == 2):
      self.results['win'] += 1
      return 0.3
    else:
      raise NotImplementedError('calc_reward something get wrong')

  def opponent_play(self):
    re_init = (self.timestep == 0)
    opp_play = self.opponent(ACTIONS[self.prev_actions], re_init=re_init)
    return ACTIONS.index(opp_play)

  def calc_state(self, timestep, opponent_play, actions):
    self.state[self.prev_actions][actions] += 1

  def get_ob(self):
    '''return observations'''
    state0 = self.state[0]
    sum0 = state0.sum()
    state1 = self.state[1]
    sum1 = state1.sum()
    state2 = self.state[2]
    sum2 = state2.sum()
    init = np.ones(3, dtype=float) / 3.0
    ob = np.array([
      state0 / sum0 if sum0 else init,
      state1 / sum1 if sum1 else init,
      state2 / sum2 if sum2 else init,
    ])
    # print(ob)
    return ob



### Learning Algo copied from some book

import  matplotlib
from    matplotlib import pyplot as plt
matplotlib.rcParams['font.size'] = 18
matplotlib.rcParams['figure.titlesize'] = 18
matplotlib.rcParams['figure.figsize'] = [9, 7]
matplotlib.rcParams['axes.unicode_minus']=False

plt.figure()

import  gym,os
import  numpy as np
import  tensorflow as tf
from    tensorflow import keras
from    tensorflow.keras import layers,optimizers,losses
from    collections import namedtuple
env = RockPaperScissorsEnv()
env.seed(2222)
tf.random.set_seed(2222)
np.random.seed(2222)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
assert tf.__version__.startswith('2.')



gamma = 0.98
epsilon = 0.2
batch_size = 32

Transition = namedtuple('Transition', ['state', 'action', 'a_log_prob', 'reward', 'next_state'])

class Actor(keras.Model):
    def __init__(self):
        super(Actor, self).__init__()
        self.fc1 = layers.Dense(18, kernel_initializer='he_normal') # I changed 100 to 18
        self.fc2 = layers.Dense(3, kernel_initializer='he_normal') # I changed 4 to 3

    def call(self, inputs):
        x = tf.nn.relu(self.fc1(inputs))
        x = self.fc2(x)
        x = tf.nn.softmax(x, axis=1)
        return x

class Critic(keras.Model):
    def __init__(self):
        super(Critic, self).__init__()
        self.fc1 = layers.Dense(18, kernel_initializer='he_normal') # I changed 100 to 18
        self.fc2 = layers.Dense(1, kernel_initializer='he_normal')

    def call(self, inputs):
        x = tf.nn.relu(self.fc1(inputs))
        x = self.fc2(x)
        return x




class PPO():
    def __init__(self):
        super(PPO, self).__init__()
        self.actor = Actor()
        self.critic = Critic()
        self.buffer = []
        self.actor_optimizer = optimizers.Adam(1e-3)
        self.critic_optimizer = optimizers.Adam(3e-3)

    def select_action(self, s):
        s = tf.constant(s, dtype=tf.float32)
        # s = tf.expand_dims(s, 0)   # I removed this line, otherwise we will get a (1,3,3) tensor and later we will get an error
        prob = self.actor(s)
        a = tf.random.categorical(tf.math.log(prob), 1)[0]
        a = int(a)
        return a, float(prob[0][a])

    def get_value(self, s):
        s = tf.constant(s, dtype=tf.float32)
        s = tf.expand_dims(s, axis=0)
        v = self.critic(s)[0]
        return float(v)

    def store_transition(self, transition):
        self.buffer.append(transition)

    def optimize(self):
        state = tf.constant([t.state for t in self.buffer], dtype=tf.float32)
        action = tf.constant([t.action for t in self.buffer], dtype=tf.int32)
        action = tf.reshape(action,[-1,1])
        reward = [t.reward for t in self.buffer]
        old_action_log_prob = tf.constant([t.a_log_prob for t in self.buffer], dtype=tf.float32)
        old_action_log_prob = tf.reshape(old_action_log_prob, [-1,1])

        R = 0
        Rs = []
        for r in reward[::-1]:
            R = r + gamma * R
            Rs.insert(0, R)
        Rs = tf.constant(Rs, dtype=tf.float32)

        for _ in range(round(10*len(self.buffer)/batch_size)):

            index = np.random.choice(np.arange(len(self.buffer)), batch_size, replace=False)

            with tf.GradientTape() as tape1, tf.GradientTape() as tape2:

                v_target = tf.expand_dims(tf.gather(Rs, index, axis=0), axis=1)

                v = self.critic(tf.gather(state, index, axis=0))
                delta = v_target - v
                advantage = tf.stop_gradient(delta)
                a = tf.gather(action, index, axis=0)
                pi = self.actor(tf.gather(state, index, axis=0)) 
                indices = tf.expand_dims(tf.range(a.shape[0]), axis=1)
                indices = tf.concat([indices, a], axis=1)
                pi_a = tf.gather_nd(pi, indices)
                pi_a = tf.expand_dims(pi_a, axis=1)
                # Importance Sampling
                ratio = (pi_a / tf.gather(old_action_log_prob, index, axis=0))
                surr1 = ratio * advantage
                surr2 = tf.clip_by_value(ratio, 1 - epsilon, 1 + epsilon) * advantage
                policy_loss = -tf.reduce_mean(tf.minimum(surr1, surr2))
                value_loss = losses.MSE(v_target, v)
            grads = tape1.gradient(policy_loss, self.actor.trainable_variables)
            self.actor_optimizer.apply_gradients(zip(grads, self.actor.trainable_variables))
            grads = tape2.gradient(value_loss, self.critic.trainable_variables)
            self.critic_optimizer.apply_gradients(zip(grads, self.critic.trainable_variables))

        self.buffer = []


def main():
    agent = PPO()
    returns = []
    total = 0
    for i_epoch in range(500):
        state = env.reset()
        for t in range(games):
            action, action_prob = agent.select_action(state)
            if t == 999:
              print(action, action_prob)
            next_state, reward, done, _ = env.step(action)
            # print(next_state, reward, done, action)
            trans = Transition(state, action, action_prob, reward, next_state)
            agent.store_transition(trans)
            state = next_state
            total += reward
            if done:
                if len(agent.buffer) >= batch_size:
                    agent.optimize()
                break
        print(env.results)

        if i_epoch % 20 == 0:
            returns.append(total/20)
            total = 0
            print(i_epoch, returns[-1])

    print(np.array(returns))
    plt.figure()
    plt.plot(np.arange(len(returns))*20, np.array(returns))
    plt.plot(np.arange(len(returns))*20, np.array(returns), 's')
    plt.xlabel('epochs')
    plt.ylabel('total return')
    plt.savefig('ppo-tf.svg')


if __name__ == '__main__':
    main()
    print("end")


```
",41152,,41152,,9/28/2020 13:23,9/28/2020 13:23,How to design an observation(state) space for a simple `Rock-Paper-Scissor` game?,,0,0,0,,,CC BY-SA 4.0 23793,1,,,9/27/2020 10:58,,3,56,"

In chapter 12 of Sutton and Barto's book, they state that, if the weights sum to 1, then an update has "guaranteed convergence properties". Why does this actually ensure convergence?

Here is the full quotation of the mentioned fragment from Richard S. Sutton and Andrew G. Barto, second edition:

Now we note that a valid update can be done not just toward any n-step return, but toward any average of n-step returns for different ns. For example, an update can be done toward a target that is half of a two-step return and half of a four-step return: $\frac{1}{2}G_{t:t+2} + \frac{1}{2}G_{t:t+4}$. Any set of n-step returns can be averaged in this way, even an infinite set, as long as the weights on the component returns are positive and sum to 1. The composite return possesses an error reduction property similar to that of individual n-step returns (7.3) and thus can be used to construct updates with guaranteed convergence properties.

",31324,,,,,9/27/2020 10:58,Why weighting by lambda that sums to 1 ensures convergence in eligibility trace?,,0,0,,,,CC BY-SA 4.0 23794,2,,23745,9/27/2020 11:12,,5,,"

Q-learning is said to be "model-free". Given the two examples above, is it because neither the lake's topology nor that of the mountain are changed by the actions taken?

No. That's not why Q-learning is model-free. Q-learning assumes that the underlying environment (FrozenLake or MountainCar, for example) can be modelled as a Markov decision process (MDP), which is a mathematical model that describes problems where decisions/actions can be taken and the outcomes of those decisions are at least partially stochastic (or random). More precisely, an MDP is composed of

  • A set of actions $A$ (that the RL agent can take); for example, up and down, in some grid world
  • A set of states $S$ (where the RL agent can be);
  • A transition function $p(s_{t+1} = s' \mid s_{t} = s , a_t = a)$ (aka the model), which represents the probability of going to state $s'$ at time step $t+1$, given that at time step $t$ the RL agent is in the state $s$ and takes action $a$.
  • A reward function $r(s, a, s')$ (sometimes also denoted as $r(s)$ or $r(s, s')$, although these can have different semantics); the reward function gives the reward (or reinforcement) to the RL agent when it takes an action in a certain state and moves to another state; the reward function can also be included in the transition function, i.e., often you will also see $p(s_{t+1} = s', r_{t+1} = r \mid s_{t} = s , a_t = a)$, and this is the model: this is what we mean by model in reinforcement learning, it's this $p$ (which is a probability distribution)!

A model-free algorithm is any algorithm that does not use or estimate this $p$. Q-learning, if you look at its pseudocode, does not make use of this model. Q-learning estimates the value function $q(s, a)$ by interacting with the environment (taking actions and receiving rewards), but, meanwhile, it does not know or keep track of the dynamics (i.e. $p$) of the environment, and that's why it's model-free.
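
For instance, the core tabular Q-learning update looks roughly like this (a sketch with hypothetical names; note that the transition model $p$ never appears in it):

def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    # Only the sampled transition (s, a, r, s_next) is used:
    # the model p(s', r | s, a) appears nowhere.
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))

Q = {}
q_learning_update(Q, s=0, a=1, r=1.0, s_next=2, actions=[0, 1])
print(Q)   # {(0, 1): 0.1}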

And, no, the value function is not what we mean by "model" in reinforcement learning. The value function is, as the name suggests, a function.

How does one know that a problem is "model-free" in reinforcement learning?

A problem is not model-free or model-based. An algorithm is model-free or model-based. Again, a model-free algorithm does not use or estimate $p$, a model-based one uses (and/or estimates) it.

Given the two examples above, is it because neither the lake's topology nor that of the mountain are changed by the actions taken?

No. As stated in the other answer, you could apply the model-based algorithm Dyna-Q to these environments.

",2444,,2444,,9/27/2020 13:35,9/27/2020 13:35,,,,0,,,,CC BY-SA 4.0 23797,2,,23773,9/27/2020 15:41,,4,,"

In short, you don't regret your bad luck that you could do nothing about, you regret your bad choices that you could have done something about if only you knew.

The point of regret as a metric therefore is to compare your choices with the ideal choices. This makes sense in MABs, because although the primary goal is to gain the most reward, the learning part of the goal is to calculate from experience what are the best choices - usually whilst sacrificing as little as possible in the process.

The formula captures that concept, so does not concern itself with individual rewards in the past that could have been due to good or bad luck. Hence it uses expected (or mean) rewards.
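
For reference, one common way of writing this (assuming this is essentially the formula the question refers to) is the expected regret after $T$ plays,

$$ R_T \;=\; T\mu^* - \mathbb{E}\Big[\sum_{t=1}^{T} \mu_{a_t}\Big], \qquad \mu^* = \max_a \mu_a, $$

where $\mu_a$ is the mean reward of arm $a$ and $a_t$ is the arm chosen at step $t$: only the expected values of the chosen arms appear, not the individual sampled rewards.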

",1847,,1847,,9/27/2020 15:50,9/27/2020 15:50,,,,0,,,,CC BY-SA 4.0 23798,1,,,9/27/2020 17:56,,0,29,"

As far as I can tell, most NLP tasks today use word embeddings and recurrent networks or transformers.

Are there any examples of state-of-the-art NLP applications that are still n-gram based and use Naive Bayes?

",12201,,2444,,9/27/2020 20:50,9/27/2020 20:50,Are there any examples of state-of-the-art NLP applications that are still n-gram based and use Naive Bayes?,,0,3,,,,CC BY-SA 4.0 23799,1,,,9/27/2020 22:37,,0,69,"

I have an agent (drone) that has to allocate subchannels for different types of User Equipment.

I have represented the subchannel allocation with a 2-dimensional binary matrix, which is initialized to all zeros, as there are no requests at the beginning of the episode.

When the agent chooses an action, it has to choose which subchannels to allocate to which UEs, hence populating the matrix with 1s.

I have no idea how to do it.

",42372,,,,,6/21/2022 22:06,How to let the agent choose how to populate a state space matrix in RL (using python),,1,2,,,,CC BY-SA 4.0 23801,1,,,9/28/2020 0:57,,2,48,"

I see so many papers claim to have an algorithm that beats 'human-level performance' for semantic segmentation tasks, but I can't find any papers reporting on what the human-level performance actually is. An analysis of the similarity between segmentations drawn by multiple different human experts would be good. Could someone point me towards a paper that reports on something like that?

",9983,,,,,9/28/2020 0:57,What is human-level performance for semantic segmentation?,,0,0,,,,CC BY-SA 4.0 23802,2,,5861,9/28/2020 9:46,,2,,"

Check out Figure 6 in this paper: PyTorch Distributed: Experiences on Accelerating Data Parallel Training

It breaks down the latency of the forward pass, the backward pass, the communication step, and the optimization step for running both ResNet50 and BERT on NVIDIA Tesla V100 GPUs.

From measuring the pixels in the figure, I estimated the times for the forward, backward, and optimization steps as a percentage of their total time combined. (I ignored the communication step shown in the figure because that was only to show how long an unoptimized communication step would take when doing data-parallel training). Here are the estimates I got:

  • Forward: 23%
  • Backward: 74%
  • Optimization: 3%

So the backward pass takes about 3x as long as the forward pass, and the optimization step is relatively fast.

",41264,,41264,,10/1/2020 2:29,10/1/2020 2:29,,,,0,,,,CC BY-SA 4.0 23803,1,,,9/28/2020 10:05,,2,139,"

I have been looking into transformers lately and have been reading tons of tutorials. All of them address the intuition behind attention, which I understand. However, they treat learning the weight matrices (for query, key, and value) as it is the most trivial thing.

So, how are these weight matrices learned? Is the error just backpropagated, and the weights are updated accordingly?
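
For concreteness, here is what I mean by these weight matrices, as a minimal single-head sketch in PyTorch (the class and names are my own illustration, not taken from any particular tutorial):

```
import torch.nn as nn

class SingleHeadAttention(nn.Module):
    def __init__(self, d_model):
        super().__init__()
        # the query/key/value "weight matrices" are just ordinary learnable linear layers
        self.w_q = nn.Linear(d_model, d_model, bias=False)
        self.w_k = nn.Linear(d_model, d_model, bias=False)
        self.w_v = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x):
        q, k, v = self.w_q(x), self.w_k(x), self.w_v(x)
        scores = q @ k.transpose(-2, -1) / (x.size(-1) ** 0.5)  # scaled dot-product
        return scores.softmax(dim=-1) @ v
```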

",41265,,2444,,11/30/2021 15:28,11/30/2021 15:28,How are weight matrices in attention learned?,,0,0,,,,CC BY-SA 4.0 23804,1,,,9/28/2020 11:34,,1,36,"

I am new to the field of AI, but due to the high level of abstraction that comes with services such as Google Vision AI, I was motivated to write an application that detects symbols in photos based on tensorflow.js and a custom model trained in Google Vision AI.

My app is about identifying symbols in photos, very similar to traffic sign or logo detection. Now I wonder whether:

  1. I should train the model on real, distorted and complex photos that contain those symbols and lots of background noise,
  2. it would be enough to train the model on cropped, clean symbols, or
  3. a hybrid of both is needed.

I started with option 1 and it works fine; however, it was a lot of work to create the training dataset. Does the model need the distorted background to work?

",41268,,,,,9/28/2020 12:24,Should I prefer cropped images or realistic images for object detection?,,1,1,,,,CC BY-SA 4.0 23805,2,,23804,9/28/2020 12:24,,1,,"

Google's recommendation seems to answer this:

The training data should be as close as possible to the data on which predictions are to be made.

For example, if your use case involves blurry and low-resolution images (such as from a security camera), your training data should be composed of blurry, low-resolution images. In general, you should also consider providing multiple angles, resolutions, and backgrounds for your training images.

https://cloud.google.com/vision/automl/object-detection/docs/prepare

Would you agree that this also applies in the case of symbols?

",41268,,,,,9/28/2020 12:24,,,,1,,,,CC BY-SA 4.0 23806,1,,,9/28/2020 12:36,,1,207,"

I want to build a NN that can produce a policy for each possible state. I want to combine this with MCTS to reduce randomness, so that when expansion occurs, I can get the probability of each move leading to a win.

I am fairly confident in how to code the neural network, but the input shape is the hardest part here. I first want to try 2-player chess and then expand to 3-player chess.

What is the best vector/matrix to use for the input in a chess game? How should the input be fed into a neural network to output the most promising move from the position? In addition, what format should it look like (e.g. [0, 0, 1, 1, 1, 1, 0, 0, 0], etc.)?

",41271,,2444,,3/9/2021 17:57,3/9/2021 17:57,"To solve chess with deep RL and MCTS, how should I represent the input (the state) to a neural network?",,0,1,,,,CC BY-SA 4.0 23807,1,,,9/28/2020 13:19,,2,156,"

I'm currently looking at NNs to deal with noisy data. I like the autoencoder approach https://medium.com/@aliaksei.mikhailiuk/unsupervised-learning-for-data-interpolation-e259cf5dc957 because it seems to be adaptive and does not need to be trained on specific training data.

However, as described in this article, it seems to rely on having noise-free samples in the input data that are true to the ground truth, so I wonder whether an autoencoder could also work in the case of white or blue noise instead of salt-and-pepper noise.

",32111,,,,,10/20/2022 4:05,Are Autoencoders for noise-reduction only suited to deal with salt-and-pepper kind of noise?,,1,0,,,,CC BY-SA 4.0 23809,1,,,9/28/2020 18:48,,0,137,"

I am currently learning about Transformers, so to check my understanding I tried implementing a small transformer-based language model and comparing it to an RNN-based language model. Here's the code for the transformer. I'm using PyTorch's built-in layer for the Transformer encoder.

class TransformerLM_1(nn.Module):

    def __init__(self, head, vocab_size, embedding_size, dropout = 0.1, device = 'cpu', 
                 pad_idx = 0, start_idx = 1, end_idx = 2, unk_idx = 3):
      
        super(TransformerLM_1, self).__init__()
      
        self.head = head
        self.embedding_size = embedding_size
        self.vocab_size = vocab_size
        self.device = device
        self.embed = WordEmbedding(self.vocab_size, self.embedding_size, pad_idx)
        self.postional_encoding = PostionalEncoding(embedding_size, device)
        self.decoder = nn.TransformerEncoderLayer(self.embedding_size, self.head)
        self.out_linear = nn.Linear(self.embedding_size, vocab_size)
        self.dropout = dropout
        self.pad_idx = pad_idx
        self.start_idx = start_idx
        self.end_idx = end_idx
        self.unk_idx = unk_idx
        self.device = device

    
    def make_src_mask(self, src_sz):
        mask = (torch.triu(torch.ones(src_sz, src_sz)) == 1).transpose(0, 1)
        mask = mask.float().masked_fill(mask == 0, 10e-20).masked_fill(mask == 1, float(0.0))
        mask = mask.to(self.device)
        return mask

    def forward(self, x):
        dec_in = x.clone()[:, :-1]
        src_mask = self.make_src_mask(dec_in.size()[1])
        src = self.embed(dec_in)
        src = self.postional_encoding(src) 
        src = src.transpose(0,1)
        transformer_out = self.decoder(src, src_mask)
        out = self.out_linear(transformer_out)
        return out

I'm using teacher forcing to make it converge faster. From what I saw in the results, the text generated by the RNN model is better than the transformer's.

Here is some sample generated text alongside the expected text:

Expected: you had to have been blind not to see the scenario there for what it was and is and will continue to be for months and even years a part of south carolina that has sustained a blow that the red cross expects will cost that organization alone some $ n million <eos> 
Predicted: some <unk> been the been <unk> not be $ the total has was the may has <unk> the that that be to the <unk> the 

Expected: citicorp and chase are attempting to put together a new lower bid <eos> 
Predicted: a are <unk> carries n't to the together with <unk> jersey than 

Expected: it ' s amazing the amount of money that goes up their nose out to the dog track or to the tables in las vegas mr . katz says <eos> 
Predicted: <unk> ' s <unk> comeback money of the in mr to their <unk> and of <unk> <unk> or or <unk> the money 

Expected: moreover while asian and middle eastern investors <unk> gold and help <unk> its price silver does n't have the same <unk> dealers say <eos> 
Predicted: the production the routes <unk> of its 

Expected: a board of control spokesman said the board had not seen the claim and declined to comment <eos> 
Predicted: the board said declined of said 

Expected: property capital trust said it dropped its plan to liquidate because it was n't able to realize the value it had expected <eos> 
Predicted: the claims markets said its was n <unk> to sell insolvent of was n't disclosed to sell its plan 

Expected: similarly honda motor co . ' s sales are so brisk that workers <unk> they have n't had a saturday off in years despite the government ' s encouragement of more leisure activity <eos> 
Predicted: the honda ' credit . s s <unk> 

Expected: we expect a big market in the future so in the long term it will be profitable <eos> 
Predicted: it can it <unk> board 

Expected: u . k . composite or <unk> insurers which some equity analysts said might be heavily hit by the earthquake disaster helped support the london market by showing only narrow losses in early trading <eos> 
Predicted: the . s . s trading sell said which <unk> traders market said the be able in the the earthquake 

Expected: this will require us to define and <unk> what is necessary or appropriate care <eos> 
Predicted: <unk> is be the $ <unk> <unk> <unk> <unk> is the to <unk> and or 

As you can see, the transformer fails to grasp grammar compared to the RNN. Is there anything wrong with my understanding?

EDIT

This is one example that caught my eye

Expected: also the big board met with angry stock specialists <eos> 
Predicted: also met specialists board met the stock big with after 

Most of the predicted words are from the expected sentence, but in a different order. I have read that transformers are permutation invariant, which is the reason why we include positional encoding with the word embedding.

",41279,,41279,,9/29/2020 5:41,10/15/2022 17:07,Transformer Language Model generating meaningless text,,1,2,,,,CC BY-SA 4.0 23810,1,23814,,9/28/2020 20:10,,3,1233,"

I am running a drone simulator for collision avoidance using a slight variant of D3QN. The training is usually costly (it runs for at least a week), and I have observed that the reward gradually increases during training and then drastically drops. In the simulator, this corresponds to the drone exhibiting cool collision avoidance after a few thousand episodes. However, after training for more iterations, it starts taking counterintuitive actions, such as simply crashing into a wall (I have checked to ensure that there is no exploration at play here).

Does this have to do with overfitting? I am unable to understand why my rewards are falling this way.

",31755,,2444,,9/28/2020 22:37,9/29/2020 8:47,Why do my rewards reduce after extensive training using D3QN?,,1,0,,,,CC BY-SA 4.0 23811,1,,,9/28/2020 23:31,,2,281,"

I have come across the Monte Carlo tree search (MCTS) algorithm, but I can't find a description of what the tree should look like. For example, does it still represent a minimax process, i.e. player 1 from the root has its child nodes as probabilities of moves, then from those child nodes, the next move is player 2, etc.? Is this how the tree looks, so that when we backpropagate we update only the winning player's nodes?

",41285,,2444,,10/29/2020 11:00,7/21/2022 16:08,How does the MCTS tree look like?,,1,0,,,,CC BY-SA 4.0 23812,1,,,9/29/2020 2:03,,1,29,"

The title refers to one of the special techniques in Progressive GAN, a paper by the NVIDIA team. About this method, they state that

Our approach ensures that the dynamic range, and thus the learning speed, is the same for all weights.

In detail, they initialize all learnable parameters from the normal distribution $N(0,1)$. At training time, on each forward pass, they scale the result with the per-layer normalization constant from He's initializer.

I reproduced the code from the PyTorch GAN Zoo GitHub repo:

def forward(self, x, equalized):
    # generate He constant depend on the size of tensor W
    size = self.module.weight.size()
    fan_in = prod(size[1:])
    weight = math.sqrt(2.0 / fan_in)
    '''
    A module example:

    import torch.nn as nn
    module = nn.Conv2d(nChannelsPrevious, nChannels, kernelSize, padding=padding, bias=bias) 
    '''
    x = self.module(x)

    if equalized:
        x *= weight  # scale the output by the He constant c computed above
    return x

At first, I thought the He constant would be $c = \frac{\sqrt{2}}{\sqrt{n_l}}$, as in He's paper. Normally $n_l > 2$, so $w_l$ would be scaled up, which would increase the gradient in backpropagation according to the formula in ProGAN's paper, $\hat{w}_i=\frac{w_i}{c}$ $\rightarrow$ preventing vanishing gradients.

However, the code shows that $\hat{w}_i=w_i*c$.

In summary, I can't understand why scaling down the parameters many times during training helps make the learning speed more stable.

I asked this question in some other communities, e.g. Stack Overflow, Mathematics, and Data Science, and still haven't had an answer.

Please help me understand it. Thank you!

",41287,,2444,,9/29/2020 16:23,9/29/2020 16:23,Why scaling down the parameter many times during training will help the learning speed be the same for all weights in Progressive GAN?,,0,2,,,,CC BY-SA 4.0 23813,2,,23756,9/29/2020 2:34,,5,,"

This is just an idea

Given a set of pixels, the task is to decide:

  1. Which pixel is the center of an object?
  2. What is the size of the bounding box whose center is the pixel from part 1?

As a formula, consider a 2D image: let $(x,y)$ be the horizontal and vertical coordinates and let $(w_i,h_i)$ be the size of the bounding box of object $i$:

$\text{For }m \in[x,x+w_i] \text{ and } n\in[y,y+h_i]$

$c_i(m,n) = \begin{cases} 1, & \text{if the pixel at position } (m,n) \text{ belongs to object } i,\\ 0, & \text{otherwise} \end{cases}$

",41287,,,,,9/29/2020 2:34,,,,3,,,,CC BY-SA 4.0 23814,2,,23810,9/29/2020 7:42,,4,,"

It is not 100% clear, but this seems like an instance of catastrophic forgetting. This is something that often impacts reinforcement learning.

I have answered a very similar question on Data Science stack exchange, and reproduce the same answer here.


This is called "catastrophic forgetting" and can be a serious problem in many RL scenarios.

If you trained a neural network to recognise cats and dogs and did the following:

  • Train it for many epochs on a full dataset until you got a high accuracy.

  • Continue to train it, but remove all the cat pictures.

Then in a relatively short space of time, the NN would start to lose accuracy. It would forget what a cat looks like. It would learn that its task was to switch the dog prediction as high as possible, just because on average everything in the training population was a dog.

Something very similar happens in your DQN experience replay memory. Once it gets good at a task, it may only experience success. Eventually, only successful examples are in its memory. The NN forgets what failure looks like (what the states are, and what it should predict for their values), and predicts high values for everything.

Later on, when something bad happens and the NNs high predicted value is completely wrong, the error can be high. In addition the NN may have incorrectly "linked" features of the state representation so that it cannot distinguish which parts of the feature space are the cause of this. This creates odd effects in terms of what it learns about values of all states. Often the NN will behave incorrectly for a few episodes but then re-learn optimal behaviour. But it is also possible that it completely breaks and never recovers.

There is lots of active research into catastrophic forgetting and I suggest you search that term to find out some of the many types of mitigation you could use.

For Cartpole, I found a very simple hack made the learning very stable. Keep aside some percentage of replay memory stocked with the initial poor performing random exploration. Reserving say 10% to this long term memory is enough to make learning in Cartpole rock solid, as the NN always has a few examples of what not to do. The idea unfortunately does not scale well to more complex environments, but it is a nice demonstration. For a more sophisticated look at similar solutions you could see the paper "The importance of experience replay database composition in deep reinforcement learning"
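
As a rough illustration of that hack, here is a minimal sketch of a replay memory that keeps a frozen slice of early experience (the class, names and sizes are illustrative only, not code from the paper or from my experiments):

```
import random
from collections import deque

class SplitReplayBuffer:
    def __init__(self, capacity=50_000, long_term_fraction=0.1):
        self.long_term_size = int(capacity * long_term_fraction)
        self.long_term = []  # early (mostly random) experience, filled once and never overwritten
        self.recent = deque(maxlen=capacity - self.long_term_size)  # normal FIFO replay memory

    def add(self, transition):
        if len(self.long_term) < self.long_term_size:
            self.long_term.append(transition)
        else:
            self.recent.append(transition)

    def sample(self, batch_size):
        pool = self.long_term + list(self.recent)
        return random.sample(pool, min(batch_size, len(pool)))
```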

",1847,,1847,,9/29/2020 8:47,9/29/2020 8:47,,,,1,,,,CC BY-SA 4.0 23815,2,,23811,9/29/2020 9:26,,1,,"

In MCTS (Monte Carlo tree search), the search tree is built up using two policies:

  • tree policy (selects a path through the already expanded child nodes via an exploration-exploitation tradeoff, and expands a new child node)
  • default policy (plays out from the newly expanded node until a terminal state, e.g. with uniformly random moves).

The term terminal state refers to the end of one game. If you consider designing an agent with MCTS that is able to play chess, then a terminal state would be a win, a loss or a draw. One game like this is called a playout (sometimes also a roll-out). The main concept of MCTS relies on repeating these playouts many times and updating the tree according to the scores. The tree is built up in an incremental and asymmetric manner.

Imagine designing a chess-playing agent. The tree, or rather how the tree is built up can be like this:

  • Root Node: Starting with an empty board
  • Child Nodes: Board configuration after each move (opponent and agent)
  • Selection of Child Nodes: If the opponent is a human, then the selection of child nodes is performed by his/her own strategy. The agent is selecting moves according to a policy.
  • Terminal State: Score of the game (win/lose/draw).

So after each move (opponent or agent) the current child node/root node is expanded by the resulting child nodes.
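
To make those phases concrete, here is a minimal, illustrative MCTS skeleton (the state interface with legal_moves(), play(), is_terminal() and score() is assumed, and the sign flipping needed to handle the two players' perspectives is omitted for brevity):

```
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

    def ucb1(self, c=1.4):  # exploration-exploitation tradeoff used by the tree policy
        if self.visits == 0:
            return float("inf")
        return self.value / self.visits + c * math.sqrt(math.log(self.parent.visits) / self.visits)

def mcts_iteration(root):
    node = root
    while node.children:                                   # 1. selection (tree policy)
        node = max(node.children, key=Node.ucb1)
    if not node.state.is_terminal():                       # 2. expansion
        node.children = [Node(node.state.play(m), node) for m in node.state.legal_moves()]
        node = random.choice(node.children)
    state = node.state
    while not state.is_terminal():                         # 3. playout (default policy)
        state = state.play(random.choice(state.legal_moves()))
    score = state.score()
    while node is not None:                                # 4. backpropagation
        node.visits += 1
        node.value += score
        node = node.parent
```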

If you want to dive deeper into MCTS, then I would recommend to take a look at A Survey of Monte Carlo Tree Search Methods.

",26494,,,,,9/29/2020 9:26,,,,1,,,,CC BY-SA 4.0 23817,1,23831,,9/29/2020 13:10,,2,499,"

I have to build a model where I pre-process the data with a Gaussian kernel. The data are an $n\times n$ matrix (i.e. one channel), but not an image, so I can't refer to this matrix as an image and to its elements as pixels. The Gaussian kernel is built by the following function (more details e.g. here)

$$\begin{equation} \begin{aligned} g(x,y,\sigma) = \dfrac{1}{2\pi\sigma^2} e^{\dfrac{-(x^2+y^2)}{2\sigma^2}}. \end{aligned} \end{equation}$$

This kernel moves over the matrix one element at a time and performs the convolution. In my case, most of the elements are zero, i.e. the matrix is sparse.

How can I describe/understand the process of convolving the original data with a Gaussian kernel?

I have been looking for some articles, but I am unable to find any mathematical explanations, only explanations in words or pseudo-code.

",40205,,2444,,11/7/2020 0:42,11/7/2020 0:42,How to mathematically describe the convolution operation (with a Gaussian kernel)?,,1,0,,,,CC BY-SA 4.0 23819,1,,,9/29/2020 13:53,,1,21,"

I created a VGG-based U-Net in order to perform an image segmentation task on yeast cell images obtained with a microscope.

There are a couple of problems with the data:

  1. There is inhomogeneity in the amount of yeast in the images. One image can have hundreds of yeast cells, while others can have fewer than one hundred.
  2. The GT segmentation map is also incomplete, and some of the cells are not labeled.

All in all, given the above problems, the model is able to learn to some extent. My problem is that the predicted segmentation maps seem incomplete.

For example:

My loss function contains BCE. I was wondering if there is a way to force the model to create 'fuller' segmentation maps, something like using random fields of some sort, or maybe enhancing my loss function to overcome the above-mentioned problems.

I wish to stay in the domain of simple architectures rather than using more sophisticated ones such as RCNN.

Would appreciate any suggestions

",41293,,41293,,10/1/2020 5:27,10/1/2020 5:27,Model output segmentation maps which are not full,,0,0,,,,CC BY-SA 4.0 23821,1,,,9/29/2020 14:05,,2,81,"

You might think to apply some classifier combination techniques like ensembling, bagging and boosting but these methods would not help. Actually, “ensembling, boosting, bagging” won’t help since their purpose is to reduce variance. Naive Bayes has no variance to minimize.

The above paragraph is mentioned in this article.

  1. How can they say the purpose of these methods is to reduce variance?
  2. Is it true that Naive Bayes has no variance?

Thanks in advance

",41226,,,,,9/29/2020 14:05,"Why don't ensembling, bagging and boosting help to improve accuracy of Naive bayes classifier?",,0,0,,,,CC BY-SA 4.0 23822,1,,,9/29/2020 14:11,,3,59,"

According to DeepMind's paper Prioritized Experience Replay (2016), specifically Appendix B.2.1 "Proportional prioritization" (p. 13), one should divide the priority range $[0, p_\text{total}]$ into $k$ equal ranges, where $k$ is the size of the batch, and sample a random value uniformly within each of these sub-ranges. Each sampled value is then used to retrieve an experience from the sum-tree according to its priority (probability).

Why do we need to do that? Why not simply sample $k$ random values in $[0, p_\text{total}]$ and get $k$ samples from the sum-tree, without dividing the priority range into $k$ different sub-ranges? Isn't this the same?
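
To make my question concrete, here is a small sketch of the segment-based sampling that I understand the paper to describe (the tree.get(prefix_sum) API is hypothetical and stands for retrieving the experience whose cumulative-priority interval contains prefix_sum):

```
import random

def sample_batch(tree, k, p_total):
    segment = p_total / k
    # one uniform draw per sub-range [i * segment, (i + 1) * segment)
    return [tree.get(random.uniform(i * segment, (i + 1) * segment)) for i in range(k)]
```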

",41294,,2444,,11/1/2020 19:04,11/1/2020 19:04,Why is it necessary to divide the priority range according to the batch size in Prioritized Experience Replay?,,0,1,,,,CC BY-SA 4.0 23825,1,,,9/29/2020 15:33,,7,1020,"

What does "ground truth" mean in the context of AI especially in the context of machine learning?

I am a little confused because I have read that the ground truth is the same as a label in supervised learning, and I think that's not quite right. I thought that ground truth refers to a model (or maybe the nature) of a problem. I always considered it to be something philosophical (which is also what the term 'ground truth' implies), because in ML we often don't build a descriptive model of the problem (like in classical mechanics), but rather some sort of simulator that behaves as if it were a descriptive model. That's what we sometimes call a black box.

What is the correct understanding?

",27777,,2444,,9/29/2020 16:04,10/1/2020 19:08,"What is meant by ""ground truth"" in the context AI?",,2,1,,,,CC BY-SA 4.0 23826,2,,23825,9/29/2020 15:41,,7,,"

In the context of ML, ground truth refers to information provided by direct observation (empirical evidence). If you're training an algorithm to classify your data, then the ground truth will be the actual, true labels, which could, for example, be manually annotated by a domain expert. Please note that the model's predictions, or the inferred labels, are not considered ground truth.

",20430,,20430,,9/30/2020 12:36,9/30/2020 12:36,,,,0,,,,CC BY-SA 4.0 23828,2,,23825,9/29/2020 16:51,,5,,"

It really depends on what words you put after "ground truth". Sometimes people will talk about "ground truth labels", for example in the context of classification or regression problems. The "ground truth labels" in such a case would refer to the true labels of instances; the labels that we use as target labels for instances from a training set, or the labels that we expect our models to output (and "punish" them for if they fail to do so) when evaluating/testing a trained model. This basically follows razvanc92's answer.

"Ground truth" can also refer to something more abstract though, something that we know exists in some form or another, but we may not even know how to express it. For example, there may be "ground truth laws of physics", the laws of physics that our world "follows". We may build or train a simulator trying to approximate those ground truth functions / laws, but we may not actually know how to explicitly express all of them.

",1641,,,,,9/29/2020 16:51,,,,1,,,,CC BY-SA 4.0 23830,2,,23799,9/29/2020 17:29,,-1,,"

I think this question is hinting at the problem of choosing an exploration strategy.

The simplest strategy is to use the so called epsilon-greedy strategy (or $\epsilon$-greedy). This means that you select an action at random $x$ percent of the times that an agent has to select an action. The other times, the agent takes the action that its current policy dictates. Usually, $x$ declines throughout the learning process.
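
As a minimal sketch of what that looks like (assuming q_values is an indexable collection of the estimated action values):

```
import random

def epsilon_greedy(q_values, epsilon):
    if random.random() < epsilon:
        return random.randrange(len(q_values))                   # explore: random action
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit: greedy action
```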

There are many different exploration strategies to choose from, with the aforementioned one being really the simplest one. To get an impression of how diverse these different approaches to exploration strategies are, please refer to Google Scholar (searching for "reinforcement learning exploration strategies").

",37982,,,,,9/29/2020 17:29,,,,1,,,,CC BY-SA 4.0 23831,2,,23817,9/29/2020 17:35,,2,,"

Mathematically, the convolution is an operation that takes two functions, $f$ and $g$, and produces a third function, $h$. Concisely, we can denote the convolution operation as follows

$$f \circledast g = h$$

In the context of computer vision and, in particular, image processing, the convolution is widely used to apply a so-called kernel (aka filter) to an input (typically, an image, but this does not have to be the case). The input (e.g. an image), the kernel, and the output of the convolution, in this context, is usually a matrix or a tensor. In image processing, the convolution is typically used to e.g. blur images or maybe to remove noise.

However, in the beginning, I said that the convolution is an operation that takes two functions (and not matrices) and produces a third one, so these two explanations of the convolution do not seem to be consistent, right?

The answer to this question is that the two explanations are consistent with each other. More precisely, if you have a function $f : X \rightarrow Y$ (assuming that $X$ is discrete/countable), you can represent it in a vector form as follows $\mathbf{f} = [y_1, y_2, \dots, y_n]$, i.e. $\mathbf{f}$ is a vector that contains all outputs of the function $f$ (for all possible inputs).

In image processing, an image and a kernel can also be thought of as a function with a discrete domain (i.e. the pixels), so the matrices that represent the image or the kernel are just the vector forms of the corresponding functions. See this answer for more details about representing an image as a function.

Once you understand that the convolution in image processing is really the convolution operation as defined in mathematics, then you can simply look up the mathematical definition of the convolution operation.

In the discrete case (i.e. you can think of the function as vectors, as explained above), the convolution is defined as

$${\displaystyle h[n] = (f \circledast g)[n]=\sum _{m=-M}^{M}f[n-m]g[m].} \tag{1}\label{1}$$

You can read equation $1$ as follows

  • $f \circledast g$ is the convolution of the input function (or matrix) $f$ and the kernel $g$
  • $(f \circledast g)[n]$ is the output of the convolution $f \circledast g$ at index (or input position) $n$ (so you need to apply equation \ref{1} for all $n$, if you want to have $h$ and not just $h[n]$)
  • So, the result of the convolution at $n$, $h[n]$, is defined as $\sum _{m=-M}^{M}f[n-m]g[m]$, a sum that goes from $m = -M$ to $m = M$. Here $M$ may be half of the length of the kernel matrix. For example, if you use the following Gaussian kernel, then $M = 2$ (and I assume that the center of the kernel is at coordinate $(0, 0)$).

$$ \mathbf{g} = \frac{1}{273} \begin{bmatrix} 1 & 4 & 7 & 4 & 1 \\ 4 & 16 & 26 & 16 & 4 \\ 7 & 26 & 41 & 26 & 7 \\ 4 & 16 & 26 & 16 & 4 \\ 1 & 4 & 7 & 4 & 1 \end{bmatrix} \label{2}\tag{2} $$

Here are some notes:

  • The kernel \ref{2} is symmetric around the $x$ and $y$ axes: this actually implies that the convolution is equal to the cross-correlation, so you don't even have to worry about their equivalence or not (in case you have ever worried about it, which would have happened only if you already came across the cross-correlation). See this question for more info.

  • The kernel \ref{2} is the vector form of the function form of the 2d Gaussian kernel (the one in your question): more precisely, an integer-valued approximation of the 2D Gaussian kernel when $\sigma = 1$ (as stated in your slides).

  • The convolution can be implemented as matrix multiplication. This may not be useful now, but it's something useful to know if you want to implement it. See this question for more info.
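
For concreteness, here is a small numerical sketch of equation \ref{1} in 2D, applying the integer-valued Gaussian kernel \ref{2} to a sparse matrix (NumPy/SciPy; the input values are only illustrative):

```
import numpy as np
from scipy.signal import convolve2d

g = np.array([[1,  4,  7,  4, 1],
              [4, 16, 26, 16, 4],
              [7, 26, 41, 26, 7],
              [4, 16, 26, 16, 4],
              [1,  4,  7,  4, 1]], dtype=float) / 273.0

data = np.zeros((10, 10))
data[4, 4] = 1.0                        # a single non-zero element

h = convolve2d(data, g, mode="same")    # h[n] = sum_m data[n - m] * g[m]
print(h[2:7, 2:7])                      # the element has been "spread out" by the kernel
```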

Question for you: what is the result of the application of this Gaussian kernel to any input? What does this kernel intuitively do? Once you fully understand the convolution, you can answer this question.

",2444,,2444,,9/29/2020 17:54,9/29/2020 17:54,,,,2,,,,CC BY-SA 4.0 23832,2,,23467,9/29/2020 19:43,,0,,"

In rehearsal, you do not necessarily train with all old training data, but you can just use some of it [1], which you add to your current (or new) training data.

In batch learning, at every epoch, you typically train with all training data, every step with a different batch (or subset) of the training data; so, if you have $N$ training examples and your batch size is $M$, then you will have $\lfloor N/M \rfloor $ gradient descent steps.

There is also pseudo-rehearsal, which is an alternative to rehearsal that is useful when you may not have access to the previously used training data [1] (for example, because it is expensive to store).

",2444,,,,,9/29/2020 19:43,,,,1,,,,CC BY-SA 4.0 23833,2,,23807,9/29/2020 21:23,,0,,"

The autoencoder proposed in the link is similar to a denoising autoencoder (DAE), in the sense that both start from a noisy image and try to reconstruct the original image. The difference is that the noisy pixels are ignored during backpropagation. The DAE takes an image as input, and noise is applied to the image before the forward pass. At the output, the DAE has to reconstruct the image without noise.

So, given an image $I$ and $\epsilon \sim \mathcal{N}(0,\,\sigma^{2})$, the noise is applied to the image, obtaining the image $\hat{I}$. Now, $\hat{I}$ is the DAE input and the autoencoder is trained to minimize the function $L(I,g(f(\hat{I})))$, where $f$ is the encoder and $g$ is the decoder.
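
As a minimal sketch of that training objective (using MSE as the reconstruction loss $L$ for illustration; f and g stand for any encoder/decoder modules):

```
import torch
import torch.nn.functional as F

def dae_step(f, g, optimizer, I, sigma=0.3):
    I_hat = I + sigma * torch.randn_like(I)   # corrupt the input with Gaussian noise
    loss = F.mse_loss(g(f(I_hat)), I)         # L(I, g(f(I_hat)))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```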

The image on the left is an image of the MNIST dataset where $\epsilon \sim \mathcal{N}(0,\,0.3)$, and on the right there is the reconstruction by a deep DAE. You can change the type of noise in the image; the results should be the same. So the answer to your question is no, autoencoders are also suited for other types of noise.

If you are interested, you can read the paper "What Regularized Auto-Encoders Learn from the Data-Generating Distribution" by Guillaume Alain and Yoshua Bengio, where it is shown that the DAE learns an approximation of the gradient of the data distribution.

",41306,,41306,,9/29/2020 21:31,9/29/2020 21:31,,,,1,,,,CC BY-SA 4.0 23836,1,,,9/29/2020 23:30,,0,30,"

I am struggling to learn certain evolutionary algorithm concepts and also the relations between them. I am going through the Linkage Learning Genetic Algorithm (LLGA) right now and came across this question:

Which 6-bit string of LLGA would represent an optimal solution for trap-3?

Can anyone give me an answer or explain it?

",41276,,2444,,10/4/2020 20:11,10/4/2020 20:11,Which 6-bit string would represent an optimal solution for trap-3 in the Linkage Learning Genetic Algorithm?,,0,4,,,,CC BY-SA 4.0 23837,1,,,9/30/2020 8:37,,1,93,"

I am trying to implement a CNN (U-Net) for semantic segmentation of similar large grayscale ~4600x4600px medical images. The area I want to segment is the empty space (gap) between a round object in the middle of the picture and an outer, ring-shaped object that contains the round object. This gap is "thin" and makes up only a small proportion of the whole image. In my problem, having a small gap is good, since then the two objects have a good connection to each other.

My questions:

  1. Is it possible to feed such large images to a CNN? Downscaling the images seems like a bad idea, since the gap is thin and most of the relevant information would be lost. From what I've seen, CNNs are trained on much smaller images.

  2. Since the problem is symmetric in some sense, is it a good idea to split the image into 4 (or more) smaller images?

  3. Are CNNs able to detect such small regions in such a huge image? From what I've seen in the literature, mostly larger objects, such as organs, are segmented.

I would appreciate some ideas and help. It is my first post on the site so hopefully I didn't make any mistakes.

Cheers

",41311,,,,,9/30/2020 8:37,Training a CNN for semantic segmentation of large 4600x4600px images,,0,0,,,,CC BY-SA 4.0 23838,1,23839,,9/30/2020 8:46,,2,850,"

While reading the book AI A modern approach, 4th ed, I came across the section of "Agent program" with following text:

It is instructive to consider why the table-driven approach to agent construction is doomed to failure. Let $P$ be the set of possible percepts and let $T$ be the lifetime of the agent (the total number of percepts it will receive).

The lookup table will contain $\sum_{t=1}^T |P|^t$ entries.

Consider the automated taxi: the visual input from a single camera (eight cameras is typical) comes in at the rate of roughly 70 mb per sec. (30 frames per sec, 1080 X 720 pixels, with 24 bits of color information).

This gives a lookup table with over $10^{600,000,000,000}$ entries for an hour's driving.

Could someone please explain how the lookup table number is derived (or what the author's point is that I am missing)? If I multiply all of the numbers, $30 × 1080 × 720 × 24 × 8 × 3600$, then I get $1.6124314e+13$, which comes very close, I think, but I can't see what the reason would be to build a table (even a theoretical one) in such a way, something which is obviously intractable.

edit:

My core question is this:

Assuming $10^{600,000,000,000}$ is derived from $30 × 1080 × 720 × 24 × 8 × 3600$, what is the purpose of storing data in the lookup table at pixel precision? Wouldn't storing a higher-level summary of the details be enough to solve these kinds of problems (i.e., autonomous driving)? Coming more from standard software database systems, I am missing that point. Thanks

",41312,,41312,,9/30/2020 13:30,9/30/2020 13:30,Why would the lookup table (of a table-driven artificial agent) need to store data at pixel precision?,,1,1,,,,CC BY-SA 4.0 23839,2,,23838,9/30/2020 11:00,,3,,"

A tabular system for agent decisions is a direct and simple map of percept to control choice. For each percept received, the agent looks up the percept and cross-references it to the action it should take. In order to construct this, you need to list all percepts in full detail, with the associated control choice.

Clearly that is not going to be feasible for the automated taxi example. No-one would think to build such a table to handle natural image inputs. That is the author's point.

However, a tabular structure is a reasonable theoretical construct for mapping an arbitrary discrete function, and also is practical for simple environments.

To answer your extended question:

Assuming $10^{600,000,000,000}$ is derived from $30 × 1080 × 720 × 24 × 8 × 3600$, what is the purpose of storing data in the look up table at pixel precision?

It is the only way to get a map from percept to control using a tabular system.

By proposing any kind of summarisation or approximation of the input-to-output function to solve this, you have gone beyond the capability of a tabular system. That again is the author's point.

Wouldn't storing higher level of details be enough to solve these kind of problems (ie, autonomous driving)?

If it is really obvious to you that this is the solution, then that's a good thing as you are thinking ahead. However, you should also consider what that means in terms of what you might be giving up, from a theoretical perspective. For instance, a tabular system can make radically different decisions based on very minor differences between percepts, whilst any form of processing of the inputs to make them easier to handle is necessarily going to remove information that might be important.

",1847,,1847,,9/30/2020 13:02,9/30/2020 13:02,,,,0,,,,CC BY-SA 4.0 23840,1,,,9/30/2020 11:23,,4,82,"

A stable/smooth learning validation curve often seems to keep improving over more epochs than an unstable learning curve. My intuition is that dropping the learning rate and increasing the patience of a model that produces a stable learning curve could lead to better validation fit.

The counter argument is that jumps in the curve could mean that the model has just learned something significant, but they often jump back down or tail off after that.

Is one better than the other? Is it possible to take aspects of both to improve learning?

",41315,,32410,,4/23/2021 1:43,4/23/2021 1:43,Is stable learning preferable to jumps in accuracy/loss,,2,0,,,,CC BY-SA 4.0 23841,2,,23840,9/30/2020 13:04,,2,,"

There is an optimization approach used in machine learning, called simulated annealing, which varies the rate: starting from a large rate, it is slowly reduced over time. The general idea is that the initial larger rate will cover a broader range, while the increasingly lower rate then produces a less 'erratic' climb towards a maximum.

If you only use a low rate, you risk getting stuck in a local maximum, while too large a rate will not find the best solution but might end up close to one. Adjusting the rate gives you the best of both.
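
As a tiny illustration of such a schedule (the initial rate, decay factor and floor below are arbitrary example values):

```
def annealed_rate(step, initial=1.0, decay=0.995, minimum=0.01):
    # exponential decay from a large initial rate towards a small floor
    return max(minimum, initial * decay ** step)

print([round(annealed_rate(s), 3) for s in (0, 100, 500, 1000)])
```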

",2193,,,,,9/30/2020 13:04,,,,4,,,,CC BY-SA 4.0 23842,1,,,9/30/2020 14:09,,4,169,"

I was watching this series: https://www.youtube.com/watch?v=aircAruvnKk

The series demonstrates neural networks by building a simple number recognizing network.

It got me thinking: why do neural networks try to recognize multiple labels instead of just one? In the above example, the network tries to recognize numbers from 0 to 9. What is the benefit of trying to recognize so many things simultaneously? Wouldn't it be easier to reason about if there were 10 different neural networks, each specializing in recognizing only one number at a time?

",41318,,41318,,8/2/2021 14:29,8/2/2021 14:29,Why neural networks tend to be trained to recognize multiple things instead of just one?,,3,3,,,,CC BY-SA 4.0 23843,2,,23840,9/30/2020 15:41,,1,,"

If you have an erratic loss landscape, it can lead to an unstable learning curve. Thus, it's always better to choose a simpler function, which creates a simpler landscape. Sometimes, even due to an uneven dataset distribution, we can observe those jumps/irregularities in the training curve.

And yes, those jumps do mean it might've found something significant in the landscape. Those jumps can arise while the model is exploring the multiple local minima of the landscape.

During machine learning optimization, we usually use algorithms like stochastic gradient descent and Adam to find local minima, whereas approaches like simulated annealing aim for global minima. There have been multiple discussions around why to use local minima instead of global minima. Some argue that local minima are just as useful as global minima in the case of machine learning problems.

Thus, stable learning is preferable, as it indicates that the model is converging to a local minimum.

References


You can read A Survey of Optimization Methods from a Machine Learning Perspective by Shiliang Sun, Zehui Cao, Han Zhu, and Jing Zhao, and read about the optimization methods commonly used in machine learning.

",40434,,,,,9/30/2020 15:41,,,,0,,,,CC BY-SA 4.0 23844,2,,22976,9/30/2020 17:07,,1,,"

One of the essential pre-processing steps we perform on the corpus involves converting the variable-length sentences to a fixed length. There are various ways in which we can do this:

Truncate


This involves reducing the length of all the sentences to the length of the shortest sentence in the corpus. This is generally not done, as it reduces the amount of information that we can learn from the corpus. For example, in pre-sequence truncation we remove tokens from the beginning of the longer sentences to make all the sentences the same length.

Padding


This is the most preferred method when it comes to handling the problem of variable-length sentences. In this approach, we increase the size of each vector to the length of the longest sentence in the corpus. There are two ways to do this (a small sketch follows the list):

  • Post-Padding: Adding zeroes at the end
  • Pre-Padding: Adding zeroes at the beginning
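
A small sketch of both variants in plain Python (the helper below is illustrative, not a library function):

```
def pad(sequences, padding="post", value=0):
    max_len = max(len(s) for s in sequences)
    out = []
    for s in sequences:
        pad_part = [value] * (max_len - len(s))
        out.append(s + pad_part if padding == "post" else pad_part + s)
    return out

sentences = [[5, 7, 2], [3, 9], [4, 1, 8, 6]]
print(pad(sentences, "post"))  # [[5, 7, 2, 0], [3, 9, 0, 0], [4, 1, 8, 6]]
print(pad(sentences, "pre"))   # [[0, 5, 7, 2], [0, 0, 3, 9], [4, 1, 8, 6]]
```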

References


Effect of Padding on LSTMs and CNNs by Dwarampudi Mahidhar Reddy and N. V. Subba Reddy.

",40434,,,,,9/30/2020 17:07,,,,0,,,,CC BY-SA 4.0 23846,2,,10549,9/30/2020 20:28,,1,,"

This is meant more as a comment on the previous answer.

$$ \nabla_{\theta}\log \pi_{\theta}(f_{\theta}(\varepsilon, s)\mid s) = \nabla_{a}\log\pi_{\theta}(a\mid s)\vert_{a=f_{\theta}(\varepsilon,s)}\nabla_{\theta}f_{\theta}(\varepsilon, s), $$ instead of

$$ \nabla_{\theta}\log \pi_{\theta}(f_{\theta}(\varepsilon, s)\mid s) = \nabla_{a}\log\pi_{\theta}(a\mid s)\vert_{a=f_{\theta}(\varepsilon,s)}\nabla_{\theta}f_{\theta}(\varepsilon, s) + \nabla_{\theta}\log\pi_{\theta}(a\mid s)\vert_{a=f_{\theta}(\varepsilon, s)}. $$ The following is certainly less elegant, but I hope that it gives some additional intuition why we need to take the gradient with respect to $a$ and $\theta$. For simplicity, I will assume that $a$ is one-dimensional, but the same argument would apply for higher dimensions. In the SAC paper, they assume that $\pi_{\theta}$ is a Gaussian distribution $\mathcal{N}(\mu_{\theta}(s), \sigma_{\theta}(s))$. Therefore: $$ \log\pi_{\theta}(a\mid s)=-\frac{1}{2}\log(2\pi) - \log\sigma_{\theta}(s)-\frac{(a-\mu_{\theta}(s))^2}{2\sigma_{\theta}(s)^2}. $$ Then the gradient becomes: \begin{align} \nabla_{\theta}\log \pi_{\theta}(f_{\theta}(\varepsilon, s)\mid s)&=-\frac{\nabla_{\theta}\sigma_{\theta}(s)}{\sigma_{\theta}(s)}-\frac{(f_{\theta}(\varepsilon, s)-\mu_{\theta}(s))(\nabla_{\theta}f_{\theta}(\varepsilon,s)-\nabla_{\theta}\mu_{\theta}(s))}{\sigma_{\theta}(s)^2} \\&+\frac{(f_{\theta}(\varepsilon, s)-\mu_{\theta}(s))^2\nabla_{\theta}\sigma_{\theta}(s)}{\sigma_{\theta}(s)^3}.\end{align}

Let us calculate now the terms on the right hand side:

$$ \nabla_{a}\log\pi_{\theta}(a\mid s)\vert_{a=f_{\theta}(\varepsilon,s)}=-\frac{f_{\theta}(\varepsilon,s)-\mu_{\theta}(s)}{\sigma_{\theta}(s)^2} $$

and \begin{align}\nabla_{\theta}\log\pi_{\theta}(a\mid s)\vert_{a=f_{\theta}(\varepsilon, s)}&=-\frac{\nabla_{\theta}\sigma_{\theta}(s)}{\sigma_{\theta}(s)}+\frac{(f_{\theta}(\varepsilon, s)-\mu_{\theta}(s))\nabla_{\theta}\mu_{\theta}(s)}{\sigma_{\theta}(s)^2}\\ &+\frac{(f_{\theta}(\varepsilon, s)-\mu_{\theta}(s))^2\nabla_{\theta}\sigma_{\theta}(s)}{\sigma_{\theta}(s)^3},\end{align} which proves the equality. For the Q-function, we apply the chain rule as usual as $Q$ does not depend on $\theta$.

",41323,,,,,9/30/2020 20:28,,,,0,,,,CC BY-SA 4.0 23847,1,23862,,9/30/2020 22:12,,5,1477,"

I know that policy gradients used in an environment with a discrete action space are updated with

I tried keeping the variance constant and updating the output with mean squared error loss and the target being the action it took. I thought this would end up pushing the mean towards actions with greater total rewards but it got nowhere in OpenAI’s Pendulum environment.

It would also be very helpful if it was described in a way with a loss function and a target, like how policy gradients with discrete action spaces can be updated with cross entropy loss. That is how I understand it best but it is okay if that is not possible.

Edit: for @Philipp. The way I understand it is that the loss function is the same with a continuous action space and the only thing that changes is the distribution that we get the log-probs from. In PyTorch we can use a Normal distribution for continuous action space and Categorical for discrete action space. The answer from David Ireland goes into the math but in PyTorch, that looks like log_prob = distribution.log_prob(action_taken) for any type of distribution. It makes sense that for bad actions we would want to decrease the probability of taking the action. Below is working code for both types of action spaces to compare them. The continuous action space code should be correct but the agent will not learn because it is harder to learn the right actions with a continuous action space and our simple method isn't enough. Look into more advanced methods like PPO and DDPG.

import torch
import torch.nn as nn
import torch.optim as optim
from torch.distributions.categorical import Categorical #discrete distribution
import numpy as np
import gym
import math
import matplotlib.pyplot as plt

class Agent(nn.Module):
    def __init__(self,lr):
        super(Agent,self).__init__()
        self.fc1 = nn.Linear(4,64)
        self.fc2 = nn.Linear(64,32)
        self.fc3 = nn.Linear(32,2) #neural network with layers 4,64,32,2

        self.optimizer = optim.Adam(self.parameters(),lr=lr)

    def forward(self,x):
        x = torch.relu(self.fc1(x)) #relu and tanh for output
        x = torch.relu(self.fc2(x))
        x = torch.sigmoid(self.fc3(x))
        return x

env = gym.make('CartPole-v0')
agent = Agent(0.001) #hyperparameters
DISCOUNT = 0.99
total = []

for e in range(500): 
    log_probs, rewards = [], []
    done = False
    state = env.reset()
    while not done:
        #mu = agent.forward(torch.from_numpy(state).float())
        #distribution = Normal(mu, SIGMA)
        distribution = Categorical(agent.forward(torch.from_numpy(state).float()))
        action = distribution.sample()
        log_probs.append(distribution.log_prob(action))
        state, reward, done, info = env.step(action.item())
        rewards.append(reward)
        
    total.append(sum(rewards))

    cumulative = 0
    d_rewards = np.zeros(len(rewards))
    for t in reversed(range(len(rewards))): #get discounted rewards
        cumulative = cumulative * DISCOUNT + rewards[t]
        d_rewards[t] = cumulative
    d_rewards -= np.mean(d_rewards) #normalize
    d_rewards /= np.std(d_rewards)

    loss = 0
    for t in range(len(rewards)):
        loss += -log_probs[t] * d_rewards[t] #loss is - log prob * total reward

    agent.optimizer.zero_grad()
    loss.backward() #update
    agent.optimizer.step()

    if e%10==0:
        print(e,sum(rewards)) 
        plt.plot(total,color='blue') #plot
        plt.pause(0.0001)    


def run(i): #to visualize performance
    for _ in range(i):
        done = False
        state = env.reset()
        while not done:
            env.render()
            distribution = Categorical(agent.forward(torch.from_numpy(state).float()))
            action = distribution.sample()
            state,reward,done,info = env.step(action.item())
        env.close()

Above is the discrete action space code for CartPole and below is the continuous action space code for Pendulum. Sigma (the standard deviation) is constant here, but making it learnable is easy: just make the final layer have two outputs and make sure sigma is not negative. Again, the Pendulum code won't work because most environments with continuous action spaces are too complicated for such a simple method. Making it work would probably require a lot of testing of hyperparameters.

import torch
import torch.nn as nn
import torch.optim as optim
from torch.distributions.normal import Normal #continuous distribution
import numpy as np
import gym
import math
import matplotlib.pyplot as plt
import keyboard

class Agent(nn.Module):
    def __init__(self,lr):
        super(Agent,self).__init__()
        self.fc1 = nn.Linear(3,64)
        self.fc2 = nn.Linear(64,32)
        self.fc3 = nn.Linear(32,1) #neural network with layers 3,64,32,1

        self.optimizer = optim.Adam(self.parameters(),lr=lr)

    def forward(self,x):
        x = torch.relu(self.fc1(x)) #relu and tanh for output
        x = torch.relu(self.fc2(x))
        x = torch.tanh(self.fc3(x)) * 2
        return x

env = gym.make('Pendulum-v0')
agent = Agent(0.01) #hyperparameters
SIGMA = 0.2
DISCOUNT = 0.99
total = []

for e in range(1000): 
    log_probs, rewards = [], []
    done = False
    state = env.reset()
    while not done:
        mu = agent.forward(torch.from_numpy(state).float())
        distribution = Normal(mu, SIGMA)
        action = distribution.sample().clamp(-2.0,2.0)
        log_probs.append(distribution.log_prob(action))
        state, reward, done, info = env.step([action.item()])
        #reward = abs(state[1])
        rewards.append(reward)
        
    total.append(sum(rewards))

    cumulative = 0
    d_rewards = np.zeros(len(rewards))
    for t in reversed(range(len(rewards))): #get discounted rewards
        cumulative = cumulative * DISCOUNT + rewards[t]
        d_rewards[t] = cumulative
    d_rewards -= np.mean(d_rewards) #normalize
    d_rewards /= np.std(d_rewards)

    loss = 0
    for t in range(len(rewards)):
        loss += -log_probs[t] * d_rewards[t] #loss is - log prob * total reward

    agent.optimizer.zero_grad()
    loss.backward() #update
    agent.optimizer.step()

    if e%10==0:
        print(e,sum(rewards)) 
        plt.plot(total,color='blue') #plot
        plt.pause(0.0001)
        if keyboard.is_pressed("space"): #holding space exits training
            raise Exception("Exited")


def run(i): #to visualize performance
    for _ in range(i):
        done = False
        state = env.reset()
        while not done:
            env.render()
            distribution = Normal(agent.forward(torch.from_numpy(state).float()), SIGMA)
            action = distribution.sample()
            state,reward,done,info = env.step([action.item()])
        env.close()

David Ireland also wrote this on a different question I had:

The algorithm doesn't change in this situation. Say your NN outputs the mean parameter of the Gaussian, then logπ(at|st) is just the log of the normal density evaluated at the action you took where the mean parameter in the density is the output of your NN. You are then able to backpropagate through this to update the weights of your network.

",41026,,36821,,4/4/2021 20:16,4/4/2021 20:16,What is the loss for policy gradients with continuous actions?,,1,12,,,,CC BY-SA 4.0 23848,1,23849,,10/1/2020 3:09,,0,634,"

I am new to machine learning. Recently, I joined a course where I was given a logistic regression assignment in which I had to split off 20% of the training dataset as a validation dataset, use the validation dataset to capture the minimum possible loss, and then use the test dataset to find the accuracy of the model. Below is my code for implementing logistic regression.

class LogReg(LinReg):
    def __init__(self, n_dim, bias=True):
        if bias:
            n_dim = n_dim + 1
        super(LogReg, self).__init__(n_dim)
        self.bias = bias

    def __call__(self, x):
        return x.mm(self.theta).sigmoid()

    def compute_loss(self, x, y, lambda_reg):
        # The function has a generic implementation, and can also work for the neural nets!
        predictions = self(x)
        loss = -(y * torch.log(predictions) + (1 - y) * torch.log(1 - predictions)).mean()
        regularizer = self.theta.transpose(0, 1).mm(self.theta)
        return loss + regularizer.mul(lambda_reg)

    @staticmethod
    def add_bias(x):
        ones = torch.ones((x.size(0), 1), dtype=torch.float32)
        x_hat = torch.cat((ones, x), dim=-1)
        return x_hat

    def fit(self, x, y, num_iter=10, mb_size=32, lr=1e-1, lambda_reg=1e-2, reset=True):
        N = x.size(0)
        losses = []
        x_hat = x
        # Adding a bias term if needed
        if self.bias:
            x_hat = self.add_bias(x)
        if reset:
            self.reset()  # Very important if you want to call fit multiple times
        num_batches = x.size(0) // mb_size
        # The outer loop goes over `epochs`
        # The inner loop goes over the whole training data
        for it in range(num_iter):
            loss_per_epoch = 0
            for batch_it in range(num_batches):
                # has been implemented for the linear model
                self.zero_grad()

                ind = torch.randint(0, N, (mb_size, 1)).squeeze()
                x_mb, y_mb = x_hat[ind, :], y[ind, :]

                loss = self.compute_loss(x_mb, y_mb, lambda_reg)

                loss.backward()
                self.theta.data = self.theta.data - lr * self.grad().data
                loss_per_epoch += loss.item()

            loss_per_epoch /= num_batches
            losses.append(loss_per_epoch)

        return losses

How should I use the validation set at the level of epoch to find the best loss?

",41329,,41293,,10/1/2020 10:18,10/1/2020 10:18,How to use validation dataset in my logistic regression model?,,1,0,,,,CC BY-SA 4.0 23849,2,,23848,10/1/2020 5:36,,1,,"

So, generally, when you separate your training data into an 80%-20% split, your fit method should receive two (x, y) pairs; it is better to call them x_train, y_train and x_val, y_val, or something similar.

Now, it's important that you do the split before entering the fit method, and not do it for each epoch or anything like that.

Once you do that, the fit method signature should be something like:

def fit(self, x_train, y_train, x_val, y_val, num_iter=10, mb_size=32, lr=1e-1, lambda_reg=1e-2, reset=True):

Then you should, at the end of each epoch, test the performance of the model on the entire validation set and calculate the desired evaluation metric. If it improved, it's better to save the current model. This is done repeatedly for each epoch until the end of the training, and you are guaranteed to end up with the model that gave you the best results on the validation set rather than on the training set, which the model might be overfitting.

I would do it in a separate method with the following flow:

  1. iterate on each sample in the validation set
  2. for each one calculate the loss/metric
  3. append it to some list
  4. return the mean of that list

If the average result is better than the previously saved one, save the new model.
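Here is a minimal sketch of that flow, adapted to the LogReg class from the question (the helper name validate and the variables best_val_loss / best_theta are my own, not part of the original code); it computes the mean loss over the whole validation set in one call, which gives the same average as looping sample by sample:

import copy
import torch

def validate(model, x_val, y_val, lambda_reg):
    # Mean loss over the entire validation set, without tracking gradients
    x_hat = model.add_bias(x_val) if model.bias else x_val
    with torch.no_grad():
        return model.compute_loss(x_hat, y_val, lambda_reg).item()

# Inside fit(...), at the end of each epoch:
#     val_loss = validate(self, x_val, y_val, lambda_reg)
#     if val_loss < best_val_loss:                 # best_val_loss starts as float('inf')
#         best_val_loss = val_loss
#         best_theta = copy.deepcopy(self.theta)   # keep the parameters that did best on validation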

",41293,,,,,10/1/2020 5:36,,,,5,,,,CC BY-SA 4.0 23850,1,,,10/1/2020 8:32,,3,156,"

I'm new to reinforcement learning.

As is common in RL, an $\epsilon$-greedy policy is used for behavior/exploration. So, at the beginning of training, $\epsilon$ is high, and therefore a lot of random actions are chosen. With time, $\epsilon$ decreases and we more often choose the best action.

  1. I was wondering, e.g. in Q-Learning, if $\epsilon$ is small, e.g. 0.1 or 0.01, do the Q-values really still change? Do they just change their direction, i.e. the best action remains the best action but the Q-values diverge further, or do the values really change again so that the best action always changes for a given state?

  2. If the Q-values really do still change strongly, is it because of the remaining random actions, which we still have at $\epsilon>0$ or would it still change at $\epsilon=0$?

",41331,,2444,,10/2/2020 10:30,10/2/2020 10:30,Can we stop training as soon as epsilon is small?,,1,2,,,,CC BY-SA 4.0 23852,1,,,10/1/2020 10:17,,1,72,"

While the advantages of actor-only algorithms, the ones that search directly for the policy without the use of the value function, are clear (possibility of having a continuous action space, a stochastic policy, etc.), I can't figure out the disadvantages, apart from general statements about less stability or bigger variance with respect to critic-only methods (by which I refer to methods that are based on the value function).

",37169,,2444,,1/12/2022 21:30,1/12/2022 21:30,What are the disadvantages of actor-only methods with respect to value-based ones?,,0,0,,,,CC BY-SA 4.0 23854,2,,23842,10/1/2020 14:05,,1,,"

In practice, you rarely want to classify just a single digit; you usually classify a series of them. In that case, you would have to pass each image patch to multiple networks, which would be inconvenient. Even if you built several accurate models, the number of training parameters would not be significantly reduced. For example, for a sloppily written 6, a single model would give close (but not equal) probabilities of it being a 6 or a 0, and by considering the likelihoods you can still pick the closest answer. With separate models, the probabilities may vary on a much greater scale, and you may not get the generalization you would have with a single model. In the end, everything boils down to generalization, and in my experience neural networks trained on multiple classes have better generalization properties than single-class ones.

",41286,,,,,10/1/2020 14:05,,,,0,,,,CC BY-SA 4.0 23855,2,,23723,10/1/2020 14:58,,0,,"

Thank you very much for your help, all of you.

I finally found the right keywords on the Internet: "Dialog act classification".

I don't know yet how to implement it, but it's a good start!

",41176,,,,,10/1/2020 14:58,,,,0,,,,CC BY-SA 4.0 23856,2,,23154,10/1/2020 16:13,,1,,"

It is hard to tell what exactly is better because these are hyperparameters. However, the sigmoid activation function is closer to biological neurons.

In the paper below, Bengio and colleagues demonstrate why ReLU activation functions are better for hidden layers. In summary, they increase the sparsity of the activations (many units output exactly zero, so each layer's weight matrix effectively multiplies a sparse vector), and because of that the data can be classified faster and more easily.

http://proceedings.mlr.press/v15/glorot11a/glorot11a.pdf

",35757,,,,,10/1/2020 16:13,,,,0,,,,CC BY-SA 4.0 23857,2,,16939,10/1/2020 18:25,,1,,"

(The table referred to here, which is taken from the survey paper, is not reproduced.) It shows that there were no promising versions of this algorithm for regression until 2012. After seeing your question, I found a survey research paper on ensemble methods for regression; the table is extracted from that paper. Read this paper; it will help you a lot more.

This is the latest paper published on object detection with an ensemble approach.

",41226,,,,,10/1/2020 18:25,,,,0,,,,CC BY-SA 4.0 23862,2,,23847,10/1/2020 23:44,,3,,"

This update rule can still be applied in the continuous domain.

As pointed out in the comments, suppose we are parameterising our policy using a Gaussian distribution, where our neural network takes as input the state we are in and outputs the parameters of a Gaussian distribution, the mean and the standard deviation, which we will denote as $\mu(s, \theta)$ and $\sigma(s, \theta)$, where $s$ shows the dependency on the state and $\theta$ are the parameters of our network.

I will assume a one-dimensional case for ease of notation but this can be extended to multi-variate cases. Our policy is now defined as $$\pi(a_t | s_t) = \frac{1}{\sqrt{2\pi \sigma(s_t, \theta)^2}} \exp\left(-\frac{1}{2}\left(\frac{a_t - \mu(s_t, \theta)}{\sigma(s_t, \theta)}\right)^2\right).$$

As you can see, we can easily take the logarithm of this and find the derivative with respect to $\theta$, and so nothing changes and the loss you use is the same. You simply evaluate the derivative of the log of your policy with respect to the network parameters, multiply by $v_t$ and $\alpha$ and take a gradient step in this direction.
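Writing the logarithm of the Gaussian policy out explicitly (this is just the standard expansion of the density above, nothing new):

$$\log \pi(a_t \mid s_t) = -\log \sigma(s_t, \theta) - \frac{1}{2}\log(2\pi) - \frac{1}{2}\left(\frac{a_t - \mu(s_t, \theta)}{\sigma(s_t, \theta)}\right)^2,$$

and the gradient with respect to $\theta$ flows through $\mu(s_t, \theta)$ and $\sigma(s_t, \theta)$ via backpropagation.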

To implement this (as I'm assuming you don't want to calculate the NN derivatives by hand), you could do something along the lines of the following in PyTorch.

First you want to pass your state through your NN to get the mean and standard deviation of the Gaussian distribution. Then you want to simulate $z \sim N(0,1)$ and calculate $a = \mu(s,\theta) + \sigma(s, \theta) \times z$ so that $a \sim N( \mu(s, \theta), \sigma(s, \theta))$ -- this is the reparameterisation trick that makes backpropagation through the network easier, as it takes the randomness from a source that doesn't depend on the parameters of the network. $a$ is the action that you will execute in your environment, and you can then calculate the gradient by simply writing code along the lines of torch.log(normal_pdf(a, mu, sigma)).backward() -- here normal_pdf() is any function in Python that calculates the pdf of a normal distribution for a given point and parameters.
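As a concrete illustration, here is a minimal PyTorch sketch of the update just described (the class name GaussianPolicy, the network sizes and the use of torch.distributions are my own choices, not part of the original answer; the action is sampled directly rather than via the reparameterisation trick, which gives the same score-function gradient):

import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh())
        self.mu_head = nn.Linear(hidden, 1)         # mean of the 1-D Gaussian
        self.log_sigma_head = nn.Linear(hidden, 1)  # log of the standard deviation

    def forward(self, state):
        h = self.body(state)
        return self.mu_head(h), self.log_sigma_head(h).exp()

policy = GaussianPolicy(state_dim=4)                # state_dim=4 is just an example
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reinforce_step(state, v_t):
    # One gradient step on -v_t * log pi(a|s), i.e. gradient ascent on v_t * log pi(a|s)
    mu, sigma = policy(state)
    dist = torch.distributions.Normal(mu, sigma)
    action = dist.sample()                          # a ~ N(mu(s, theta), sigma(s, theta))
    loss = -(v_t * dist.log_prob(action)).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return action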

",36821,,36821,,10/2/2020 22:42,10/2/2020 22:42,,,,12,,,,CC BY-SA 4.0 23863,2,,23850,10/1/2020 23:52,,2,,"
  1. How much the $Q$-values change does not depend on the value of $\epsilon$; rather, the value of $\epsilon$ dictates how likely you are to take a random action and thus take an action that could give rise to a large TD error -- that is, a large difference between the returns you expected from taking this action and what you actually observed. How much a $Q$-value changes depends on the magnitude of this TD error (the update rule is written out after this list).

  2. $Q$-learning is not guaranteed to converge if there is no exploration. Part of the convergence criteria assumes that each state-action pair will be visited infinitely often in an infinite number of episodes, and so if there is no exploration then this will not happen.
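For concreteness, here is the standard tabular $Q$-learning update (the usual textbook formula, not anything specific to this question); note that $\epsilon$ does not appear in it at all -- it only influences which action $a_t$ is actually taken:

$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \underbrace{\left[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right]}_{\text{TD error}}$$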

",36821,,,,,10/1/2020 23:52,,,,0,,,,CC BY-SA 4.0 23864,1,23975,,10/2/2020 6:54,,0,74,"

I'm just starting to learn about TensorFlow and NNs. As an exercise, I decided to create a dataset of images, watermarked and not, in order to binary-classify them. First of all, the dataset (you can see it here) was created artificially by me, applying some random watermarks. First doubt: in the dataset I don't have both versions of each image, one watermarked and one not -- would it be better to have them? Second, and frustratingly: the model stays at 0.5 accuracy, so it just produces random output :( The model I tried is this:

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16,(1,1), activation='relu', input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPool2D(2,2),
    tf.keras.layers.Conv2D(32,(3,3), activation='relu'),
    tf.keras.layers.MaxPool2D(2,2),
    tf.keras.layers.Conv2D(64,(3,3), activation='relu'),
    tf.keras.layers.MaxPool2D(2,2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='elu'),
    tf.keras.layers.Dense(64, activation='elu'),
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(1,activation="sigmoid")
])

and then compiled as this:

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics = ['accuracy'])

Here below the fit:

history = model.fit(train_data,
                              validation_data=valid_data,
                              steps_per_epoch=100,
                              epochs=15,
                              validation_steps=50,
                              verbose=2)

As for any other details, the code is here. I already checked for technical issues: I'm pretty sure the images are loaded properly, the train and validation datasets are split 80/20, and there are about 12K images for training. However, the accuracy bounces up and down around 0.5 while fitting. How can I improve this?

",41346,,,,,10/8/2020 15:05,Image Classification for watermarks with poor results,,1,0,,,,CC BY-SA 4.0 23871,1,,,10/2/2020 10:31,,1,56,"

I have $N$ (time) sequences of data with length $2048$. Each of these sequences corresponds to a different target output. However, I know that only a small part of the sequence is needed to actually predict this target output, say a sub-sequence of length $128$.

I could split up each of the sequences into $16$ partitions of $128$, so that I end up with $16N$ training samples. However, I could drastically increase the number of training samples if I use a sliding window instead: there are $2048-128+1 = 1921$ unique sub-sequences of length $128$ that preserve the time series. That means I could in fact generate $1921N$ unique training samples, even though most of the input is overlapping.

I could also use a larger increment between individual "windows", which would reduce the number of sub-sequences but it could remove any autocorrelation between them.

Is it better to split my data into $16N$ non-overlapping sub-sequences or $1921N$ partially overlapping sub-sequences?

",5344,,5344,,10/7/2020 7:42,10/7/2020 7:42,Is it better to split sequences into overlapping or non-overlapping training samples?,,0,0,,,,CC BY-SA 4.0 23874,2,,7408,10/2/2020 14:43,,1,,"

Firstly, note that the Gaussian policies you describe are not equivalent to $\epsilon$-greedy, mainly for this reason: for a fixed policy, the policy's variance in the Gaussian case depends on the state, while in the $\epsilon$-greedy case it does not. Right off the bat, the Gaussian policy should achieve less regret than $\epsilon$-greedy.

Other approaches to exploration in continuous action spaces include :

  • Parameterizing the policy differently. You're not limited to gaussians, rather any parameterizable distribution (particularly those that can be reparameterized according to the reparameterization trick) will do.
  • Using an entropy bonus. You can subtract your policy's entropy in the expression for your loss function, which helps prevent your policy from becoming "too deterministic" before the agent learns the environment sufficiently (see the sketch after this list).
  • Surprise/curiosity based methods. By this I mean methods that do reward shaping based on some measure of uncertainty in the policy -- at each transition, this measure of uncertainty is added to the reward. See "Exploration by Random Network Distillation" for example.
  • Maximum Entropy methods. These have slightly different objectives than standard RL that also emphasize policy entropy, so they should promote exploration. See SAC for example.
  • Use deterministic policy gradients. Then you can literally apply $\epsilon$-greedy if you want, or simply add noise to the output of the policy. See TD3 for example.

I doubt this is an exhaustive list, but I hope it helps.
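To illustrate the entropy bonus mentioned in the list above, here is a minimal PyTorch sketch (the function name and the coefficient value are my own illustrative choices):

import torch

def policy_loss_with_entropy(dist, actions, advantages, entropy_coef=0.01):
    # dist is a torch.distributions.Distribution produced by the policy network
    log_probs = dist.log_prob(actions)
    pg_loss = -(log_probs * advantages).mean()      # standard policy-gradient loss
    entropy_bonus = dist.entropy().mean()
    # Higher entropy -> lower loss, so the optimizer is discouraged from
    # collapsing the policy to a near-deterministic one too early.
    return pg_loss - entropy_coef * entropy_bonus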

",37829,,,,,10/2/2020 14:43,,,,0,,,,CC BY-SA 4.0 23875,1,23887,,10/2/2020 15:48,,3,2073,"

I'm unable to find online, or understand from context, the difference between estimation error and approximation error in the context of machine learning (and, specifically, reinforcement learning).

Could someone please explain with the help of examples and/or references?

",35585,,2444,,12/14/2021 10:55,12/14/2021 11:38,What's the difference between estimation and approximation error?,,1,0,,,,CC BY-SA 4.0 23876,1,,,10/2/2020 16:33,,-1,80,"

I want to create a system so that, when a human says a word or command through a microphone, such as "shut down", the system can execute that command.

I used the Deep Speech algorithm on a Persian-language database; it takes audio data through a microphone and returns the text. The problem I have now is what to do from this point on. I should explain that my system has to work offline and that the number of commands I have is limited.

",41355,,,,,10/2/2020 21:45,speech comment detection by deep speech mozilla for data set,,1,0,0,5/30/2022 1:45,,CC BY-SA 4.0 23877,1,24033,,10/2/2020 16:43,,1,74,"

I am trying to create an autonomous car using keyboard data, so this is a multi-class classification problem. I have the keys W, A, S and D, so I have four categories. My model should decide which key should be pressed based on a screenshot (or some other data; I have some ideas). I have an API that I can use to capture keyboard data and the screen (while gathering data) and also to simulate keyboard events (in autonomous mode, when the car is driven by the neural network).

Should I create another category, called for example "NOKEY"? I will use a sigmoid function on each output neuron (instead of using softmax over all the neurons) to have probabilities from 0 to 1 for each category. But I could end up with very low probabilities for every neuron, and that can mean either that no key should be pressed or that the network doesn't know what to do. So maybe I should just create an additional "artificial" category? What is the standard way to deal with such situations?

",,user40943,,user40943,10/5/2020 21:32,10/14/2020 18:58,Should I use additional empty category in some categorical problems?,,2,2,,,,CC BY-SA 4.0 23880,2,,23876,10/2/2020 21:45,,1,,"

The problem you state is a well-known problem, and it is called "keyword spotting" or KWS. If you add a wake-up word before it (like "hey Google/Siri"), you can also use a "voice command" system to alleviate the problem.

There are two kinds of KWS systems: those that are developed to detect a hard-coded set of keywords, and those that are flexible enough to amend the keyword set. As you state that you trained an acoustic model using Deep Speech, it looks like you chose the latter approach.

If you trained a Deep Speech model, you have done the easy half of the solution. The other half, which is more tricky, is to develop a search algorithm based on your acoustic model. I think WFST-based algorithms and prefix beam search algorithms are what you should be looking for.

Regards

",41365,,,,,10/2/2020 21:45,,,,0,,,,CC BY-SA 4.0 23881,2,,23775,10/3/2020 0:06,,1,,"

Disclaimer: I am not an attorney and this does not constitute formal legal advice.

  • If the output is novel the copyright resides with the creator

In this case, almost certainly the human who utilizes the algorithm†. There was a recent US patent case "Dabus" [U.S. Patent Application No.: 16/524,350] where the human programmers tried to claim an AI as an inventor, which was rejected by USPTO. This is largely because an inventor is defined as a "natural person". But it's an interesting challenge to the notion of inventorship and authorship.

An argument regarding output specifically as a salable commodity would be that most human creative endeavor is the result of previous work that forms the basis for the novel arrangement of elements in the form of an original work.

Music is more specific because melodies are mathematical, and they are what is traditionally protected, although the Blurred Lines case blurred this line, in that the ruling there was that the production aspect, the "feel", was what was plagiarized.

And, in fact, algorithmic music generation, which is heavily utilized in pop music, depends on combinatorial functions producing novel sequences, not protected by prior copyright.

Samples are clear cut—it is excerpting from copyrighted work. A wave form is then copyrightable, in that a work of recorded music is simply an arrangement of waveforms in some combination and sequence.

If the output is procedural, and a process that can be automated, that would enter the domain of patent law, utility patents specifically.

Design output would also be in the realm of design patents, which are specific/generalized arrangements of elements in products that are not processes.


A note on liability:

Law is a process, where precedent plays a major factor. Because law uses natural language, there is ambiguity, and jurisprudence is the process of clarifying the meaning and application via challenges. Especially when in uncharted territory "no one knows" until a case has been ruled on, and rulings can be challenged all the way up to the Supreme Court, which may or may not accept a suit.

Damages for copyright are 3x, but damages can be very hard to prove. Intellectual Property litigation is also enormously expensive, thus unlikely to be pursued unless there is a potential financial benefit. (The first step in any potential copyright violation is typically a "cease and desist" letter, with no legal action if acceded to. In a case such as "Blurred Lines", where there is significant financial return to the alleged infringer, the award or settlement presumably exceeds the cost of litigation.)

However, deep-pocket players can use litigation, or threat of litigation, to disincentivize competitors, and, if the target has limited resources, can produce the desired outcome regardless of the strength of the claim. (It's not uncommon in law in general to file frivolous suits as a strategy, although there is typically a financial penalty such as reimbursement of the defendant's legal fees if the suit is found to be frivolous and dismissed.) "Patent trolling" became so much of a problem before 2013, the entire philosophy of patentability, what is patentable, had to be revisited.


†Who has rights to an algorithmic process is also a legal question.

Software is patentable, being simply a type of computer, regardless of medium, but most software is not patented, and is protected instead as "trade secrets" and via non-disclosure agreements. (This may be due to the glacial pace of the patent process compared to the software development process, but also due to the necessity of making patents public, i.e. if Google patented their search algorithm, it would be an instruction manual on how to exploit it.)

Software is copyrightable as the specific lines of code. Copyright naturally resides with the creator, but the process of copyright registration can be utilized to document the right against potential infringement.

",1671,,1671,,10/3/2020 1:52,10/3/2020 1:52,,,,4,,,,CC BY-SA 4.0 23883,1,,,10/3/2020 8:49,,1,41,"

What machine learning tool can understand from which location and orientation a picture was taken? That is, from pictures of similar objects, say, for example, pictures of car interiors.

So, given a side-vent picture, it will show me all the pictures with a side vent seen from similar views (if a picture shows a vent from the cockpit, it will not show me pictures of vents taken from outside the car with the door open).

If the problem is too complicated for just one tool, could you point me to which particular field of artificial intelligence I should research?

This comes close to what I am looking for: https://ai.googleblog.com/2020/03/real-time-3d-object-detection-on-mobile.html

But I thought I would ask if there are more specific, appropriate examples.

",41370,,,,,10/3/2020 8:49,Machine Learning Techniques for Objects Location/Orientation in Images,,0,0,,,,CC BY-SA 4.0 23884,1,,,10/3/2020 11:03,,2,1074,"

During transfer learning in computer vision, I've seen that the layers of the base model are frozen if the new images aren't too different from the data on which the base model was trained.

However, on the NLP side, I see that the layers of the BERT model aren't ever frozen. What is the reason for this?

",41373,,2444,,10/3/2020 14:37,1/7/2023 15:50,Why aren't the BERT layers frozen during fine-tuning tasks?,,1,4,,,,CC BY-SA 4.0 23885,1,,,10/3/2020 12:45,,0,100,"

I've written my own version of SAC (v2) for a problem with a continuous action space. While training, the losses for the value network and both Q functions steadily decrease down to 0.02-0.03. The loss for my actor/agent is negative and decreases to about -0.25 (I've read that it doesn't matter whether it is negative or not, but I'm not 100% sure). Despite that, the output variance from the Gaussian policy is way too high (making all outcomes nearly uniformly likely) and is not decreasing during training. Does anyone know what could be the cause of that?

My implementation is mostly based on https://github.com/keiohta/tf2rl/blob/master/tf2rl/algos/sac.py, but I opted not to compute td_errors.

Here is the code (in case you need it).

import tensorflow as tf
from tensorflow.keras.layers import *
from src.anfis.anfis_layers import *
from src.model.sac_layer import *
from src.anfis.anfis_model import AnfisGD

hidden_activation = 'elu'
output_activation = 'linear'


class NetworkModel:
    def __init__(self, training):

        self.parameters_count = 2
        self.results_count = 1
        self.parameters_sets_count = [3, 4]
        self.parameters_sets_total_count = sum(self.parameters_sets_count)

        self.models = {}
        self._initialise_layers()  # initialises self.models[]

        self.training = training

        self.train()

    def _initialise_layers(self):
        # ------------
        # LAYERS & DEBUG
        # ------------

        f_states = Input(shape=(self.parameters_count,))
        f_actions = Input(shape=(self.results_count,))

        # = tf.keras.layers.Dense(10)# AnfisGD(self.parameters_sets_count)
        #f_anfis = model_anfis(densanf)#model_anfis(f_states)
        f_policy_1 = tf.keras.layers.Dense(5, activation=hidden_activation)(f_states)
        f_policy_2 = tf.keras.layers.Dense(5, activation=hidden_activation)(f_policy_1)
        f_policy_musig = tf.keras.layers.Dense(2, activation=output_activation)(f_policy_2)
        f_policy = GaussianLayer()(f_policy_musig)

        #self.models["anfis"] = tf.keras.Model(inputs=f_states, outputs=f_anfis)
        #self.models["forward"] = tf.keras.Model(inputs=f_states, outputs=model_anfis.anfis_forward(f_states))

        self.models["actor"] = tf.keras.Model(inputs=f_states, outputs=f_policy)

        self.models["critic-q-1"] = generate_q_network([f_states, f_actions])
        self.models["critic-q-2"] = generate_q_network([f_states, f_actions])

        self.models["critic-v"] = generate_value_network(f_states)
        self.models["critic-v-t"] = generate_value_network(f_states)

        # self.models["anfis"].compile(
        #     loss=tf.losses.mean_absolute_error,
        #     optimizer=tf.keras.optimizers.SGD(
        #         clipnorm=0.5,
        #         learning_rate=1e-3),
        #     metrics=[tf.keras.metrics.RootMeanSquaredError()]
        # )
        # self.models["forward"].compile(
        #     loss=tf.losses.mean_absolute_error,
        #     optimizer=tf.keras.optimizers.SGD(
        #         clipnorm=0.5,
        #         learning_rate=1e-3),
        #     metrics=[tf.keras.metrics.RootMeanSquaredError()]
        # )
        self.models["actor"].compile(
            loss=tf.losses.mean_squared_error,
            optimizer=tf.keras.optimizers.Adam(
                learning_rate=1e-3),
            metrics=[tf.keras.metrics.RootMeanSquaredError()]
        )

    def act(self, din):
        data_input = tf.convert_to_tensor([din], dtype='float64')
        data_output = self.models["actor"](data_input)[0]
        return data_output.numpy()[0]

    def train(self):
        self.training.train(self, hybrid=False)


def mean(y_true, y_pred): #ignore y_pred
    return tf.reduce_mean(y_true)


def generate_value_network(inputs):
    # SAC Critic Value (Estimating rewards of being in state s)
    f_critic_v1 = tf.keras.layers.Dense(5, activation=hidden_activation)(inputs)
    f_critic_v2 = tf.keras.layers.Dense(5, activation=hidden_activation)(f_critic_v1)
    f_critic_v = tf.keras.layers.Dense(1, activation=output_activation)(f_critic_v2)
    m_value = tf.keras.Model(inputs=inputs, outputs=f_critic_v)
    m_value.compile(
        loss=tf.losses.mean_squared_error,
        optimizer=tf.keras.optimizers.Adam(
            learning_rate=1e-3),
        metrics=[tf.keras.metrics.RootMeanSquaredError()]
    )
    return m_value


def generate_q_network(inputs):
    # SAC Critic Q (Estimating rewards of taking action a while in state s)
    f_critic_q_concatenate = tf.keras.layers.Concatenate()(inputs)
    f_critic_q1 = tf.keras.layers.Dense(5, activation=hidden_activation)(f_critic_q_concatenate)
    f_critic_q2 = tf.keras.layers.Dense(5, activation=hidden_activation)(f_critic_q1)
    f_critic_q = tf.keras.layers.Dense(1, activation=output_activation)(f_critic_q2)

    m_q = tf.keras.Model(inputs=inputs, outputs=f_critic_q)
    m_q.compile(
        loss=tf.losses.mean_squared_error,
        optimizer=tf.keras.optimizers.Adam(
            learning_rate=1e-3),
        metrics=[tf.keras.metrics.RootMeanSquaredError()]
    )
    return m_q;

from src.model.training import Training
import numpy as np
import tensorflow as tf
from src.constructs.experience_holder import ExperienceHolder


class SACTraining(Training):

    def __init__(self, environment):
        super().__init__()
        self.environment = environment
        self.models = None
        self.parameters_sets_count = None
        self.parameters_sets_total_count = 0
        self.parameters_count = 0

        self.gamma = 0.99
        self.alpha = 1.0
        self.beta = 0.003
        self.tau = 0.01

        self.experience = ExperienceHolder(capacity=10000, cells=5)  # state, action, reward, state', done

    def train(self, simulation_model, **kwargs):
        self.models = simulation_model.models
        self.parameters_count = simulation_model.parameters_count
        self.parameters_sets_count = simulation_model.parameters_sets_count
        self.parameters_sets_total_count = simulation_model.parameters_sets_total_count

        self.train_sac(
            self.models,
            epochs=300, max_steps=200, experience_batch=128, simulation=self.environment)

    def train_sac(self, models, epochs, max_steps, experience_batch, simulation):

        # deterministic random
        np.random.seed(0)

        history = []
        epoch_steps = 128
        simulation.reset()
        update_net(models['critic-v'], models['critic-v-t'], 1.0)

        for i in range(epochs):
            print("epoch: ", i)
            episode_reward = 0
            reset = False
            j = 0
            while not(j > epoch_steps and reset):
                j += 1
                reset = False
                # ---------------------------
                # Observe state s and select action according to current policy
                # ---------------------------

                # Get simulation state
                state_raw = simulation.get_normalised()
                # state_unwound = [[i for t in state for i in t]]

                state = [state_raw[0]]  # TODO
                state_tf = tf.convert_to_tensor(state)

                # Get actions distribution from current model
                # and their approx value from critic
                actions_tf, _, _ = models['actor'](state_tf)
                actions = list(actions_tf.numpy()[0])

                # ---------------------------
                # Execute action in the environment
                # ---------------------------
                reward, done = simulation.step_nominalised(actions)
                episode_reward += reward

                # ---------------------------
                # Observe next state
                # ---------------------------

                state_l_raw = simulation.get_normalised()
                state_l = [state_l_raw[0]]  # TODO

                # ---------------------------
                # Store information in replay buffer
                # ---------------------------

                self.experience.save((state, actions, reward, state_l, 1 if not done else 0))

                if done or simulation.step_counter > max_steps:
                    simulation.reset()
                    reset = True

            # ---------------------------
            # Updating network
            # ---------------------------
            if self.experience.size() > 500:  # update_counter_limit:
                exp = self.experience.replay(min(experience_batch, int(self.experience.size() * 0.8)))
                states_tf = tf.convert_to_tensor(exp[0], dtype='float64')
                actions_tf = tf.convert_to_tensor(exp[1], dtype='float64')
                rewards_tf = tf.convert_to_tensor(exp[2], dtype='float64')
                states_l_tf = tf.convert_to_tensor(exp[3], dtype='float64')
                not_dones_tf = tf.convert_to_tensor(exp[4], dtype='float64')

                with tf.GradientTape(watch_accessed_variables=True, persistent=True) as tape:

                    q_1_current = models['critic-q-1']([states_tf, actions_tf])
                    q_2_current = models['critic-q-2']([states_tf, actions_tf])
                    v_l_current = models['critic-v-t'](states_l_tf)

                    q_target = tf.stop_gradient(rewards_tf + not_dones_tf * self.gamma * v_l_current)
                    q_1_loss = tf.reduce_mean((q_target - q_1_current) ** 2)
                    q_2_loss = tf.reduce_mean((q_target - q_2_current) ** 2)

                    v_current = models['critic-v'](states_tf)
                    actions, policy_loss, sigma = models['actor'](states_tf)
                    q_1_policy = models['critic-q-1']([states_tf, actions_tf])
                    q_2_policy = models['critic-q-2']([states_tf, actions_tf])
                    q_min_policy = tf.minimum(q_1_policy, q_2_policy)

                    v_target = tf.stop_gradient(q_min_policy - self.alpha * policy_loss)
                    v_loss = tf.reduce_mean((v_target - v_current)**2)

                    a_loss = tf.reduce_mean(self.alpha * policy_loss - q_min_policy)

                backward(tape, models['critic-q-1'], q_1_loss)
                backward(tape, models['critic-q-2'], q_2_loss)
                backward(tape, models['critic-v'], v_loss)
                update_net(models['critic-v'], models['critic-v-t'], self.tau)

                backward(tape, models['actor'], a_loss)

                del tape
           
                print('Loss:\n\tvalue: {}\n\tq1   : {}\n\tq2   : {}\n\tactor (ascent): {}'.format(
                     tf.reduce_mean(v_loss),
                     tf.reduce_mean(q_1_loss),
                     tf.reduce_mean(q_2_loss),
                     tf.reduce_mean(a_loss) #Gradient ascent

                ))
                print('Episode Reward: {}'.format(episode_reward))
                print('Batch sigma: {}'.format(tf.reduce_mean(sigma)))


def update_net(model, target, tau):
    len_vars = len(model.trainable_variables)
    for i in range(len_vars):
        target.trainable_variables[i] = tau * model.trainable_variables[i] + (1.0 - tau) * target.trainable_variables[i]


def backward(tape, model, loss):
    grads = tape.gradient(loss, model.trainable_variables)
    model.optimizer.apply_gradients(
        zip(grads, model.trainable_variables))

from tensorflow.keras import Model
import tensorflow as tf
import tensorflow_probability as tfp


class GaussianLayer(Model):
    def __init__(self, **kwargs):
        super(GaussianLayer, self).__init__(**kwargs)

    def call(self, inputs, **kwargs):
        mu, log_sig = tf.split(inputs, num_or_size_splits=2, axis=1)

        log_sig_clip = tf.clip_by_value(log_sig, -20, 2)
        sig = tf.exp(log_sig_clip)

        distribution = tfp.distributions.Normal(mu, sig)
        output = distribution.sample()
        actions = tf.tanh(output)

        return actions, \
            distribution.log_prob(output) - \
            tf.reduce_sum(tf.math.log(1 - actions ** 2 + 1e-12), axis=1, keepdims=True), \
            tf.stop_gradient(tf.keras.backend.abs(actions - tf.tanh(mu)))
",41374,,41374,,10/4/2020 11:41,10/4/2020 11:41,Variance of the Gaussian policy is not decreasing while training the agent using Soft Actor-Critic method,,0,2,,,,CC BY-SA 4.0 23887,2,,23875,10/3/2020 16:50,,6,,"

Section 5.2 Error Decomposition of the book Understanding Machine Learning: From Theory to Algorithms (2014) gives a description of the approximation error and estimation error in the context of empirical risk minimization (ERM) and, in particular, in the context of the bias-complexity tradeoff (which is strictly related to the bias-variance tradeoff).

Error/risk decomposition

The expected risk (error) of a hypothesis $h_S \in \mathcal{H}$ selected based on the training dataset $S$ from a hypothesis class $\mathcal{H}$ can be decomposed into the approximation error, $\epsilon_{\mathrm{app}}$, and the estimation error, $\epsilon_{\mathrm{est}}$, as follows

\begin{align} L_{\mathcal{D}}\left(h_{S}\right) &= \epsilon_{\mathrm{app}}+\epsilon_{\mathrm{est}} \\ &= \epsilon_{\mathrm{app}}+ \left( L_{\mathcal{D}}\left(h_{S}\right)-\epsilon_{\mathrm{app}} \right) \\ &= \left( \min _{h \in \mathcal{H}} L_{\mathcal{D}}(h)\right) + \left( L_{\mathcal{D}}\left(h_{S}\right)-\epsilon_{\mathrm{app}} \right) \label{1}\tag{1} \end{align}

Approximation error

The approximation error (AE), aka inductive bias, defined as

$$\epsilon_{\mathrm{app}} = \min _{h \in \mathcal{H}} L_{\mathcal{D}}(h) $$

is the error due to the specific choice of hypothesis class (or set) $\mathcal{H}$. So, $\min _{h \in \mathcal{H}} L_{\mathcal{D}}(h)$ is the minimal risk/error that can be achieved with a hypothesis class $\mathcal{H}$. In other words, if you limit yourself to $\mathcal{H}$ and you select the "best" hypothesis in $\mathcal{H}$, then $\min _{h \in \mathcal{H}} L_{\mathcal{D}}(h)$ is the expected risk of that hypothesis.

Here are some properties.

  • The larger $\mathcal{H}$ is, the smaller this error is (because it's more likely that a larger hypothesis class contains the actual hypothesis we are looking for). So, if $\mathcal{H}$ does not contain the actual hypothesis we are searching for, then this error could not be zero.

  • This error does not depend on the training data. You can see in the formula above that there's no $S$ (the training dataset), but only $D$ (the distribution over the space of inputs and labels from which $S$ was assumed to have been sampled).

Estimation error

The estimation error (EE) is the difference between the approximation error $\epsilon_{\mathrm{app}}$ and the training error $L_{\mathcal{D}}\left(h_{S}\right)$, i.e.

\begin{align} \epsilon_{\mathrm{est}} &=L_{\mathcal{D}}\left(h_{S}\right)-\epsilon_{\mathrm{app}} \\ &= L_{\mathcal{D}}\left(h_{S}\right) - \min _{h \in \mathcal{H}} L_{\mathcal{D}}(h) \end{align}

Here are some properties.

  • The estimation error depends on the training dataset $S$. You can see $S$ in the formula above.

  • $\epsilon_{\mathrm{est}}$ also depends on the choice of the hypothesis class (given that it is defined as a function of $\epsilon_{\mathrm{app}}$).
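As a simple numerical illustration (with made-up numbers): if the best hypothesis in $\mathcal{H}$ has expected risk $\min _{h \in \mathcal{H}} L_{\mathcal{D}}(h)=0.10$, while the hypothesis $h_S$ selected based on the training dataset $S$ has expected risk $L_{\mathcal{D}}\left(h_{S}\right)=0.15$, then $\epsilon_{\mathrm{app}}=0.10$ and $\epsilon_{\mathrm{est}}=0.15-0.10=0.05$.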

Bias-complexity tradeoff

If we increase the size and complexity of the hypothesis class, the approximation error decreases, but the estimation error may increase (i.e. we may have over-fitting). On the other hand, if we decrease the size and complexity of the hypothesis class, the estimation error may decrease, but the bias may increase (i.e. we may have under-fitting). So, we have a bias-complexity trade-off (where the bias refers to the approximation error or inductive bias) and the complexity refers to the complexity of the hypothesis class.

Error excess

Section 4.1 of this book also describes a similar (but equivalent) error decomposition, which is called the error excess because it's the difference between the expected risk and the Bayes error (which is sometimes called the inherent error or irreducible error), which they denote by $R^{*}$, while the other book above, which also points out that there's this equivalent error excess decomposition, denotes it by $\epsilon_{\mathrm{Bayes}}$. So, here's the error excess

$$R(h)-R^{*}=\underbrace{\left(\inf _{h \in \mathcal{H}} R(h)-R^{*}\right)}_{\text {approximation excess}} + \underbrace{\left(R(h)-\inf _{h \in \mathcal{H}} R(h)\right)}_{\text {estimation error}}$$

So, if you add $R^{*} = \epsilon_{\mathrm{Bayes}}$ to both sides of the equation, you end up with

$$R(h)=\underbrace{\left(\inf _{h \in \mathcal{H}} R(h) \right)}_{\epsilon_{\mathrm{app}}} + \underbrace{\left(R(h)-\inf _{h \in \mathcal{H}} R(h)\right)}_{\epsilon_{\mathrm{est}}} \label{2}\tag{2}$$

which is equivalent to equation \ref{1}, where

  • $L_{\mathcal{D}}\left(h_{S}\right) \equiv R(h)$
  • $\min _{h \in \mathcal{H}} L_{\mathcal{D}}(h) \equiv \inf _{h \in \mathcal{H}} R(h)$

Illustration

A nice picture that illustrates the relationship between these terms can be found in figure 4.1 of this book (p. 62).

Here, the red points are specific hypotheses. In this illustration, we can see that the best hypothesis (the Bayes hypothesis) lies outside our chosen hypothesis class $\mathcal{H}$. The distance between the risk of $h \in \mathcal{H}$ and the risk of $h^* = \operatorname{arg inf} _{h \in \mathcal{H}} R(h)$ is the estimation error, while the distance between $h^*$ and the Bayes hypothesis (i.e. the hypothesis that achieves the Bayes error) is the approximation excess in equation \ref{2}.

",2444,,2444,,12/14/2021 11:38,12/14/2021 11:38,,,,2,,,,CC BY-SA 4.0 23888,1,,,10/3/2020 18:33,,1,30,"

I have a dataset with four features:

  • the x coordinate
  • the y coordinate
  • the velocity magnitude
  • angle

Now, I want to measure the distance between two points in the dataset, taking into account the fact that the angle dimension is toroidal, and also the difference in nature of the dimensions (two of them are distances, one of them is a velocity magnitude, and the other an angle).

What kind of distance function would suit this need?

If I have to go for an $L^p$ norm, can I determine which value of $p$ would be apt by some means?

Also, if you are aware, please, let me know how such problems have been solved in various applications.

",35576,,2444,,10/4/2020 21:37,10/4/2020 21:37,How to find distance between 2 points when dimensions are all of different nature?,,0,4,,,,CC BY-SA 4.0 23889,1,23923,,10/3/2020 20:18,,7,3300,"

I'm trying to implement the Transformer model using this tutorial. In the decoder block of the Transformer model, a mask is passed to "pad and mask future tokens in the input received by the decoder". This mask is added to the attention weights.

import tensorflow as tf

def create_look_ahead_mask(size):
    mask = 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)
    return mask

Now my question is: how is doing this step (adding the mask to the attention weights) equivalent to revealing the words to the model one by one? I simply can't grasp the intuition of its role. Most tutorials won't even mention this step, as if it were obvious. Please help me understand. Thanks.

",41379,,41379,,10/6/2020 10:06,2/27/2021 13:54,What is the purpose of Decoder mask (triangular mask) in Transformer?,,3,0,,,,CC BY-SA 4.0 23891,1,23895,,10/4/2020 4:35,,1,85,"

We have an n-dimensional input for the SOM and the output is a 2-D map of clusters. How does this happen?

",41226,,,,,10/4/2020 15:33,How does dimensionality reduction occur in Self organizing Map (SOM)?,,1,0,,,,CC BY-SA 4.0 23895,2,,23891,10/4/2020 15:20,,2,,"

SOM (Self-Organinizing Map) is a type of artificial neural network (ANN), introduced by the Finnish professor Teuvo Kohonen in the 1980s, that is trained using unsupervised learning to produce a low-dimensional, discretized representation of the input space of the training samples, called a map, and is therefore a method to do dimensionality reduction.

SOM produces a mapping from a multidimensional input space onto a lattice of clusters, i.e. neurons, in a way that preserves their topology, so that neighboring neurons respond to “similar” input patterns.

It uses three basic processes:

  • Competition
  • Cooperation
  • Adaptation

In competition, each neuron is assigned a weight vector with the same dimensionality d as the input space. Any given input pattern is compared to the weight vector of each neuron and the closest neuron is declared the winner.

In cooperation, the activation of the winning neuron is spread to neurons in its immediate neighborhood, and as a result this allows topologically close neurons to become sensitive to similar patterns. The size of the neighborhood is initially large, but shrinks over time, where an initially large neighborhood promotes a topology-preserving mapping and smaller neighborhoods allows neurons to specialize in the latter stages of training.

In adaptation, the winner neuron and its topological neighbors are adapted to make their weight vectors more similar to the input pattern that caused the activation.
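The adaptation step is usually written as follows (this is the standard textbook formulation, with my own choice of symbols):

$$\textbf{w}_{j}(t+1)=\textbf{w}_{j}(t)+\eta(t)\, h_{j, i^{*}(\textbf{x})}(t)\,\big(\textbf{x}-\textbf{w}_{j}(t)\big), \qquad i^{*}(\textbf{x})=\arg\min_{k}\lVert \textbf{x}-\textbf{w}_{k}(t)\rVert,$$

where $\eta(t)$ is a decaying learning rate and $h_{j, i^{*}}(t)$ is the (shrinking) neighborhood function centered on the winning neuron $i^{*}$.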

So, the aim of a Self-Organizing Map (SOM) is to encode a large set of input vectors $\textbf{x}$ by finding a smaller set of “representatives”, “prototypes” or “code-book vectors” $\textbf{w}$ that provide a good approximation to the original input space. This is the basic idea of vector quantization theory, whose motivation is dimensionality reduction or data compression. Performing a gradient-descent-style minimization on SOM's loss function (e.g. the sum of the Euclidean distances between the input sample and each neuron) does lead to the SOM weight update algorithm, which confirms that it is generating the best possible discrete low-dimensional approximation to the input space (at least assuming it does not get trapped in a local minimum of the error function).

To answer your question, you should take into consideration that dimensionality reduction takes place in fields that deal with large numbers of observations and/or large numbers of variables. Thus, SOM helps find good "prototypes", in such a way that each input pattern belongs to exactly one of them. As a result, the training instances are mapped to the training "prototypes" and the whole training set is mapped to a new one with fewer instances.

In addition, the "prototype" neurons resulting from SOM can often be used as good centers in RBF networks or to classify patterns with the LVQ family of algorithms.

",36055,,36055,,10/4/2020 15:33,10/4/2020 15:33,,,,3,,,,CC BY-SA 4.0 23896,2,,7608,10/4/2020 16:39,,2,,"

Use of Transposed Convolution can lead to checkerboard artifacts. So we prefer to up-sample and then apply convolution. You can check this article for more information https://distill.pub/2016/deconv-checkerboard/.

",41392,,,,,10/4/2020 16:39,,,,0,,,,CC BY-SA 4.0 23897,1,,,10/4/2020 23:38,,2,54,"

I am trying to create a model that uses a one-shot learning approach for a classification task. We do this because we do not have a lot of data, and it also seems like a good way to learn this approach (it is going to be a university project). The task would be to classify objects, probably from drone/satellite images (zoomed-in, of course).

My question is, do you think it would be ok to use a model for face recognition, such as DeepFace or OpenFace, and, using transfer learning, retrain it on my classes?

",38252,,2444,,10/5/2020 21:11,10/5/2020 21:11,Is it ok to perform transfer learning with a base model for face recognition to perform one-shot learning for object classification?,,0,0,,,,CC BY-SA 4.0 23898,1,23904,,10/4/2020 23:48,,3,1144,"

I am wondering what is believed to be the reason for the superiority of the transformer.

I see that some people believe it is because of the attention mechanism used, which is able to capture much longer dependencies. However, as far as I know, you can also use attention with RNN architectures, as in the famous paper in which attention was introduced (here).

I am wondering whether the only reason for the superiority of transformers is that they can be highly parallelized and trained on much more data.

Is there any experiment comparing transformers and RNN+attention trained on the exact same amount of data?

",40551,,,,,10/5/2020 9:44,Any comparison between transformer and RNN+Attention on the same dataset?,,1,0,,,,CC BY-SA 4.0 23902,1,,,10/5/2020 3:17,,1,825,"

In section 7.3 of the book Artificial Intelligence: A Modern Approach (3rd edition), it's written

An inference algorithm that derives only entailed sentences is called sound or truth-preserving.

The property of completeness is also desirable: an inference algorithm is complete if it can derive any sentence that is entailed.

However, this does not make much sense to me. I'd like someone to kindly elaborate on this.

",31755,,2444,,1/24/2021 13:29,1/24/2021 13:29,What is the difference between derivation and entailment?,,0,1,,,,CC BY-SA 4.0 23904,2,,23898,10/5/2020 9:19,,3,,"

If you go through the main introductory paper of the transformer ("Attention is all you need"), you can find a comparison of the model with other state-of-the-art machine translation methods.

For example, Deep-Att + PosUnk is a method that utilized an RNN and attention for the translation task. As you can see, the training cost for the transformer with self-attention is $2.3 \cdot 10^{19}$ FLOPs versus $1.0 \cdot 10^{20}$ FLOPs for the "Deep-Att + PosUnk" method (the transformer is roughly 4 times cheaper to train) on the "WMT14 English-to-French" dataset.

Please note that BLEU is a crucial factor here (not merely training cost). Hence, you can see that the BLEU value of the transformer is superior to that of the ByteNet (Neural Machine Translation in Linear Time). Although the ByteNet does not use an RNN, you can find a comparison of the ByteNet with other "RNN + Attention" methods in its original paper.

Hence, by the transitivity property of the BLEU score, you can conclude that the transformer already outperforms other "RNN + Attention" methods in terms of BLEU score (please check their performance on the "WMT14" dataset).

",4446,,4446,,10/5/2020 9:44,10/5/2020 9:44,,,,2,,,,CC BY-SA 4.0 23906,1,,,10/5/2020 10:59,,1,78,"

I'm doing a student project where I construct a model predicting the number of languages that a given Wikipedia article is translated into (for example, the article TOYOTA is translated into 93 languages). I've tried extracting basic info (article length, number of links, etc.) to create a simple regression model, but can't get the $R^2$ value above $0.25$ or so.

What's the most appropriate NLP algorithm for regression problems? Almost all examples I find online are classification problems. FYI I'm aware of the basics of NLP preprocessing (tokenization, lemmatization, bag of words, etc).

",41405,,2444,,10/7/2020 10:28,11/1/2021 12:05,What is the best algorithm to solve the regression problem of predicting the number of languages a Wikipedia article can be translated to?,,1,2,,,,CC BY-SA 4.0 23908,2,,11375,10/5/2020 15:25,,8,,"

As a supplement to nbro's nice answer, I think a major difference between RL and optimal control lies in the motivation behind the problem you're solving. As has been pointed out by comments and answers here (as well as the OP), the line between RL and optimal control can be quite blurry.

Consider the Linear-Quadratic-Gaussian (LQG) algorithm, which is generally considered to be an optimal control method. Here a controller is computed given a stochastic model of the environment and a cost function.

Now, consider AlphaZero, which is obviously thought of as an RL algorithm. AlphaZero learns a value function (and thus a policy/controller) in a perfect information setting with a known deterministic model.

So, it's not the stochasticity that separates RL from optimal control, as some people believe. It's also not the presence of a known model. I argue that the difference between RL and optimal control comes from the generality of the algorithms.

For instance, generally, when applying LQG and other optimal control algorithms, you have a specific environment in mind and the big challenge is modeling the environment and the reward function to achieve the desired behavior. In RL, on the other hand, the environment is generally thought of as a sort of black box. While in the case of AlphaZero the model of the environment is known, the reward function itself was not designed specifically for the game of chess (for instance, it's +1 for a win and -1 for a loss, regardless of chess, go, etc.). Furthermore, the neat thing with AlphaZero is that we can use it to train agents in virtually any perfect information game without changing the algorithm at all. Another difference with RL here is that the agent iteratively improves itself, while optimal control algorithms learn controllers offline and then stay fixed.

",37829,,2444,,2/12/2021 20:00,2/12/2021 20:00,,,,5,,,,CC BY-SA 4.0 23909,2,,20794,10/5/2020 16:08,,0,,"

From my understanding of the paper, $Z^{pres}$ keeps track of the objects in the scene. For every step of the sequential inference, $z^{pres,i}$ takes either a 0 or a 1. A 1 represents that an object is present and it has to be explained by the remaining latent variables. 0 indicates that all the objects have been explained and inference is complete.

",41414,,,,,10/5/2020 16:08,,,,0,,,,CC BY-SA 4.0 23910,1,,,10/5/2020 19:08,,14,496,"

The problem of automated theorem proving (ATP) seems to be very similar to playing board games (e.g. chess, go, etc.): it can also be naturally stated as a problem of a decision tree traversal. However, there is a dramatic difference in progress on those 2 tasks: board games are successfully being solved by reinforcement learning techniques nowadays (see AlphaGo and AlphaZero), but ATP is still nowhere near to automatically proving even freshman-level theorems. What does make ATP so hard compared to board games playing?

",41418,,,,,11/7/2020 4:02,Why is automated theorem proving so hard?,,1,3,,,,CC BY-SA 4.0 23915,1,23963,,10/6/2020 0:57,,2,505,"

Pokemon is a game where 2 players each select 6 Pokemon (a team) at the beginning of the game without knowing the other player's team. Every Pokemon has one or two types. Every type is either weak, neutral or strong against every other type. This means that every 2 Pokemon matchup will either have a winner or be a tie. This also means that any team can be ranked against any other team based on the number of winning matchups they have.

I want to write a program that can find the optimal Pokemon team out of a set of 70 provided Pokemon. A team is considered optimal if it has the greatest number of winning matchups against any other team. Basically, I want to calculate which team will have the most amount of favorable matchups if you were to battle it against every other possible team.

What algorithm would be best for doing this? It is not feasible to compute matchups for every possible team. Can I do some sort of A* search with enough pruning to make it computationally feasible?

",41421,,,,,10/8/2020 4:14,How to find the optimal pokemon team,,1,0,,,,CC BY-SA 4.0 23916,2,,23772,10/6/2020 4:14,,1,,"

After reading your question, I can relate it to representation learning papers such as SimCLR and SwAV. These models use a big, task-agnostic CNN to obtain smaller representations of the images, and then they train another CNN for classification. I suggest you read Big Self-Supervised Models are Strong Semi-Supervised Learners by Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi and Geoffrey Hinton. The code for it can be found here. But I feel that training such a model would take up a lot of computational resources.

",40434,,,,,10/6/2020 4:14,,,,0,,,,CC BY-SA 4.0 23920,2,,23906,10/6/2020 7:44,,1,,"

I think it's difficult to tell which algorithm is "the best" or "the simplest".

I had the same issue of choosing a suitable NLP algorithm for my dataset, and I used:

https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html

I then recommend you test as many algorithms as you can to find the best one for your needs.

",41176,,,,,10/6/2020 7:44,,,,0,,,,CC BY-SA 4.0 23922,1,,,10/6/2020 8:41,,1,80,"

What is the status of the capsule networks?

I got the impression that capsule networks turned out not to be so useful in applications more complicated than MNIST (at least according to this reddit discussion).

Is this really the case? Or can they be a promising research direction (and if so, is there any specific application for which they seem the most promising)?

",36769,,,,,10/18/2020 4:14,What is the status of the capsule networks?,,1,0,,,,CC BY-SA 4.0 23923,2,,23889,10/6/2020 10:43,,6,,"

The Transformer model presented in this tutorial is an auto-regressive Transformer, which means that the prediction of the next token depends only on the previous tokens.

So, in order to predict the next token, you have to make sure that only previous tokens are attended to. (Otherwise, this would be cheating, because the model would already know what comes next.)

So the attention mask looks like this:
[0, 1, 1, 1, 1]
[0, 0, 1, 1, 1]
[0, 0, 0, 1, 1]
[0, 0, 0, 0, 1]
[0, 0, 0, 0, 0]

For example, if you are translating English to Spanish:
Input: How are you ?
Target: < start > Como estas ? < end >
Then the decoder will predict something like this:
< start > (it will be given to decoder as initial token)
< start > Como
< start > Como estas
< start > Como estas ?
< start > Como estas ? < end >

Now compare these step-by-step prediction sequences to the attention mask given above; it should make sense to you now.
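To see how the mask actually blocks those positions, here is a minimal sketch of applying it inside scaled dot-product attention (the function name and shapes are my own, not the tutorial's exact code), following the convention from the question where 1 means "masked":

import tensorflow as tf

def masked_attention_weights(q, k, mask):
    # q, k: (..., seq_len, depth); mask: (seq_len, seq_len) with 1 at "future" positions
    logits = tf.matmul(q, k, transpose_b=True)                    # (..., seq_len, seq_len)
    logits /= tf.math.sqrt(tf.cast(tf.shape(k)[-1], tf.float32))
    logits += (mask * -1e9)                                       # blocked positions -> large negative
    return tf.nn.softmax(logits, axis=-1)                         # so they get ~0 attention weight

After the softmax, every query position can only put non-zero weight on itself and earlier positions, which is exactly the "reveal one token at a time" behaviour described above.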

",41110,,,,,10/6/2020 10:43,,,,3,,,,CC BY-SA 4.0 23928,1,23932,,10/6/2020 19:24,,1,56,"

The E step on the EM algorithm asks us to set the value of the variational lower bound to be equal to the posterior probability of the latent variable, given the data points and parameters. Clearly we are not taking any expectations here, then why is it called the Expectation step? Am I missing something here?

",35576,,,,,10/7/2020 8:12,Why is the E step in expectation maximisation algorithm called so?,,1,0,,,,CC BY-SA 4.0 23931,2,,23910,10/5/2020 18:29,,3,,"

There are two ways to look at the problem, one in terms of logic and the other in terms of psychology.

To get any start on the automation of mathematics, you need to formalize the part you want. It has only been since the early part of the 20th century that most day-to-day math has been formalized with logic and set theory. And even though Gödel's incompleteness theorems say (very loosely) that there is no algorithm to decide theorem-hood for mathematical statements (that include a theory of arithmetic), that still leaves a lot of math that can be decided. But it has taken the Reverse Mathematics program (still ongoing) to say specifically which subsets of math are decidable or to what degree (what logical assumptions are necessary) they are undecidable.

So theorems in arithmetic of just '+' (that is, dropping '*') can be decided, Euclidean geometry can be decided, single variable differential calculus can be decided but not single variable integral calculus. These examples show that what we know to be decidable is pretty elementary. And most of the things we care about are very un-elementary (almost by definition).

As to psychology, the theorems and proofs that you learn in mathematics classes are nowhere near like their formalizations. Most mathematicians aren't pushing symbols around in their heads like a computer does. A mathematician is more like an artist, visualizing dreams and connecting metaphors just on their barely conscious images borne out of repetition. That is, machines and mathematicians just work on different representations (despite what non-mathematicians might imagine).


To address your specific question, yes, mathematical theorems and the systems to prove them are very similar to games in a technical sense. Games (often, not always) can be modeled as trees, and likewise proofs can often be modeled as trees. Without writing you a library of books about games and proofs, let's just say that the mathematical proofs that resemble the games won by AlphaZero are not proofs of particularly interesting theorems. Winning a game of go is more like proving a very, very large boolean formula. Most mathematical theorems require a lot of ingenuity in introducing steps in their proof trees. It may be mechanical after the fact to check that a proof is correct, but discovering the proof almost needs magic to come up with a step in the game. Sure, some things in math are automatable (as mentioned before, derivatives), but for some mathematical systems (such as integration) it is provably impossible to find proofs of all true statements.

Another difference between theorem proving and games is that proofs have to be air tight on all paths, whereas with games one side just has to eke out a single win over the other side.


A separate issue entirely that may contribute to the difficulty is that we may just not yet have the tooling available, i.e. editors, notation, and proof assistants that make it easy to do what should be easy. Or it could just be that mathematicians don't have fluency with theorem-proving systems.

Or it could be that if there were automated theorem provers good enough, mathematicians just wouldn't care too much for them because they'd take away the fun of finding the proofs themselves.

",41442,Mitch,41442,,10/8/2020 3:13,10/8/2020 3:13,,,,5,,,,CC BY-SA 4.0 23932,2,,23928,10/6/2020 22:21,,1,,"

In the expectation step, we first calculate the posterior of the latent variable $Z$, and then $Q(θ | θ^{(t)})$ is defined as the expected value of the log-likelihood of $θ$, with respect to the current conditional distribution of $Z$ given $X$ and the current estimate of the parameters $θ^{(t)}$. In the maximization step, we update $θ$ by taking the argmax of $Q$ with respect to $θ$.

$$Q(θ | θ^{(t)}) = E_{Z|X,θ^{(t)}}[logL(θ;Χ,Z)]$$

To be more intuitive, think of k-means as a special case of EM, where in the expectation step the $Z$ variables, i.e. the latent variables indicating membership in a cluster, are computed in a hard-assignment way, and in the maximization step the $μ$s of the clusters are updated. If you want to see the corresponding relation for $Q$ in k-means, I suggest you read chapter 9.3.2 in C. Bishop's book: Pattern Recognition and Machine Learning.
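
As a rough illustration of the two steps (a sketch only, for a Gaussian mixture with equal mixing weights and a fixed, shared isotropic variance, which is exactly the setting where k-means appears as a limiting case):

import numpy as np

def e_step(X, mus, sigma=1.0):
    # posterior responsibilities p(z = k | x, theta^(t)) for each point and cluster
    d2 = ((X[:, None, :] - mus[None, :, :]) ** 2).sum(-1)  # squared distances, shape (N, K)
    logp = -d2 / (2 * sigma ** 2)
    logp -= logp.max(axis=1, keepdims=True)                # numerical stability
    resp = np.exp(logp)
    return resp / resp.sum(axis=1, keepdims=True)

def m_step(X, resp):
    # argmax of Q(theta | theta^(t)) w.r.t. the means: responsibility-weighted averages
    return (resp.T @ X) / resp.sum(axis=0)[:, None]

As sigma goes to 0, the responsibilities become hard 0/1 assignments, and the two functions reduce exactly to the k-means assignment and mean-update steps.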

",36055,,36055,,10/7/2020 8:12,10/7/2020 8:12,,,,2,,,,CC BY-SA 4.0 23933,1,23934,,10/7/2020 2:21,,4,120,"

This is the Short Corridor problem taken from the Sutton & Barto book. Here it's written:

The problem is difficult because all the states appear identical under the function approximation

But this doesn't make much sense, as we can always choose the states as 0, 1, 2 and the corresponding feature vectors as

x(S = 0, right) = [1 0 0 0 0 0]
x(S = 0, left)  = [0 1 0 0 0 0]
x(S = 1, right) = [0 0 1 0 0 0]
x(S = 1, left)  = [0 0 0 1 0 0]
x(S = 2, right) = [0 0 0 0 1 0]
x(S = 2, left)  = [0 0 0 0 0 1]

So why is it written that all the states appear identical under the function approximation?

",37611,,2444,,10/7/2020 17:32,10/7/2020 17:32,Why do all states appear identical under the function approximation in the Short Corridor task?,,2,0,,,,CC BY-SA 4.0 23934,2,,23933,10/7/2020 2:51,,4,,"

You can choose those states, but is the agent aware of the state it is in? From the text, it seems that the agent cannot distinguish between the three states. Its observation function is completely uninformative.

This is why a stochastic policy is what is needed. This is common for POMDPs, whereas for regular MDPs we can always find a deterministic policy that is guaranteed to be optimal.

",40573,,,,,10/7/2020 2:51,,,,0,,,,CC BY-SA 4.0 23935,2,,23933,10/7/2020 7:50,,1,,"

In toy problems like the Short Corridor task, you can choose the state representation to explore a key property, such as the ability of a particular method to solve it. Often this is done to extremes and heavily simplified.

That is what is going on here. The state space that the agent is allowed to use is made highly degenerate with respect to the problem. This stands in for perhaps more complex partially observable systems, but in a way that is really clear to the reader. Also, it is still possible to derive analytically what the best policy should be, so methods can be examined as to how well they deal with the core issue (here, that state data is ambiguous).

",1847,,,,,10/7/2020 7:50,,,,0,,,,CC BY-SA 4.0 23937,1,,,10/7/2020 10:49,,1,19,"

In the Deep Equilibrium Model the neural network can be seen as "infinitely deep". Training learns a nonlinear function as usual. But there is no forward propagation of input data through layers. Instead, a root finding problem is solved when data comes in.

My question is: what is actually the function whose roots are searched for? I'm struggling to see what would be unknown once data is available and the parameters have been found in training.

",41448,,,,,10/7/2020 10:49,Root finding in Deep Equilibrium Models,,0,0,,,,CC BY-SA 4.0 23938,1,,,10/7/2020 13:20,,1,46,"

I'm processing a semi-structured scientific document and trying to extract some specific concepts. I've actually made quite good progress without machine-learning so far, but I got to a block of true free text and I'm wondering whether a very narrow sense NLP/learning algorithm can help.

Specifically, there are concepts I know to be important that are discussed in this section, but I'll need some NLP to get the 'sentiment'. I thought this might be 'entity sentiment' analysis; however, I'm not trying to capture the writer's emotion about a concept. It's literally whether the writer of the text thinks the entity is present, absent, or is uncertain about the entity.

Simple example. "First, there are horns. And second, the sheer size of this enormous fossil record argues for an herbivore or omnivore. The jaws are large, but this is not a carnivore."

And say my entities are horns (presence or absence), and type of dinosaur (herbivore, omnivore, carnivore). Desired output:

Horns (present)
Carnivore (absent)
Herbivore (possible/present) -- fine if it thinks 'present'
Omnivore (possible/present) -- fine if it thinks 'present'

What is the class of NLP analysis that takes an explicit input entity (or list of entities) and tries to assess, based on context, whether that entity is present or absent according to the writer of the input text? It's actually fine if this isn't a learning algorithm (maybe better). Bonus if you have suggestions for Python packages that could be used in this narrow sense. I've looked casually through the NLTK and spaCy packages, but they're both vast, and it wasn't obvious which class of model or functions I'd need to solve this problem.

",41451,,41451,,10/9/2020 11:43,10/9/2020 11:43,Determining if an entity in free text is 'present' or 'absent'; what is this called in NLP?,,0,3,,,,CC BY-SA 4.0 23941,1,23952,,10/7/2020 17:03,,3,670,"

In RL, both the KL divergence (DKL) and Total variational divergence (DTV) are used to measure the distance between two policies. I'm most familiar with using DKL as an early stopping metric during policy updates to ensure the new policy doesn't deviate much from the old policy.

I've seen DTV mostly being used in papers giving approaches to safe RL when placing safety constraints on action distributions. Such as in Constrained Policy Optimization and Lyapunov Approach to safe RL.

I've also seen that they are related by this formula:

$$ D_{TV} = \sqrt{0.5 D_{KL}} $$

When you compute the $D_{KL}$ between two policies, what does that tell you about them, and how is it different from what the $D_{TV}$ between the same two policies tells you?

Based on that, are there any specific instances to prefer one over the other?

",40671,,2444,,10/7/2020 22:27,10/8/2020 14:59,When should one prefer using Total Variational Divergence over KL divergence in RL,,2,0,,,,CC BY-SA 4.0 23946,1,23960,,10/7/2020 18:19,,3,132,"

In the average reward setting we have:

$$r(\pi)\doteq \lim_{h\rightarrow\infty}\frac{1}{h}\sum_{t=1}^{h}\mathbb{E}[R_{t}|S_0,A_{0:t-1}\sim\pi]$$

$$r(\pi)\doteq \lim_{t\rightarrow\infty}\mathbb{E}[R_{t}|S_0,A_{0:t-1}\sim\pi]$$

How is the second equation derived from the first?

",37611,,36821,,10/7/2020 23:52,10/7/2020 23:57,How do we derive the expression for average reward setting in continuing tasks?,,1,2,,,,CC BY-SA 4.0 23947,2,,23922,10/7/2020 20:01,,-2,,"

ML is full of things that are supposed to work better (in theory).

  • The sigmoid function seems better than ReLU.

  • L1 seems way better than L2.

  • Spiking neural networks seem to be better than standard neural networks.

  • A shallow neural network with a lot of neurons has more parameters than a deep one with the same number of neurons. So, in theory, it has to be more powerful.

It is common to forget the big impact of trainability and easy to get charmed by theoretical capacity.
Training processes are still in their infancy; we lack a good understanding of them.
Today, capsule neural networks are a very powerful architecture, but they are hard to train.

",41126,,,,,10/7/2020 20:01,,,,0,,,,CC BY-SA 4.0 23948,1,23959,,10/7/2020 20:34,,2,70,"

I am reading "Reinforcement Learning: An Introduction (2nd edition)" authored by Sutton and Barto. In Section 9, On-policy prediction with approximation, it first gives the mean squared value error objective function in (9.1):

$\bar{VE}(\boldsymbol{w}) = \sum_{s \in S} \mu(s)[v_{\pi}(s) - \hat{v}(s,\boldsymbol{w})]^2$. (9.1)

$\boldsymbol{w}$ is a vector of the parameterized function $\hat{v}(s,\boldsymbol{w})$ that approximates the value function $v_{\pi}(s)$. $\mu(s)$ is the fraction of time spent in $s$, which measures the "importance" of state $s$ in $\bar{VE}(\boldsymbol{w})$.

In (9.4), it states an update rule of $\boldsymbol{w}$ by gradient descent: $\boldsymbol{w}_{t+1} = \boldsymbol{w} -\frac{1}{2}\alpha \nabla[v_{\pi}(S_t) - \hat{v}(S_t,\boldsymbol{w})]^2$. (9.4)

I have two questions regarding (9.4).

  1. Why $\mu(s)$ is not in (9.4)?
  2. Why is it the "minus" instead of "+" in (9.4)? In other words, why is it $\boldsymbol{w} -\frac{1}{2}\alpha \nabla[v_{\pi}(S_t) - \hat{v}(S_t,\boldsymbol{w})]^2$ instead of $\boldsymbol{w} +\frac{1}{2}\alpha \nabla[v_{\pi}(S_t) - \hat{v}(S_t,\boldsymbol{w})]^2$?
",40506,,2444,,10/9/2020 20:48,10/9/2020 20:48,"Why is the fraction of time spent in state $s$, $\mu(s)$, not in the update rule of the parameters?",,1,7,,,,CC BY-SA 4.0 23949,2,,21604,10/7/2020 20:48,,0,,"

You will need to convert that to something that a neural network can understand.

  • Movie Name

It is useless, unless you want to judge a movie by its name.

  • Description

You will need to perform tokenization: grab the X most common words and convert each description into an array of word indices (see the sketch after this list).
I recommend you watch these videos from TensorFlow.

https://www.youtube.com/watch?v=fNxaJsNG3-s&t=18s

There you can find the Google Colab Links for the job.

  • Director

There are two options: a one-hot array or an integer for every director.
A one-hot array may be better if you don't have an order of similarity between the directors, but it will increase the size of the inputs.
If you do have an order of similarity between the directors, an integer for each director will work fine.

  • Rating

Nothing to do here. It is ready to go.
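
As a rough illustration of the description and director preprocessing mentioned above (a minimal sketch with made-up column names and toy data):

import pandas as pd
import tensorflow as tf

df = pd.DataFrame({
    "description": ["a space opera about family", "a quiet drama about loss"],
    "director": ["Director A", "Director B"],
    "rating": [7.9, 8.2],
})

# Description: keep the most common words and turn each text into a padded integer sequence
tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=1000, oov_token="<OOV>")
tokenizer.fit_on_texts(df["description"])
desc = tf.keras.preprocessing.sequence.pad_sequences(
    tokenizer.texts_to_sequences(df["description"]), maxlen=20)

# Director: one-hot array (no similarity order between directors is assumed)
director = pd.get_dummies(df["director"]).to_numpy()

# Rating: ready to go as-is
rating = df[["rating"]].to_numpy()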

You can perform this work directly in Excel, but TensorFlow has great tools for it. It's hard to run the model in Excel after you have trained it.
If you are more comfortable in Excel, or you cannot install new software on your computer, I've made a backpropagation algorithm that runs in Excel and gives you a formula to paste into a module.

https://github.com/TorrensJoaquin/Multivariant-Nonlinear-Regression-in-VBA-Small-Neural-Network

",41126,,,,,10/7/2020 20:48,,,,0,,,,CC BY-SA 4.0 23950,1,,,10/7/2020 20:52,,0,68,"

I want to train a convolutional neural network for object detection (say YOLO) to detect faces. Consider this image:

In this training image, there are many people, but only 2 of them are annotated. Will having this kind of image (where the targets are not all annotated) train the network to ignore positives?

If yes, are there any techniques to solve the issue apart from annotating the data (I don't have enough resources to do that)?

",30497,,,,,10/8/2020 6:33,Is training a CNN object detector on an image containing multiple targets that are not all annotated will teach it to miss targets?,,1,0,,,,CC BY-SA 4.0 23952,2,,23941,10/7/2020 22:08,,1,,"

I did not read those two specified linked/cited papers and I am not currently familiar with the total variation distance, but I think I can answer some of your questions, given that I am reasonably familiar with the KL divergence.

When you compute the $D_{KL}$ between two policies, what does that tell you about them

The KL divergence is a measure of "distance" (or divergence, as the name suggests) between two probability distributions (i.e. probability measures) or probability densities. In reinforcement learning, (stochastic) policies are probability distributions. For example, in the case your Markov decision process (MDP) has a discrete set of actions, then your policy can be denoted as $$\pi(a \mid s),$$which is the conditional probability distribution over all possible actions, given a specific state $s$. Hence, the KL divergence is a natural measure of how two policies are similar or different.

There are 4 properties of the KL divergence that you always need to keep in mind

  1. It is asymmetric, i.e., in general, $D_{KL}(q, p) \neq D_{KL}(p, q)$ (where $p$ and $q$ are p.d.s); consequently, the KL divergence cannot be a metric (because metrics are symmetric!)
  2. It is always non-negative
  3. It is zero when $p = q$.
  4. It is unbounded, i.e. it can be arbitrarily large; so, in other words, two probability distributions can be infinitely different, which may not be very intuitive: in fact, in the past, I used the KL divergence and, because of this property, it wasn't always clear how I should interpret the KL divergence (but this may also be due to my not extremely solid understanding of this measure).

and how is it different from what a $D_{TV}$ between the same two policies tells you?

$D_{TV}$ is also a measure of the distance between two probability distributions, but it is bounded, specifically, in the range $[0, 1]$ [1]. This property may be useful in some circumstances (which ones?). In any case, the fact that it lies in the range $[0, 1]$ potentially makes its interpretation more intuitive. More precisely, if you know the maximum and minimum values that a measure can give you, you can have a better idea of the relative difference between probability distributions. For instance, imagine that you have p.d.s $q$, $p$ and $p'$. If you compute $D_{TV}(q, p)$ and $D_{TV}(q, p')$, you can have a sense (in terms of percentage) of how much $p'$ and $p$ differ with respect to $q$.

The choice between $D_{TV}$ and $D_{KL}$ is probably motivated by their specific properties (it will probably depend on the case, and I expect the authors of the research papers to motivate the usage of a specific measure/metric). However, keep in mind that there is not always a closed-form solution, not even to calculate the KL divergence, so you may need to approximate it (e.g. by sampling: note that the KL divergence is defined as an expectation/integral, so you can approximate it with a sampling technique). So, this (computability and/or approximability) may also be a parameter to take into account when choosing one over the other.

By the way, I think that your definition of the total variation divergence is wrong, although $D_{TV}$ is related to $D_{KL}$, specifically, as follows [1]

\begin{align} D_{TV} \leq \sqrt{\frac{1}{2} D_{KL}} \end{align}

So $D_{TV}$ is bounded in terms of the KL divergence. Given that the KL divergence is unbounded (e.g. it can take very big values, such as 600k), this bound can be very loose.
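
As a small illustration of both quantities for two discrete policies over the same action set (a minimal sketch, not taken from either of the cited papers):

import numpy as np

def kl_divergence(p, q):
    # D_KL(p || q) for discrete distributions (assumes q > 0 wherever p > 0)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def total_variation(p, q):
    # D_TV(p, q) = 0.5 * sum_a |p(a) - q(a)|, always in [0, 1]
    return 0.5 * float(np.abs(p - q).sum())

pi_old = np.array([0.7, 0.2, 0.1])
pi_new = np.array([0.5, 0.3, 0.2])

dkl = kl_divergence(pi_old, pi_new)
dtv = total_variation(pi_old, pi_new)
print(dtv <= np.sqrt(0.5 * dkl))  # the bound above: should print True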

Take a look at the paper On choosing and bounding probability metrics (2002, by Alison L. Gibbs and Francis Edward Su) or this book for information about $D_{TV}$ (and other measures/metrics).

",2444,,2444,,10/8/2020 14:59,10/8/2020 14:59,,,,5,,,,CC BY-SA 4.0 23959,2,,23948,10/7/2020 23:34,,2,,"
  1. $\mu(s)$ is not in equation (9.4) because we are assuming that the examples with which we update our parameter $w$, i.e. the frequency with which we observe the states during online training, already follow that same distribution. Moreover, it is a constant with respect to $w$, and since we are differentiating, it can be somewhat disregarded as a constant of proportionality -- it can essentially be 'absorbed' by $\alpha$.

  2. The minus is there because we are performing gradient descent. For more information on this, see e.g. the wikipedia page

",36821,,,,,10/7/2020 23:34,,,,0,,,,CC BY-SA 4.0 23960,2,,23946,10/7/2020 23:50,,3,,"

We assume that our MDP is ergodic. Loosely speaking, this means that wherever the MDP starts (i.e. no matter which state we start in) or any actions the agent takes early on can only have a limited effect on the MDP and in the limit (as $t \rightarrow \infty$) the expectation of being in a given state depends only on the policy $\pi$ and the transition dynamics of the MDP.

This means that, eventually, $\mathbb{E}[R_t] = \mathbb{E}[R_{t+1}]$ for some large $t$. Therefore, as we take the average of our expected values of the rewards received for an infinitely long period of time, this will have converged due to what I just mentioned of $\mathbb{E}[R_t] = \mathbb{E}[R_{t+1}]$. To see why the two are equal, recall that the reward received is dependent on the current state and the action taken -- to better emphasise this I will briefly denote the reward at time step $t+1$ as $R(S_t, A_t)$. If we are in the steady state distribution, that is, the state distribution is now fixed, and our actions are still taken according to our policy, then the expected value of $R(S_t, A_t)$ will be the same for all future $t$ since neither the policy nor the state distribution are changing (recall that the average rewards are a way of evaluating a policy in the average-reward setting so for sure this does not change).

A way to think of this is that since we know that, eventually, $\mathbb{E}[R_t]$ will equal $\mathbb{E}[R_{t+1}]$, if we average an infinite number of these terms, the average will of course converge to that same value. Imagine I gave you the sequence 1, 2, 3, 4, 4, 4, 4, ........, 4 and asked you to take the average: if we had an infinite number of 4's, then the average would of course be 4.

",36821,,36821,,10/7/2020 23:57,10/7/2020 23:57,,,,0,,,,CC BY-SA 4.0 23961,2,,13848,10/7/2020 23:52,,0,,"

Our recent work solves this problem by using the idea of a forward-looking actor.

We use a neural network to forecast the next state given the current state and current action. Then we plug it into the actor training, taking into account the value of future states. We apply our idea to TD3 to create a new algorithm, TD3-FORK, which solves this problem in as few as four hours. https://github.com/honghaow/FORK/tree/master/BipedalWalkerHardcore

",41432,,,,,10/7/2020 23:52,,,,1,,,,CC BY-SA 4.0 23962,1,,,10/8/2020 0:42,,1,23,"

Suppose I have two fitted ensemble models $F_1 := (f_1, f_2, f_3, \cdots f_n)$ and $G_1 := (g_1, g_2, g_3, \cdots g_n)$.

And they were using the same ensemble methods (boosting or bagging).

And I am using some measurement for model performance $M: f_i \to \mathbb{R}^+$, higher the better.

If I know beforehand that $M(f_i) \gt M(g_i), \forall i \in [1,n]$, can I conclude that $M(F) \gt M(G)$?

",41462,,41462,,11/15/2020 0:13,11/15/2020 0:13,Would performance of atomic models matter in ensemble methods?,,0,2,,,,CC BY-SA 4.0 23963,2,,23915,10/8/2020 4:14,,1,,"

After my initial comment (where I suggest that it might not be enough info) I believe I actually came up with an idea.

Start with the full set of pokemon. For every possible type, identify the count of pokemon that are strong against that type. For this, you'll end up with a List<(pokemonId, types, List<weakAgainst>)>.

Minimize List<weakAgainst>.Count() and from the possible set of pokemonIds, select one at random. Without knowing anything else besides type, this pokemon is as good as any other with the same weakness count (this is the point of my original comment).

From the list of weaknesses that this selected pokemon has, select a pokemon from your list that is strong against the weakness, minimizing the amount of weaknesses again. Likely more than one will match this criteria, again, select one at random.

Keep repeating this pattern until you obtain the 6 in your team. This is, statistically speaking, one of the best teams that you can gather.

For all the combinations that you might find here, some teams will have fewer weaknesses, since we're "randomly" walking down a tree of possibilities. This very much sounds like a minimax-pruning algorithm, where each pokemon selection (minimizing your weaknesses) can be met with potential opponents that will maximize your weak points.

Simplified, put together:

input: allPokemon: list<pokemonId, weakAgainst, strongAgainst>

var: teamWeakAgainst: []
var: teamStrongAgainst: []
var: selectedTeam: []

while (size(selectedTeam) < 6)
  goodMatches <- allPokemon.max(p -> size(p.strongAgainst.intersect(teamWeakAgainst)))
  goodMatches <- goodMatches.min(p -> size(p.weakAgainst))
  goodMatches <- goodMatches.max(p -> size(p.strongAgainst))

  selectedPokemon <- goodMatches.random()

  teamWeakAgainst -= selectedPokemon.strongAgainst
  teamWeakAgainst += selectedPokemon.weakAgainst # not counting previously selected pokemon because the current one adds another "weakness", even if it was already accounted for

  selectedTeam += selectedPokemon

output: selectedTeam

From this algorithm, it is not obvious where the "max" portion is. We're minimizing our losses (weaknesses), but we're considering all possible opponent teams equally, so there is no real maximization of the opponent choices. For a set of ideas, check below.

Note that this algorithm will give you a set of "teams" that are equally good in the sense that they'll have the same amount of minimized weaknesses and maximized strengths against other possible teams. But even if pokemon are different, the numbers will be the same, just different types.

For a more complex approach, you might want to consider how prevalent some pokemon are (you might not need to optimize against a super rare mythical type, but rather the very common types available in the game), how likely is it that certain pokemon can have better / faster attacks, what is the probability of battle IVs, how frequent can a trainer switch pokemon in battle, etc. Again, I know this is not what you asked for, but for the sake of the example, this will become so complex that instead of a search algorithm, a simulation (Monte Carlo?) approach might be simpler to build teams out of statistical testing.

",190,,,,,10/8/2020 4:14,,,,3,,,,CC BY-SA 4.0 23964,1,,,10/8/2020 5:14,,2,1123,"

There are lots of research papers available that are worth reading. We can read papers easily, but the associated code (not necessarily the official one developed by the authors of the paper) is often not available.

Papers with Code (and the associated Github repo) already lists many research papers and often there is a link to the associated Github repo with the code, but sometimes the code is missing. So, are there alternatives to Papers with Code (for such cases)?

",15368,,2444,,10/9/2020 8:44,5/10/2021 0:20,"What are some alternatives to ""Papers with Code""?",,2,3,,,,CC BY-SA 4.0 23965,2,,23950,10/8/2020 6:33,,1,,"

The neural network will learn what we teach it. For example, with that image only, after training finishes, your model will have a hard time recognizing humans with dark skin, glasses, big eyes, etc., that is, the features that the two annotated targets don't have.

If your data is big enough and contains all the features of human faces, the result should be good.

If not, I recommend a semi-supervised learning method called Noisy Student. Quick explanation: you take a part of the data, add noise (augmentation, dropout, stochastic depth), and train. Then you use that model to label the rest of the dataset, train a new, bigger model (such as YOLOv3 > YOLOv1), and repeat. The authors reported that this method can work even better than labeling the whole dataset.

You will need to choose the data to train the teacher really carefully, then pray for the result.

",41287,,,,,10/8/2020 6:33,,,,0,,,,CC BY-SA 4.0 23967,2,,10682,10/8/2020 7:56,,0,,"

They're pretty much the same thing - in that the underlying logic of neural networks is fuzzy. A neural network will take a variety of valued inputs, give them different weights in relation to each other, and arrive at a decision which normally also has a value. Nowhere in that process is there anything like the sequences of either-or decisions which characterize non-fuzzy mathematics, almost all of computer programming, and digital electronics. Back in the 1980s there was a debate about what AI would eventually look like - some researchers tried to program 'common sense' with huge bivalent decision trees, while others used neural networks, which pretty soon found their way into a multitude of electronic devices. Obviously the underlying logic of the latter approach is radically different from the former, even if neural nets are built on top of bivalent electronics. However, the use of the term 'fuzzy logic' seems to have been downplayed since the 80s, perhaps because colloquially it sometimes implies uncertainty. This is a shame because it offers a more accurate way to model complex situations.

",41469,,41469,,10/8/2020 8:08,10/8/2020 8:08,,,,0,,,,CC BY-SA 4.0 23968,1,23988,,10/8/2020 8:05,,3,155,"

From this page in the Interpretable ML book and this article on Analytics Vidhya, interpretability means knowing what has happened inside an ML model to arrive at the result/prediction/conclusion.

In linear regression, new data will be multiplied with weights and bias will be added to make a prediction.

And in boosted tree models, it is possible to plot all the decisions as trees that results in a prediction.

And in feed-forward neural networks, we will have weights and biases just like linear regression and we just multiply weights and add bias at each layer, limiting values to some extent using some kind of activation function at every layer, arriving finally at prediction.

In CNNs, it is possible to see what happens to the input after having passed through a CNN block and what features are extracted after pooling (ref: what does a CNN see?).

Like I stated above, one can easily know what happens inside an ML model to make a prediction or conclusion, and I am unclear as to what makes them un-interpretable. So, what exactly makes an algorithm or its results un-interpretable, or why are these called black-box models? Or am I missing something?

",38060,,38060,,10/9/2020 9:25,10/9/2020 10:03,What exactly is an interpretable machine learning model?,,1,0,,,,CC BY-SA 4.0 23970,2,,3647,10/8/2020 12:18,,0,,"

Another good (although a bit old) and freely available online book (apart from the one suggested in this answer) is Neural Networks - A Systematic Introduction (1996) by Raul Rojas. This book contains several exercises at the end of each chapter and covers topics that you will not find in many online courses.

",2444,,,,,10/8/2020 12:18,,,,0,,,,CC BY-SA 4.0 23974,2,,23941,10/8/2020 14:50,,2,,"

To add to nbro's answer, I'd say also that much of the time the distance measure isn't simply a design decision, rather it comes up naturally from the model of the problem. For instance, minimizing the KL divergence between your policy and the softmax of the Q values at a given state is equivalent to policy optimization where the optimality at a given state is Bernoulli with respect to the exponential of the reward (see maximum entropy RL algorithms). As another example, the KL divergence in the VAE loss is a result of the model and not just a blind decision.

I'm less familiar with total variation distance, but I know there's a nice relationship between the total variation distance of a state probability vector and a Markov chain stationary distribution relative to the timestep and the mixing time of the chain.

Finally, another thing to consider is the properties of the gradients of these divergence measures. Note that the gradient of the total variation distance might blow up as the distance tends to $0$. Additionally, one must consider whether unbiased estimators of the gradients from samples are feasible. While this is generally the case with the KL divergence, I'm not sure about the total variation distance (as in, I literally don't know), and this is generally not the case with the Wasserstein metric (see Marc G. Bellemare et al.'s paper "The Cramér distance as a solution to biased Wasserstein gradients"). However, of course, there are other scenarios where the tables are turned -- for instance, the distributional Bellman operator is a contraction in the supremal Wasserstein metric but not in KL or total variation distance.

TL; DR: Many times mathematical/statistical constraints suggest particular metrics.

",37829,,,,,10/8/2020 14:50,,,,3,,,,CC BY-SA 4.0 23975,2,,23864,10/8/2020 15:05,,0,,"

Well, probably the answer is that the previous approach was a little naive. I managed to get some interesting results with this kernel, reaching an accuracy of 0.969 and a validation accuracy of 0.931. The model I used is based on ResNet50 with the following additional layers (the last one is just for binary classification):

tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(32, activation='relu'),
tf.keras.layers.Dense(1,activation="sigmoid")

Even though the network comes pre-trained, I trained each layer again; otherwise I did not get any sensible accuracy.
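
For reference, this is roughly how those layers can be attached to a ResNet50 base in Keras (a sketch; the input size is an assumption, and every layer is kept trainable, as described above):

import tensorflow as tf

base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")
base.trainable = True  # train every layer again, not just the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])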

Training history is like this:

Still far from being really good, but it is progress.

",41346,,,,,10/8/2020 15:05,,,,0,,,,CC BY-SA 4.0 23976,1,,,10/8/2020 17:55,,0,30,"

So I am trying to enforce better separability in my deep learning model and was wondering what I can use besides the cross-entropy loss to do that. Could I maybe use a logarithm with a different base in the cross entropy (i.e., a base lower than $e$ to get steeper losses on small values, or a bigger base to enforce plateaued losses)? What would you suggest?

",41484,,,,,10/8/2020 17:55,Loss function for better class separability in multi class classification,,0,2,,,,CC BY-SA 4.0 23977,1,,,10/8/2020 18:35,,0,133,"

I'm not an expert in AI or NN, I gathered most of the information I have from the internet, and I'm looking for advice and guidance.

I'm trying to design a NN that is going to be used by all the agents of my simulation (each agent will have its own matrix of weights). This is what I plan to have:

  • The NN will have 1 input layer and 1 output layer (no hidden layers).
  • The number of inputs will always be greater than the number of outputs.
  • The outputs represent the probability of an action being taken by the agent (the output node with the highest value will identify the action that will be taken). This means there are as many output nodes as there are actions.

When an agent takes an action, it receives a reward: a number that represents how well the agent performed. This happens "online", that is, the agent is trained on the fly.

What I would like to know is how to best train the NN: that is, how to update the weights of my matrix to maximise the rewards long term.

From the research I did, it seems this is close to the concept of Reinforcement Learning, but even if it is, it's not clear to me how to apply it to such a simple NN shape.

",41382,,,,,10/9/2020 6:22,How to train the NN of simple agents given a reward system?,,1,0,,,,CC BY-SA 4.0 23978,1,,,10/8/2020 19:29,,2,438,"

Is there any reason why skip connections would not provide the same benefits to fully connected layers as it does for convolutional?

I've read the ResNet paper and it says that the applications should extend to "non-vision" problems, so I decided to give it a try for a tabular data project I'm working on.

Try 1: My first try was to only skip connections when the input to a block matched the output size of the block (the block has depth - 1 number of layers with in_dim nodes plus a layer with out_dim nodes :

class ResBlock(nn.Module):
    def __init__(self, depth, in_dim, out_dim, act='relu', act_first=True):
        super().__init__()
        self.residual = nn.Identity()
        self.block = block(depth, in_dim, out_dim, act)
        self.ada_pool = nn.AdaptiveAvgPool1d(out_dim)
        self.activate = get_act(act)
        self.apply_shortcut = (in_dim == out_dim)
        
    def forward(self, x):
        if self.apply_shortcut:
            residual = self.residual(x)
     
            x = self.block(x)
            return self.activate(x + residual)
        return self.activate(self.block(x))

The accompanying loss curve:

Try 2: I thought to myself "Great, it's doing something!", so then I decided to reset and go for 30 epochs from scratch. I don't have the image saved, but this training only made it 5 epochs and then the training and validation loss curves exploded by several orders of magnitude:

Try 3: Next, I decided to try to implement the paper's idea of reducing the input size to match the output when they don't match: y = F(x, {Wi}) + Mx. I chose average pooling in place of the matrix M to accomplish this, and my loss curve became the following:

The only difference in my code is that I added average pooling so I could use shortcut connections when input and output sizes are different:

class ResBlock(nn.Module):
    def __init__(self, depth, in_dim, out_dim, act='relu', act_first=True):
        super().__init__()
        self.residual = nn.Identity()
        self.block = block(depth, in_dim, out_dim, act)
        # squeeze/pad input  to output size
        self.ada_pool = nn.AdaptiveAvgPool1d(out_dim) 
        self.activate = get_act(act)
        self.apply_pool = (in_dim != out_dim)
        
    def forward(self, x):
        # if in and out dims are different apply the padding/squeezing:
        if self.apply_pool:
            residual = self.ada_pool(self.residual(x).unsqueeze(0)).squeeze(0)
        else: residual = self.residual(x)
            
        x = self.block(x)
        return self.activate(x + residual)

Is there a conceptual error in my application of residual learning? A bug in my code? Or is residual learning just not applicable to this kind of data/network?

",37691,,,,,10/8/2020 19:29,How to use residual learning applied to fully connected networks?,,0,0,,,,CC BY-SA 4.0 23979,2,,23977,10/8/2020 19:30,,1,,"

You might be able to glean what you want from Chapter 13 or Sutton & Barto's Reinforcement Learning: An Introduction, which deals with policy gradient algorithms, and includes pseudocode for a variety of agents based on linear approximation using softmax regression. From your description, you appear to be using - or should consider - softmax regression for the policy function.

Probably a good bet using your setup will be REINFORCE with baseline, which does not add requirements to track anything more than your current weights, sum of rewards seen after each state & action pair, and mean rewards (or other baseline). It does require an episodic environment though, such as a game that ends.

Essentially REINFORCE and its variants require you to figure out the return (sum of rewards received after each action to the end of the episode) and perform a gradient step update multiplied by that return (minus a baseline which can help with stability).

With REINFORCE and a discrete distribution, the gradient due to action selection is the same as in supervised learning for multi-class classification, evaluated as if whichever action the agent took was the correct one. When you multiply this gradient by the return as well, it will make steps in the direction of better action choices larger. Regardless of the fact that you may train towards selecting many different actions, the better actions will win the race in the end due to this weighting, and will have the highest probability of being selected after a large number of training episodes.
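
A minimal sketch of that update for a linear softmax policy, in plain NumPy (a simplification of the pseudocode in Sutton & Barto, with a user-supplied baseline value; it is not a drop-in solution for a multi-agent setup):

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def select_action(theta, x):
    # theta: (n_actions, n_inputs) weights of the linear softmax policy, x: input vector
    probs = softmax(theta @ x)
    return np.random.choice(len(probs), p=probs), probs

def reinforce_episode_update(theta, episode, baseline, alpha=0.01, gamma=1.0):
    # episode: list of (x, action, probs, reward) tuples in time order
    returns = [0.0] * len(episode)
    g = 0.0
    for t in reversed(range(len(episode))):
        g = episode[t][3] + gamma * g
        returns[t] = g                           # return after each state/action pair
    for t, (x, a, probs, _) in enumerate(episode):
        grad_log_pi = np.outer(-probs, x)        # gradient of log pi(a|x) ...
        grad_log_pi[a] += x                      # ... evaluated at the taken action
        theta += alpha * (returns[t] - baseline) * grad_log_pi
    return theta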

A multi-agent problem may bring you some problems, such as instability due to dependency of performance on what the other agents are doing. It is not possible to say how much of an issue this will be. However, I would advise not jumping straight into your main problem, and try some reinforcement learning methods with a single agent on toy problems first. If you build up to solving the problem you are currently facing, then you will be much more confident in how you have implemented the learning. There are plenty of stepping-stone projects and pre-built environments to solve if you search for them. A good starting point is OpenAI's Gym, which presents a standardised set of toy - and not so toy - problems that you can test any learning algorithm against.

",1847,,1847,,10/9/2020 6:22,10/9/2020 6:22,,,,0,,,,CC BY-SA 4.0 23980,1,23981,,10/8/2020 20:12,,1,134,"

I know that my question has probably been asked many times, but I'll try to be more specific:

Limitations to my question:

  1. I am NOT asking about convolutional neural networks, so please, try not to mention this as an example or as an answer as long as it is possible. (maybe only in question number 3)

  2. My question is NOT about classification using neural networks

  3. I am asking about a "simple" neural network designed to solve the regression type of problem. Let's say it has 2 inputs and 1 output.

Preambula:

As far as I understand from the universal approximation theorem, in such a case, even if the model is nonlinear, a single hidden layer can approximate a nonlinear model to arbitrary precision, as shown here http://neuralnetworksanddeeplearning.com/chap4.html.

Question 1

In this specific case, is there any added value in using extra layers? (maybe the model will be more precise, or faster training?)

Question 2

Suppose in 1st question the answer was there is no added value. In such a case will the added value appear if I enlarge inputs from two inputs as described above, to some larger number?

Question 3

Suppose in 2nd question the answer was there is no added value. I am still trying to pinpoint the situation where it STARTS making sense in adding more layers AND where it makes NO sense at all using one layer.

",36453,,2444,,10/9/2020 8:36,10/10/2020 20:35,When are multiple hidden layers necessary?,,1,1,,,,CC BY-SA 4.0 23981,2,,23980,10/8/2020 21:42,,1,,"

A very wide but shallow neural network is going to be harder to train.
You can check that with the TensorFlow playground or with the MPG example in Google Colab.
The relationship between architecture and learning capability is not fully understood, but, empirically, that's what you see.
But making the network too deep creates more problems:

  • Vanishing gradients.
  • Fewer parameters for the same number of neurons.
  • Exploding gradients.

It is for that reason that humans designing neural networks are still the ones deciding on a good architecture.

",41126,,41026,,10/10/2020 20:35,10/10/2020 20:35,,,,1,,,,CC BY-SA 4.0 23983,1,,,10/9/2020 2:23,,3,178,"

We can visualize single, two, and three dimensions using websites or imagination.

In the context of AI and, in particular, machine learning, AI researchers often have to deal with multi-dimensional random vectors.

Suppose we consider a dataset of human faces: each image is a vector in a higher-dimensional space, and we need to understand measures on them.

How do they imagine them?

I can only imagine in 3D and then extrapolate to higher dimensions. Is there any way to visualize higher dimensions for research?

",18758,,2444,,10/13/2020 22:40,10/13/2020 22:40,How do AI researchers imagine higher dimensions?,,1,2,,,,CC BY-SA 4.0 23984,2,,23889,10/9/2020 2:46,,5,,"

We give the target input to the transformer decoder while training the model, so it is easy for the model to "peek ahead" and learn what the next word would be. To ensure that this doesn't happen, we apply an additive mask after the dot product between the query and key. In the original paper, "Attention is all you need", the triangular matrix had 0's in the lower triangle and -10e9 (you can see negative infinity used in recent examples) in the upper triangle. So, when the mask is added to the attention scores, the scores in the upper triangle become really low. When this matrix is passed through the softmax function, these really low values become close to 0, which essentially means not attending to the words after timestep t. To put it in matrix format,

[8.1, 0.04, 5.2, 4.2]
[0.5, 9.2, 2.33, 0.7]
[0.2, 0.4, 6.11, 1.0]
[3.1, 2.1, 2.19, 8.1]

Let the above matrix A be the result of the dot product between the query and key. A[0][0] contains the attention score of the first word of the query to the first word of the key, A[0][1] contains the attention score of the first word of the query to the second word of the key, and so on. So, as you can see, after adding the mask and performing softmax on A, the result would be,

[8.1, 0.0, 0.0, 0.0]
[0.5, 9.2, 0.0, 0.0]
[0.2, 0.4, 6.11, 0.0]
[3.1, 2.1, 2.19, 8.1]

This forces the transformer only to attend to words that are before it. You can check out the Transformer lecture available in CS224n for full detail.
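
A minimal sketch of how such an additive look-ahead mask can be built and applied (here -1e9 plays the role of negative infinity, as described above):

import numpy as np

def causal_attention_weights(scores):
    # scores: (seq_len, seq_len) dot products between queries and keys
    seq_len = scores.shape[0]
    mask = np.triu(np.ones((seq_len, seq_len)), k=1) * -1e9  # strictly upper triangle
    masked = scores + mask
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return weights / weights.sum(axis=-1, keepdims=True)     # row-wise softmax

A = np.array([[8.1, 0.04, 5.2, 4.2],
              [0.5, 9.2, 2.33, 0.7],
              [0.2, 0.4, 6.11, 1.0],
              [3.1, 2.1, 2.19, 8.1]])
print(causal_attention_weights(A))  # upper-triangular weights are (numerically) 0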

",41279,,41279,,10/9/2020 7:02,10/9/2020 7:02,,,,0,,,,CC BY-SA 4.0 23986,2,,23964,10/9/2020 3:35,,4,,"

Recently arxiv.org added a Code Tab towards the end of paper descriptions. Which contains links to both the official and community code.

I don't know if this is the case for all the papers or not till now, but I'm sure it'll be extended to all the papers in a short while.

",40434,,,,,10/9/2020 3:35,,,,1,,,,CC BY-SA 4.0 23988,2,,23968,10/9/2020 10:03,,2,,"

In a simple linear model of the form $y = \beta_0 + \beta_1 x $ we can see that increasing $x$ by a unit will increase the prediction on $y$ by $\beta_1$. Here we can completely determine what the effect on the models prediction will be by increasing $x$. With more complex models such as neural networks it is much more difficult to tell due to all the calculations that a single data point is involved in. For instance, in a CNN as you mentioned, if I changed the value of a pixel in an image we were passing through the CNN you wouldn't really be able to tell me exactly the effect this would have on the prediction like you can with the linear model.

",36821,,,,,10/9/2020 10:03,,,,3,,,,CC BY-SA 4.0 23989,2,,5960,10/9/2020 13:13,,0,,"

I only have one piece of good news... There is nothing wrong with your code. Neural networks tend to do that, especially with a really complex function.

  • Increasing the number of neurons will not change how the error is distributed.
  • There are better loss functions for each case, but this is not a really effective solution.
  • Neural networks are really good at managing noise, so they are good at ignoring minorities. There is a common expression: "ANNs are racist".

I recommend you plot a histogram of the dataset values vs. the output values, to see whether you have much more data in the central region than at the frontier. If you can generate more data at will, generate more values in the specific zones with larger errors.
This will increase the error there and force the backpropagation algorithm to improve in that area.
More information on your optimization algorithm may be useful. But, like I said, everything seems perfectly normal.
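
A quick way to draw that histogram (a sketch; y_true and y_pred stand for your dataset targets and the network's outputs, generated randomly here just so the snippet runs):

import numpy as np
import matplotlib.pyplot as plt

y_true = np.random.normal(0.0, 1.0, 1000)            # placeholder for dataset values
y_pred = y_true + np.random.normal(0.0, 0.3, 1000)   # placeholder for network outputs

plt.hist(y_true, bins=50, alpha=0.5, label="dataset values")
plt.hist(y_pred, bins=50, alpha=0.5, label="network outputs")
plt.xlabel("output value")
plt.ylabel("count")
plt.legend()
plt.show()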

",41126,,40434,,10/10/2020 16:11,10/10/2020 16:11,,,,0,,,,CC BY-SA 4.0 23990,2,,23983,10/9/2020 14:40,,5,,"

The most I can visualize or perceive are 4 dimensions. Yes, 4, because I can also watch videos (which have 3 spatial dimensions and 1 temporal one). Remember Einstein's spacetime?

When dealing with $n$-dimensional spaces, for $n > 4$, I simply do not care about visualizing them in my head, but, as someone suggests, we can think of them as "degrees of freedom". Maybe something like a tesseract may be interesting to you, but that's not really useful to me, to be honest.

When dealing with the math that involves $n$-dimensional spaces or objects, you often do not have to visualize anything, but just have to apply the rules. For example, if you are multiplying multi-dimensional arrays, you just need to make sure that the external dimensions match, and stuff like that.

There are cases, when dealing e.g. with TensorFlow's tensors, where you can imagine that there are matrices for each of the elements at that coordinate of the tensor, but that's not very common.

In case you really want to visualize $n$-dimensional objects, you could first project them to $2$ or $3$ dimensions with some dimensionality reduction technique (e.g. t-SNE).

",2444,,2444,,10/9/2020 14:46,10/9/2020 14:46,,,,1,,,,CC BY-SA 4.0 23993,1,,,10/9/2020 21:14,,4,125,"

In many applications and domains, computer vision, natural language processing, image segmentation, and many other tasks, neural networks (with a certain architecture) are considered to be by far the most powerful machine learning models.

Nevertheless, algorithms, based on different approaches, such as ensemble models, like random forests and gradient boosting, are not completely abandoned, and actively developed and maintained by some people.

Do I understand correctly that neural networks, despite being very flexible and universal approximators, are, for a certain kind of task and regardless of the choice of architecture, not the optimal models?

For the tasks in computer vision, the core feature, which makes CNNs superior, is the translational invariance and the encoded ability to capture the proximity properties of an image or some sequential data. And the more recent transformer models have the ability to choose which of the neighboring data properties is more important for its output.

But let's say I have a dataset without a particular structure or patterns: some number of numerical columns, a lot of categorical columns, and, in the feature space (for a classification task), classes separated by some nonlinear hypersurface. Would the ensemble models be the optimal choice in terms of performance and computational time?

In this case, I do not see a way to exploit CNNs or attention-based neural networks. The only thing that comes to my head, in this case, is the ordinary MLP. It seems that, on the one hand, it would take significantly more time to train the weights than the trees from the ensemble. On the other hand, both kinds of models work without putting prior knowledge to data and assumptions on its structure. So, given enough amount of time, it should give a comparable quality.

Or can there be some reasoning that a neural network is sometimes bound to give rather poor quality?

",38846,,2444,,5/20/2021 9:44,12/9/2022 18:37,When do the ensemble methods beat neural networks?,,2,0,,,,CC BY-SA 4.0 23995,1,,,10/10/2020 1:27,,2,67,"

I'm working on creating a model that locates the object in the scene (2D image or 3D scene) using a natural language query. I came across this paper on natural language object retrieval, which mentions that this task is different from text-based image retrieval, in the sense that natural language object retrieval requires an understanding of objects in the image, spatial configurations, etc. I am not able to see the difference between these two approaches. Could you please explain it with an example?

",41514,,2444,,10/12/2020 19:51,10/12/2020 19:51,What is the difference between text-based image retrieval and natural language object retrieval?,,0,0,,,,CC BY-SA 4.0 23996,1,23999,,10/10/2020 1:52,,4,628,"

The question is more or less in the title.

A Markov decision process consists of a state space, a set of actions, the transition probabilities and the reward function. If I now take an agent's point of view, does this agent "know" the transition probabilities, or is the only thing that it knows the state it ended up in and the reward it received when it took an action?

",36978,,2444,,10/10/2020 9:06,10/10/2020 13:10,Is the state transition matrix known to the agents in a Markov decision processes?,,1,0,,,,CC BY-SA 4.0 23998,1,,,10/10/2020 6:56,,3,266,"

Let us assume we have a GRU network containing $H$ layers to process a training dataset with $K$ tuples, $I$ features, and $H_i$ nodes in each layer.

I have a pretty basic idea of how the complexity of algorithms is calculated. However, with the presence of multiple factors that affect the performance of a GRU network, including the number of layers, the amount of training data (which needs to be large), the number of units in each layer, the number of epochs, and maybe regularization techniques, plus training with back-propagation through time, I am confused. I have found an intriguing answer on the complexity of neural networks here: What is the time complexity for training a neural network using back-propagation?, but it was not enough to clear my doubt.

So, what is the time complexity of the algorithm, which uses back-propagation through time, to train GRU networks?

",41519,,2444,,10/10/2020 22:14,10/10/2020 22:14,What is the time complexity for training a gated recurrent unit (GRU) neural network using back-propagation through time?,,0,0,,,,CC BY-SA 4.0 23999,2,,23996,10/10/2020 8:35,,4,,"

In reinforcement learning (RL), there are some agents that need to know the state transition probabilities, and other agents that do not need to know. In addition, some agents may need to be able to sample the results of taking an action somehow, but do not strictly need to have access to the probability matrix. This might be the case if the agent is allowed to backtrack for instance, or to query some other systems that simulates the target environment.

Any agent that needs to have access to the state transition matrix, or look-ahead samples of the environment is called model-based. The model in this case can either be a distribution model i.e. the state transition matrix, or it can be a sampling model that simulates the outcome from a given state/action combination.

The state transition function $p(r, s'|s, a)$ which returns the probability of observing reward $r$ and next state $s'$ given the start state $s$ and action $a$, is another way to express the distribution model. It often maps simply to the state transition matrix, but can be a more complete description of the model.

One example model-based approach is Value Iteration, and that requires access to the full distribution model in order to process value update steps. Also, any reinforcement learning that involves planning must use some kind of model. MCTS, as used in AlphaGo, uses a sampling model for instance.

Many RL approaches are model-free. They do not require access to a model. They work by sampling from the environment, and over time learn the impact on expected results due to behaviour of the unknown state transition function. Example methods that do this are Monte Carlo Control, SARSA, Q learning, REINFORCE.
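
As a small illustration of the difference (a sketch only; P and R stand for a tabular distribution model, which the model-free update never touches):

import numpy as np

# Model-based: one sweep of value iteration needs the full model P[s, a, s'] and R[s, a]
def value_iteration_sweep(V, P, R, gamma=0.9):
    return np.max(R + gamma * np.einsum('sap,p->sa', P, V), axis=1)

# Model-free: a Q-learning update only needs one sampled transition (s, a, r, s')
def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    return Q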

It is possible to combine model-free and model-based methods by using observations to build an approximate model of the environment, and using it in some form of planning. Dyna-Q is an approach which does this by simply remembering past transitions and re-using them in the background to refine its value estimates. Arguably, the experience replay table in DQN is a similar form of background planning (the algorithm is essentially the same). However, more sophisticated model-learning and reuse is not generally as successful, and is not seen commonly in practice. See How can we estimate the transition model and reward function?

In general, model-based methods on the same environment can learn faster than model-free methods, since they start with more information that they do not need to learn. However, it is quite common to need to learn without having an accurate model available, so there is a lot of interest in model-free learning. Sometimes an accurate model is possible in theory, but it would be more work to calculate predictions from the model than to work statistically from the observations.

",1847,,1847,,10/10/2020 13:10,10/10/2020 13:10,,,,0,,,,CC BY-SA 4.0 24001,1,,,10/10/2020 12:51,,4,74,"

I have been working on some RL project, where the policy is controlling the robot using its joint angles.Throughout the project I have noticed some phenomenon, which caught my attention. I have decided to create a very simplified script to investigate the problem. There it goes:

The environment

There is a robot, with two rotational joints, so 2 degrees of freedom. This means its continuous action space (joint rotation angle) has a dimensionality of 2. Let's denote this action vector by a. I vary the maximum joint rotation angle per step from 11 to 1 degrees and make sure that the environment is allowed to do a reasonable amounts of steps before the episode is forced to terminate on time-out.

Our goal is to move the robot by getting its current joint configuration c closer to the goal joint angle configuration g (also two dimensional input vector). Hence, the reward I have chosen is e^(-L2_distance(c, g)).

The smaller the L2_distance, the exponentially higher the reward, so I am sure that the robot is properly incentivised to reach the goal quickly.

Reward function (y-axis: reward, x-axis: L2 distance):

So the pseudocode for every step goes like:

  • move the joints by predicted joint angle delta

  • collect the reward

  • if time-out or joint deviates too much into some unrealistic configuration: terminate.

Very simple environment, not to have too many moving parts in our problem.
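
For concreteness, the environment step can be sketched like this (illustrative names and termination threshold, not the actual code):

import numpy as np

class JointReachEnv:
    def __init__(self, n_joints=2, max_step_deg=5.0, max_steps=200):
        self.n = n_joints
        self.max_step = np.deg2rad(max_step_deg)  # maximum joint rotation per step
        self.max_steps = max_steps

    def reset(self):
        self.c = np.zeros(self.n)                          # current joint angles
        self.g = np.random.uniform(-np.pi, np.pi, self.n)  # goal joint angles
        self.t = 0
        return np.concatenate([self.c, self.g])            # observation = [c, g]

    def step(self, action):
        # action in [-1, 1]^n is scaled to the maximum joint rotation per step
        self.c = self.c + np.clip(action, -1.0, 1.0) * self.max_step
        self.t += 1
        dist = np.linalg.norm(self.c - self.g)
        reward = np.exp(-dist)                             # reward as defined above
        done = self.t >= self.max_steps or np.any(np.abs(self.c) > 2 * np.pi)
        return np.concatenate([self.c, self.g]), reward, done, {}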

RL algorithm

I use Catalyst framework to train my agent in the actor-critic setting using TD3 algorithm. By using a tested framework, which I am quite familiar with, I am quite sure that there are no implementational bugs.

The policy is goal-driven, so the actor consumes the concatenated current and goal joint configurations: a = policy([c, g])

The big question

When the robot has only two degrees of freedom, the training quickly converges and the robots learns to solve the task with high accuracy (final L2 distance smaller than 0.01).

Performance of the converged 2D agent. y-axis: joint angle value, x-axis: no of episodes. Crosses denote the desired goal state of the robot.:

However, if the problem gets more complicated and I increase the joint dimensions to 4D or 6D, the robot initially learns to approach the target, but it never "fine-tunes" its movement. Some joints tend to oscillate around the end-point, and some of them tend to overshoot.

I have been experimenting with different ideas: making the network wider and deeper, changing the action step. I have not tried optimizer scheduling yet. No matter how many samples the agent receives or how long it trains, it never learns to approach targets with the required degree of accuracy (L2 distance smaller than 0.05).

Performance of the converged 4D agent. y-axis: joint angle value, x-axis: no of episodes. Crosses denote the desired goal state of the robot.:

Training curve for 2D agent (red) and 4D agent (orange). 2D agent quickly minimises the L2 distance to something smaller than 0.05, while the 4D agent struggles to go below 0.1.:

Literature research

I have looked into papers which describe motion planning in joint space using the TD3 algorithm.

There are not many differences from my approach: Link 1 Link 2

Their problem is much more difficult because the policy needs to also learn a model of the obstacles in joint space, not only the notion of the goal. The only thing that is special about them is that they use quite wide and shallow networks, but this is the only peculiar thing. I am really interested in what you would advise me to do so that the robot can reach high accuracy in higher joint-configuration dimensions. What am I missing here?!

Thanks for any help in that matter!

",41525,,32410,,11/14/2020 2:16,11/14/2020 2:16,Difficulty in agent's learning with increasing dimensions of continuous actions,,0,8,,,,CC BY-SA 4.0 24005,1,,,10/10/2020 15:43,,4,498,"

In reinforcement learning, the return is defined as some function of the rewards. For example, you can have the discounted return, where you multiply the rewards received at later time steps by increasingly smaller numbers, so that the rewards closer to the current time step have a higher weight. You can also have $n$-step returns or $\lambda$-returns.

Recently, I have come across the concept of return-to-go in a few research papers, such as Prioritized Experience Replay (appendix A. Prioritization Variants, p. 12) or Being Optimistic to Be Conservative: Quickly Learning a CVaR Policy (section Theoretical Analysis, p. 3).

What exactly is the return-to-go? How is it mathematically defined? In which situations do we need to care about it? The name suggests that this is the return starting from a certain time step $t$, but wouldn't this be the same thing as the return (which is defined starting from a certain time step $t$ and often denoted as $G_t$ for that same reason)?

There is also the concept of reward-to-go. For example, the reward-to-go is analyzed in the paper Learning the Variance of the Reward-To-Go, which states that the expected reward-to-go is the value function, which seems to be consistent with this explanation of the reward-to-go, where the reward-to-go is defined as

$$\hat{R}_{t} \doteq \sum_{t^{\prime}=t}^{T} R\left(s_{t^{\prime}}, a_{t^{\prime}}, s_{t^{\prime}+1}\right)$$

We also had a few questions that involve the reward-to-go: for example, this or this. How is the return-to-go related to the reward-to-go? Are they the same thing? For example, in this paper, the return-to-go seems to be used as a synonym for reward-to-go (as used in this article), i.e. they call $R(t)$ the "return to-go" (e.g. on page 2), which should be the return starting from time step $t$, which should actually be the reward-to-go.

",2444,,2444,,10/10/2020 15:48,10/10/2020 15:48,What is the return-to-go in reinforcement learning?,,0,2,,,,CC BY-SA 4.0 24007,1,24009,,10/10/2020 16:19,,2,120,"

I am new to neural networks and my questions are still very basic. I know that most neural networks allow and even require the user to choose hyper-parameters like:

  • number of hidden layers
  • number of neurons in each layer
  • number of inputs and outputs
  • batch size and number of epochs, and some settings related to back-propagation and gradient descent

But as I keep reading and watching YouTube videos, I understand that there are other important "mini-parameters", such as:

  • activation functions type

  • activation functions fine-tuning (for example shift and slope of sigmoid)

  • whether there is an activation function in the output

  • range of weights (are they from zero to one or from -1 to 1 or -100 to +100 or any other range)

  • are the weights normally distributed, or are they just random

etc...

Actually the question is:

Part a:

Do I understand correctly that most neural networks do not allow you to change those "mini-parameters", as long as you are using "ready-made" solutions? In other words, if I want to have access to those "mini-parameters", do I need to program the whole neural network by myself, or are there "semi-finished products"?

Part b: (edited) For someone who uses neural networks as an everyday routine tool to solve problems (like a data scientist), how common is it, and how often do those people deal with fine-tuning the things which I refer to as "mini-parameters"? Or are those parameters usually adjusted by the neural network developers who create frameworks like PyTorch, TensorFlow, etc.?

Thank you very much

",36453,,36453,,10/11/2020 7:57,10/11/2020 7:57,Which hyperparameters in neural network are accesible to users adjustment,,1,5,,,,CC BY-SA 4.0 24008,1,,,10/10/2020 16:20,,3,48,"

I am trying to implement novelty search; I understand why it can work better than the standard genetic-algorithm-based solution, which just rewards according to the objective. I am working on a problem which requires generating a fixed number of points in a 2D box centered at the origin. In this problem, how can I identify what counts as a novel configuration of points?

Note: I have thought of one way of doing this. We define the mean of one configuration of points to be the mean of all points in that configuration (say this tuple is $(m_x, m_y)$). We store the means of all configurations generated so far; then, for a new configuration, its novelty can be defined as the distance of the mean of this new configuration from $(m_x, m_y)$.
But I think this will not work well, as some very different configurations of points can also have the same mean.

",41487,,41487,,10/11/2020 11:26,10/14/2020 15:17,Measuring novel configuration of points,,1,0,,,,CC BY-SA 4.0 24009,2,,24007,10/10/2020 17:07,,3,,"

In general, many of the parameters you mentioned are called hyperparameters. All hyperparameters are user-adjusted (or user-programmed) in the training phase. Some hyperparameters are:

  • learning rate,
  • batch size,
  • epochs,
  • optimizer,
  • layers,
  • activation functions etc.

To answer part (a) of your question, there are obviously many frameworks and libraries, for example in Python: TensorFlow, PyTorch and so on. You might never create a net from the very beginning; maybe only in order to understand the forward and backpropagation algorithms. When we talk about "from scratch" networks, we mean that these networks are trained from scratch, with learnable weights and chosen hyperparameters, with no transfer learning.
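
For example, in Keras most of the "mini-parameters" you list are simply arguments the user passes in; this is only an illustrative sketch (the values here are placeholders, not recommendations):

import tensorflow as tf

model = tf.keras.Sequential([
    # the user chooses layer sizes, activation functions and weight initializers
    tf.keras.layers.Dense(64, activation='relu',
                          kernel_initializer='he_normal',
                          bias_initializer='zeros',
                          input_shape=(10,)),
    # output layer with or without an activation, depending on the task
    tf.keras.layers.Dense(3, activation='softmax')
])

# the user also chooses the optimizer, the learning rate and the loss
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss='categorical_crossentropy',
              metrics=['accuracy'])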

To answer part (b) of your question, I understand from it that you are asking when a net is good enough. Depending on your data, of course, a neural network is good enough when it is trained adequately on them. That is, you should be aware of overfitting, underfitting, and, in general, of the model you are trying to train, with all its parameters and hyperparameters.

Since you are at the very beginning with machine learning, I propose you read some books, in order to get everything needed in terms of the mathematical and computer-science aspects.

",36055,,,,,10/10/2020 17:07,,,,3,,,,CC BY-SA 4.0 24013,1,,,10/10/2020 19:36,,1,473,"

I am aware that back-propagation through time is used for training recurrent neural networks, but I am not able to understand how this works for the bi-directional versions of recurrent neural networks.

So, I was hoping someone could help me with:

  1. Understanding, with an example, the training of bi-directional recurrent neural networks using back-propagation through time. (I tried following the original paper https://ieeexplore.ieee.org/document/650093, but the part where they perform the backward pass for training was confusing to me.)
",41519,,36055,,10/27/2020 1:29,10/27/2020 1:29,How does back-propagation through time work for optimizing the weights of a bidirectional RNN?,,1,0,,,,CC BY-SA 4.0 24020,1,,,10/11/2020 0:13,,5,1144,"

I am trying to do the standard MNIST dataset image recognition test with a standard feed-forward NN, but my network failed pretty badly. Now I have debugged it quite a lot and found & fixed some errors, but I had a few more ideas. For one, I am using the sigmoid activation function and MSE as an error function, but the internet suggests that I should rather use softmax for the output layer, and cross-entropy loss as an error function. Now I get that softmax is a nice activation function for this task, because you can treat the output as a probability vector. But, while being a nice thing to have, that's more of a convenience thing, isn't it? Easier to visualize?

But when I looked at what the derivative of softmax & CEL combined is (my plan was to compute that in one step and then treat the activation function of the last layer as linear, as not to apply the softmax derivative again), I found:

$$\frac{\partial E}{\partial i} = t - o$$

(With $i$ being the input of the last layer, $t$ the one hot target vector and $o$ the prediction vector).

That is the same as the MSE derivative. So what benefits does softmax + CEL actually have when propagating, if the gradients produced by them are exactly the same?

",17769,,,,,3/11/2021 0:03,What is the advantage of using cross entropy loss & softmax?,,2,1,,,,CC BY-SA 4.0 24023,2,,23335,10/11/2020 8:43,,1,,"

You can calculate the memory requirement analytically, but it's still not going to beat a physical test in practice, as there are so many unknown variables in the system which can take up GPU memory. Maybe TensorFlow will decide to store the gradients, and then you have to take its memory usage into account as well.

The way I do it is by setting the GPU memory limit to a high value, e.g. 1 GB, and then testing the model's inference speed. Then I repeat the process with half the memory. I do this until the model refuses to run or the model's speed drops. For example, I start with 1 GB, then 512 MB, then 256 MB; eventually I get to 32 MB and the model's speed drops. At 16 MB, the model refuses to run. So I know that 64 MB is the minimum requirement I should use for my model. If I want a more precise number, I'd repeat the binary search a couple more times between 64 MB and 32 MB.

You can see how to limit the GPU memory here: https://www.tensorflow.org/guide/gpu#limiting_gpu_memory_growth

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
  # Restrict TensorFlow to only allocate 1GB of memory on the first GPU
  try:
    tf.config.experimental.set_virtual_device_configuration(
        gpus[0],
        [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])
    logical_gpus = tf.config.experimental.list_logical_devices('GPU')
    print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
  except RuntimeError as e:
    # Virtual devices must be set before GPUs have been initialized
    print(e)
",20819,,20819,,10/12/2020 3:48,10/12/2020 3:48,,,,0,,,,CC BY-SA 4.0 24024,1,,,10/11/2020 8:47,,2,1062,"

I am pretty new to RL and I am trying to code a simple RL task with pytorch.

The goal/task is the following: The initial state is $t_o$ and the agent takes an action $\Delta_t$: $t_o +\Delta_t = t_1$.

If $t_1$ equals 450 or 475, then the agent gets a reward; otherwise it does not get a reward.

I am training the agent with the DQN algorithm on a NN (with 2 linear layers: the first layer has n_in=1, n_out=128, and the second layer has n_in=128, n_out=5):

The observation space ($t_i$) has size 700: $t_i \in [0, 700)$.
The action space ($\Delta_t$) has size 5: $\Delta_t \in \{-50, -25, 0, 25, 50\}$.

epsilon_start = 0.9    # e-greedy threshold start value
epsilon_end = 0.01     # e-greedy threshold end value
epsilon_decay = 200    # e-greedy threshold decay
learning_rate = 0.001  # NN optimizer learning rate
batch_size = 64        # Q-learning batch size

Unfortunately, it does not seem to converge to the values $t_i=$ 450 or 475. It doesn't seem to care about getting a reward. How can I improve my code so that the agent learns what I am trying to teach it? I put my code below in case the explanations were not clear enough:


import gym
from gym import spaces

class RL_env(gym.Env):
    metadata = {'render.modes': ['human']}

    
    def __init__(self):
        super(RL_env, self).__init__()
        
        n_actions_delta = 1 #delta_t
        self.action_space = spaces.Discrete(5)
        
        n_observations = 1 #time
    
        self.observation_space = spaces.Discrete(700)
       
        #initial time
        self.time = 0
        
        self.done = 0
        self.reward = 0

    def reset(self):
        self.reward = 0
        self.done = False
        return self.reward
       
    def step(self,delta_t):
        print('self time',self.time)
        d_t = np.arange(-50,70,25)
        
        self.time = (self.time + d_t[delta_t])%700
        print('delta time',d_t[delta_t],'-->','self time',self.time)
        
       
        
        if self.time == 475 or self.time == 450:
            self.reward = 1
            
            
        else:
            self.reward += 0
        
            
        info = {}
        print('total reward',self.reward)
        print('\n')
        return self.time,self.reward, self.done, info
    

    
    
    def render(self, mode='human', close=False):
        print()
import gym
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.autograd import Variable
from torch.distributions import Categorical
dtype = torch.float
device = torch.device("cpu")
import random
import math
import sys
if not sys.warnoptions:#igrnore warnings
    import warnings
    warnings.simplefilter("ignore")

#hyper parameters
epsilon_start=0.9
#e-greedy threshold start value
epsilon_end=0.01#e-greedy threshold end value
epsilon_decay=200#e-greedy threshold decay
learning_rate=0.001# NN optimizer learning rate
batch_size=64#Q-learning batch size 

env = RL_env()


#use replay memory (-> to stabilize and improve our algorithm)for training: store transitions observed by agent,
#then reuse this data later
#sample from it randomly (batch built by transitions are decorrelated)
class ReplayMemory:#allowes the agent to learn from earlier memories (speed up learning and break undesirable temporal correlations)
    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []
    def push(self, transition):#saves transition
        self.memory.append(transition)
        if len(self.memory)>self.capacity:#if length of memory arra is larger than capacity (fixed)
            del self.memory[0]#remove 0th element

    def sample(self, batch_number):#samples randomly a transition to build batch
        return random.sample(self.memory, batch_number)

    def __len__(self):
        return len(self.memory)
    
#Dqn NN (we want to maximize the discounted, cumulative reward)
#idea of Q-learning: we want to approximate with NN maximal Q-function (gives max return of action in given state)
#training update rule: use the fact that every Q-function for some policy obeys the Bellman equation
#difference between the two sides of the equality is known as the temporal difference error (we want to min -> Huber loss)
#calculate over batch of transitions sampled from the replay memory
class DqnNet(nn.Module):
    def __init__(self):
        super(DqnNet, self).__init__()
        
        state_space = 1
        action_space = env.action_space.n
        num_hid = 128
        self.fc1 = nn.Linear(state_space, num_hid)
        self.fc2 = nn.Linear(num_hid, action_space)
        self.gamma=0.5 #Q-learning discount factor (ensures that reward sum converges, 
                        #makes actions from far future less important)
    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.sigmoid(self.fc2(x))
        return x

#select action accordingly to epsilon greedy policy
#sometimes we use model for choosing action, other times sample uniformly 
#probability of choosing a random action will start at epsilon_start and will decay (epsilon_decay) exponentially
#towards epsilon_end
steps_done=0
def predict_action(state):
    global steps_done
    sample=random.random()#random number
    eps_threshold=epsilon_end+(epsilon_start-epsilon_end)*math.exp(-1.*steps_done/epsilon_decay)
    steps_done += 1
    if sample>eps_threshold:  
        x  = eps_threshold,model(Variable(state,).type(torch.FloatTensor)).data.max(0)[1].view(1, 1)
        return x#chose action from model
    
    else:
        x = eps_threshold,torch.tensor([[random.randrange(env.action_space.n)]])
        return x#choose random action uniformly

#wtih the update_policy function we perform a single step of the optimization
#first sample a batch, concatenate all the tensors into a single one, compute Q-value and max Q-value, 
#and combine them into loss
def update_policy():
    if len(memory)<batch_size:#we want to sample a batch of size 64
        return
    transitions = memory.sample(batch_size)#take random transition batch from experience replay memory
    batch_state, batch_action, batch_next_state, batch_reward = zip(*transitions)#convert batch-array of Transitions
                                                                              #to Transition of batch-arrays   
    #-->zip(*) takes iterables as arguments and return iterator
    
    batch_state = Variable(torch.cat(batch_state))#concatenate given sequence tensors in the given dimension
    batch_state = batch_state.resize(batch_size,1)
    batch_action = Variable(torch.cat(batch_action))
    batch_next_state = Variable(torch.cat(batch_next_state))
    batch_next_state = batch_next_state.reshape(batch_size,1)
    batch_reward = Variable(torch.cat(batch_reward))
    
    #print('model batch state',model(Variable(batch_state[0])))
    current_q_values = model(batch_state).gather(1, batch_action)#current Q-values estimated for all actions,
                                                                 #compute Q, then select the columns of actions taken,
                                                                 #these are the actions which would've been taken
                                                                 #for each batch state according to policy_net
    max_next_q_values = model(batch_next_state).detach().max(1)[0]#predicted Q-values for non-final-next-states
                                                                  #(-> gives max Q)
    expected_q_values = batch_reward + (model.gamma * max_next_q_values)

    #loss is measured from error between current and newly expected Q values (Huber Loss)
    loss = F.smooth_l1_loss(current_q_values, expected_q_values)

    # backpropagation of loss to NN --> optimize model
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss, np.sum(expected_q_values.numpy())
    

def train(episodes):
    scores = []
    Losses = []
    Bellman = []
    Epsilon = []
    Times = []
    Deltas = []
    
    
    
    for episode in range(episodes):  
        state=env.reset()#reset environment
        print('\n')
        print('episode',episode)
           
        epsilon_action = predict_action(torch.FloatTensor([state]))
        
        action = epsilon_action[1] #after each time step predict action

        next_state, reward, done,info = env.step(action.item())#step through environment using chosen action
        
        epsilon = epsilon_action[0]
        Epsilon.append(epsilon)
        print(reward,'reward')
              
        state=next_state
        Times.append(state)
        scores.append(reward)            

    
        memory.push((torch.FloatTensor([state]),action,torch.FloatTensor([next_state]),
                         torch.FloatTensor([reward])))#action is already a tensor
        up = update_policy()#update_policy()#update policy
            
        if up != None:
            Losses.append(Variable(up[0]))
            print('loss',Variable(up[0]))
            Bellman.append(up[1])

        #calculate score to determine when the environment has been solved
        mean_score=np.mean(scores[-50:])#mean of score of last 50 episodes
        #every 50th episode print score
        if episode%50 == 0:
            print('Episode {}\tScore: {}\tAverage score(last 50 episodes): {:.2f}'.format(episode,scores[-50:],mean_score))

    
    #print('Losses',Losses)
    Losses = torch.stack(Losses).numpy()
    #print('Losses',Losses)
    plt.plot(np.arange(len(Losses)),Losses)
    plt.xlabel('Training iterations')
    plt.ylabel('Loss')
    plt.show()
    
    Bellman = np.array(Bellman)
    #print('Bellman',Bellman,'\n')
    plt.plot(np.arange(len(Bellman)),Bellman)
    plt.xlabel('Training iterations')
    plt.ylabel('Bellman target')
    plt.show()
    
    #print('scores',scores)
    plt.plot(np.arange(len(scores)),scores)
    plt.xlabel('Training iterations')
    plt.ylabel('Reward')
    plt.show()
    
    #print('epsilon',Epsilon)
    plt.plot(np.arange(len(Epsilon)),Epsilon)
    plt.xlabel('Training iterations')
    plt.ylabel('Epsilon')
    plt.show()
    
    print('Times',Times[-25:])
    print('Deltas',Deltas[-25:])
    
    Times = np.array(Times)
    print('Times',Times)
    #plt.figure(figsize=(31,20))
    plt.figure(figsize=(9,7))
    plt.plot(np.arange(len(Times)),(np.array(Times)))
    plt.xlabel('Training iterations')
    plt.ylabel('t')
    plt.show()
     
    Times_1 = np.array(Times[-300:])
    print('t',Times)
    plt.figure(figsize=(9,7))
    plt.plot(np.arange(len(Times_1)),(np.array(Times_1)))
    plt.xlabel('Last 300 Training iterations')
    plt.ylabel('t')
    plt.ylim(0,1000)
    plt.show()
    
model = DqnNet()#policy         
memory = ReplayMemory(20000)
optimizer = optim.Adam(model.parameters(), lr=learning_rate)

train(10000)
",41538,,2904,,7/12/2022 21:42,7/12/2022 21:42,"Reinforcement learning simple problem: agent not learning, wrong action",,1,0,,,,CC BY-SA 4.0 24026,2,,24020,10/11/2020 9:46,,-1,,"

If you look at the definition of the cross-entropy (e.g. here), you will see that it is defined for probability distributions (in fact, it comes from information theory). You can also show that the maximization of the (binomial/Bernoulli) log-likelihood is equivalent to the minimization of the cross-entropy, i.e. when you minimize the cross-entropy you actually maximize the log-likelihood of the parameters given your labelled data. Hence the use of the softmax is theoretically founded.

Regarding the supposed derivative of the cross-entropy loss function preceded by the softmax, even if that derivative is correct (I didn't think about it and I don't want to think about it now), note that then $t - o$ is different depending on whether $o$ is a probability vector or an unnormalized vector (which can take arbitrarily large numbers). If $o$ is a probability vector and $t$ a one-hot encoded vector (i.e. also a probability vector), then all numbers of $t - o$ will be in the range $[-1, 1]$. However, if $o_i$ can be arbitrarily large, e.g. $o_i = 10$, then $t_i - o_i \in [-10, -9]$. So, the propagated error would be different if $o$ was not a probability vector.
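
As a side note, one can check that derivative numerically. Here is a minimal NumPy sketch (written just for this answer, not from any library) that compares the analytical expression with a finite-difference estimate of the gradient of the cross-entropy with respect to the pre-softmax inputs (up to the sign convention used in the question):

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

z = np.array([2.0, -1.0, 0.5])   # unnormalized inputs of the last layer
t = np.array([0.0, 1.0, 0.0])    # one-hot target

analytical = softmax(z) - t      # the claimed combined derivative (o - t)

eps = 1e-6
numerical = np.zeros_like(z)
for j in range(len(z)):
    zp, zm = z.copy(), z.copy()
    zp[j] += eps
    zm[j] -= eps
    # central finite difference of the cross-entropy loss w.r.t. z_j
    numerical[j] = (-np.sum(t * np.log(softmax(zp)))
                    + np.sum(t * np.log(softmax(zm)))) / (2 * eps)

print(np.allclose(analytical, numerical))   # True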

",2444,,2444,,10/11/2020 20:46,10/11/2020 20:46,,,,0,,,,CC BY-SA 4.0 24028,1,,,10/11/2020 10:38,,2,100,"

For example, what if AlphaZero plays against an opponent who has the right to move chess pieces any way she wants, or to make more than one move in a turn? Will a neural network adapt to that, as it adapted to an absurd move made by Lee Sedol in 2015?

",19666,,2444,,10/11/2020 14:41,10/12/2020 8:35,What happens when an opponent a neural network is playing with does not obey the rules of the game (i.e. cheats)?,,2,2,,,,CC BY-SA 4.0 24029,2,,24028,10/11/2020 12:15,,2,,"

The behaviour when playing against "cheats" depends on how the agent has been trained, and how different the game becomes from the training scenarios. It will also depend on how much of the agent's behaviour is driven by training, and how much by just-in-time planning.

In general, unless game playing bots are written specifically to detect or cope with opponents that are given unfair advantages, they will continue to play in the same style as if the cheating had not occurred, and assuming that the rules are still being followed strictly. If the cheating player only makes one or two rules-breaking moves, and the resulting game state is still something feasible within the game, then the agent should continue to play well. If the agent significantly outclasses the human opponent, it may still win.

A completed, trained agent will not adapt its style to "now my opponent can cheat". An agent still being trained could do so in theory, but it would take many games with cheating allowed for it to learn tactics that cope with an opponent that had an unfair advantage.

Agents that plan by looking ahead during play can cope with more unusual/unseen game states - things that may not have been seen in training. However, they still look ahead on the assumption that game play is as designed/trained for; they cannot adapt to new rules unless those rules are added to the planning by the bot designers. For instance, if the allowed cheating was a limited number of extra moves, but only for the human player, the effects of that could be coded into the planning engine, and the bot would "adapt" with help from its designers.

[AlphaGo] adapted to an absurd move made by Lee Sedol in 2015?

Assuming you are referring to game 4, then, as far as I know, AlphaGo did not "adapt" to this play. After Lee Sedol managed to put it in a losing position, it started playing badly, as it could not find a winning strategy from the board positions it was in, and it could not recover. I don't think any effort was put into refining AlphaGo during this game or afterwards to patch it for game 5.

",1847,,,,,10/11/2020 12:15,,,,0,,,,CC BY-SA 4.0 24030,2,,24013,10/11/2020 15:09,,1,,"

I have not implemented the backprop of a bi-directional RNN from scratch, so I can't be sure my answer is correct, but I hope it helps.

You can see how bi-directional RNNs work from this video by Andrew Ng. I got the image below from that video:

For more clarity:

So if you know how to backprop through a simple RNN, you should be able to do so for bi-directional RNN.
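
In case the figures do not come through, what they essentially show (using the notation from that lecture, which is my assumption here) is that the prediction at each time step combines a forward and a backward hidden state,

$$\hat{y}^{\langle t \rangle} = g\left(W_y\left[\overrightarrow{a}^{\langle t \rangle}; \overleftarrow{a}^{\langle t \rangle}\right] + b_y\right),$$

so, during back-propagation through time, the gradient from each $\hat{y}^{\langle t \rangle}$ flows into both chains: it is propagated from $t = T_x$ down to $t = 1$ through the forward-direction RNN and from $t = 1$ up to $t = T_x$ through the backward-direction RNN.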

If you need more detail, let me know.

",41547,,,,,10/11/2020 15:09,,,,6,,,,CC BY-SA 4.0 24031,2,,24024,10/11/2020 15:16,,2,,"

I see some issues in your code of the environment. Firstly, and probably most importantly, you should not be incrementing the reward. In your code, every time the agent hits $t=475$ for example, the reward given by the environment increases by 1. So if the agent oscillates between $t=450$ and $t=475$, at each timestep the environment gives a greater and greater reward. Your Q values have no reason to converge. Instead of incrementing the reward, you should just return a reward of $1$ at those two desired states and return $0$ otherwise.

Secondly, it appears that you don't reset the state during reset(); I think you should have self.time = 0 in there (unless this was intentional).
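
For instance, a minimal sketch of what I mean for those two methods (adapting your own code, so treat it as illustrative rather than a drop-in fix):

def reset(self):
    self.time = 0        # put the agent back at the start of the episode
    self.done = False
    return self.time     # return the state

def step(self, delta_t):
    d_t = np.arange(-50, 70, 25)
    self.time = (self.time + d_t[delta_t]) % 700
    # sparse, non-accumulating reward: 1 only in the two goal states
    reward = 1 if self.time in (450, 475) else 0
    return self.time, reward, self.done, {}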

Edit : I also question why you're using DQN for this problem. This problem should be very easy to solve with standard (tabular) Q Learning.

Edit 2 : Your training loop looks wrong. You reset the environment at each iteration in your loop, so your episodes are one timestep long each. What I think you want instead is to have another loop after the reset() call, something like:

for episode in range(episodes):
  s = env.reset()
  episode_score = 0 
  for step in range(max_timesteps):
    a = agent.predict_action(s)
    ns, r, d, _ = env.step(a)
    episode_score += r
   ... 
    agent.update_policy()
   ... 
  scores.append(episode_score)
 ... 
",37829,,37829,,10/11/2020 21:04,10/11/2020 21:04,,,,7,,,,CC BY-SA 4.0 24032,2,,23877,10/11/2020 16:01,,0,,"

I know this is not a direct answer to your question, but I couldn't comment on your post, so I decided to post this as an answer (I may delete it after you receive a better answer).

I think this playlist by sentdex can be handy, as he goes through a lot of details while training a neural network model that can drive cars in GTA-V by simply looking at each frame of the game. You can find the code for each step at this link.

",41547,,,,,10/11/2020 16:01,,,,1,,,,CC BY-SA 4.0 24033,2,,23877,10/11/2020 17:49,,1,,"

In short: yes, you must allow a "do nothing" decision as a first-level result.

Your system must decide on the action to be taken, including the "do nothing" action. This is different from low network outputs, which can be translated as "don't know what to do".

In other words, the network can result in:

  • "I don't know what to do now" when all results in the output have low probabilities. (Obviously, this is a bad network result, to be fixed as much as possible).
  • "I know I must do nothing", when "do nothing" action has high probability, greater than the others.
  • "I know I must do W", when "W" action has high probability, greater than the others.
  • ...

Kind regards.

",12630,,12630,,10/14/2020 18:58,10/14/2020 18:58,,,,0,,,,CC BY-SA 4.0 24034,2,,24028,10/11/2020 18:10,,1,,"

"Will a neural network adapt to that ?"

No.

The big functional difference between the human mind and neural networks: the human mind learns by itself, while a NN does not.

If by NN we mean the net with its layers, weights, etc., then this is a static system, unable to learn anything new. The back-propagation algorithm that made the NN intelligent runs outside the NN itself, in a different stage, on different hardware and software; software that is not a NN but classic programming.

Thus, a NN never learns anything while playing, driving, or performing any other action it is designed for.

If, in the learning stage, some cheats are performed, the learning algorithm will learn and adapt to these cheats; thus, the resulting NN configuration will be able to react to these cheats in the best way. But this is equivalent, in fact, to learning a different game where these cheats are valid moves.

",12630,,12630,,10/12/2020 8:35,10/12/2020 8:35,,,,0,,,,CC BY-SA 4.0 24035,2,,17306,10/11/2020 18:34,,1,,"

Is there an AI technology out there or being developed that can predict human behaviour ?

If it could predict (all) human behavior, it could act as a human; thus, it would be the first real (strong) AI. This has not happened yet.

I must remark that the question contains a lot of weakly defined terms. Fixing these terms can help in working on the subject of the question:

  • "human behavior" : the behavior of an individual or the behavior of all humans as a set ? (bigger groups tends to be more predictable).
  • "irrational decision-makers" : it assumes that it exists a "rational" way of take all decisions.
  • "humans are perfectly rational, but obviously this isn't the case" : same as previous.
  • "better models and therefore better models of recessions" : recessions are not caused by "irrational decisions".

(In fact, each of these points could be an independent question on this site, more in line with its manifesto than the plague of questions about neural nets.)

",12630,,12630,,10/11/2020 18:51,10/11/2020 18:51,,,,0,,,,CC BY-SA 4.0 24038,1,,,10/11/2020 22:27,,0,119,"

My friend is working at a pizza shop. He takes cigarette breaks in an area that is covered by the public webcam of our town.

I now want to train a convolutional neural network to be able to detect when he is smoking.

Can somebody point me in the right direction as to which tools/tutorials I should look at for this classification task? I have already saved 18 hours' worth of pictures, one per minute. He is in 28 of these images; I will probably save a few more, maybe 2-3 days' worth. But I don't really know how to start.

",41554,,11539,,10/15/2020 2:46,10/15/2020 2:46,How can I train a CNN to detect when a person is smoking outside of shop given images from a video camera?,,1,0,,,,CC BY-SA 4.0 24039,2,,24020,10/11/2020 22:44,,1,,"

Short answer: larger gradients

That is not the derivative of the softmax function. $t - o$ is the combined derivative of the softmax function and the cross-entropy loss. The cross-entropy loss is used to simplify the derivative of the softmax function. In the end, you do end up with different gradients. It would be as if you ignored the sigmoid derivative when using MSE loss, so the outputs are different. Using softmax and cross-entropy loss has different uses and benefits compared to using sigmoid and MSE. It will help prevent gradient vanishing, because the derivative of the sigmoid function only has a large value over a very small region of its input. It is similar to using a different cross-entropy loss, where the combined derivative of the loss and the sigmoid is $t - o$.
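
As a quick illustration of that last point about the sigmoid derivative (a throwaway NumPy sketch, not from any particular library):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-10.0, -5.0, -1.0, 0.0, 1.0, 5.0, 10.0])
d = sigmoid(x) * (1.0 - sigmoid(x))   # derivative of the sigmoid
print(d)
# roughly [0.000045, 0.006648, 0.196612, 0.25, 0.196612, 0.006648, 0.000045]
# the derivative is only sizeable near 0, which is what causes vanishing gradients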

Information on derivatives of cross entropy with sigmoid function and with softmax function. I would also suggest some more research on cross entropy loss functions beyond my links.

",41026,,41026,,10/11/2020 22:57,10/11/2020 22:57,,,,2,,,,CC BY-SA 4.0 24040,1,,,10/12/2020 6:38,,1,132,"

I'm reading the paper Neural Ordinary Differential Equations and I have a simple question about the adjoint method. When we train a NODE, it uses a black-box ODESolver to compute gradients with respect to the model parameters, hidden states, and time. It uses another quantity $\mathbf{a}(t) = \partial L / \partial \mathbf{z}(t)$, called the adjoint, which satisfies another ODE. As I understand it, the authors build a single ODE that computes all the gradients $\partial L / \partial \mathbf{z}(t_{0})$ and $\partial L / \partial \theta$ by solving that single ODE. However, I can't understand how we know the value $\partial L / \partial \mathbf{z}(t_1)$, which corresponds to the initial condition for the ODE satisfied by the adjoint.

I'm using this tutorial as a reference, and it defines custom forward and backward methods for solving the ODE. However, for the backward computation (especially the ODEAdjoint class in the tutorial), we need to pass $\partial L / \partial \mathbf{z}$ for backpropagation, and this enables us to compute $\partial L / \partial \mathbf{z}(t_i)$ from $\partial L / \partial \mathbf{z}(t_{i+1})$, but we still need to know the adjoint value $\partial L / \partial \mathbf{z}(t_N)$. I do not understand well how PyTorch's autograd package works, and this seems to be a barrier to understanding this. Could anyone explain how it operates, and where $\partial L / \partial \mathbf{z}(t_1)$ (or $\partial L / \partial \mathbf{z}(t_N)$, if this is more convenient) comes from? Thanks in advance.


Here's my guess for the initial adjoint from a simple example. Let $d\mathbf{z}/dt = Az$ be a 2-dim linear ODE with given $A \in \mathbb{R}^{2\times 2}$. If we use Euler's method as an ODE solver, then the estimate for $z(t_1)$ is explicitly given as $$\hat{\mathbf{z}}(t_1) = \mathrm{ODESolve}(\mathbf{z}(t_0), f, t_0, t_1, \theta) = \left(I + \frac{t_1 - t_0}{N}A\right)^{N} \mathbf{z}(t_0) $$ where $N$ is the number of steps for Euler's method (so that $h = (t_1 - t_0) /N$ is the step size). If we use the MSE loss for training, then the loss will be $$ L(\mathbf{z}(t_1)) = \Bigl|\Bigl| \mathbf{z}_1 - \left(I + \frac{t_1 - t_0}{N}A\right)^N\mathbf{z}(t_0)\Bigr|\Bigr|_2^2 $$ where $\mathbf{z}_1$ is the true value at time $t_1$, which is $\mathbf{z}_1 = e^{A(t_1 - t_0)}\mathbf{z}(t_0)$. Since the adjoint $\mathbf{a}(t) = \partial L / \partial \mathbf{z}(t)$ satisfies $$\frac{d\mathbf{a}(t)}{dt} = -\mathbf{a}(t)^{T} \frac{\partial f(\mathbf{z}(t), t, \theta)}{\partial \mathbf{z}} = \mathbf{0},$$ $\mathbf{a}(t)$ is constant and we get $\mathbf{a}(t_0) = \mathbf{a}(t_1)$. So we do not need to use the augmented ODE for computing $\mathbf{a}(t)$. However, I still don't know what $\mathbf{a}(t_1) = \partial L / \partial \mathbf{z}(t_1)$ should be. If my understanding is correct, since $L = ||\mathbf{z}_1 - \mathbf{z}(t_1)||^{2}_{2}$, it seems that the answer might be $$ \frac{\partial L}{\partial \mathbf{z}(t_1)} = 2(\mathbf{z}(t_1) - \mathbf{z}_1). $$ However, this doesn't seem to be true: if it is, and if we have multiple data points at $t_1, t_2, \dots, t_N$, then the loss is $$ L = \frac{1}{N} \sum_{i=1}^{N}||\mathbf{z}_i -\mathbf{z}(t_i)||_{2}^{2} $$ and we may have $$ \frac{\partial L}{\partial \mathbf{z}(t_i)} = \frac{2}{N} (\mathbf{z}(t_i) - \mathbf{z}_i), $$ which means that we don't need to solve the ODE associated with $\mathbf{a}(t)$.

",30886,,30886,,10/12/2020 7:51,12/4/2022 9:09,Computation of initial adjoint for NODE,,1,0,,,,CC BY-SA 4.0 24042,2,,24038,10/12/2020 19:12,,1,,"

Since this is a classification problem, you will preferably use a CNN. Then you need to choose an architecture for the CNN, like VGGNet, ResNet or LeNet. You can find details on architectures here - Neural Network Architectures. As a beginner, you can use VGG 16. You can read about the architecture here - Medium.com blog on VGG 16.

which tools/tutorials i should look at for this classification task?

Tools that you can use:

  1. A python IDE like PyCharm or Jupyter Notebook
  2. Keras and Tensorflow packages

Since deep learning requires a lot of training data and demands huge computation power, you can opt for cloud-computing platforms like Google Colab or Azure to run your code on, unless you have enough GPU power on your local machine. The above tools are for the case where you want to write the code yourself. If you want to use a GUI (and not write code), Azure Machine Learning Studio is a starting point. The MATLAB Deep Learning Toolbox also provides an excellent GUI with pre-trained models for the above architectures. However, if you write code in MATLAB, then you have to ensure your target GPU is NVIDIA, as MATLAB (the Parallel Computing Toolbox, which supports doing computations on the GPU) supports CUDA only. It won't work with an AMD GPU.

If you opt to write code, you can find a step-by-step implementation of dogs vs. cats classification here - Dogs vs Cats classification.
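
If you go the coding route, a typical starting point is transfer learning on a pre-trained VGG16. The sketch below is only illustrative, and the folder names (images/smoking, images/not_smoking) are placeholders for your own data layout:

import tensorflow as tf

base = tf.keras.applications.VGG16(weights='imagenet', include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False   # keep the pre-trained convolutional features frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation='sigmoid')   # smoking vs. not smoking
])

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# 'images' contains one sub-folder per class, e.g. images/smoking and images/not_smoking
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    'images', label_mode='binary', image_size=(224, 224), batch_size=8)

model.fit(train_ds, epochs=10)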

",41564,,,,,10/12/2020 19:12,,,,4,,,,CC BY-SA 4.0 24043,1,,,10/12/2020 20:21,,1,157,"

I'm aware that convergence proofs for Monte Carlo tree search exist in the case of deterministic zero sum games and Markov decision processes.

I have come across research which applies MCTS to zero-sum stochastic games; however, I was unable to find a proof that such an approach is guaranteed to converge to the optimal solution.

If anyone is able to provide references, or an explanation of why MCTS is or is not guaranteed to converge to the optimal solution in this setting, I would appreciate it a lot.

",41567,,,,,10/12/2020 20:21,Is Monte Carlo tree search guaranteed to converge to the optimal solution in two player zero-sum stochastic games?,,0,0,,,,CC BY-SA 4.0 24045,1,24046,,10/12/2020 21:59,,2,1206,"

Consider the following line of code related to CNNs:

Conv2D(64, (3,3), strides=(2, 2), padding='same')

It is a convolution layer with a filter size of $3 \times 3$ and a stride of $2 \times 2$.

I am confused about the need for $64$ filters.

Are they all doing the same task? Obviously not (one filter would be enough in that case).

Then how does each filter differ? Is it in how it moves over the input matrix? Or is it in the values contained in the filter itself? Or does it differ in both movement and content?

I am finding it difficult to visualize.

",18758,,2444,,10/13/2020 21:25,10/15/2020 21:18,What is the need for so many filters in a CNN?,,2,0,,,,CC BY-SA 4.0 24046,2,,24045,10/12/2020 22:40,,3,,"

Then how does each filter differ? Is it in how it moves over the input matrix? Or is it in the values contained in the filter itself? Or does it differ in both movement and content?

The filters (aka kernels) are the learnable parameters of the CNN, in the same way that the weights of the connections between the neurons (or nodes) are the learnable parameters of a multi-layer perceptron (or feed-forward neural network).

So, the value of these filters is not fixed or pre-determined, but will depend on how you train the CNN, i.e. the learning algorithm, the objective function and the data. If you use gradient descent as the learning algorithm, you will be minimizing a loss (aka cost or error) function (e.g. the cross-entropy, in the case of classification problems). To do that, you need to find the gradient of the loss function with respect to the filters. You then apply a step of gradient descent (i.e. you add a scaled version of the gradient of the loss function with respect to the parameters to the parameters), so that this loss decreases.

To answer your question more directly, the only thing that usually changes is just the value of the filters. The convolution (or cross-correlation) operation is the same for all filters.

Why do you use more than one filter? The usual explanation is that each filter, when convolved with the input, will extract different features from it, and the specific features that they will extract will depend on the specific values of the filters, which, in turn, depend on the data, so we can say that CNNs are data-driven feature extractors. If you are familiar with image processing techniques, then you know that different filters, when convolved with the same image, can have different effects (e.g. blurring or de-noising).
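
To make that last point concrete, here is a small illustrative sketch (not part of the training process) where two hand-crafted $3 \times 3$ kernels are applied to the same image and produce very different feature maps; in a CNN, the kernel values are learned instead of hand-crafted:

import numpy as np
from scipy.signal import convolve2d

image = np.random.rand(28, 28)            # stand-in for any grayscale image

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])          # responds to vertical edges
box_blur = np.ones((3, 3)) / 9.0          # smooths the image

edges = convolve2d(image, sobel_x, mode='same')
blurred = convolve2d(image, box_blur, mode='same')
# same operation (convolution), different kernel values, different extracted features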

",2444,,2444,,10/15/2020 21:18,10/15/2020 21:18,,,,0,,,,CC BY-SA 4.0 24047,2,,24045,10/12/2020 22:42,,1,,"

All filters move across the same area, but the filter values (also called filter kernels) are different for each filter. This makes it possible to "filter out" different features.

",34358,,,,,10/12/2020 22:42,,,,0,,,,CC BY-SA 4.0 24048,2,,22316,10/12/2020 23:01,,1,,"

We don't need multiple environments. On-policy algorithms require that new training samples are collected with the newest policy, so we can't use an experience buffer. However we can use the newest policy to collect multiple samples, even over multiple epochs, before updating the weights. This update can be a batch update.

",34358,,,,,10/12/2020 23:01,,,,0,,,,CC BY-SA 4.0 24049,1,,,10/13/2020 1:30,,2,52,"

I am currently implementing the Moth-Flame Optimization (MFO) Algorithm, based on the paper: Moth-Flame Optimization Algorithm: A Novel Nature-inspired Heuristic Paradigm.

To calculate the values of the moths, it uses two arrays, which contain the upper and lower bounds for each variable. However, as far as I can see, it mentions nothing about what these values are. Quoting from the paper:

As can be seen, there are two other arrays called $ub$ and $lb$. These matrixes define the upper and lower bounds of the variables as follows:

$ub = [ub_1, ub_2, ub_3, \dots,ub_{n-1}, ub_n]$

where $ub_i$ indicates the upper bound of the $i$-th variable.

$lb = [lb_1, lb_2, lb_3, \dots, lb_{n-1}, lb_n]$

where $lb_i$ indicates the lower bound of the $i$-th variable

After that, it says nothing more about this matter.

So, if anyone has any idea of how these bound values are determined, please tell me!

",41573,,2444,,10/13/2020 21:02,10/13/2020 21:02,How are the lower and upper bound values of the moths determined in the Moth-Flame Optimization algorithm?,,0,1,,,,CC BY-SA 4.0 24050,1,,,10/13/2020 9:47,,1,85,"

So I'm stuck on something that is probably very easy, but I can't get my head around it. I'm building a neural network that will consist of many layers with non-linear activation functions (probably ReLUs), and the last output layer will be linear, because we are trying to predict a specific number and not a probability. I've done the forward propagation calculations, but I'm stuck on the back-propagation ones.

Let's say that I'm going to use the cross-entropy loss function (we will implement the MSE as well): $-(y \log (a)+(1-y) \log (1-a))$. (I understand that this is not a good option for a regression problem.)

So we can easily find $d A=\frac{\partial J}{\partial A}=-\left(\frac{y}{A}-\frac{1-y}{1-A}\right)$ for the last layer, and we can start going backwards, finding $\frac{\partial J}{\partial Z^{[L]}}$, which we can calculate from the equation $d Z^{[L]}=d A*g^{\prime}\left(Z^{[L]}\right)$. The problem lies in the second part of this equation, where the derivative of $g$ is.

What will be the outcome, since we have a linear activation function whose derivative is equal to 1? (Activation function: $f(x) = x$, $f'(x) = 1$.)

Will it be an identity matrix with the shape of $Z^{[L]}$, or a matrix full of ones with the same shape? I'm asking about the term $g^{\prime}\left(Z^{[L]}\right)$.

Many thanks.

",41488,,41488,,10/13/2020 15:14,10/13/2020 15:14,Linear output layer back propagation,,0,4,,,,CC BY-SA 4.0 24051,1,,,10/13/2020 12:39,,1,87,"

I want to build a HELM neural network that consists of an autoencoder (AE) and a one-class classifier (OC).

A HELM with an AE and an OC has the following shape:

That is, the hidden layer output of the AE is the input of the OC.

Training a HELM consists of training the AE and the OC separately. In order to train each neural network in the HELM, random weights and biases between the input and hidden layers are first generated, and then, based on them and an activation function (for example, the sigmoid), only the weights between the hidden and output layers are trained. But then what's the point of training the weights between the hidden and output layers of the AE, since only the output of its hidden layer is provided as input to the one-class classifier? What is the point of using an AE in a HELM if the weights between the input and hidden layers of the AE are basically random?

The following paper (page 6): https://arxiv.org/pdf/1810.05550.pdf

confirms that the output of the hidden layer of the AE is the input of the OC; but, on the contrary, Algorithms 2 and 3 (pages 6, 7) show that the input of the OC is the AE input vector multiplied by the matrix of weights between the hidden and output layers, which sounds weird to me.

",41511,,41511,,10/13/2020 12:48,10/13/2020 12:48,Role of autoencoder in Hierarchical Extreme Learning Machine,,0,0,,,,CC BY-SA 4.0 24052,1,,,10/13/2020 14:04,,1,14,"

I am reading about RBMs in this paper. In Fig. 1, they show an example of generating hand-written digits using RBMs. This is the figure they are showing:

In the learning step, we first sample $h$ from $h \sim P(h|v)$ and then reconstruct $v$ from $v \sim P(v|h)$. Now, in the generating step, they are sampling $v$ from $v \sim P(v|h)$. My question is: in the generating step, if we do not sample $h$ from $h \sim P(h|v)$, how can we get $P(v|h)$?

",28048,,,,,10/13/2020 14:04,How Restricted Boltzman Machine (RBM) generates hand-written digit?,,0,0,,,,CC BY-SA 4.0 24054,1,,,10/13/2020 14:27,,0,100,"

I am doing a project where I have to know the distance a particular object is from the camera. In the photo, I only know the height of one of the objects, but I don't know how far away that object is, and I don't know how tall the other objects are. Is it possible to write code or do some geometry to find the other objects' distances from the camera using only the height of one object? For example, I have an image where, 5 meters away, there is a box which is 1 meter high; I want to know the distance to a human who is 12 meters away, or the distance to a dog which is 7 meters away. Maybe you know of any datasets or models which deal with the same problem I am facing. Any help will be appreciated.

",41591,,,,,11/4/2022 3:06,Is it possible to know the distance objects are from camera based on only knowing one object's height?,,1,1,,,,CC BY-SA 4.0 24055,1,,,10/13/2020 14:44,,1,65,"

Let's say I'm looking for any item that has a certain shape (outline) in a photo, but I can further classify it only according to particular features, most of which are expected to appear only in a smaller area of the object itself.

How can I give more weight, in the model, to that particular area, in order to avoid misclassification issues?

What is the flow, and are there specific tools that should be used for that purpose?

Example:

I want to detect all triangles in the image and try to classify them like this: if a triangle has 3 lines in its corner, it's type A; if it has only two lines, it's type B.

So the triangle's outline makes up 100% of the object, but we can see that the area where the red lines are present is only about 10% of the object's area. How can I give more weight to, and tell the model to carefully look for, the details in that area, so that it doesn't confuse A with B or vice versa just because the other 90% of the shape is similar?

And, of course, I want the certainty level to be as close as possible to 100% for both A and B, and clearly distinguished from the other option.

So my goal is to get this output:

Purple Triangle ==> Type A, certainty, 99%. Type B: certainty: 50%

Green Triangle ==> Type B, certainty, 99%. Type A: certainty: 50%

",41590,,,,,11/30/2020 20:03,Computer vision - Can you put more weight on a specific part of the object?,,0,6,,,,CC BY-SA 4.0 24062,2,,2279,10/14/2020 0:29,,0,,"

One of the suggestions in the accepted answer was SSD. On their website, SSD mentioned a competitor, faster_rcnn. faster_rcnn was deprecated in favor of Detectron. Detectron was deprecated in favor of Detectron2. Long live detectron2.

It looks pretty cool and powerful: https://github.com/facebookresearch/detectron2

",32709,,,,,10/14/2020 0:29,,,,1,,,,CC BY-SA 4.0 24063,1,24066,,10/14/2020 0:33,,2,56,"

I am interested in models that exhibit behavior. My goal is a model that survives indefinitely on a two-dimensional resource landscape. One dimension represents the location (0 to 1), and the second indicates whether there is a resource available at that location (-1 = resource one, 0 = no resource, 1 = resource two).

The landscape looks like this:

location = [0, 0.2, 0.4, 0.6, 0.8, 1]
resource = [-1, 0,   0,   0,   0,  1] (I added spaces so the elements line up)

My model represents an organism deciding whether it will move or rest on the landscape at each time step. The organism has reserves of each resource. The organism fills its reserve of a resource if it rests on that resource, and it loses 1 unit of both resources at each time step. I am considering neural networks to represent my organisms. The input would be 4 values: the location on the landscape, the resource value at that location, and the reserve levels of resources one and two. The output would be 3 values: move right, rest, move left. The highest value decides what happens. To survive indefinitely, the model will have to bounce between the ends of the landscape, briefly resting on the resources. Model evaluation would go like this: start the model in the middle of the landscape with full resource reserves, then allow time to pass until one of the resource reserves is depleted (the organism dies).

My question is this: can my loss function be based on evaluating the model until it dies? 1/(survival time) could be the loss value to be minimized by gradient descent. Is this a reinforcement learning problem? (I don't think so...?) Thanks!

",31703,,2444,,10/14/2020 10:18,10/14/2020 13:26,Evaluate model multiple times in loss function? Is this reinforcement learning?,,1,0,,,,CC BY-SA 4.0 24065,1,24070,,10/14/2020 7:28,,1,95,"

As part of a data scientist's routine (in typical everyday tasks), should they usually decide on the range and initial values of the weights and biases as a function of the data they are planning to feed in as input, and the type of data they expect to get as output? Or do we usually not deal with such fine-tuning and let the algorithm do it? One could answer that normalizing the inputs solves the problem, so there is no need to fit the weights and biases, but I guess they also depend on the expected output.

To summarize:

  1. is it common to deal with weights and biases in everyday tasks or in most of the cases existing algorithms do it well?

  2. what are the rules of thumb for how to decide about range and initial values of weights and biases?

",36453,,11539,,10/14/2020 18:25,10/14/2020 18:25,Should the range and initial values of weights and biases be adjusted to fit input and output data?,,1,0,,,,CC BY-SA 4.0 24066,2,,24063,10/14/2020 10:04,,1,,"

Can my loss function be evaluating the model until it dies? 1/survival time could be the loss value to be minimized by gradient descent.

In order to use backpropagation and gradient descent, you have to relate the loss function directly to the output of the neural network. Your proposed loss function is too indirect, it is not possible to turn it directly into a gradient that could be used to alter the neural network weights.

In addition, the specific function of time you have chosen will be difficult to optimise, as incrememental improvements from e.g. 10 to 11 time steps surviving will provide a much lower signal for adjusting behaviour than the improvement from e.g. 2 to 3 lifetime. If the environment has enough randomness (and typically these kind of a-life scenarios do), then the signal here could be swamped by random events and very hard to optimise, requiring a larger number of samples in order to extract expected improvements to the loss function.

Is this a reinforcement learning problem (I don't think so..?)

It is very close to a definition of reinforcement learning (RL) problem. For a RL problem you need the following things:

  • An environment in which an agent exists, and which has a measurable state.

  • A set of actions = sequential decisions that need to be made, and that have consequences.

  • The consequences of any action are:

    • A consistent (but allowing for some randomness) change to the state of the environment. Here you have change to agent's location and resource levels.
    • A consistent scalar reward value that measures how well the agent is achieving a goal. This can also be partially random, and can also be sparse, only achieved in certain specific states.

In your problem definition you don't have a reward signal, but it would be easy to add one. A suitable one would be $+1$ per time step.

Technically your problem would also be partially observable (sometimes called a POMDP), in that the agent does not get to see resources available in other locations. It only knows its current location, its internal state and resources available at its current location. This is not a major issue, although you should note that adding some kind of memory (either open-ended memory as in a recurrent neural network, or explicitly added to the state) would allow for more efficient agents. That's not a RL issue as such, any learning process without ability to form or use memories would be limited in this environment, and you might want to look into that as a later experiment.

How RL helps is that it provides a framework to convert your problem definition into measurements and gradients for the neural network to improve its performance.

As you have set up your neural network to predict best action choice, this would naturally lend itself to policy gradient methods in RL, such as REINFORCE. As it is a simple problem, I would expect REINFORCE with baseline to perform well enough for it.

I will not explain the algorithm here in full detail, but the basic approach is to have the agent act with the current network for a few episodes, collecting data on its choices and performance. You will get a dataset of (state, action, return = sum of rewards to end of episode). You then use that as a labelled dataset to train a minibatch as if the action choice was correct ground truth, but multiply each gradient by (return - baseline) where the baseline is typically the average return seen so far from that state. You may need a second neural network during training to estimate that expected return (aka state value). After using a minibatch once, you will need to discard it, as it represents results for the previous iteration of the network before the weight updates. There are ways around this, but not typically done in REINFORCE - instead the approach is to just keep generating data and train on the new mini-batch as fast as you can go.
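
A very rough PyTorch-style sketch of that update on a single collected sample (the network sizes match your 4 inputs / 3 actions, but everything else here, including the constant baseline, is a placeholder rather than a full implementation):

import torch

policy_net = torch.nn.Sequential(torch.nn.Linear(4, 32), torch.nn.ReLU(),
                                 torch.nn.Linear(32, 3))   # 4 state vars -> 3 actions
optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-3)

# one sample from a finished episode: (state, chosen action, return from that point)
state = torch.rand(4)
action, episode_return, baseline = 1, 12.0, 10.0   # baseline = e.g. average return seen so far

log_prob = torch.log_softmax(policy_net(state), dim=-1)[action]
loss = -(episode_return - baseline) * log_prob     # REINFORCE with baseline
optimizer.zero_grad()
loss.backward()
optimizer.step()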

RL offers other methods which may work just as well to solve your problem, but I suspect REINFORCE will serve you well since it will allow the agent to randomise direction choice which is important when it cannot see where the resources are and has no memory of where it has searched.

You don't have to use RL for this problem. An alternative that may work for you is using genetic algorithms to tune the network architecture and weights. It avoids using gradients, but I would still recommend a simple fitness function equal to number of time steps survived. There is a framework called NEAT which is ideal for this sort of a-life control problem.

",1847,,1847,,10/14/2020 13:26,10/14/2020 13:26,,,,1,,,,CC BY-SA 4.0 24067,1,,,10/14/2020 10:32,,3,1126,"

I am using Q-learning and SARSA to solve a problem. The agent learns to go from the start to the goal without falling into the holes.

At each state, I can choose the action corresponding to the maximum Q value in that state (the greedy action that the agent would take), and all these actions connect some states together. I think that would show me a path from start to goal, which would mean that the result has converged.

But some others think that, as long as the agent learns how to reach the goal, the result has converged. Sometimes the success rate is very high, but we cannot extract the path from the Q table. I don't know which of these means the agent is fully trained, or what the converged result means.

",41613,,2444,,10/14/2020 15:25,10/14/2020 15:43,How to determine if Q-learning has converged in practice?,,1,0,,,,CC BY-SA 4.0 24068,1,24115,,10/14/2020 10:58,,3,1224,"

I was reading the "Deep Learning with Python" by François Chollet. He mentioned separable convolution as following

This is equivalent to separating the learning of spatial features and the learning of channel-wise features, which makes a lot of sense if you assume that spatial locations in the input are highly correlated, but different channels are fairly independent.

But I could not understand what he meant by "correlated spatial locations". Can someone explain what he means, or the purpose of separable convolutions (apart from the performance-related part)?

Edit: Separable convolution means that first a depthwise convolution is applied, and then a pointwise convolution is applied, as in the sketch below.
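
For reference, my own sketch of that definition in Keras terms (this is an illustration of the edit above, not a quote from the book):

from tensorflow.keras.layers import Input, DepthwiseConv2D, Conv2D, SeparableConv2D

inputs = Input(shape=(32, 32, 64))

# a separable convolution is (roughly) these two steps chained:
x = DepthwiseConv2D((3, 3))(inputs)   # spatial filtering, one kernel per input channel
x = Conv2D(128, (1, 1))(x)            # pointwise (1x1) convolution that mixes the channels

# which Keras packages as a single layer:
y = SeparableConv2D(128, (3, 3))(inputs)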

",41615,,41615,,10/17/2020 13:51,10/18/2020 15:03,When should we use separable convolution?,,1,3,,,,CC BY-SA 4.0 24069,1,,,10/14/2020 12:48,,3,85,"

We can use DDPG to train agents to stack objects, and stacking objects can be viewed as grasping followed by pick-and-place. In this context, how does meta-reinforcement learning fit in? Does it mean I can use grasping and pick-and-place as training tasks and generalize to assembling objects?

",41617,,2444,,10/16/2020 22:54,1/4/2023 17:06,What exactly does meta-learning in reinforcement learning setting mean?,,1,1,,,,CC BY-SA 4.0 24070,2,,24065,10/14/2020 14:04,,2,,"

is it common to deal with weights and biases in everyday tasks or in most of the cases existing algorithms do it well?

No; and it is no coincidence that you will not be able to find any reference to such a practice in any course or tutorial about neural networks. Such a practice would require a whole additional level of (business/SME) know-how in order to meaningfully apply neural networks to real-world problems, and fortunately this is not necessary.

The desired situation is for both weights & biases to remain in a relatively small* range around zero, among other reasons because this avoids the exploding & vanishing gradient problems, which is catastrophic for learning; trying to adjust for the scale of our inputs & outputs by scaling accordingly the model weights & biases is the wrong approach, and never followed.

[*How small? Well, see here and here for what a change in just the standard deviation of a zero-mean weight initialization from 1.0 and 0.1 to 0.01 can do to the model performance]

It is a well-established fact by now that neural nets work with normalized inputs and, depending on the problem, normalized outputs as well, which answers your objection:

One could answer that normalizing inputs solves the problem and no need to fit weights and biases, but I guess they depend also on expected output.

It depends indeed, but the solution here is to scale the output(s) as well, and not the weights.

Now, it's true that de-scaling the outputs back to their original range, in order to be able to meaningfully compare them with the ground truth (and possibly compute more meaningful metrics, like the error on our true output range rather than the scaled one), is AFAIK seldom mentioned in introductory expositions; but this is indeed the correct thing to do (especially for regression problems, where metrics like MSE are scale-sensitive), instead of trying to manually intervene on the range of the weights. For details, see my own answers in How to interpret MSE in Keras Regressor and ANN regression accuracy and loss stuck.
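
As a small illustration of this point (my own sketch, not from the linked answers), scikit-learn scalers can be used to normalize the targets for training and to de-scale the predictions before computing metrics on the original range:

    import numpy as np
    from sklearn.preprocessing import StandardScaler

    y_train = np.array([[150.0], [200.0], [1200.0], [900.0]])  # made-up targets on their natural scale

    y_scaler = StandardScaler()
    y_train_scaled = y_scaler.fit_transform(y_train)  # train the network on these scaled targets

    # ... after training, the model predicts on the scaled range ...
    y_pred_scaled = np.array([[0.1], [-0.3]])  # placeholder predictions from the model

    # De-scale back to the original range before comparing with the ground truth / computing MSE
    y_pred = y_scaler.inverse_transform(y_pred_scaled)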


what are the rules of thumb for how to decide about range and initial values of weights and biases?

Range aside, the general rule, as already implied, is to initialize the biases with zeros and the weights with small random values around zero. Nevertheless, the exact details are an area of active research. Currently used initialization schemes that are already integrated into the relevant frameworks (Tensorflow, Keras, Pytorch etc) are the Glorot (or Xavier) and He initializations (for a nice overview, see Weight Initialization in Neural Networks: A Journey From the Basics to Kaiming).
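
For instance (a minimal sketch of my own; the initializer arguments below are standard Keras options), these schemes are simply selected per layer:

    from tensorflow import keras
    from tensorflow.keras import layers

    model = keras.Sequential([
        # He initialization is usually paired with ReLU activations
        layers.Dense(64, activation="relu", input_shape=(10,),
                     kernel_initializer="he_normal", bias_initializer="zeros"),
        # Glorot (Xavier) initialization is the Keras default and suits tanh/sigmoid layers
        layers.Dense(32, activation="tanh",
                     kernel_initializer="glorot_uniform", bias_initializer="zeros"),
        layers.Dense(1),
    ])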

Beyond these routinely-used approaches that have already reached the practitioner's workbench, and moving closer to the front of active theoretical research, the Lottery Ticket Hypothesis (finding "winning" weight initializations that require minimal training) is an ultra-hot topic lately.

",11539,,11539,,10/14/2020 17:22,10/14/2020 17:22,,,,2,,,,CC BY-SA 4.0 24071,2,,24067,10/14/2020 15:01,,4,,"

A typical and practical way to measure the convergence to some solution (so not necessarily the optimal one!) of any numerical iterative algorithm (such as RL algorithms) is to check if the current solution has not changed (much) with respect to the previous one. In your case, the solutions are value functions, so you could check if your algorithm has converged to some value function e.g. as follows

$$ c(q_t, q_{t-1}, \epsilon) = \begin{cases} 1, &\text{if } |q_t(s, a) - q_{t-1}(s, a)| < \epsilon, \forall s \in S, a \in A \\ 0, & \text{otherwise} \end{cases}, \tag{1}\label{1} $$ where

  • $c$ is the "convergence" function (aka termination condition) that returns $1$ (true) if your RL algorithm has converged to some small enough neighbourhood of value functions (where those value functions are "indistinguishable"), and $0$ otherwise
  • $q_t$ is the value function at iteration $t$
  • $\epsilon$ is a threshold (aka precision or tolerance) value, which is a hyper-parameter that you can set depending on your "tolerance" (hence the name); this value is typically something like $10^{-6}$

Of course, this requires that you keep track of two value functions.
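
As an illustration (my own sketch, not part of the original answer), for a tabular setting where the Q-table is stored as a NumPy array, the check in equation (1) could look like this:

    import numpy as np

    def has_converged(q_t, q_prev, eps=1e-6):
        # True if |q_t(s, a) - q_prev(s, a)| < eps for all states and actions
        return np.max(np.abs(q_t - q_prev)) < eps

    # Usage inside the training loop (q is a |S| x |A| array):
    # q_prev = q.copy()
    # ... one iteration/episode of Q-learning updates q ...
    # if has_converged(q, q_prev):
    #     break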

You can also define your "convergence" function $c$ in \ref{1} differently. For example, rather than using the absolute value, you could use the relative error, i.e. $\left|\frac{q_t(s, a) - q_{t-1}(s, a)}{q_t(s, a)} \right|$. Moreover, given that RL algorithms are exploratory (i.e. stochastic), the value function may not change (much) from one iteration to the next, but it could then change significantly in the following one because of your exploratory/behavioural actions. So you may also want to take more iterations into account: if the value function does not change much over e.g. $N > 1$ iterations, then you could say (maybe probabilistically) that your RL algorithm has converged to some small neighbourhood of value functions in the space of value functions.

Note that these approaches do not guarantee that your RL algorithm has converged to the global optimal value function, but to some locally optimal value function (or, more precisely, small neighborhood of value functions). Q-learning is guaranteed to converge to the optimal value function in the tabular setting (your setting), but this is in the limit; in practice, it is more difficult to know if Q-learning has converged to an optimal or near-optimal value function.

Maybe you can also have a look at episodic returns of the policy derived from your final value function, but without upper and lower bounds on the optimal returns, you don't know much about the global optimality of your policy/value function.

Yes, you can check if the policy makes the agent reach the goal, but many policies could do that job, so it does not tell you that the policy is the best (or optimal) one: it is a necessary condition (provided the goal is reachable and the reward function models your actual goal), but not a sufficient one for optimality. The optimality here is usually a function of the return (given that is what you are usually trying to optimize).

",2444,,2444,,10/14/2020 15:43,10/14/2020 15:43,,,,1,,,,CC BY-SA 4.0 24072,1,,,10/14/2020 15:03,,1,39,"

A recent paper, "MagGAN: High Resolution Face Attribute Editing with Mask Guided GAN", published this month (October 2020), describes an approach developed to deal with specific face attribute editing.

The thing is that this paper and related work (StarGAN, CycleGAN, AttGAN, STGAN, ...) seem to tackle the process of adding/editing a face attribute (e.g. adding a hat or a mustache), but not really modifying facial features themselves (like eyes, nose, lips, ...).

Is there any way to build a model that can edit, for example, the nose type/size? Are there any related works already published?

",41621,,11539,,10/15/2020 16:15,10/15/2020 16:15,GAN for specific face attribute modification,,0,0,,,,CC BY-SA 4.0 24073,2,,24008,10/14/2020 15:17,,1,,"

You can define different measures in this way:

  1. Maximum distance of the new point to all points of the configuration ($M$)
  2. Minimum distance of the new point to all points of the configuration ($N$)
  3. $\frac{M}{D}$, where $D$ is the maximum distance between two points of the configuration: the normalized version of measure (1)
  4. $\frac{N}{D}$: the normalized version of measure (2)

You can get more ideas from distance measures in hierarchical clustering methods. To select a proper one, you need to elaborate on the context of these points.

",4446,,,,,10/14/2020 15:17,,,,1,,,,CC BY-SA 4.0 24074,2,,23669,10/14/2020 16:18,,1,,"

In the paper that you cite, Inverse Reward Design (2017), the authors actually define what they mean by "proxy reward function".

We formalize this in a probabilistic model that relates the proxy (designed) reward to the true reward

So, the proxy reward function is the reward function designed by the human, which may not necessarily be the reward function that he/she intended (i.e. it may be a misspecified reward function), given that the human may have forgotten to model/incorporate certain (unpredicted by the human) scenarios or situations that the agent may face. This usage of the word "proxy" is thus consistent with the general usage of the word in computer science, i.e. a "proxy reward function" is a reward function that is used instead of the intended (optimal) reward function.

",2444,,,,,10/14/2020 16:18,,,,0,,,,CC BY-SA 4.0 24075,1,,,10/14/2020 17:26,,1,19,"

I am currently working on a novel application in NLP where I try to classify empathic and non-empathic texts. I would like to compare the performance of my model to some benchmark models. As I am working with models based on Word2Vec embeddings, the benchmark models should also be based on Word2Vec; however, I am looking for some relatively easy, quick-to-implement models.

Do you have any suggestions?

",37792,,,,,10/14/2020 17:26,Bechmark models for Text Classification / Sentiment Classification,,0,0,,,,CC BY-SA 4.0 24078,1,,,10/14/2020 21:55,,1,50,"

I'm researching the use of emotion recognition in Intelligent Tutoring Systems and trying to more effectively find and formally reference materials. My question is whether this is the most formal terminology (i.e. "emotion recognition"), because I've also seen "affect recognition" and "affective computing". Maybe it's a matter of taste, but I know sometimes the market terminology is different from the engineering terminology and I'd like to be more in tune with the engineers.

Maybe there is a leading classification system of related technologies (e.g. facial recognition, sentiment analysis, etc.)?

I'm seeing the "affective-computing" tag now, but not sure if these tags reflect a formal classification system in the field of AI.

",41626,,2444,,2/1/2021 0:45,2/1/2021 0:45,What is the formal terminology for emotion recognition AI?,,0,0,,,,CC BY-SA 4.0 24079,2,,24054,10/14/2020 22:46,,0,,"

Monocular depth estimation basically does this and it implicitly brings in knowledge about object size. But you need to have prior information.

",32390,,,,,10/14/2020 22:46,,,,0,,,,CC BY-SA 4.0 24081,1,,,10/15/2020 0:41,,1,122,"

Back before deep learning, there were a lot of different attempts at computer vision. Some involved Conditional Random Fields and Markov Random Fields, which were both computationally difficult and hard to understand/implement.

Are these areas still being developed in the computer vision domain? What was the end result of this line of study? I haven't seen any papers on this topic being cited in top-performing benchmarks, so I assume nobody cares about them anymore, but I wanted to ask.

",32390,,2444,,7/15/2021 12:33,7/15/2021 12:33,Are Markov Random Fields and Conditional Random Fields still used in computer vision?,,1,0,,,,CC BY-SA 4.0 24082,1,,,10/15/2020 2:59,,0,41,"

I have been trying to figure out whether, if I train a model, it is possible to keep training it on new images while predicting, just like humans do: somehow adding validated images to the dataset by asking us when an object is shown. For example, Google Photos sometimes asks us whether it predicted a face correctly and then reinforces on that feedback.

",30884,,,,,10/15/2020 4:11,Training while predicting on dataset,,1,0,,3/3/2022 17:09,,CC BY-SA 4.0 24083,2,,24082,10/15/2020 4:11,,1,,"

Yes, this method of training a model is commonly known as online learning, and specific learning algorithms have been designed for this purpose, such as Stochastic Gradient Descent (SGD). As opposed to Batch Gradient Descent, which computes gradients over the entire training set at each step, the SGD algorithm computes gradients for individual samples and updates the model's parameters.

The online learning strategy is not specific to Reinforcement Learning (RL) methods but is also suitable for the broader class of Machine Learning algorithms. Suppose we are given a simple model pretrained on a large Face Recognition dataset and wish the model to continuously adapt to new faces: we may keep feeding in the new input images with their correct labels (obtained from feedback) as they appear, and apply SGD to incrementally update the model. To go about this using an RL-based approach, we may penalize the model with negative rewards for every misclassified label and update it.
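
As a minimal sketch of the supervised variant (my own illustration, assuming scikit-learn), incremental updates of this kind can be done with partial_fit:

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    clf = SGDClassifier()  # linear classifier trained by SGD

    # First mini-batch: the full set of classes must be declared up front
    X0 = np.random.rand(16, 128)  # e.g. 128-dimensional face embeddings (made-up data)
    y0 = np.random.randint(0, 2, 16)
    clf.partial_fit(X0, y0, classes=[0, 1])

    # Later, whenever a new labelled sample arrives (e.g. from user feedback):
    x_new = np.random.rand(1, 128)
    y_new = [1]
    clf.partial_fit(x_new, y_new)  # incremental update, no retraining from scratch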

",38280,,,,,10/15/2020 4:11,,,,2,,,,CC BY-SA 4.0 24086,1,,,10/15/2020 15:07,,0,59,"

There are two sources that I'm using to try and understand why LSTMs reduce the likelihood of the vanishing gradient problem associated with RNNs.

Both of these sources mention that the reason LSTMs are able to reduce the likelihood of the vanishing gradient problem is that:

  1. The gradient contains the forget gate's vector of activations
  2. The addition of four gradient values helps balance gradient values

I understand (1), but I don't understand what (2) means.

Any insight would greatly be appreciated!

",26159,,,,,10/15/2020 15:07,"In LSTMs, how does the additive property enables better balancing of gradient values during backpropagation?",,0,3,,,,CC BY-SA 4.0 24087,1,24089,,10/15/2020 15:12,,4,589,"

For simplicity, let's assume we want to solve a regression problem, where we have one independent variable and one dependent variable, which we want to predict. Let's also assume that there is a nonlinear relationship between the independent and dependent variables.

No matter which way we do it, we just need to build a proper curved line based on existing observations, such that the predictions are as accurate as possible.

I know we can solve this problem with neural networks, but I also know other ways to create such curves. For example:

  1. splines

  2. kriging

  3. lowess

  4. Something I think would also work (I do not know if it exists): fitting a curve using a series of Fourier sine waves, and so on

My questions are:

  1. Is it true that neural networks are just one of the ways to fit a non-linear curve to the data?

  2. What are the advantages and disadvantages of choosing a neural network over other approaches? (maybe it becomes better when I have many independent variables, and another little guess: maybe the neural network is better in omitting the effect of linear dependent input variables?)

",36453,,2444,,10/16/2020 22:48,2/15/2021 22:25,What is the difference between neural networks and other ways of curve fitting?,,2,0,,,,CC BY-SA 4.0 24089,2,,24087,10/15/2020 21:10,,3,,"
  1. In some sense, you're right that a neural net is just another tool to fit data. However, it's quite the tool! There's this universal approximation theorem saying that, under decent conditions, a neural network can get as close as you want to a wide class of functions. This means that you can get the network to give you complicated shapes with squiggles all over if that's the right trend.

  2. The universal approximation theorem is a big upside. You don't have to specify that you want to model with sine curves or a particular type of spline. You just let the computer figure that out for you. The result is the ability to model complex patterns and make accurate predictions. The drawback is that the modeling can pick up on coincidences in the data that look like a trend but are not. This causes overfitting. When your goal is to make accurate predictions, a model that has overfit does nothing for you. A second drawback is that neural networks are hard to interpret. A third drawback is that they can take a long time to train, while a linear regression is just a matrix inversion and a couple of matrix products (the $\hat{\beta}=(X^TX)^{-1}X^Ty$).

",25529,,25529,,10/16/2020 12:58,10/16/2020 12:58,,,,0,,,,CC BY-SA 4.0 24094,1,,,10/16/2020 7:45,,0,609,"

I am not fully understanding how to train a GAN's generator. I have a few questions below, but let me first describe what I am doing.

I am using the MNIST dataset.

  1. I generate a batch of random images (the faked ones) with the generator.

  2. I train the discriminator with the set composed of faked images and real MNIST images.

  3. After the training phase, the discriminator modifies its weights in the direction of distinguishing fake images (probability 0) from real ones (probability 1).

  4. At this point, I have to consider the combined model of generator and discriminator (keeping the discriminator untrainable) and feed the generator's fake images into it with the label 1 (as if they were real).

My questions are:

Why do I have to label these fake images as real, and which fake images are these? The ones generated in the first round by the generator itself? Or only the ones classified as fake by the discriminator? (Then they could be either real images classified wrongly or fake images classified correctly.) Finally, what does the generator do with these fake images?

",23717,,2444,,10/17/2020 20:30,11/8/2022 22:07,What is the right way to train a generator in a GAN?,,2,0,,,,CC BY-SA 4.0 24095,2,,24094,10/16/2020 8:53,,0,,"

Why I have to set to real these fake images and what fake images are these?

You set their label to "real" for the discriminator when training the generator, because that is the goal of the generator: to produce an output of 1 (the probability of being a real image) when tested.

Usually you will generate a new batch of generated images for this step in training. You just used the last generated mini-batch to train the discriminator, so you expect them to score worse. Sending the exact same images again will cause correlation between the two minibatches that you want to avoid. It would not be a disaster, but training GANs can be quite difficult and sensitive to details like this, so it is better to keep generating new images and not re-use the previous ones.

The one generated in the first round from the generator itself?

No. New images generated just for training the generator.

Or only the one classified as faked by the discriminator? (then they could be both real images classified wrongly or fake images classified in the right way).

No. New images generated just for training the generator.

Out of interest though, if the discriminator classifies a fake image as 100% real (with a probability close to 1), then the generator will not learn anything from that. The gradients would all be zero.

Finally what the generator does to these faked images?

Nothing is done to the images themselves - unless perhaps you are keeping some copies to render and monitor training progress etc. The images occur within the combined generator/discriminator network, effectively as a hidden layer. The images are represented as artificial neuron output, so they are involved in backpropagation calculations for that layer (with no difference to any other hidden layer in a CNN), but are not changed directly.

The generator uses the gradients calculated from the combined discriminator/generator network to update its weights using gradient descent. Importantly in this phase of the updates, the discriminator weights are not changed.

In terms of training the generator/discriminator combined network to update the generator (a minimal code sketch follows this list):

  • The input to the combined network is some new random input vectors (typically a vector with independent truncated normal distribution for each element).

  • The "ground truth" label is 1 for every item.

  • The discriminator parameters must be "frozen", somehow excluded from being updated.

  • Run the minibatch forward to get loss and backpropagate to get gradients for the whole network including the generator.

  • Apply a gradient step (usually via some optimiser, such as Adam).
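
A minimal Keras-style sketch of that phase (my own illustration, not code from the question or from any particular library example), assuming generator and discriminator models already exist:

    import numpy as np
    from tensorflow import keras

    latent_dim, batch_size = 100, 64  # assumed sizes

    # Build the combined model with the discriminator frozen
    discriminator.trainable = False
    z = keras.Input(shape=(latent_dim,))
    combined = keras.Model(z, discriminator(generator(z)))
    combined.compile(optimizer="adam", loss="binary_crossentropy")

    # One generator update: fresh noise, "ground truth" label 1 for every item
    noise = np.random.normal(0.0, 1.0, size=(batch_size, latent_dim))
    real_labels = np.ones((batch_size, 1))
    g_loss = combined.train_on_batch(noise, real_labels)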

",1847,,1847,,10/16/2020 10:32,10/16/2020 10:32,,,,5,,,,CC BY-SA 4.0 24097,1,,,10/16/2020 12:51,,3,251,"

To train the discriminator network in GANs we set the label for the true samples as $1$ and $0$ for fake ones. Then we use binary cross-entropy loss for training.

Since we set the label $1$ for true samples that means $p_{data}(x) = 1$ and now binary cross-entropy loss is: $$L_1 = \sum_{i=1}^{N} P_{data}(x_i)log(D(x)) + (1-P_{data}(x_i))log(1-D(x))$$ $$L_1 = \sum_{i=1}^{N} P_{data}(x_i)log(D(x))$$ $$L_1 = E_{x \sim P_{data}(x)}[log(D(x))]$$

For the second part, since we set the label $0$ for fake samples that means $p_{z}(z) = 0$ and now binary cross-entropy loss is: $$L_2 = \sum_{i=1}^{N} P_{z}(z_i)log(D_{G}(z)) + (1-P_{z}(z_i))log(1-D_{G}(z))$$ $$L_2 = \sum_{i=1}^{N} 1-P_{z}(z_i)log(1-D_{G}(z))$$ $$L_2 = E_{z \sim \bar{P_{z}(z)}}[log(1-D_{G}(z))]$$ Now we combine those two losses and get: $$L_D = E_{x \sim P_{data}(x)}[log(D(x))] + E_{z \sim \bar{P_{z}(z)}}[log(1-D_{G}(z))]$$ When I was reading about GANs I saw that the loss function for discriminator is defined as: $$L_D = E_{x \sim P_{data}(x)}[log(D(x))] + E_{z \sim P_{z}(z)}[log(1-D_{G}(z))]$$ Should not it be $E_{z \sim \bar{P_{z}(z)}}$ instead of $E_{z \sim P_{z}(z)}$ ?

",28048,,18758,,8/3/2021 1:56,9/6/2021 21:59,How to define loss function for Discriminator in GANs?,,1,0,,,,CC BY-SA 4.0 24100,1,24108,,10/16/2020 15:28,,1,121,"

Training an SVM with an RBF kernel model with c = 5.5 and gamma = 1.06, for a 5-class classification problem on the NSL-KDD train data-set with 122 features using one vs rest strategy takes $2162$ seconds. Also, considering binary classification (c = 10, gamma = 4), it takes $520.56$ seconds.

After dimensionality reduction, from 122 to 30, using a sparse auto-encoder, the training time falls dramatically, from $2162$ to $240$ and $520$ to $170$, while using the same hyperparameters for the RBF-kernel.

What is the reason for that? Is it not true that using a kernel neutralizes the effect of high dimensionality?

",41662,,2444,,10/18/2020 9:48,10/18/2020 9:48,Why does the training time of SVMs dramatically decrease after applying dimensionality reduction to the features?,,1,0,,,,CC BY-SA 4.0 24103,2,,24040,10/16/2020 20:57,,0,,"

First, a forward pass is done to obtain predictions of $z$ at every $t$. Then the adjoint state is run backward in time for every $t$, which gives the learning signal. So an initial run is done to obtain the values of the dynamical system at all time points, and the last of these is the initial point for the backward pass.

This picture depicts it.

To be more specific with regard to the question in your initial paragraph: the inputs $t_0$ and $t_1$ to $ODE\_Solve$ are the time points at which the network is evaluated. In this specific case, the first point is $t_0$, which should always be the case, and $t_1$ is the last point. But you can also insert more time points; for example, with $N$ time points you input the sequence $t_0, t_1, t_2, ..., t_N$ into $ODE\_Solve$. In that case, $t_N$ is the last time point.

This picture also shows a lot. But in this picture, an RNN is used to generate $z_{t_0}$. The RNN encoder is given the time points in reversed order: $(t_N, t_{N-1}, t_{N-2}, ..., t_0)$.

This is probably not enough to fully answer your question; hopefully, I can get more specific with respect to the math later.

",41565,,41565,,7/7/2022 8:13,7/7/2022 8:13,,,,0,,,,CC BY-SA 4.0 24106,1,24109,,10/17/2020 3:33,,1,237,"

I know that

$$\mathbb{E}[g(X) \mid A] = \sum\limits_{x} g(x) p_{X \mid A}(x)$$

for any random variable $X$.

Now, consider the following expression.

$$\mathbb{E}_{\pi} \left[ \sum \limits_{k=0}^{\infty} \gamma^{k}r_{t+k+1} \mid s_t = s, a_t = a \right]$$

It is used for the calculation of Q values.

I can understand the following

  1. $A$ is $\{s_t = s, a_t = a\}$, i.e., the agent has performed action $a$ in state $s$ at time step $t$, and

  2. $g(X)$ is $\sum\limits_{k=0}^{\infty} \gamma^{k}r_{t+k+1}$, i.e., the return (long-run reward).

What I didn't understand is what $X$ is here, i.e., what is the random variable over which we are calculating long-run rewards?

My guess is the policy function, i.e., it is averaging long-run rewards over all possible policy functions. Is that true?

",18758,,18758,,1/14/2022 23:48,1/14/2022 23:51,"In the definition of the state-action value function, what is the random variable we take the expectation of?",,1,0,,,,CC BY-SA 4.0 24108,2,,24100,10/17/2020 9:28,,1,,"

SVM complexity is $O(\max(n,d)\min(n,d)^2)$ according to Chapelle, Olivier. "Training a support vector machine in the primal." Neural Computation 19.5 (2007): 1155-1178.

$n$ is the number of instances and $d$ is the number of dimensions. I'm assuming that you have more instances than dimensions, giving a complexity of $O(nd^2)$. Hopefully this fully explains why reducing the number of dimensions reduces the training time.
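
To see this effect empirically (a quick sketch of my own, with synthetic data rather than NSL-KDD), you can simply time the fit for different numbers of features:

    import time
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n = 5000  # number of instances

    for d in (122, 30):  # original vs. reduced dimensionality
        X = rng.normal(size=(n, d))
        y = rng.integers(0, 2, size=n)
        start = time.time()
        SVC(kernel="rbf", C=10, gamma=4).fit(X, y)
        print(f"d={d}: {time.time() - start:.1f} s")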

",34473,,,,,10/17/2020 9:28,,,,2,,,,CC BY-SA 4.0 24109,2,,24106,10/17/2020 10:00,,2,,"

I am using the convention of uppercase $X$ for random variable and lowercase $x$ for an individual observation. It is possible your source material did not do this, which might be causing your confusion. However, it is the convention used in Sutton & Barto's Reinforcement Learning: An Introduction.

What I didn't understand is what is 𝑋 here. i.e., what is the random variable on which we are calculating long-run rewards?

The random variable is $R_t$, the reward at each time step. The distribution of $R_t$ in turn depends on the distribution of $S_{t-1}$ and $A_{t-1}$, plus the policy and state progression rules. There is no need to include the process that causes the distribution of each $R_t$ in every equation, although sometimes it is useful to do so, for example when deriving the Bellman equations for value functions.

My guess is policy function. It is averaging long-run rewards over all possible policy functions. Is it true?

No, this is not true. In fact, it is the more usual assumption that the policy function $\pi(a|s)$ remains constant over the expectation, and this is what the subscript $\pi$ in $\mathbb{E}_{\pi}[...]$ means.

The expectation is over randomness due to the policy $\pi$, plus randomness due to the environment, which can be described by the function $p(r, s'|s, a)$ - the probability of observing reward $r$ and next state $s'$ given starting in state $s$ and taking action $a$. These two functions combine to create the distribution of $R_t$. It is possible that both functions are deterministic in practice, thus $R_t$ is also deterministic. However, RL theory works on the more general stochastic case, which is also used to model exploratory actions, even if the target policy and environment are deterministic.
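
To make that concrete (my addition, using the standard textbook identity rather than anything specific to this answer), the expectation can be unrolled one step by combining $\pi$ and $p$:

$$q_\pi(s, a) = \mathbb{E}_\pi\left[\sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \,\middle|\, S_t = s, A_t = a\right] = \sum_{s', r} p(r, s' \mid s, a)\left[r + \gamma \sum_{a'} \pi(a' \mid s')\, q_\pi(s', a')\right].$$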

",1847,,18758,,1/14/2022 23:51,1/14/2022 23:51,,,,7,,,,CC BY-SA 4.0 24112,1,,,10/17/2020 13:20,,2,117,"

I am trying to implement an AI bot for my Agar.io clone using a deep neural network.
However, I am struggling with the state and action space of the AI bot.
Because the bot can take real numbers for position and velocity, can I say the state space is continuous?
For the action space, I am thinking of something like (velocityX, velocityY, "split in half", "eject mass").
What should be the number of input nodes in the input layer of my neural network? And what are those inputs (observations, rewards)?
As the number of players and AI bots changes, how can I train a network with a changing number of input nodes?
For the outputs, how can I get a continuous action output like velocity?

As a reference, you can learn about the game rules from this short youtube video:
20 Rules and Game Mechanism of Agar (How to Play Agar.io)

",41676,,41676,,10/17/2020 13:31,10/18/2020 5:46,How to define Agar.io state and action space?,,1,0,,,,CC BY-SA 4.0 24114,1,,,10/17/2020 14:36,,1,345,"

I have a 2-dimentional matrix as an action space, the rows being a resource to be allocated, and the columns are the users that we will allocate the resources to. (I built my own RL environment)

The possible actions are 'Zero' or 'One'. One if the resource was allocated to the user, Zero if not.

I have a constraint related to the resource allocation, which states that each resource can be allocated to one user only, and that a resource should only be allocated to users who have requested one; those requests form the state space, which is another matrix.

A penalty would be applied if the agent violates the constraints and the episode would end and the reward would equal the penalty. Otherwise, the reward would equal the sum of all the users that were satisfied with the allocation.

I am struggling with the implementation. The agent starts by exploring, then little by little it starts exploiting. When it gets to be more exploitative, I've noticed that the action matrix's values are all set to 'One', and the penalty always has the same value from episode to episode.

",42372,,,,,11/7/2022 3:07,How to create a Q-Learning agent when we have a matrix as an action space?,,1,2,,,,CC BY-SA 4.0 24115,2,,24068,10/17/2020 14:46,,3,,"

Context of the question

This is a link to the text cited in the question.

It refers to the usage of SeparableConv2D (tf, keras name). A related question on StackOverflow is "What is the difference between SeparableConv2D and Conv2D layers". This answer points to this excellent article by Chi-Feng Wang:

A Basic Introduction to Separable Convolutions

Answer to the question

In image processing, a separable convolution converts an NxM convolution into two convolutions with kernels Nx1 and 1xM. Using this idea, in a NN a SeparableConv2D converts a WxHxD convolution (width x height x depth, where depth means the number of incoming features) into two convolutions with kernels WxHx1 and 1x1xD.

Note that the first kernel doesn't handle information across features; thus, it is "learning of spatial features". The 1x1xD kernel doesn't handle different points; it is "learning of channel-wise features".

About the phrase "spatial locations in the input are highly correlated", my understanding of what the author means is this: assume we have a channel (feature) image in which each pixel measures the "distance to the background". When we pass from one pixel to a neighboring one, some continuity in the value is expected (except for edge pixels): that is correlation. Instead, if we have a channel that measures "brightness" and another one that measures "distance to background", the two values for one specific pixel have little correlation.

Finally, about the title question "When should we use separable convolution?": if the final output must depend on some features of one pixel and some other features of neighboring pixels in a very unpredictable way, a complete WxHxD convolution must be used. However, if, as is more usual, you can first handle the spatial dependencies (neighborhood) to extract pixel features and then handle these features pixel-by-pixel to get the output, it is better to use a WxHx1 convolution followed by a 1x1xD one, saving lots of network parameters and thus training time.
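
As a small illustration of the parameter saving (my own sketch, not from the book), compare a standard convolution with its separable counterpart in Keras:

    from tensorflow import keras
    from tensorflow.keras import layers

    inputs = keras.Input(shape=(64, 64, 32))  # W x H x D feature map, D = 32 channels

    full = keras.Model(inputs, layers.Conv2D(64, 3)(inputs))                # full WxHxD kernels
    separable = keras.Model(inputs, layers.SeparableConv2D(64, 3)(inputs))  # depthwise + 1x1 pointwise

    full.summary()       # about 18.5k parameters (3*3*32*64 + 64)
    separable.summary()  # about 2.4k parameters (3*3*32 + 32*64 + 64)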

",12630,,12630,,10/18/2020 15:03,10/18/2020 15:03,,,,0,,,,CC BY-SA 4.0 24117,1,24154,,10/17/2020 19:33,,6,300,"

The main goal is: Find the smallest possible neural network to approximate the $sin$ function.

Moreover, I want to find a qualitative reason why this network is the smallest possible network.

I have created 8000 random $x$ values with corresponding target values $sin(x)$. The network, which I am currently considering, consists of 1 input neuron, 3 neurons in two hidden layers, and 1 output neuron:

Network architecture:

The neural network can be written as function $$y = sig(w_3 \cdot sig(w_1 \cdot x) + w_4 \cdot sig(w_2 \cdot x)),$$ where $\text{sig}$ is the sigmoid activation function.

$tanh$ activation function:
When I use $tanh$ as an activation function, the network is able to hit the 2 extrema of the $sin$ function:


Sigmoid activation function:
However, when I use the sigmoid activation function $\text{sig}$, only the first extremum is hit. The network output is not a periodic function but converges:

My questions are now:

  • Why does one get a better approximation with the $tanh$ activation function? What is a qualitative argument for that?
  • Why does one need at least 3 hidden neurons? What is the reason that the approximation with $tanh$ does not work anymore, if one uses only 2 hidden neurons?

I really appreciate all your ideas on this problem!

",23496,,,,,10/20/2020 6:34,Smallest possible network to approximate the $sin$ function,,1,3,,,,CC BY-SA 4.0 24118,2,,24114,10/18/2020 0:55,,0,,"

I was thinking this strategy may work.

So, Q-learning takes a vector input as the state representation; let's say your vector has n dimensions, i.e. [$n_0$, $n_1$, $n_2$, ..., $n_{n-1}$].

Now, from my interpretation, you want to populate a matrix with 0's and 1's given the state vector, but the action space has a high complexity: e.g. an 8*8 matrix has 64 cells, i.e. $2^{64}$ possible actions if you want the action to be the whole matrix.

I suggest this:

Fill each cell one at a time, i.e. your agent has only two possible actions, 0 and 1. To indicate to your agent that you are at a specific cell, concatenate the row and column numbers to the state vector before passing it as input to the Q-learning agent.

Example:

If your original state vector is [55, 22, 100, 4] and you have to fill the cell at position (10, 30) of the matrix, the state vector should be modified as follows: [55, 22, 100, 4, 10, 30].
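
A tiny sketch of that state augmentation (my own illustration of the idea, with made-up array sizes and a hypothetical agent.act API):

    import numpy as np

    state = np.array([55, 22, 100, 4])   # original state vector
    n_rows, n_cols = 8, 8                # size of the allocation matrix
    action_matrix = np.zeros((n_rows, n_cols), dtype=int)

    for row in range(n_rows):
        for col in range(n_cols):
            augmented_state = np.concatenate([state, [row, col]])
            # The agent now only chooses 0 or 1 for this single cell
            action_matrix[row, col] = agent.act(augmented_state)  # hypothetical agent API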

I'm not sure of the sample-efficiency of this approach though.

",30174,,,,,10/18/2020 0:55,,,,0,,,,CC BY-SA 4.0 24121,1,,,10/18/2020 2:35,,1,705,"

I'm trying to build a simple autoencoder but with a variable latent length (the network can produce latent vectors of variable length depending on the complexity of the input), but I've not seen any related work to get ideas from. Have you seen any related work? Do you have any idea how to do this?

Actually, I want to use this autoencoder for transmitting the data over a noisy channel, so having a variable length may help.

",41547,,2444,,10/18/2020 9:58,10/18/2020 20:30,Is it possible to have a variable-length latent vector in an autoencoder?,,2,2,,,,CC BY-SA 4.0 24123,2,,23559,10/18/2020 5:05,,2,,"

Art of NPC creation

I'm assuming this is a standard game, not a game theory application. These AIs tend to be far simpler in theory than actual artificially intelligent agents which are used to solve real world problems. The challenge of building games is that there are few right answers. A game NPC opponent could know practically nothing of the player and be guided by random world exploration activating certain characteristics (attack, strategy) whenever a human is in range or on the other end could know basically everything and have some limiting factors built into the code which reduces their challenge level to the player. This is the art aspect of gaming design. NPCs with a high level of complexity could learn from their interactions and build a "pattern" with some internal reward scoring system for success (similar to reinforced learning). This would represent a very challenging game NPC, especially if the strategy changes upon failure. Think about who will be playing your games, the audience that you would tend to draw, and hone the NPC challengers to them. Hardcore gamers deserve a challenge, and hobby gamers deserve a chance to win. Where that balance between rewarding game experience and challenging game experience lies depends on the 'feel' which you are cultivating as the game designer. You mentioned resources and exploration as elements of the game. For exploration types of games, you may rely on environmental interactions to offset player interactions. A distracted NPC (gathering resources) is less likely to heavily attack the human player unless this player enters the perimeters of the NPC base or scouting area. This may work to keep the gameplay balanced. Keep in mind that some limitations for your NPCs must be absolute, or the game will experience unexpected behavior (bugs). A completely unrestricted NPC is a liability. If the NPC goes too far from the player or gets stuck, it completely ruins the feel of your game.

",16959,,,,,10/18/2020 5:05,,,,0,,,10/18/2020 5:05,CC BY-SA 4.0 24124,2,,24112,10/18/2020 5:46,,3,,"

The state space is certainly continuous, assuming that you can somehow feed that AI exact coordinates. You may have to resort to CNNs if you do not have access to this information. For the action space, you should consider how the game actually plays. Since you use a mouse to simply show the direction, you could use (x,y) positions of the mouse as an action, or even just the angle $\theta$ of the mouse cursor in a circle around the agent. If you are playing on the site, then your observations would have to be from a CNN, it should be possible to use the score as your reward, as well as the possibility for eating things or distance from opponents that are bigger or smaller as intermediate rewards. The number of nodes in your network is something that you must find experimentally, you may like to research what kind of architectures other people have used in this field. You shouldn't need to account for different numbers of players. Nothing special needs to be done to get a single node giving a continuous output to represent the angle $\theta$, or alternatively two nodes representing x and y. You can then use tanh or sigmoid to limit the output node values for eject and split actions.

",34473,,,,,10/18/2020 5:46,,,,0,,,,CC BY-SA 4.0 24125,1,,,10/18/2020 5:57,,1,407,"

I am a mathematics student who is learning NLP, so I have paid a high amount of attention on the mathematics used in the subject, but my interpretations may or may not be right sometimes. Please correct me if any of them are incorrect or do not make sense.

I have learned CBOW and Skip-Gram models.

I think I have understood the CBOW model, and here is my interpretation: First, we fix a number of neighbors of the unknown center word which we would like to predict; let the number be $m$. We then input the original characteristic vectors (vectors of zeros and ones only) of those $2m$ context words. By multiplying those vectors by a matrix, we obtain $2m$ new vectors. Next, we take the average of those $2m$ vectors and this is our hidden layer, namely $v$. We finally multiply $v$ with another matrix, and that is the "empirical" result.

I tried to apply the same logic to Skip-Gram, but I have been stuck. I understand that Skip-Gram is a kind of "reversal" of CBOW, but the specific steps have given me a hard time. So, in Skip-Gram, we only have a center word, and based upon that we are trying to predict $2m$ context words. By similar steps, we obtain a hidden layer, which is again a vector. The final process also involves multiplication with a matrix, but I don't know how we can get $2m$ new vectors based upon one, unless we have $2m$ different matrices?

",41685,,2444,,5/5/2022 20:46,5/5/2022 20:46,Is my interpretation of the mathematics of the CBOW and Skip-Gram models correct?,,1,0,,,,CC BY-SA 4.0 24126,2,,24121,10/18/2020 8:43,,-1,,"

You might want to look at an encoder-decoder sequence to sequence model. This model allows you to input and output data with variable length.

",34358,,,,,10/18/2020 8:43,,,,1,,,,CC BY-SA 4.0 24127,2,,24081,10/18/2020 8:57,,1,,"

In the Image-to-Image Translation with Conditional Adversarial Networks paper (popularly known as pix2pix), they used a Markovian Discriminator to effectively model the image as a Markovian Random Field.

There were some papers in the last 5 years concerning Markov Random Fields. Here are some of them:

  1. Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis

  2. Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks

",40434,,,,,10/18/2020 8:57,,,,0,,,,CC BY-SA 4.0 24128,1,,,10/18/2020 10:08,,1,130,"

Could someone clear up my doubt about the loss function used in the SeqGAN paper? The paper uses a policy gradient method to train the generator, which is a recurrent neural network here.

  1. Have I interpreted the terms correctly?
  2. What are we summing over? The entire vocabulary of words?

Loss function - my interpretation:

",41372,,41372,,11/6/2020 17:44,11/6/2020 17:44,SeqGAN - Policy gradient objective function interpretation,,0,2,,,,CC BY-SA 4.0 24129,2,,24125,10/18/2020 10:51,,1,,"

The following figure from this article can be helpful:

This figure represents "Skip-Gram model structure. Current center word is 'passes'".

",4446,,,,,10/18/2020 10:51,,,,0,,,,CC BY-SA 4.0 24132,2,,23700,10/18/2020 16:22,,0,,"

You could use Ray RLlib. It has support for parallel environments, even over multiple GPUs and compute nodes.

",34358,,,,,10/18/2020 16:22,,,,0,,,,CC BY-SA 4.0 24133,2,,23666,10/18/2020 16:30,,0,,"

Since you are looking at a single iteration and expect a meaningful change, my guess is that you aren't training for long enough. Q-learning can take very long; for many environments, it takes millions of iterations.

",34358,,,,,10/18/2020 16:30,,,,0,,,,CC BY-SA 4.0 24135,2,,24121,10/18/2020 17:37,,1,,"

If you use RNNs, then I think the solution is to use padding (zero padding) with max sequence length (that is the max number of words in a text) in order to tell your model to skip the zeros when possible. In that way, your model will try to learn a good representation of your input with fixed size. If you do not know this dimension, a solution may be to grid search this hyperparameter.

If you still want to exploit the dimensionality difference, maybe you can train different models, each with a fixed representation dimension, depending on the dimension of the input. That is, for example, use one for small, one for medium and one for large inputs; but this would surely require having a large and quite balanced initial dataset.

Another idea could be to use the autoencoder with a fixed latent dimension. Then, do effective clustering on your samples using their latent representations, considering that similar representations should have similar dimensionality requirements (?). After that, you could train your initial dataset on k models, the same number as the clusters. That is, there should be k different latent spaces. The goal is to match each instance to the correct model. At first, you should train them all with each instance, but as the training progresses, you could maybe use binary search for each instance in order for it to find the correct model, assuming that there is a total order on the dimensionality requirements. Of course, this is just an idea; I don't know if it is going to be really helpful at all.

",36055,,36055,,10/18/2020 20:30,10/18/2020 20:30,,,,5,,,,CC BY-SA 4.0 24136,1,24137,,10/18/2020 18:22,,0,295,"

It seems like transfer learning is only applicable to neural networks. Is this a correct assumption?

While I was looking for examples of Transfer Learning, most seemed to be based on image data, audio data, or text data. I was not able to find an example of training a neural network on numerical data.

I want to use transfer learning in this particular scenario: I have a lot of numerical data from an old environment, with binary classification labels, and, utilizing this, I want to train on a new environment to do the same binary classification.

The dataset would look something like this

Is this possible? What would the model look like?

",41693,,,,,10/18/2020 19:09,Transfer Learning of Numerical Data,,1,0,,,,CC BY-SA 4.0 24137,2,,24136,10/18/2020 19:03,,1,,"

It seems like transfer learning is only applicable to neural networks. Is this a correct assumption?

No. The Wikipedia page gives you pointers to several examples in other methodologies.

While I was looking for examples of Transfer Learning, most seemed to be based on image data, audio data, or text data. I was not able to find an example of training a neural network on numerical data.

All the cases you mention are converted to numerical data: image and audio usually via sampling, text via one-hot encoding.

I want to use transfer learning in this particular scenario: I have a lot of numerical data from an old environment, with binary classification labels, and utilizing this, want to train on a new environment to do the same binary classification.

That is not transfer learning. Transfer learning applies when there is a change in the domain (input features) or in the task (output labels).

The dataset would look something like this [sample table]. Is this possible? What would the model look like?

For a simple case like the one you present, probably a simple network with one hidden layer will be enough. Train it with the original pairs of {features, label} or, if those are not available, use the current predictor to obtain the labels from the features.
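
A minimal sketch of such a network (my own illustration, assuming Keras and some arbitrary number of numerical input columns):

    from tensorflow import keras
    from tensorflow.keras import layers

    n_features = 4  # set to the number of numerical columns in your dataset

    model = keras.Sequential([
        layers.Dense(8, activation="relu", input_shape=(n_features,)),  # single small hidden layer
        layers.Dense(1, activation="sigmoid"),                          # binary classification output
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(X, y, epochs=50)  # X: feature rows, y: 0/1 labels (original or predicted)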

",12630,,12630,,10/18/2020 19:09,10/18/2020 19:09,,,,1,,,,CC BY-SA 4.0 24138,2,,12870,10/18/2020 21:55,,28,,"

Explainable AI and model interpretability are hyper-active and hyper-hot areas of current research (think of holy grail, or something), which have been brought forward lately not least due to the (often tremendous) success of deep learning models in various tasks, plus the necessity of algorithmic fairness & accountability.

Here are some state of the art algorithms and approaches, together with implementations and frameworks.


Model-agnostic approaches

SHAP seems to enjoy high popularity among practitioners; the method has firm theoretical foundations in cooperative game theory (Shapley values), and it has to a great degree integrated the LIME approach under a common framework. Although model-agnostic, specific & efficient implementations are available for neural networks (DeepExplainer) and tree ensembles (TreeExplainer, paper).


Neural network approaches (mostly, but not exclusively, for computer vision models)


Libraries & frameworks

As interpretability moves toward the mainstream, there are already frameworks and toolboxes that incorporate more than one of the algorithms and techniques mentioned and linked above; here is a partial list:


Reviews & general papers


eBooks (available online)


Online courses & tutorials


Other resources

",11539,,11539,,12/3/2021 15:23,12/3/2021 15:23,,,,1,,,,CC BY-SA 4.0 24139,2,,3981,10/19/2020 9:19,,3,,"

There are several ways to add new classes to the trained model, which require just training for the new classes.

  • Incremental training (GitHub)
  • continuously learn a stream of data (GitHub)
  • online machine learning (GitHub)
  • Transfer Learning Twice
  • Continual learning approaches (Regularization, Expansion, Rehearsal) (GitHub)
",41707,,2444,,11/17/2020 18:37,11/17/2020 18:37,,,,3,,,,CC BY-SA 4.0 24140,1,24141,,10/19/2020 10:07,,2,464,"

I attended an introductory class about neural networks and I have a question regarding how to choose the number of hidden units per hidden layer.

I remember the Professor saying that there is no rule for choosing the number of hidden units, and that having many of them along with many hidden layers can cause the network to overfit the data and learn poorly.

However, I still have this question: assume that we have a network with an input layer of n input nodes, a first hidden layer of 4 hidden units, a second hidden layer of X hidden units, and an output layer of 5 units. Now, if I follow the Professor's statement, it would mean that I am allowed to have X = 3 or X = 4 in layer 2.

Is that actually allowed? Won't we have some sort of information gain passing from 4 (or 3) nodes to 5? The example is illustrated below.

",40411,,,,,10/19/2020 10:49,Can the hidden layer prior to the ouput layer have less hidden units than the output layer?,,1,0,,,,CC BY-SA 4.0 24141,2,,24140,10/19/2020 10:25,,7,,"

A layer with a bigger number of nodes than the previous one is something very common. Some examples are:

  • encoder-decoder strategies (autoencoders), where the encoder typically has layers with a decreasing number of nodes (down to the compressed/encoded data) and the decoder has layers with an increasing number of nodes.

  • bidirectional recurrent networks, where in the forward direction the number of nodes decreases and in the backward direction it increases.

  • generators that, from a random vector, generate, for example, a full image.

As a general rule: decreasing the number of nodes forces the net to filter/abstract/summarize the internal signal information (discarding useless information or noise), while increasing the number of nodes means applying the current information to generate answer values for specific questions/targets.

Allow me a strongly simplified example: assume you want a system that, from a photo of an animal, answers the questions: number of legs? has a beak? flies? The net inputs are images of birds and dogs.

The net architecture can have layers of decreasing size down to a single node that will decide "is it a bird or a dog?". From this single item of information (the only one needed to answer all the questions), the output layer will have 3 nodes, each one answering one of the specific target questions: number of legs? 4 if dog, 2 if bird, etc.
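
To make the decreasing/increasing size pattern from the examples above concrete (a sketch of my own, with arbitrary layer sizes), here is a small Keras autoencoder:

    from tensorflow import keras
    from tensorflow.keras import layers

    autoencoder = keras.Sequential([
        # Encoder: decreasing layer sizes (summarize / discard noise)
        layers.Dense(64, activation="relu", input_shape=(128,)),
        layers.Dense(16, activation="relu"),
        # Decoder: increasing layer sizes (expand the compressed code back out)
        layers.Dense(64, activation="relu"),
        layers.Dense(128, activation="linear"),
    ])
    autoencoder.compile(optimizer="adam", loss="mse")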

",12630,,12630,,10/19/2020 10:49,10/19/2020 10:49,,,,0,,,,CC BY-SA 4.0 24143,2,,8190,10/19/2020 12:02,,4,,"

Hopfield networks, a special case of RNNs, were first proposed in 1982: https://www.pnas.org/content/79/8/2554

Otherwise (shameless plug, I am the author) a non-technical timeline for NLP can be found here: https://blog.exxcellent.de/ki-machine-learning

",41709,,,,,10/19/2020 12:02,,,,2,,,,CC BY-SA 4.0 24144,1,,,10/19/2020 12:25,,3,63,"

I'm currently in the middle of a project (for my thesis) constructing a deep neural network. Since I'm still in the research part, I'm trying to find various ways and techniques to initialize weights. Obviously, every way will be evaluated and we will choose the one that fits best with our data set and our desired outcome.

I'm all familiar with the Xavier initialization, the classic random one, the He initialization, and zeros. Searching through papers I came across the SCAWI one (Statistically Controlled Activation Weight Initialization). If you have used this approach, how efficient is it?

(Also, do you know any good sources to find more of these?)

",41488,,2444,,10/22/2020 9:46,10/22/2020 9:46,How efficient is SCAWI weight initialization method?,,0,4,,,,CC BY-SA 4.0 24145,1,,,10/19/2020 13:38,,1,1034,"

I have been digging up articles across the internet about the computational complexity of GRUs. Interestingly, I came across this article, http://cse.iitkgp.ac.in/~psraja/FNNs%20,RNNs%20,LSTM%20and%20BLSTM.pdf, which uses the following notation:

Let I be the number of inputs, K be the number of outputs and H be the number of cells in the hidden layer

It then goes on to explain that the computational complexity of FNNs, RNNs, BRNNs, LSTMs and BLSTMs is O(W), i.e., linear in the total number of edges in the network,

where

  • For FNN: $W = IH + HK$ (I get this part: for fully connected networks, we have connections from each input to each hidden node and subsequently from the hidden to the output nodes)

  • For RNN: $W = IH + H^2 + HK$ (The formula is pretty much the same as for the FNN, but where does this $H^2$ come into the picture?)

  • For LSTM: $W = 4IH + 4H^2 + 3H + HK$ (It becomes more difficult with the LSTM: where do the 4's and 3's come from in the equation?)

Continuing with these notations, can I get a similar expression for the GRU as well? That would be very helpful for understanding.

",41519,,49821,,9/17/2021 5:05,9/17/2021 5:05,What is the computational complexity in terms of Big-O notation of a Gated Recurrent Unit Neural network?,,0,2,,,,CC BY-SA 4.0 24147,2,,24094,10/19/2020 18:19,,0,,"

I'd like to add some details to the Neil Slater's answer.

In order to generate data, we want to find some unknown distribution. Since we do not know anything about the real distribution, we can approximate it using a GAN. It was shown that optimizing the loss function of the original GAN is equivalent to minimizing the Jensen-Shannon divergence between the real distribution and the estimated (generator) distribution:

$$JSD(p \,\|\, q) = \frac{1}{2} KL\left(p \,\Big\|\, \frac{p+q}{2}\right) + \frac{1}{2} KL\left(q \,\Big\|\, \frac{p+q}{2}\right),$$ where $q$ is our initial distribution and $p$ is the target distribution.

While training the Discriminator as a normal classification model (which learns to distinguish between generated and real samples), we want to push the Generator towards the real data distribution. To make the generated samples look realistic, we pass true labels to calculate the gradient in that direction. Here is the intuition behind generator training:

",12841,,12841,,10/19/2020 18:24,10/19/2020 18:24,,,,0,,,,CC BY-SA 4.0 24150,2,,2922,10/19/2020 20:31,,4,,"

Although asked over 3 years ago, the question is still interesting and while I agree with the original answer, a lot can be added to it.

First, I'd like to point out that the term "knowledge base" is very ambiguous and it means different things to different people. For example, there is no sharp distinction between a knowledge base and a neural network. By now NNs can be so large that they essentially encode knowledge, as GPT does. So the distinction becomes a question of interface. And NNs are no longer as opaque, since many new techniques are available to probe the knowledge inside them. Even the more fundamental distinction between symbolic and neural reasoning is becoming less important as hybrid AI combines both in an intertwined fashion. So the historical divisions were largely about technologies and not the essence of AI.

Second, when it comes to NLP there is a fundamental distinction between language as the surface form of information used for communication and knowledge as deep information which cannot be accessed directly even with traditional database technologies. That fundamental divide makes the historical differences even less relevant today. NLP is where the interplay between surface and deep forms has been at the forefront of AI, but the same is now happening with vision and planning. The question becomes: how do we architect the interface between deep knowledge (however it is represented) and surface communication? At the moment natural language seems to be the only viable answer. So, for example, there is an effort to develop natural language interfaces to replace the plethora of query languages used by systems.

My personal prediction is that natural language will slowly evolve to include a variety of technical languages and multimodal interactions, but it is not clear at all how this will happen.

",5231,,5231,,10/22/2020 21:27,10/22/2020 21:27,,,,0,,,,CC BY-SA 4.0 24151,1,,,10/19/2020 22:12,,1,63,"

I am training a network through reinforcement learning. The policy network learns rotations, but, depending on the actual input (state), the output of the network should be restricted to certain bounds; otherwise, it mostly fails to reach these bounds. I am using tanh as the last activation function. So, I wonder if there could be a way to modify this last activation function such that it can adaptively change its bounds depending on the input? Or would this have a negative impact on learning?

I would also be open to papers or publications tackling this kind of problem. Thank you for your help!

",41715,,,,,10/19/2020 22:12,Dynamically adapting activation function,,0,3,,,,CC BY-SA 4.0 24153,1,,,10/20/2020 5:22,,2,187,"

I am trying to debug a convolutional neural network. I am seeing gradients close to zero.

How can I decide whether these gradients are vanishing or not? Is there some threshold to decide on vanishing gradient by looking at the values?

I am getting values on the order of $10^{-4}$ (e.g. $0.0001$) and, in some cases, on the order of $10^{-5}$ (e.g. $0.00001$).

The CNN seems not to be learning, since the histogram of the weights is also quite similar across all epochs.

I am using the ReLU activation function and the Adam optimizer. What could be the reason for the vanishing gradient in the case of the ReLU activation function?

If it is possible, please, point me to some resources that might be helpful.

",32394,,2444,,12/13/2020 12:52,12/13/2020 12:52,How to decide if gradients are vanishing?,,0,1,,,,CC BY-SA 4.0 24154,2,,24117,10/20/2020 6:34,,6,,"

Before anything, the function you have written for the network lacks the bias variables (I'm sure you used biases to get those beautiful images; otherwise your tanh network would have had to start from zero).

Generally, I would say it's impossible to have a good approximation of the sine with just 3 neurons, but if you want to consider one period of the sine, then you can do something. For clarity, look at this picture:

I've written the code for this task in Colab and you can find it here, and you can play with it if you want.

If you run the network several times, you may get different results (because of different initializations), and you can see some of them in the Results section of the link above. What you showed us in the images above are just two possibilities. But it's interesting that you can get better results with tanh than with sigmoid, and if you want to know why, I highly recommend you look at this lecture of CS231n. In summary, it's because tanh has a negative part and the network can learn better with it.

But actually their approximation power is almost the same, because 2*sigmoid(1.5*x) - 1 looks almost the same as tanh(x), and you can see it by looking at the picture below:

So why can't you get the same results as with tanh? That's because tanh suits the problem better, and if the network wants to get the same result as tanh using sigmoid, it has to learn the transformation parameters, and learning these parameters makes the learning task harder. So it's not impossible to get the same result with sigmoid, but it's harder. And to show you that it's possible, I have set the parameters of the sigmoid network manually and got the result below (you can get better results if you have more time):

Lastly, if you want to know why you can't get the same result with 2 neurons instead of 3, it's better to understand what the network does with 3 neurons.
If you look at the output of the first layer, you may see something like this (the outputs of its two neurons):

Then the next layer takes the difference between the outputs of these two neurons (which already looks like a sine) and applies sigmoid or tanh to it, and that's how you get a good result. But when you have just one neuron in the first layer, no scenario like this is possible, and approximating one period of the sine is beyond its capacity (underfitting).

",41547,,,,,10/20/2020 6:34,,,,2,,,,CC BY-SA 4.0 24155,1,,,10/20/2020 6:56,,1,43,"

Suppose we are training 2 collaborative agents in an environment with Reinforcement Learning. We define the following example: there is a midfielder and a striker. The midfielder's reward depends on how many goals are scored, which, however, depends on the striker's performance. And the striker's performance depends on how good the midfielder is at making his passes.

For this type of problem, what do you recommend to study?

",41726,,2444,,10/31/2020 15:16,10/31/2020 15:16,Which reinforcement learning approach to use when there are 2 collaborative agents?,,0,3,,,,CC BY-SA 4.0 24157,1,,,10/20/2020 7:40,,2,415,"

I'm writing a DQN agent for the Wumpus game.

Is the reward function to train the Q-networks (target network and policy) the same as the score of the game, i.e. +1000 for picking up gold, -1000 for falling in pits and dying from the wumpus, -1 each move?

This is naturally cumulative, in that the score changes after each action taken by the agent. Alternatively, is it just a +1 for win, -1 for a loss and 0 in all other situations?

",41728,,2444,,10/21/2020 16:15,9/6/2022 13:14,How should I define the reward function to solve the Wumpus game with deep Q-learning?,,2,0,,,,CC BY-SA 4.0 24158,1,,,10/20/2020 7:57,,1,66,"

I am working on a multilabel classification problem with 206 labels. When I looked at the percentage of 1's in each label, it is well below 0.1% for every label; the maximum percentage of ones across the labels is 0.034%.

Below is the distribution of the percentage of ones in each label. If I simply build a single multilabel classification model, the score it gives may be high, but it is heavily biased towards zeros, so it never assigns a high probability to a label being one. If I instead build a different model for each label, I can treat it as a collection of imbalanced binary problems and apply the SMOTE algorithm to each one, but I doubt whether SMOTE can produce enough data to balance them, given how imbalanced my data is. My question is whether I should give autoencoders a try, which I have heard work well for fraud detection when the data has a percentage of ones below 1% or so. Will they perform better in my case? If they can work well, then I will study autoencoders.

",38737,,,,,10/20/2020 7:57,how to handle highly imbalanced multilabel classification?,,0,0,0,,,CC BY-SA 4.0 24159,2,,24157,10/20/2020 8:25,,1,,"

The reward function is up to you when you set the goals for the agent.

  • If the goal is to score as highly as possible, before ending the game, then use the score. You may want to scale the score down if you are using neural networks, to prevent needing to handle very large error values in early phases of learning.

  • If the goal is to win the game, and you do not care about the score, then use the win/loss end result. I am not familiar with the game, but if it is possible to win the game - e.g. reach an exit - whilst not collecting all the gold, then the agent may choose to do that if it reduces the chance of losing.

The second option is harder for the agent to assess. You may want the current score to be one of the state variables, as the score is likely to be correlated with win/loss.

Most computer games are designed around giving a numerical score as feedback for human play, with high score tables, players considered "better" if they get more points etc. If you want your agent to compete in the same way, then using the score directly will help achieve that goal.

",1847,,,,,10/20/2020 8:25,,,,0,,,,CC BY-SA 4.0 24161,1,,,10/20/2020 8:30,,3,173,"

Hello :) I'm pretty new to this community, so let me know if I posted anything incorrectly and I'll try to change it.

I'm working on a project whose aim is to create a self-driving agent in CARLA. I built a neural network based on the Xception architecture (with decaying ε-greedy exploration). The other parameters are:

EPISODES: 100
GAMMA: 0.3
EPSILON_DECAY: 0.9
MIN_EPSILON: 0.001
BATCH: 16

Due to limited computer resources, I chose 100 or 300 episodes to train the model, but it generates a lot of fluctuations:

EPISODES: 100
GAMMA: 0.7
EPSILON_DECAY: 0.9
MIN_EPSILON: 0.001
BATCH: 16

Can anyone suggest how I can improve my results? Or is it only an issue of the small number of episodes?

",40980,,40980,,10/20/2020 15:03,10/21/2020 10:00,Improving DQN with fluctuations,,1,5,,,,CC BY-SA 4.0 24163,1,,,10/20/2020 11:41,,1,63,"

In the above diagram, the shape of some of the matrices can be seen in the yellow highlight. For instance:

The hidden state at timestep t-1 ($h_{t-1}$) has shape $(na, m)$

The input data at timestep t ($x_{t}$) has shape $(nx, m)$

$Z_{t}$ has shape $(na+nx, m)$ since the hidden state and input data are concatenated in LSTMs.

$W_{c}$ has shape $(na, na+nx)$

$W_{c}Z_{t}$ has shape $(na, m)$ = $i_{t}$

$W_{i}Z_{t}$ has shape $(na, m)$ = $\hat{c}_{t}$

When working through the network to the point of computing $i_{t}$ and $\hat{c}_{t}$, how can these two be combined with a dot product when the multiplication is not of the form $(m \times n)(n \times p)$, as per the matrix multiplication definition?

",26159,,26159,,10/20/2020 12:04,10/20/2020 12:49,How do LSTMs work if the following two matrices are not able to be multiplied?,,1,0,,,,CC BY-SA 4.0 24164,2,,24157,10/20/2020 11:42,,1,,"

The reward function belongs to the environment, and it is the only feedback through which the agent can learn about the world for a given state.

If we want agents to do something specific, we must provide rewards to them in such a way that they will achieve our goals. It is thus very important that the reward function accurately reflects the desired behavior.

Depending on your goal you can construct the function such that the agent will try to finish the game as fast as possible or collect the maximum score.

For example, certain reward functions can cause an agent to commit suicide in order to avoid a more severe punishment, in the form of negative rewards, in the future (e.g. if the step reward is very small). Or it will take the safest path without collecting gold if the punishment for falling into pits is very big. In other words, you should experiment with your reward function to find a tradeoff.

Check out this video for more intuition behind it.

",12841,,12841,,9/6/2022 13:14,9/6/2022 13:14,,,,0,,,,CC BY-SA 4.0 24165,2,,24161,10/20/2020 12:43,,2,,"

It is not clear from your question how you use your replay buffer. Basically, you have to store the state/action/reward tuples and train your agent on (mini-batches sampled from) the buffer.

Moreover, you should give the agent time to explore (all) states of the world. But if you want to speed up training, you can try to implement importance sampling.

",12841,,12841,,10/21/2020 10:00,10/21/2020 10:00,,,,2,,,,CC BY-SA 4.0 24166,2,,24163,10/20/2020 12:49,,1,,"

It turns out that, for the places where a dot is shown in the image above, they're actually element-wise multiplications, not dot products. A lot of sources use an X or . to denote multiplication, but don't clearly indicate that they mean element-wise multiplication.
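A minimal NumPy sketch of the difference, using the shapes from the question:

import numpy as np

na, m = 4, 2
i_t = np.random.rand(na, m)       # gate activations, shape (na, m)
c_hat_t = np.random.rand(na, m)   # candidate cell state, shape (na, m)

elementwise = i_t * c_hat_t       # valid: the element-wise product keeps shape (na, m)
# np.dot(i_t, c_hat_t) would raise an error: (na, m) @ (na, m) is not a valid matrix product
print(elementwise.shape)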

",26159,,,,,10/20/2020 12:49,,,,0,,,,CC BY-SA 4.0 24168,1,,,10/20/2020 15:28,,1,67,"

I am using an AlexNet architecture as my convolutional neural network, with a learning rate of 0.00007 and a batch size of 128. I have 20000 data points, split into 10% test, 40% validation, and 50% training. I trained the network for 100 epochs, and here are my results for loss and accuracy. I would like to ask how I can get the validation and training loss closer together in these plots. At first, I guessed the number of epochs was not enough, but I tried more epochs and my results didn't change. Can I say my training process is complete with this distance between training and validation loss? Is there any way to bring the loss curves closer together?

",33792,,33792,,10/20/2020 17:27,10/21/2020 10:45,How to have closer validation loss and training loss in training a CNN,,0,0,,,,CC BY-SA 4.0 24169,1,,,10/20/2020 15:46,,1,13,"

I've been reading this paper that formulates invariant task-parametrized HSMMs. In section 3.1 (Model Learning), the task parameters are represented in $F$ coordinate systems defined by $\{A_j,b_j\}_{j=1}^F$, where $A_j$ denotes the rotation of the frame as an orientation matrix and $b_j$ represents the origin of the frame. Each datapoint $\xi_t$ is observed from the viewpoint of $F$ different experts/frames, with $\xi_t^{(j)} = A_j^{-1}(\xi_t - b_j)$ denoting the datapoint w.r.t. frame $j$.

How is $\xi_t^{(j)} = A_j^{-1}(\xi_t - b_j)$ derived? I understand that we must subtract $b_j$, but I'm not sure if I should pre-multiply by $A_j$ or $A_j^{-1}$, so it'd be great if someone could help me understand this better. Since $A_j$ is an orientation matrix, I'd guess that it's orthogonal, and so $A_j^{-1} = A_j^T$ - and it may just be a matter of convention (i.e. depending on how $A_j$ is defined). The details aren't clear from the paper though, and I'd appreciate any help!

",35585,,,,,10/20/2020 15:46,How do I find the data-point with respect to a given frame?,,0,1,,,,CC BY-SA 4.0 24170,1,,,10/20/2020 15:55,,2,16,"

I've been reading this paper that formulates invariant task-parametrized HSMMs. The task parameters are represented in $F$ coordinate systems defined by $\{A_j,b_j\}_{j=1}^F$, where $A_j$ denotes the rotation of the frame as an orientation matrix and $b_j$ represents the origin of the frame. Each datapoint $\xi_t$ is observed from the viewpoint of $F$ different experts/frames, with $\xi_t^{(j)} = A_j^{-1}(\xi_t - b_j)$ denoting the datapoint w.r.t. frame $j$. I quote from the abstract:

"Generalizing manipulation skills to new situations requires extracting invariant patterns from demonstrations. For example, the robot needs to understand the demonstrations at a higher level while being invariant to the appearance of the objects, geometric aspects of objects such as its position, size, orientation and viewpoint of the observer in the demonstrations."

"The algorithm takes as input the demonstrations with respect to different coordinate systems describing virtual landmarks or objects of interest with a task-parameterized formulation, and adapt the segments according to the environmental changes in a systematic manner."

Though it makes some intuitive sense, I'm not fully convinced why working with multiple coordinate systems would help us capture invariant patterns in demonstrations, and leave aside the scene-specific details. That is the goal, right? On a very high level, I see that having access to more "viewpoints" may help the robot understand the environment better, and neglect viewpoint-specific biases to focus on invariant patterns across different frames. However, this is very handwavy - and I'd love to know specific details about why using multiple viewpoints is a good idea in this case.

Thanks!

",35585,,,,,10/20/2020 15:55,How do multiple coordinate systems help in capturing invariant features?,,0,0,,,,CC BY-SA 4.0 24171,1,,,10/20/2020 16:14,,1,74,"

I am working on implementing an RL agent and I want to demonstrate its effectiveness over a bounded problem space. The setting is essentially a queueing network and so it can be represented as a graph. I want to consider the agent's performance over all graphs up to order $n$ and with average degree from $0$ (edgeless) to $n-1$ (fully connected).

I have looked into generating random graphs using the Erdős–Rényi model, for example. My thought is that I could show the average performance of my agent for different settings of number of nodes and edge probability (under this particular graph generation model).
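For instance, something along these lines (a minimal sketch using networkx; the node counts and edge probabilities are arbitrary placeholders):

import networkx as nx

n_values = [5, 10, 20]
p_values = [0.1, 0.5, 0.9]
graphs = [nx.erdos_renyi_graph(n, p) for n in n_values for p in p_values]
# evaluate the agent on each generated topology and report the average performance per (n, p) setting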

Are there any established techniques that are along the lines of this approach?

",31466,,31466,,10/20/2020 21:15,10/20/2020 21:15,How can I evaluate a reinforcement learning algorithm over an entire problem space?,,0,4,,,,CC BY-SA 4.0 24176,1,,,10/20/2020 21:28,,1,718,"

Machine Learning books generally explains that the error calculated for a given sample $i$ is:

$e_i = y_i - \hat{y_i}$

Where $\hat{y}$ is the target output and $y$ is the actual output given by the network. So, a loss function $L$ is calculated:

$L = \frac{1}{2N}\sum^{N}_{i=1}(e_i)^2$

The above scenario is explained for a binary classification/regression problem. Now, let's assume an MLP network with $m$ neurons in the output layer for a multiclass classification problem (generally one neuron per class).

What changes in the equations above? Since we now have multiple outputs, should both $e_i$ and $y_i$ now be vectors?

",41747,,4709,,10/21/2020 16:20,10/22/2020 9:21,How is the error calculated with multiple output neurons in the neural network?,,2,0,,,,CC BY-SA 4.0 24178,2,,24176,10/21/2020 4:45,,1,,"

Assuming you're using softmax on the last layer for classification, it sounds like a simple application of cross entropy loss from here on out: https://datascience.stackexchange.com/questions/20296/cross-entropy-loss-explanation

",34473,,34473,,10/21/2020 12:10,10/21/2020 12:10,,,,1,,,,CC BY-SA 4.0 24179,1,,,10/21/2020 4:45,,1,142,"

I am taking a course in Machine Learning and the Professor introduced us to the XOR problem.

I understand the XOR problem is not linearly separable and we need to employ Neural Network for this problem.

However, he mentioned that XOR works better with a bipolar representation (-1, +1), which I have not really understood.

I am wondering why a bipolar representation would be better than a binary representation. What's the rationale for saying so?

",41187,,,,,10/21/2020 4:45,XOR problem with bipolar representation,,0,3,,,,CC BY-SA 4.0 24180,2,,9275,10/21/2020 8:04,,1,,"

Your best bet would be to formulate the problem in PDDL, which should be fairly easy, and then use a standard planner to generate a plan from that description.

In PDDL you describe the properties and the possible actions, the start state and the goal state, and the planner will then take this to produce a sequence of actions that leads from the start state to the goal state. There is a planner available on-line that you can use.

",2193,,,,,10/21/2020 8:04,,,,0,,,,CC BY-SA 4.0 24181,1,,,10/21/2020 8:43,,2,265,"

In AlphaGo Zero, MCTS is used along with policy networks. Some sources say MCTS (or planning in general) increases the sample efficiency.

Assuming the transition model is known and the computational cost of interacting through planning is the same as interacting with the environment, I do not see the difference between playing many games versus playing a single game but planning at each step.

Furthermore, given a problem with a known transition model, how do we know combining learning and tree search will likely be better than pure learning?

",41751,,41751,,11/13/2020 0:10,11/13/2020 0:10,Why is tree search/planning used in reinforcement learning?,,1,2,,,,CC BY-SA 4.0 24182,2,,24176,10/21/2020 9:21,,1,,"

As you say, the outputs are modeled as a vector, each output in one vector component.

In regression problems:

The most common loss function, like in the scalar case, is the square error. Skipping constants, it is defined as:

$$E=\sum_i ||\mathbf{y_i}-\mathbf{\hat{y_i}}||^2 = \sum_i (\mathbf{y_i}-\mathbf{\hat{y_i}})(\mathbf{y_i}-\mathbf{\hat{y_i}})$$

where:

  • the vector $\mathbf{y_i}$ is the expected (target) value for sample $i$ (note I do not use the same naming convention as the question).
  • the vector $\mathbf{\hat{y_i}}$ is the network output for the same sample
  • $||.||$ is the vector norm
  • the product of two vectors is the scalar/inner product.

The derivative with respect to some NN parameter $w$ is:

$$\frac{\partial}{\partial w}E=\frac{\partial}{\partial w} \sum_i (\mathbf{y_i}-\mathbf{\hat{y_i}})(\mathbf{y_i}-\mathbf{\hat{y_i}}) = -2 \sum_i (\mathbf{y_i}-\mathbf{\hat{y_i}})\frac{\partial \mathbf{\hat{y_i}}}{\partial w} $$

where $\frac{\partial \mathbf{\hat{y_i}}}{\partial w}$ is the term that the backpropagation algorithm evaluates.

Multi-class classification problems:

Two options appear as the most usual ones:

  • a) optimize the square error of the probabilities. The target vector will be of the form (0,...,0,1,0,...,0), while the network output will be something like (0.2,0.1,0.8,0.4,...). This case can be handled like the regression one.
  • b) optimize entropy. In this case, a usual loss function is the cross-entropy (a small numerical sketch of both options follows the list of definitions below):

$$ E = - \sum_c p(c) log(\hat p(c)) \text{ [definition]} $$ $$ E = - \sum_i \sum_c p_i(c) log(\hat p_i(c)) = - \sum_i log(\hat p_i(c_i)) \text{ [average]} $$

where:

  • $c$ is some class
  • $p(c)$ is probability of class $c$ in train dataset
  • $\hat p(c)$ is probability of class $c$ in network output
  • $i$ is sample number
  • $c_i$ is expected (correct) class of sample $i$
  • $p_i(c)$ is the ground truth, usually taken as 1 if $c=c_i$ and 0 otherwise, as in the last expression.
  • $\hat p_i(c)$ is the network output for sample $i$ and class $c$.
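As announced above, a small numerical sketch of the two options (the numbers are made up purely for illustration):

import numpy as np

y = np.array([[0., 1., 0.],          # one-hot targets for two samples and three classes
              [1., 0., 0.]])
y_hat = np.array([[0.2, 0.7, 0.1],   # network output probabilities
                  [0.6, 0.3, 0.1]])

squared_error = np.sum((y - y_hat) ** 2)         # option (a)
cross_entropy = -np.mean(np.log(y_hat[y == 1]))  # option (b), averaged over samples
print(squared_error, cross_entropy)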
",12630,,12630,,10/22/2020 9:21,10/22/2020 9:21,,,,0,,,,CC BY-SA 4.0 24186,1,,,10/21/2020 12:11,,2,135,"

I am really trying to understand deep learning models like RNNs, LSTMs, etc. I have gone through many tutorials on RNNs and have learned that RNNs cannot handle long-range dependencies, like:

Consider trying to predict the last word in the text “I grew up in France… I speak fluent French.” Recent information suggests that the next word is probably the name of a language, but if we want to narrow down which language, we need the context of France, from further back. It’s entirely possible for the gap between the relevant information and the point where it is needed to become very large. Unfortunately, as that gap grows, RNNs become unable to learn to connect the information.

This comes from the vanishing gradient problem. However, I could not understand how the vanishing gradient creates an issue for RNNs with long-range dependencies. As far as I know, the vanishing gradient usually appears when we have many hidden layers and the gradient arriving at the first layers becomes too small, which affects the training process. However, everyone connects the long-range issue with the vanishing gradient, so, technically, what is the relationship between RNNs (long-range dependencies) and the vanishing gradient?

I am really sorry if it is a weird question.

",41756,,,,,10/21/2020 13:26,How does vanish gradient restrict RNN to not work for long range dependencies?,,1,0,,,,CC BY-SA 4.0 24187,1,24198,,10/21/2020 12:17,,2,270,"

Assume that I have a fully connected network that takes in a vector containing 1025 elements. First 1024 elements are related to the input image of size 32 x 32 x 1, and the last element in the vector (1025-th element) is a control bit that I call it special input.

When this bit is zero, the network should predict if there is a cat in the image or not, and when this bit is one, it should predict if there is a dog in the image or not.

So how can I tell the network that your 1025-th element should be special to you and you should pay more attention to it?

Note that it's just an example and the real problem is more complex than this. So please don't bypass the goal of this question by using tricks special to this example. Any idea is appreciated.

",41547,,,,,6/26/2021 23:04,"How to tell a neural network that: ""your i-th input is special""",,2,4,,,,CC BY-SA 4.0 24188,1,,,10/21/2020 13:09,,1,46,"

Are there any known models/techniques to determine whether a person in a store is a customer or a store representative?

For example, customer representatives may wear uniforms, so one possible way to identify them is by the color, texture, etc. of their uniform. On the other hand, a customer could wear clothes of the same color as a customer representative's uniform. Likewise, a customer representative could be wearing "normal clothes." So the main problems that may occur could be:

  • A customer becomes misclassified as a customer representative.
  • A customer representative becomes misclassified as a customer

So using clothing as the only proxy to classify people as customers or customer representatives seems to be flaky. Any other known ideas?

",41759,,2444,,10/21/2020 16:27,10/22/2020 7:20,Are there any known models/techniques to determine whether a person in a store is a customer or a store representative?,,1,0,,,,CC BY-SA 4.0 24190,2,,24186,10/21/2020 13:19,,2,,"

The vanishing gradient problem is this: as the gradient flows from the end of the network (right side) back to the start of the network (left side), it gets multiplied by numbers less than 1 and gradually becomes weaker and weaker, so that when it arrives at the first layers it is so weak that it makes almost no change to their parameters.

Now, in the case of RNNs, you can unroll the network and see that it is like a deep network. For clarity, look at the image below (taken from a course by Andrew Ng and edited):

The red arrows show the path of gradient backpropagation, and you can see that at each step the gradient is multiplied by a number (actually a matrix is multiplied by another matrix). If this number is less than one, it results in a vanishing gradient. If this number is greater than one, it results in an exploding gradient (which can be controlled by simply clipping it to a maximum value). The more steps, the stronger the vanishing or exploding effect.
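A toy numerical illustration of the effect (a minimal sketch, not the actual RNN Jacobians):

import numpy as np

factor = 0.9                      # stand-in for a per-step gradient factor smaller than 1
steps = np.array([1, 10, 50, 100])
print(factor ** steps)            # the contribution shrinks roughly geometrically with the distance in time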

But a solution has been found for this problem: the LSTM or GRU. What these units do is create a highway for the gradient to backpropagate through (at each step it is multiplied by 1), so the gradient can travel a longer distance (from "French" back to "France"). Even so, the problem still exists for very long-range relations.
If you know what a ResNet is and how it works, you can find the same concept behind LSTMs/GRUs and ResNets, as both create a highway for the gradient to flow back.

You can get an intuition of how an LSTM or GRU works by following the forward pass and thinking of what they do as locking concepts into memory cells. When the network sees the word "France", it understands that it's an important word that may come in handy later, so it puts it in one of its memory cells and keeps it there for several steps; when it has to guess the language, it uses this memory cell to predict "French" rather than "English" or "Persian". Then it can release that memory cell and use it for something else.

If you want to learn more and get better intuition, I highly recommend looking at this link: CS231n - lecture 10

",41547,,41547,,10/21/2020 13:26,10/21/2020 13:26,,,,5,,,,CC BY-SA 4.0 24192,1,,,10/21/2020 14:16,,3,28,"

The basic idea of MFA is to perform subspace clustering by assuming the covariance structure for each component of the form, $\Sigma_i = \Lambda_i \Lambda_i^T + \Psi_i$, where $\Lambda_i \in \mathbb{R}^{D\times d}$, is the factor loadings matrix with $d < D$ for parsimonious representation of the data, and $Ψ_i$ is the diagonal noise matrix. Note that the mixture of probabilistic principal component analysis (MPPCA) model is a special case of MFA with the distribution of the errors assumed to be isotropic with $Ψ_i = Iσ_i^2$.

What is meant by subspace clustering here, and how does $\Sigma_i = \Lambda_i \Lambda_i^T + \Psi_i$ accomplish the same? I understand that this is a dimensionality reduction technique since $\text{rank}(\Lambda_i) \leq d < D$. It'd be great if someone could help me understand more, and/or suggest resources I could look into for learning about this as an absolute beginner.

From what I understand, $x = \Lambda z + u$ is one factor-analyzer (right?), i.e. the generative model in maximum likelihood factor analysis. This paper goes on to define a mixture of factor-analyzers indexed by $\omega_j$, where $j = 1,...,m$. The generative model now obeys the distribution $$P(x) = \sum_{i=1}^m \int P(x|z,\omega_j)P(z|\omega_j)P(\omega_j)dz$$ where, $P(z|\omega_j) = P(z) = \mathcal{N}(0,I)$. How does this help/achieve the desired objective? Why take the sum from $1$ to $m$? Where is subspace clustering happening, and what's happening on a high-level when we are using this mixture of factor-analyzers?

",35585,,35585,,10/21/2020 14:48,10/21/2020 14:48,What is meant by subspace clustering in MFA?,,0,0,,,,CC BY-SA 4.0 24193,1,,,10/21/2020 14:30,,0,25,"

I am a beginner in AI methods. I have a collection of x(t) data, where x is a signal amplitude and t is time. My testing data are divided into two classes, say those from good and bad experimental samples. I need to classify the signals from unknown samples as good or bad according to their similarity to these two classes. What kind of neural network is best in this case? Could you recommend some example in the literature where such a problem is considered?

",41761,,,,,10/21/2020 14:30,What is the best neural network model to classify an x(t) signal according two classes?,,0,2,,,,CC BY-SA 4.0 24194,1,24196,,10/21/2020 15:30,,0,192,"

I have created my own RL environment where I have a 2-dimensional matrix as the state space: the rows represent the users that are asking for a service, and the 3 columns represent the 3 types of users; so if user U0 is of type 1 and is asking for a service, then the first row would be (0, 1, 0) (the first column is type 0, the second is type 1, ...).

The state space values are randomly generated each episode.

I also have an action space, representing which resources were allocated to which users. The action space is a 2-dimensional matrix, with the rows being the resources that the agent has and the columns being the users. So, suppose we have 5 users and 6 resources: if user 1 was allocated resource 2, then the 3rd row would look like this ('Z': a value of zero was chosen, 'O': a value of one was chosen): (Z, O, Z, Z, Z)

The possible actions are a list of tuples: the length of the list is equal to the number of users + 1, and the length of each tuple is equal to the number of users. Each tuple has one column set to 'O' and the rest to 'Z' (each resource can be allocated to one user only). So the number of tuples with one column set to 'O' is equal to the number of users, and then there is one tuple with all columns set to 'Z', which means that the resource was not allocated to any user.

Now, when the agent chooses the actions, for the first resource it picks an action from the full list of possible actions; then, for the second resource, the previously chosen action is removed from the possible actions, so it chooses from the remaining ones, and so on; that's because each user can be allocated only one resource. The all-'Z' action tuple can always be chosen.

When the agent allocates a resource to a user that didn't request a service, a penalty is given (varies with the number of users that didn't ask for a service but were allocated a resource), otherwise, a reward is given (also varies depending on the number of users that were satisfied).

The problem is that the agent always tends to pick the same action: the all-'Z' tuple for every user. I tried playing with the initial q_values; q_values is a dictionary keyed by (state, action) pairs, where the state is a tuple representing each possible row of the state space, i.e. (0, 0, 0), (1, 0, 0), (0, 1, 0), and (0, 0, 1), combined with each action from the possible actions list. I also tried different learning_rate values, different penalties and rewards, etc., but it always does the same thing.

",42372,,,,,10/21/2020 16:07,Q-learning agent stuck at taking same actions,,1,0,,,,CC BY-SA 4.0 24195,2,,24187,10/21/2020 16:00,,2,,"

Assume the image can contain objects of classes $C_1 \dots C_c$. Assume a set of additional inputs whose meaning is a question such as "does the image contain a $C_i$ or $C_j$ or ...?".

The main problem for the system is to classify the image into the classes $C_i$. The second problem is to answer the implicit question posed by the remaining inputs.

Thus, it is better to combine two NNs:

  • the first one is an object recognizer, with the image data as input.
  • the second one is a NN that answers the question implicit in the remaining bits, with the output of the previous NN and the "question" bits as inputs.

Concretely, for the example that the question describes:

  • a NN to classify dog/cat from the 32x32 image, with two outputs for the probabilities of dog and cat, or simply one binary output (0: "is a dog", 1: "is a cat").
  • a second NN with the "1025" binary input (0: "look for dogs", 1: "look for cats") and the output of the previous one. In this case, if everything is OK, it will infer the logic "a == b": ("is a dog" and "look for dogs") or ("is a cat" and "look for cats").

Note that if you try to solve the problem directly with a fully-connected NN over all inputs (1025 in the example), you lose the possibility of using CNN and max-pooling layers, etc. Moreover, splitting the problem decreases training times, while joining the sub-problems increases them "exponentially". Not a promising way.

",12630,,12630,,10/21/2020 16:06,10/21/2020 16:06,,,,0,,,,CC BY-SA 4.0 24196,2,,24194,10/21/2020 16:07,,0,,"

I am confused. For the initial $Q$-values, you generate one for each possible row $(1, 0, 0), (0,0,0), \ldots$ so you would have 4 states.

However, from the first paragraph it seems that the states themselves are matrices (one row for each user), so the state space is a set of such matrices.

That means that your $Q$-table should have a row for each possible matrix, and a column for each possible total assignment of items to users.

",40573,,,,,10/21/2020 16:07,,,,4,,,,CC BY-SA 4.0 24197,1,24208,,10/21/2020 16:19,,2,57,"

I understand that Hidden Markov Models are used to learn about hidden variables $z_i$ with the help of observable variables $\xi_i$. On Wikipedia, I read that while the $\xi_i$'s can be continuous (say Gaussian), the $z_i$'s are discrete. Is this necessary, and why? Are there ways in which I could extend this to continuous domains?

",35585,,2444,,10/25/2020 9:49,10/25/2020 9:49,Is there an equivalent model to the Hidden Markov Model for continuous hidden variables?,,1,0,0,,,CC BY-SA 4.0 24198,2,,24187,10/21/2020 16:34,,2,,"

The main benefit of deep learning is that you don't have to manually design features.

Classic Machine Learning algorithms always include a feature engineering step, whereas neural networks are able to extract features automatically during learning. The classic example is the CNN: the first layers create simple features representing lines, while the last layers represent abstract features. Of course, some tasks do require feature engineering (e.g. signal processing).

In your case, if you want to take advantage of a CNN, you can also add an additional input for the flag (e.g. as a one-hot vector). Here is an illustration taken from this answer.
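A minimal sketch of such an architecture with the Keras functional API (the layer sizes here are arbitrary):

from tensorflow.keras import layers, Model

image_in = layers.Input(shape=(32, 32, 1))
flag_in = layers.Input(shape=(1,))                 # the special control bit

x = layers.Conv2D(16, 3, activation='relu')(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)

x = layers.Concatenate()([x, flag_in])             # inject the flag after the convolutional features
x = layers.Dense(32, activation='relu')(x)
out = layers.Dense(1, activation='sigmoid')(x)

model = Model(inputs=[image_in, flag_in], outputs=out)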

",12841,,12841,,6/26/2021 23:04,6/26/2021 23:04,,,,4,,,,CC BY-SA 4.0 24199,1,24210,,10/21/2020 16:37,,3,73,"

Is there a common way to build a neural network that seeks to extract spatial and temporal information simultaneously? Is there an agreed up protocol on how to extract this information?

What combination of layers works: convolution + LSTM? What would be the alternatives?

",32434,,2444,,10/21/2020 23:15,10/22/2020 7:37,Is there a common way to build a neural network that seeks to extract spatial and temporal information simultaneously?,,1,0,,,,CC BY-SA 4.0 24200,1,,,10/21/2020 17:35,,1,89,"

All else being equal, including total neuron count, I give the following definitions:

  • wide is a parallel ensemble, where good chunks of the neurons have the same inputs because the inputs are shared and they have different outputs.
  • deep is a series ensemble, where for the most part neurons have as input the output of other neurons and few inputs are shared.

For CART ensembles the parallel (wide) ensemble is a random forest while the series (deep) ensemble is a gradient boosted machine. For several years the GBM was the "winningest" on kaggle.

Is there a parallel of that applied to Neural networks? Is there some reasonable measure that indicates whether deep outperforms wide when it comes to neural networks? If I had the same count of weights to throw at a tough problem, all else being equal should they be applied more strongly in parallel or in series?

",2263,,,,,10/22/2020 7:48,"Has ""deep vs. wide"" been resolved?",,1,1,,,,CC BY-SA 4.0 24201,1,,,10/21/2020 14:33,,2,359,"

tl;dr
Did AlphaGo and AlphaGo Zero play 100 repetitions of the same sequence of boards, or were there 100 different games?

Background:
AlphaGo was the first superhuman Go player, but it had human tuning and training.

AlphaGo Zero learned to be more superhuman than superhuman. Its supremacy was shown by how it beat AlphaGo perfectly in 100 games.

My understanding of AlphaGo and AlphaGo Zero is that they are deterministic, not stochastic.

If they are deterministic, then given a board position they will always make the same move.

The way that mathematicians count the possible games in chess is to account for different board positions. As I understand it, and I could be wrong, if they have the exact same sequence of board positions then it does not count as a different game.

If they make the same sequence of moves 100 times, then they did not play 100 different games, but played one game for 100 repetitions.

Question:
So, using the mathematical definition, did AlphaGo and AlphaGo Zero play only one game for 100 iterations or did they play 100 different games?

References:

",2263,EngrStudent,2263,,10/21/2020 17:58,10/21/2020 21:36,Did Alphago zero actually beat Alphago 100 games to 0?,,1,0,,,,CC BY-SA 4.0 24203,2,,24201,10/21/2020 19:17,,7,,"

Did AlphaGo and AlphaGo [Zero] play 100 repetitions of the same sequence of boards, or were there 100 different games?

There were 100 different games. You can view some example games between AlphaGo [Lee] and AlphaGo Zero here. They are clearly all different.

This statement in the question shows a misunderstanding:

My understanding of AlphaGo and AlphaGo [Zero] are that they are deterministic, not stochastic.

The Monte Carlo Tree Search (MCTS) algorithm used for look-ahead planning in AlphaGo and Alpha Zero is inherently stochastic. It samples from the huge tree of possibilities in a game like Go by making weighted random choices at all branch points. That means play can progress stochastically with two such agents opposing each other, as many board states will resolve into selecting semi-randomly between "best" moves that would be very closely ranked by each agent in the limit of very long search times.

Whilst this solves the main point of your question, it is worth noting that there can be a related effect in self-play algorithms, even if they are partially stochastic. That is, it is possible to have one agent that develops a specific weakness by chance, that another agent consistently takes advantage of, such that agent A consistently beats agent B, and wins in a very similar fashion each time (maybe deterministically, maybe across a range of different games all with a similar mistake). However it may be the case that also:

  • Neither agent is strong in general.

  • Another agent C can beat B consistently, but will lose to A consistently. There would then be no clear way to rank agents A, B, and C without further measurements.

Agents trained through self play therefore do need to be trained and tested against a wide range of opponents to verify this is not happening and that the skill level assessment is valid more generally. I believe this was done with all the AlphaGo variants built by DeepMind.

The MCTS algorithm does help a little with this scenario as it can correct for weaknesses in how a trained neural network rates early board positions. The look-ahead planning of MCTS makes initial ratings less relevant to eventual action selection - effectively it refines those learned ratings using the samples from current position.

",1847,,1847,,10/21/2020 21:36,10/21/2020 21:36,,,,3,,,,CC BY-SA 4.0 24204,1,24205,,10/21/2020 20:35,,3,306,"

My question is about neuroevolution (genetic algorithm + neural network): I want to create artificial life by evolving agents. But instead of relying on a fitness function, I would like to have the agents reproduce with some mutation applied to the genes of their offspring and have some agents die through natural selection. Achieve evolution in this manner is my goal.

Is this feasible? And has there been some prior work on this? Also, is it somehow possible to incorporate NEAT into this scheme?

So far, I've implemented most of the basics in amethyst (a parallel game engine written in Rust), but I'm worried that the learning will happen very slowly. Should I approach this problem differently?

",41768,,2444,,10/25/2020 10:01,10/25/2020 14:01,Is it possible to perform neuroevolution without a fitness function?,,2,1,,,,CC BY-SA 4.0 24205,2,,24204,10/21/2020 21:29,,3,,"

You do not always need an explicitly coded fitness function to perform genetic algorithm searches. The more general need is for a selection process that favours individuals that perform better at the core tasks in an environment (i.e. that are "more fit"). One way of assessing performance is to award a numerical score, but other approaches are possible, including:

  • Tournament selection where two or more individuals compete in a game, and the winner is selected.

  • Opportunity-based selection, where agents in a shared environment - typically with limited resources and chances to compete - may reproduce as one of the available actions, provided they meet some criteria such as having collected enough of some resource. I was not able to find a canonical name for this form of selection, but it is commonly implemented in artificial life projects.

A key distinction between A-life projects and GA optimisation projects is that in A-life projects there is no goal behaviour or target performance. Typically A-life projects are simulations with an open ended result and the developer runs a genetic algorithm to "see what happens" as opposed to "make the best game-player". If your project is like this then you are most likely looking for the second option here.

To discover more details about this kind of approach, you could try searching "artificial life genetic algorithms" as there are quite a few projects of this type published online, some of which use NEAT.

Technically, you could view either of the methods listed above as ways of sampling comparisons between individuals against an unknown fitness function. Whether or not a true fitness function could apply is then partly a matter of philosophy. More importantly for you as the developer, is that you do not have to write one. Instead you can approximately measure fitness using various methods of individual selection.

So far I've implemented most of the basics in amethyst (a parallel game engine written in rust), but I'm worried that the learning will happen very slowly. Should I approach this problem differently?

It is difficult to say whether you should approach the problem differently. However, the biggest bottlenecks against successful GA approaches are:

  • Time/CPU resources needed to assess agents.

  • Size of search space for genomes.

Both of these can become real blockers for ambitious a-life projects. It is common to heavily simplify agents and environments in attempts address these issues.

",1847,,1847,,10/25/2020 14:01,10/25/2020 14:01,,,,14,,,,CC BY-SA 4.0 24206,2,,23618,10/21/2020 22:06,,0,,"

The economic value would be high indeed, as, combined with robotics, AGI would be able to replace all human workers. So:

  • Whatever the economic value of the sum of human labor is, in an ideal sense

Of course, there would also be the question of the cost of computation, the cost of the hardware & software required for AGI, and whether that cost is higher or lower than the cost of human labor. (My guess is biological machines such as humans would be cheaper, both in production and processing, until AGI leverages molecular computing via an inexpensive, ubiquitous substrate. Also worth noting that biological systems such as humans and canines may be more fault-tolerant, and more resilient in that they can persist even where the technological base collapses.)

Currently, cost of training even narrowly superintelligent Neural Networks which exceed humans at a single function is extremely high.

",1671,,1671,,10/21/2020 22:56,10/21/2020 22:56,,,,0,,,,CC BY-SA 4.0 24208,2,,24197,10/22/2020 5:31,,2,,"

Kalman filter is what you're looking for.

According to Wikipedia:

The Kalman filter may be regarded as analogous to the hidden Markov model, with the key difference that the hidden state variables take values in a continuous space (as opposed to a discrete state space as in the hidden Markov model).

",41777,,,,,10/22/2020 5:31,,,,0,,,,CC BY-SA 4.0 24209,2,,24188,10/22/2020 7:20,,-1,,"

Access to the back of the store: basically, if you can track people entering and leaving an employees-only area, they are probably store representatives.

But this requires a really good tracking algorithm, and if you lose track you will have to fall back on some other properties.

",32390,,,,,10/22/2020 7:20,,,,0,,,,CC BY-SA 4.0 24210,2,,24199,10/22/2020 7:37,,3,,"

Yes, there are different ways. What I think you are looking for falls under the research field of Localization and Mapping, which divides into the following subfields:

  • For getting the current (robot) position and trajectory, go to models for Odometry Estimation
  • For getting a representation of the world around the robot, go to models for Mapping
  • If you want both of them (I am guessing you do), go for SLAM (Simultaneous Localization and Mapping) models

Here is an amazing survey that links you to tons of papers with different models for each category. If you want to know the most common architectural blocks (LSTM, ConvLSTM, RNN, ...) used for your problem, read the most promising papers under your target category.

References:

Survey: https://arxiv.org/abs/2006.12567

",26882,,,,,10/22/2020 7:37,,,,0,,,,CC BY-SA 4.0 24211,2,,24200,10/22/2020 7:48,,2,,"

I am not sure what you are really looking for, but I leave this paper here, where some intuition in that direction is provided. This paper compares the performance of a deep learning model when scaling in 3 dimensions: resolution, width and depth. As depicted in their definition:

If you go to section 3.2, you will see how scaling the different dimensions independently (resolution, width and depth) affects performance, and how the performance of the model is maximized by performing a compound scaling (so there is a close relation). It is a very thorough ablation study. For me, this was the paper where I understood how the width, depth and resolution parameters come together.

Rethinking Model Scaling for Convolutional Neural Network: https://arxiv.org/abs/1905.11946

",26882,,,,,10/22/2020 7:48,,,,0,,,,CC BY-SA 4.0 24217,2,,15936,10/22/2020 20:29,,2,,"

A few of us have spent quite a bit of time thinking about this. I summarised our work in a Medium article here: https://towardsdatascience.com/deep-learning-vs-puzzle-games-e996feb76162

Would love to hear what you think.

Spoiler: so far, good old SAT seems to beat fancy AI algorithms!

",41793,,,,,10/22/2020 20:29,,,,4,,,,CC BY-SA 4.0 24219,1,24229,,10/23/2020 0:24,,1,139,"

Given a video, I'm trying to classify whether it is a graphical (computer-generated) or realistic scene. For instance, if it contains computer-generated graphics, credits, moving bugs, a blue screen, etc., it will be classified as computer-generated graphics, and if it is a realistic scene captured by a camera, it will be classified as a realistic scene.

How can we achieve that with AI? Do we have any working solutions available?

Some examples of graphical scenes:

",9053,,2444,,10/25/2020 9:57,10/25/2020 9:57,How can I determine whether a video's frame is realistic (was recorded by a camera) or contains computer-generated graphics?,,1,6,,,,CC BY-SA 4.0 24221,1,,,10/23/2020 10:32,,4,281,"

In the literature, there are at least two action selection strategies associated with the UCB1's action selection strategy/policy. For example, in the paper Algorithms for the multi-armed bandit problem (2000/2014), at time step $t$, an action is selected using the following formula

$$ a^*(t) \doteq \arg \max _{i=1 \ldots k}\left(\hat{\mu}_{i}+\sqrt{\frac{2 \ln t}{n_{i}}}\right) \tag{1}\label{1}, $$ where

  • $\hat{\mu}_{i}$ is an estimate of the expected return for arm $i$
  • $n_i$ is the number of times the action $i$ is selected
  • $k$ is the number of arms/actions

On the other hand, Sutton & Barto (2nd edition of the book) provide a slightly different formula (equation 2.10)

$$ a^*(t) \doteq \arg \max _{i=1 \ldots k}\left(\hat{\mu}_{i}+c\sqrt{\frac{\ln t}{n_{i}}}\right) \tag{2}\label{2}, $$ where $c > 0$ is a hyper-parameter that controls the amount of exploration (as explained in the book or here).

Why do we have these two formulas? I suppose that both are "upper confidence bounds" (and, in both cases, they are constants, though one is a hyper-parameter), but why (and when) would we use one over the other? They are not equivalent because $c$ only needs to be greater than $0$, i.e. it can be arbitrarily large (although, in the mentioned book, the authors use $c=2$ in one experiment/figure). If $c = \sqrt{2}$, then they are the same.

The answer to my question can probably be found in the original paper that introduced UCB1 (which actually defines the UCB1 as in \ref{1}), or in a paper that derives the bound, in the sense that the bound probably depends on some probability of error, but I have not fully read it yet, so, if you know the answer, feel free to derive both bounds and relate the two formulas.

",2444,,2444,,10/23/2020 16:35,10/23/2020 19:06,Why do we have two similar action selection strategies for UCB1?,,1,0,,,,CC BY-SA 4.0 24222,1,24225,,10/23/2020 10:59,,2,178,"

I am reading this blog post: https://ruder.io/optimizing-gradient-descent/index.html. In the section about AdaGrad, it says:

It adapts the learning rate to the parameters, performing smaller updates (i.e. low learning rates) for parameters associated with frequently occurring features, and larger updates (i.e. high learning rates) for parameters associated with infrequent features.

But I am not sure about the meaning of infrequent features: is it that the value of a given feature changes rarely?

",41801,,2444,,10/23/2020 14:58,10/23/2020 15:28,"What do we mean by ""infrequent features""?",,1,0,,,,CC BY-SA 4.0 24225,2,,24222,10/23/2020 11:53,,2,,"

We describe the input to the network as a vector, called the feature vector. Each component of this vector is usually related to some "real-world" information, for example "age of the person", "number of atoms", etc.

In very common situations, a specific component of the input vector will nearly always have the same value. This is more usual for binary components or components that have a small set of possible values.

However, in the cases where this component takes a value different from the most usual one, it is often very important and informative.

These rarely occurring values of such components are called infrequent features.

(Example: "is it raining?" is "false" 99.9% of the time in my city. However, when it is true, it is a key factor for all questions about the behavior of the population.)

The problem with these features: as the unusual values are infrequent, the network has few chances to learn from them, and some learning algorithms could fail to give them the weight that they deserve (taking into account that, as said, these components are very important when they take a value different from the most frequent one).

Some adaptive learning rate algorithms, such as AdaGrad, try to solve this issue.
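A minimal NumPy sketch of the AdaGrad update that produces this behaviour (the per-parameter accumulator of squared gradients shrinks the step for frequently-updated weights, so rarely-updated ones keep a comparatively large effective learning rate):

import numpy as np

def adagrad_step(w, grad, cache, lr=0.01, eps=1e-8):
    cache += grad ** 2                        # grows quickly for frequently non-zero gradients
    w -= lr * grad / (np.sqrt(cache) + eps)   # rare features keep a comparatively large step size
    return w, cache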

",12630,,12630,,10/23/2020 15:28,10/23/2020 15:28,,,,3,,,,CC BY-SA 4.0 24226,1,,,10/23/2020 13:01,,1,40,"

I am working on a project where I encountered a component which takes 96 arguments (all integer values) and outputs 12 float values. I would like to find a useful combination of these 96 values to receive the output that I want while avoiding random guessing, so the desired behavior would be that I provide the outcome and receive the 96 inputs to use them in my component.

Unfortunately, I am not very experienced in this field. Thinking about how I could implement this, my first thought was some kind of classification task, since I could build a dataset, but the problem here is that I need integer values.

A second guess was regression, but would that be possible with y as an output vector? Are there other approaches that could fit my use case?

",41804,,2444,,12/21/2021 15:13,12/21/2021 15:13,How to find a parameter combination for a black box using AI?,,0,2,,,,CC BY-SA 4.0 24227,2,,24221,10/23/2020 19:06,,2,,"

In the PDF of the original paper for UCB1 you linked, on pages 242-243 the authors prove why non-optimal machines get played much less (in fact, logarithmically less often) than the optimal ones. $c$ decides whether they indeed will, and $c=\sqrt{2}$ is the minimum choice of $c$.

We want to show that the number of runs for non-optimal machines ($n_i$, for non-optimal $i$s, in your notation) is asymptotically logarithmic. In other words, you may run them for a few times and well, it's fine, but not too often. We're devising some indicator value $a_i(t)=\hat \mu_i+(\epsilon...)$ such that the mistaken cases, where values of non-optimal ones surpass values of optimal ones ($a^*(t)<a_i(t)$), are minimized.

Think about the last inequality. We know that $\mu^* > \mu_i$ (again, optimal and non-optimal ones). Therefore, for that inequality to be true, it seems either the left-hand side should be quite small or the right-hand side should be quite large. But wait, $\hat \mu$s are actually some random trials for $\mu$, so we cannot claim directly from $\mu^* > \mu_i$ to $\hat{\mu^*} > \hat{\mu_i}$; it might be that we just need more trials.

The equations (7), (8) and (9) of the paper are the three conditions mentioned in the paragraph above: the left-hand side is small, the right-hand side is large, or the trials are lacking. Well, in fact, as we stated the number of runs ... is asymptotically logarithmic at first, the third case can be eliminated(!), assuming that we've run this machine enough.

For the first and second case, since $\hat \mu_i$ is the average of some random variable in $[0, 1]$, we can use Chernoff–Hoeffding bound (or so called in the paper; stated as Hoeffding's inequality in Wikipedia). Now, a good choice of $(\epsilon ...)$ will guarantee (from Hoeffding's inequality) that the first two cases will occur sufficiently scarcely, or in other words, in the order of $t^{-4}$. To achieve this, we need $c \ge \sqrt 2$.
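Sketch of the calculation, assuming the standard Hoeffding bound for averages of $[0,1]$-valued rewards: $$P\left(\hat \mu_i \ge \mu_i + c\sqrt{\tfrac{\ln t}{n_i}}\right) \le \exp\left(-2 n_i c^2 \tfrac{\ln t}{n_i}\right) = t^{-2c^2},$$ so asking for a failure probability of order $t^{-4}$ forces $2c^2 \ge 4$, i.e. $c \ge \sqrt 2$.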

Now back to the third case, the enough number of runs is actually $l = \left \lceil 2 c^2 \ln n / (\mu^* - \mu_i)^2\right \rceil$. Thus, you may choose larger $c$ but receive longer convergence speed in penalty.

Funnily enough, after all the proofs the authors find $c=1/4$ to converge well and actually perform substantially better(!!) than $c=\sqrt{2}$. It seems they could not prove the bound as we did above.

",41808,,,,,10/23/2020 19:06,,,,3,,,,CC BY-SA 4.0 24229,2,,24219,10/23/2020 21:36,,2,,"

As per your requirements, I would suggest that you start with any simple CNN network.

CNNs take advantage of the hierarchical pattern in data and assemble more complex patterns using smaller and simpler patterns. Therefore, on the scale of connectedness and complexity, CNNs are on the lower extreme.

Here is a Keras example:

from tensorflow.keras import layers, losses, models

# a small CNN for binary (real vs. CGI) frame classification
model = models.Sequential()
model.add(layers.Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=image_shape))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, kernel_size=(3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, kernel_size=(3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
# output layer: a single logit (see the loss below)
model.add(layers.Dense(1))

where image_shape is the resolution and number of channels of the images (e.g. 128x128x3 for RGB images). I also suggest downscaling the images to a lower resolution. You will also have to crop the images, as they must all have the same image_shape.

Also take a look at the MaxPooling2D and BatchNormalization layers.

Since you only have real and CGI images, this becomes a binary classification problem. Therefore you can have a single output (0 - CGI, 1 - real). Such problems can be solved with BinaryCrossentropy loss.

model.compile(loss=losses.BinaryCrossentropy(from_logits=True), optimizer='adam')

Finally, you can fit your model

history = model.fit(train_images, train_labels, epochs=1000, validation_data=(test_images, test_labels))

You can find a complete example here.

Please note that depending on your data, the model can become biased if your dataset is unbalanced. That is, if all of your CGI images have text, and only a small fraction of the real images also have text, they might be misclassified. Therefore, I recommend that you visualize your model to better understand what it has learned. Here is an example of such a problem we faced at our university.

There are also more advanced CNN architectures such as ResNet, VGG or YOLO. You can also extend your model with time series (i.e. video) using LSTM or GRU architecture.

",12841,,12841,,10/24/2020 9:48,10/24/2020 9:48,,,,6,,,,CC BY-SA 4.0 24231,1,24240,,10/24/2020 0:55,,5,326,"

I started reading some reinforcement learning literature, and it seems to me that all approaches to solving reinforcement learning problems are about finding the value function (state-value function or action-state value function).

Are there any algorithms or methods that do not try to calculate the value function but try to solve a reinforcement learning problem differently?

My question arose because I was not convinced that there is no better approach than finding the value functions. I am aware that given the value function we can define an optimal policy, but are there not other ways to find such an optimal policy?

Also, is the reason why I don't encounter any non value-based methods that these are just less successful?

",36978,,2444,,11/1/2020 19:10,11/11/2020 15:11,Is reinforcement learning only about determining the value function?,,1,0,,,,CC BY-SA 4.0 24232,1,24234,,10/24/2020 1:08,,1,240,"

I have seen 2 forms of softmax cross-entropy loss and are confused by the two. Which one is the right one? For example in this Quora answer, there are 2 answers:

  1. $L(\mathbf{w})=\frac{1}{N} \sum_{n=1}^{N} H\left(p_{n}, q_{n}\right)=-\frac{1}{N} \sum_{n=1}^{N}\left[y_{n} \log \hat{y}_{n}+\left(1-y_{n}\right) \log \left(1-\hat{y}_{n}\right)\right]$

  2. $\mathrm{L}(y, \hat{y})=-\Sigma y(i) \log \hat{y}(i)$, which is only the first part of the version one.

",23549,,2444,,10/24/2020 1:29,10/24/2020 9:16,Why are there two versions of softmax cross entropy? Which one to use in what situation?,,1,0,,,,CC BY-SA 4.0 24234,2,,24232,10/24/2020 9:07,,5,,"

It's the same thing: the first version is a special case of the more general one. In the first case, you only have two classes (it's the binary cross-entropy), and it also includes iteration over a batch of samples. In the second case, you have multiple classes, and in the current form it's only for a single sample.

In the first case there is only one output; if you had two outputs, it would have been \begin{equation} -\frac{1}{N} \sum_{n=1}^N \sum_{j=1}^2 y_{n,j} \log(\hat y_{n,j}) \end{equation} where $n$ iterates over the batch samples and $j$ over the two classes. The reason it is written like that is that with two classes you only need one output, because you can immediately deduce the probability of the second class from the probability of the first class: it is simply $p_1 = 1-p_0$.

In the second case, with batch samples included, it would be \begin{equation} -\frac{1}{N} \sum_{n=1}^N \sum_{j=1}^c y_{n,j} \log(\hat y_{n,j}) \end{equation} where $n$ iterates over the batch samples and $j$ over the $c$ output classes.
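A small NumPy sketch showing that the two forms agree when there are only two classes (the numbers are made up):

import numpy as np

y = np.array([1., 0., 1.])        # binary targets for three samples
p = np.array([0.9, 0.2, 0.7])     # predicted probability of class 1

binary_ce = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

Y = np.stack([1 - y, y], axis=1)  # the same targets written as one-hot rows
P = np.stack([1 - p, p], axis=1)  # probabilities of class 0 and class 1
categorical_ce = -np.mean(np.sum(Y * np.log(P), axis=1))

print(binary_ce, categorical_ce)  # identical values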

",20339,,,,,10/24/2020 9:07,,,,0,,,,CC BY-SA 4.0 24236,2,,23708,10/24/2020 9:46,,1,,"

If the filter is separable, that is, the NxM kernel can be mathematically equal to the convolution of a Nx1 filter and a 1xM filter, there are a very important increase in performance.

Using separable convolution, the network can made an optimal usage of the shared memory and of the parallelism in memory access. See this excellent article for details. These improvements are bigger for bigger kernels.

Training is also improved, starting with the simple fact that an NxM filter has a number of parameters proportional to N*M, whereas the related separable one has only N+M.
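
As a quick check of the parameter counts, here is a small sketch (assuming TensorFlow/Keras and an arbitrary 5x5 kernel; it is only illustrative):

from tensorflow.keras import layers, models

inp = layers.Input(shape=(64, 64, 1))

# Full 5x5 convolution with one filter: 5*5 = 25 weights (bias disabled)
full = models.Model(inp, layers.Conv2D(1, (5, 5), use_bias=False)(inp))

# Spatially separable version: 5x1 followed by 1x5 -> 5 + 5 = 10 weights
x = layers.Conv2D(1, (5, 1), use_bias=False)(inp)
sep = models.Model(inp, layers.Conv2D(1, (1, 5), use_bias=False)(x))

print(full.count_params())  # 25
print(sep.count_params())   # 10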

",12630,,12630,,10/24/2020 11:45,10/24/2020 11:45,,,,2,,,,CC BY-SA 4.0 24237,1,,,10/24/2020 9:47,,1,38,"

I'm struggling a little with understanding the OpenAI implementation of A2C in the baselines (version 2.9.0) package. From my understanding, one step_model acts in different parallel environments and gathers experiences (calculates the gradients, I think), and sends them to the train_model that trains with them. After this, the step_model gets updated from the train_model.

What I am unsure about is if both step_model and train_model are actor-critic models or if step_model is actor and train_model is a critic (or vice versa). Does the step_model use the advantage function or is it just the train_model?

",41823,,2444,,11/5/2020 11:15,11/5/2020 11:15,What is the difference between step_model and train_model in the OpenAI implementation of the A2C algorithm?,,0,0,,,,CC BY-SA 4.0 24238,1,24258,,10/24/2020 10:59,,0,197,"

Finite state automata and transducers are computational models that were widely used decades ago in natural language processing for morphological parsing and other NLP tasks. I wonder if these computational models are still used in NLP nowadays for significant purposes. If these models are in use, can you give me some examples?

",38292,,38292,,10/24/2020 11:22,10/26/2020 16:43,Are FSA and FSTs used in NLP nowadays?,,1,0,,,,CC BY-SA 4.0 24240,2,,24231,10/24/2020 13:16,,8,,"

There are many algorithms that are not based on finding a value function. The most common ones are policy gradients. These methods attempt to map states to actions through a neural network. They learn the optimal policy directly, not through a value function.
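
To make this concrete, here is a minimal REINFORCE-style sketch (an illustrative example assuming PyTorch, with placeholder data rather than a real environment): the network outputs action probabilities and is updated directly from sampled returns, without any value function.

import torch
import torch.nn as nn

# Toy policy network: maps a 4-dimensional state to probabilities over 2 actions
policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Pretend we collected one episode: states, actions taken and discounted returns
states = torch.randn(10, 4)            # placeholder data
actions = torch.randint(0, 2, (10,))
returns = torch.randn(10)

dist = torch.distributions.Categorical(logits=policy(states))
log_probs = dist.log_prob(actions)

# REINFORCE objective: increase log-probability of actions weighted by their return
loss = -(log_probs * returns).mean()

optimizer.zero_grad()
loss.backward()
optimizer.step()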

The important part of the image is where Model-Free RL splits into Policy Optimization (which includes policy gradients) and Q-Learning. Later, you can see the two sections coming back together in algorithms that are a mix of both techniques. Even the bottom three methods under Policy Optimization involve some form of learning a value function. The best and most advanced algorithms use both value function learning and policy optimization. The value function is only used for training; when the agent is tested, it only uses the policy.

The most likely reason you have only heard of value function methods is that policy gradients are more complicated. There are many algorithms more advanced than ones that only use value functions, and policy gradients can learn to operate in continuous action spaces (an action can be any value between -1 and 1, like when moving a robot arm), while value-function-only methods can operate only in discrete action spaces (move 1 right or 1 left).

Summary: Yes, there are other methods that learn the optimal policy without a value function. The best algorithms use both types of reinforcement learning.

The SpinningUp website has a lot of information about reinforcement learning algorithms and implementations. You can learn more about direct policy optimization there. That is also where I got the image from.

This answer is specific to the most common types of Model-Free RL. There are other algorithms related to the RL problem that do not learn value functions, like inverse reinforcement learning and imitation reinforcement learning.

",41026,,41026,,11/11/2020 15:11,11/11/2020 15:11,,,,0,,,,CC BY-SA 4.0 24242,1,,,10/24/2020 19:30,,0,62,"

Suppose we have a sequence of still images each of which has been contaminated by some particles(ex, dust/sand/smoke) making the images very poor in certain areas.

What architecture would be best to teach image regeneration using multiple frames? The simplest technique is to simply find a way to detect what parts of the image are contaminated and uncontaminated and pull uncontaminated sections from each frame.

",32390,,,,,10/24/2020 19:30,Deep Learning based image restoration using multiple frames,,0,2,,,,CC BY-SA 4.0 24244,2,,24204,10/25/2020 10:16,,1,,"

How can you assess the quality of any solution without a measure of quality, which, in the context of genetic algorithms, is known as the fitness function? The term fitness function is due to the well-known phrase "Survival of the Fittest", which is often used to describe the Darwinian theory of natural selection (which genetic algorithms are based on). However, note that the fitness function can take any form, such as

  • How well this solution performs in a game? (in this case, solutions could, for example, be policies to play a game), or
  • How close this solution is to a minimum/maximum of some function $f$ (more precisely, if you want to find the maximum of the function $f(x) = x^2$, then individuals are scalars $\hat{x} \in \mathbb{R}$, and the fitness could be determined by $f'(\hat{x})$ or by how big $f(\hat{x})$ is with respect to other individuals; check how I did it here)?

The definition of the fitness function depends on what problem you want to solve and which solutions you want to find.

So, you need some kind of fitness function in genetic algorithms to perform selection in a reasonable way, so as to maintain the "best solutions" in the population. More precisely, while selecting the new individuals for the new generation (i.e. iteration), if you don't use a fitness (which you can also call performance, if you like) function to understand which individuals deserve to live or die, how do you know that the new solutions are better than the previous ones? You cannot know this without a fitness/performance function, so you also cannot logically decide which individuals to kill before the next generation. Mutations alone just change the solutions, i.e. they are used to explore the space of solutions.

Genetic algorithms are always composed of the following components (a minimal sketch of them in action follows the list)

  • a population of solutions/individuals/chromosomes (i.e. usually at least $2$ solutions)
  • operations to randomly (or stochastically) change existing solutions to create new ones (typically mutations and crossovers)
  • a selection process that selects the new solutions/individuals for the next generation (or to be combined and mutated)
  • a fitness function to help you decide which solutions need to be selected (or even combined and mutated)
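
For illustration only, here is a minimal toy sketch of these components working together (my own simplified example in plain Python, maximising $f(x) = x^2$ over the interval $[-10, 10]$; the population size, mutation scale and number of generations are arbitrary choices):

import random

def fitness(x):                       # measure of quality of a candidate solution
    return x ** 2

def mutate(x):                        # small random perturbation, clipped to [-10, 10]
    return max(-10.0, min(10.0, x + random.gauss(0, 0.5)))

population = [random.uniform(-10, 10) for _ in range(20)]   # initial random solutions

for generation in range(100):
    # selection: keep the fittest half of the population
    population.sort(key=fitness, reverse=True)
    parents = population[:10]

    # crossover (average two parents) + mutation to create new solutions
    children = [mutate((random.choice(parents) + random.choice(parents)) / 2)
                for _ in range(10)]

    population = parents + children

print(max(population, key=fitness))   # should end up close to -10 or 10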

For more info about genetic algorithms or, more generally, evolutionary algorithms, take a look at chapters 8 and 9 of the book Computational Intelligence: An Introduction by Andries P. Engelbrecht.

",2444,,2444,,10/25/2020 11:08,10/25/2020 11:08,,,,0,,,,CC BY-SA 4.0 24258,2,,24238,10/26/2020 16:43,,0,,"

Both are used, for example, in the GATE framework, which is still widely used. I suspect that this also applies to many other applications.

I would think that many recent academic publications are now on other approaches, as FSAs and FSTs are fairly established and mature technologies, but I've been out of academia for a while now.

",2193,,,,,10/26/2020 16:43,,,,4,,,,CC BY-SA 4.0 24259,1,,,10/26/2020 17:26,,2,59,"

From the part titled Introducing Latent Variables under subsection 2.2 in this tutorial:

Introducing Latent Variables. Suppose we want to model an $m$-dimensional unknown probability distribution $q$ (e.g., each component of a sample corresponds to one of m pixels of an image). Typically, not all variables $\mathbf{X} = (X_v)_{v \in V}$ in an MRF need to correspond to some observed component, and the number of nodes is larger than $m$. We split $\mathbf{X}$ into visible (or observed) variables $\mathbf{V} = (V_1,...,V_m)$ corresponding to the components of the observations and latent (or hidden) variables $\mathbf{H} = (H_1,...,H_n)$ given by the remaining $n = |\mathbf{V}| − m$ variables. Using latent variables allows to describe complex distributions over the visible variables by means of simple (conditional) distributions. In this case the Gibbs distribution of an MRF describes the joint probability distribution of $(\mathbf{V},\mathbf{H})$ and one is usually interested in the marginal distribution of $\mathbf{V}$ which is given by: $$p(\mathbf{v}) = \sum_{\mathbf{h}} p(\mathbf{v},\mathbf{h}) = \frac{1}{Z} \sum_{\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h})}$$ where $Z = \sum_{\mathbf{v},\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h})}$. While the visible variables correspond to the components of an observation, the latent variables introduce dependencies between the visible variables (e.g., between pixels of an input image).

I have a question about this part:

While the visible variables correspond to the components of an observation, the latent variables introduce dependencies between the visible variables (e.g., between pixels of an input image).

Given a set of nodes $\mathbf{X}$ in a Markov Random Field $G$, the joint distribution of all the nodes is given by:

$$p(\mathbf{X}) = \frac{1}{Z} \prod_{c \in C} \phi(c)$$

Where $Z$ is the partition function and $C$ is the set of cliques in $G$. To ensure that the joint distribution is positive, the following factors can be used:

$$\phi(c) = e^{-E(c)}$$

Such that:

$$p(\mathbf{X}) = \frac{1}{Z} e^{-\sum_{c \in C} E(c)}$$

Where $E$ is the energy function.

I am not sure why there is a need to introduce hidden variables and express $p(\mathbf{v})$ as a marginalization of $p(\mathbf{v},\mathbf{h})$ over $\mathbf{h}$. Why can't $p(\mathbf{v})$ be expressed as:

$$p(\mathbf{v}) = \frac{1}{Z} e^{-\sum_{v \in \mathbf{v}} E(v)}$$

directly? I think it may be because the factors only encode dependencies between variables in cliques, and so may not be able to encode dependencies between variables that are in two separate cliques. The purpose of the hidden variables is then to encode these "long-range" dependencies between visible variables not in cliques. However, I am not sure about this reasoning.

Any help would be greatly appreciated.

By the way, I am aware of this question, but I think the answer is not specific enough.

",41856,,41856,,10/27/2020 3:23,10/27/2020 3:23,Purpose of the hidden variables in a Restricted Boltzmann Machine,,0,5,,,,CC BY-SA 4.0 24260,1,,,10/27/2020 0:28,,2,105,"

I'm new to the AI Stackexchange and wasn't certain if this should go here or to Maths instead but thought the context with ML may be useful to understand my problem. I hope posting this question here could help another student learning about Support Vector Machines some day.

I'm currently learning about Support Vector Machines at university and came across a weird step I could not understand. We were talking about basic SVMs and formulated the optimisation problem $\max_{w,b} \{ \frac{1}{||w||} \min_n(y^{(n)}f(x^{(n)}))\}$ which we then simplified down to $\max_{w,b} \{ \frac{1}{||w||}\}$ by introducing $\kappa$ as a scaling factor for $w$ and $b$ according to the margin of the SVM. Now our lecturer converted it, without explanation, into a quadratic optimisation problem as $\min_{w,b}\{\frac{1}{2} ||w||^2\}$, which I could not explain to myself. I hope someone can explain how this is possible and what math or trick is behind this approach.


Notation information:

  • $w$ - weight matrix
  • $b$ - bias (sometimes denoted $w_0$ I believe?)
  • $x^{(n)}$ - Independent variable (vector)
  • $y^{(n)}$ - Dependent variable (scalar classifying the input in a binary classification as $y=1$ or $y=-1$)

Thank you very much!

",41859,,,user9947,10/28/2020 10:49,10/28/2020 14:53,Support Vector Machine Convert optimisation problem from argmax to argmin,,1,2,,,,CC BY-SA 4.0 24261,1,24272,,10/27/2020 0:38,,8,801,"

I'm reading chapter one of the book called Neural Networks and Deep Learning from Aggarwal.

In section 1.2.1.1 of the book, I'm learning about the perceptron. One thing that book says is, if we use the sign function for the following loss function: $\sum_{i=0}^{N}[y_i - \text{sign}(W * X_i)]^2$, that loss function will NOT be differentiable. Therefore, the book suggests us to use, instead of the sign function in the loss function, the perceptron criterion which will be defined as:

$$ L_i = \max(-y_i(W * X_i), 0) $$

The question is: Why is the perceptron criterion function differentiable? Won't we face a discontinuity at zero? Is there anything that I'm missing here?

",41860,,2444,,2/15/2021 22:49,2/15/2021 22:49,Why is the perceptron criterion function differentiable?,,2,0,,,,CC BY-SA 4.0 24262,1,,,10/27/2020 1:16,,1,83,"

Across the literature of artificial intelligence, especially machine learning, it is normal to treat the tuples of datasets as vectors.

Although there is a convention to treat them as data points, treating them as vectors is also common.

It is easy to understand the tuples of a dataset as points in space, but what is the purpose of treating them as vectors?

",18758,,18758,,3/19/2021 2:34,3/19/2021 2:34,What is the reason for taking tuples as vectors rather than points?,,1,1,,,,CC BY-SA 4.0 24265,2,,24262,10/27/2020 7:00,,4,,"

They are equivalent. When we consider a particular instance as a vector, we are not literally imagining it as an arrow with its head at the point coordinates and tail at the origin. It's just that, when you are working with a tuple of numbers in a mathematical context, it is conventional to call it a vector. This language carries over into machine learning, which is usually based on the associated linear algebra.

",34473,,,,,10/27/2020 7:00,,,,0,,,,CC BY-SA 4.0 24267,1,,,10/27/2020 12:38,,0,47,"

I wish to get an MSE < 0.5 on the test data (https://easyupload.io/zr7xf3), which is 20% of the given data chosen randomly. But I am reaching 0.73 using both plain ridge regression and a neural network with about 6 layers, with some elementary regularization, dropout and choice of other parameters. Overfitting also occurs.

Please suggest improvements. I believe Bayesian optimization or a genetic algorithm for the hyperparameters may be required.

I did no feature selection (as the top 4 features showed no improvement) and did not explore non-linear methods.

My solutions - Ridge - Alpha = 0.002 (Grid searched)

Neural network attempt:

# imports (assuming tf.keras; adjust to your setup)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, BatchNormalization
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping
from tensorflow.keras.optimizers import SGD
from tensorflow.keras import regularizers

# callbacks: reduce the learning rate on plateaus and stop early on stalled val_loss
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,
                              patience=10, min_lr=0.001)
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=10, restore_best_weights=True)

model_b = Sequential()

# input layer
model_b.add(Dense(2048, kernel_initializer='he_uniform', input_dim=X.shape[1], activation='relu', kernel_regularizer=regularizers.l2(l2=1e-6)))
model_b.add(BatchNormalization(beta_regularizer=regularizers.l2(0.00001)))

# hidden layers
model_b.add(Dense(1024, kernel_initializer='lecun_normal', activation='selu', kernel_regularizer=regularizers.l2(l2=1e-6)))
model_b.add(BatchNormalization(beta_regularizer=regularizers.l2(0.00001)))
model_b.add(Dense(1024, kernel_initializer='lecun_normal', activation='selu', kernel_regularizer=regularizers.l2(l2=1e-6)))
model_b.add(BatchNormalization(beta_regularizer=regularizers.l2(0.00001)))
model_b.add(Dropout(0.5))
model_b.add(Dense(512, kernel_initializer='normal', activation='relu'))
model_b.add(Dense(512, kernel_initializer='normal', activation='relu'))
model_b.add(Dense(256, kernel_initializer='normal', activation='relu'))

# output layer
model_b.add(Dense(1, kernel_initializer='normal', activation='linear'))

optimizer = SGD(lr=0.0001)
model_b.compile(loss='mean_squared_error', optimizer=optimizer)

model_b.fit(X_train, y_train, batch_size=70,
            epochs=256,
            validation_data=(X_test, y_test), callbacks=[es])

predb = model_b.predict(X_test)

If anyone has free time, please answer.

Best

",41870,,,,,10/27/2020 12:38,Unable to meet desired mean squared error,,0,2,,,,CC BY-SA 4.0 24268,2,,17218,10/27/2020 14:15,,0,,"

As described in this post, this problem is known as the "unbalanced dataset" problem, which can have different solution approaches. If you use supervised learning, augmentation approaches could help. Otherwise, unsupervised approaches need a proper distance measure for outlier detection.

",32260,,,,,10/27/2020 14:15,,,,0,,,,CC BY-SA 4.0 24270,2,,24261,10/27/2020 17:53,,3,,"

Since we're dealing with real-valued variables, it is almost certainly the case that the argument of the function will not be exactly $0$.

If you care strongly about that point, you can just use sub-gradients instead (and we do have sub-gradients for this function, so there is no problem).

",40573,,,,,10/27/2020 17:53,,,,0,,,,CC BY-SA 4.0 24271,1,24795,,10/27/2020 18:21,,1,215,"

How is information theory applied to machine learning, and in particular to deep learning, in practice? I'm more interested in concepts that yielded concrete innovations in ML, rather than theoretical constructions.

Note that I'm aware that basic concepts, such as entropy, are used for training decision trees, and so on. I'm looking for applications which use slightly more advanced concepts from information theory, whatever they are.

",32621,,32621,,10/28/2020 2:08,11/23/2020 1:57,Applications of Information Theory in Machine Learning,,1,0,,,,CC BY-SA 4.0 24272,2,,24261,10/27/2020 20:29,,7,,"

$\max(-y_i(w x_i), 0)$ is not partially differentiable with respect to $w$ if $w x_i=0$.

Loss functions are problematic when they are not differentiable at some point, but even more so when they are flat (constant) over some interval of the weights.

Assume $y_i = 1$ and $w x_i < 0$ (that is, an error of type "false negative").

In this case, the function $[y_i - \text{sign}(w x_i)]^2 = 4$. The derivative over the whole interval $w x_i < 0$ is zero; thus, the learning algorithm has no way to decide whether it is better to increase or decrease $w$.

In the same case, $\max(-y_i(w x_i), 0) = - w x_i$, whose partial derivative is $-x_i$. The learning algorithm knows that it must increase the value of $w$ if $x_i>0$ and decrease it otherwise. This is the real reason this loss function is considered more practical than the previous one.

How to solve the problem at $w x_i = 0$? Simply, if you update $w$ and the result is exactly $0$, assign to it a very small value, $w=\epsilon$. Similar logic applies to the remaining cases.
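
To make this concrete, here is a small NumPy sketch (with made-up, linearly separable data) of the update that follows from this (sub)gradient: whenever $y_i (w x_i) \le 0$, the gradient of the criterion with respect to $w$ is $-y_i x_i$, so gradient descent adds $\eta y_i x_i$ to the weights.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                 # toy 2-D inputs
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)    # linearly separable labels

w = np.zeros(2)
eta = 0.1
for epoch in range(100):
    for x_i, y_i in zip(X, y):
        if y_i * np.dot(w, x_i) <= 0:         # loss is -y_i (w . x_i) here
            w += eta * y_i * x_i              # subgradient step (gradient is -y_i x_i)

print(np.mean(np.sign(X @ w) == y))           # training accuracy, typically 1.0 or very close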

",12630,,12630,,10/27/2020 21:04,10/27/2020 21:04,,,,0,,,,CC BY-SA 4.0 24273,1,,,10/28/2020 2:01,,2,200,"

The paper, Deep Recurrent Q-Learning for Partially Observable MDPs, talks about stacking multiple observations in the input of a convolutional neural network.

How does this exactly work? Do the convolutional filters loop over each observation (image)?

(I know this isn't the right group to request this, but I'd highly appreciate it if someone could also suggest a framework that helps with this.)

",31755,,2444,,10/30/2020 19:27,10/30/2020 19:27,How does one stack multiple observations in the input layer of a convolutional neural network?,,0,2,,,,CC BY-SA 4.0 24274,2,,24260,10/28/2020 3:28,,1,,"

So actually I managed to get hold of my lecturer to explain the argmax to argmin conversion.

Generally speaking, maximising $\frac{1}{||w||}$ is identical to minimising $||w||$: as $||w||$ decreases, $\frac{1}{||w||}$ increases, i.e. we maximise it. Minimising $||w||$ is in turn equivalent to minimising $||w||^2$, since squaring is monotonic for non-negative values. The reason for choosing $\frac{1}{2}||w||^2$ turns out to be less mathematical and more practical: the optimisation algorithms used perform better on quadratic functions, and the $\frac{1}{2}$ is essentially an arbitrary but convenient scaling choice (it cancels the factor of $2$ that appears when differentiating $||w||^2$).

If anyone has something to add, I'd love to hear any details though!

",41859,,41859,,10/28/2020 14:53,10/28/2020 14:53,,,,0,,,10/28/2020 3:28,CC BY-SA 4.0 24279,1,,,10/28/2020 11:27,,4,166,"

Multi-label assignment is the machine learning task of assigning to each input a set of categories from a fixed vocabulary, where the categories need not be statistically independent, thus precluding building a set of independent classifiers, each classifying the inputs as belonging to one of the categories or not.

Machine learning also needs a measure by which the model may be evaluated. So the question is: how do we evaluate a multi-label classifier?

We can't use the normal recall, accuracy and F measures as they are, since they require a binary "is it correct or not" judgement for each categorisation. Without such a measure we have no obvious means to evaluate models or to measure concept drift.

",26382,,32410,,4/27/2021 7:02,1/17/2023 10:06,How do you measure multi-label classification accuracy?,,2,1,,,,CC BY-SA 4.0 24282,1,,,10/28/2020 18:26,,4,681,"

I've been trying to understand where the formulas for Xavier and Kaiming He initialization come from. My understanding is that these initialization schemes come from a desire to keep the gradients stable during back-propagation (avoiding vanishing/exploding gradients).

I think I can understand the justification for Xavier initialization, and I'll sketch it below. For He initialization, what the original paper actually shows is that that initialization scheme keeps the pre-activation values (the weighted sums) stable throughout the network. Most sources I've found explaining Kaiming He initialization seem to just take it as "obvious" that stable pre-activation values will somehow lead to stable gradients, and don't even mention the apparent mismatch between what the math shows and what we're actually trying to accomplish.

The justification for Xavier initialization (introduced here) is as follows, as I understand it:

  1. As an approximation, pretend the activation functions don't exist and we have a linear network. The actual paper says we're assuming the network starts out in the "linear regime", which for the sigmoid activations they're interested in would mean we're assuming the pre-activations at every layer will be close to zero. I don't see how this could be justified, so I prefer to just say we're disregarding the activation functions entirely, but in any case that's not what I'm confused about here.

  2. Zoom in on one edge in the network. It looks like $x\to_{w} y$, connecting the input or activation value $x$ to the activation value $y$, with the weight $w$. When we do gradient descent we consider $\frac{\partial C}{\partial w}$, and we have: $$\frac{\partial C}{\partial w}=x\frac{\partial C}{\partial y}$$ So if we want to avoid unstable $\frac{\partial C}{\partial w}$-s, a sufficient (not necessary, but that's fine) condition is to keep both those factors stable - the activations and the gradients with respect to activations. So we try to do that.

  3. To measure the "size" of an activation, let's look at its mean and variance (where the randomness comes from the random weights). If we use zero-mean random weights all i.i.d. on each layer, then we can show that all of the activation values in our network are zero-mean, too. So controlling the size comes down to controlling the variance (big variance means it tends to have large absolute value and vice versa). Since the gradients with respect to activations are calculated by basically running the neural network backwards, we can show that they're all zero-mean too, so controlling their size comes down to controlling their variance as well.

  4. We can show that all the activations on a given layer are identically distributed, and ditto for the gradients with respect to activations on a given layer. If $v_n$ is the variance of the activations on layer $n$, and if $v'_n$ is the variance of the gradients, we have $$v_{n+1}=v_n k_n \sigma^2$$ $$v'_n=v_{n+1} k_{n+1} \sigma^2$$

where $k_i$ is the number of neurons on the $i$-th layer, and $\sigma^2$ is the variance of the weights between the $n$-th and $n+1$-th layers. So to keep either of the growth factors from being too crazy, we would want $\sigma^2$ to be equal to both $1/k_n$ and $1/k_{n+1}$. We can compromise by setting it equal to the harmonic mean or the geometric mean or something like that.

  5. This stops the activations from exploding out of control, and stops the gradients with respect to activations from exploding out of control, which by step (2) stops the gradients with respect to the weights (which at the end of the day are the only things we really care about) from growing out of control.
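
(As a side note, here is a quick NumPy simulation I put together to convince myself of step 4: with weight variance $1/k$ in a purely linear network, the variance of the activations stays roughly constant across layers. The width, depth, and sample count are arbitrary choices.)

import numpy as np

rng = np.random.default_rng(0)
k = 256                                   # neurons per layer (arbitrary)
x = rng.normal(size=(k, 1000))            # 1000 input samples, zero mean, unit variance

for layer in range(10):
    W = rng.normal(scale=np.sqrt(1.0 / k), size=(k, k))   # Var(w) = 1/k
    x = W @ x                             # linear network: no activation function
    print(layer, x.var())                 # stays close to 1 at every layer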

However, when I look at the paper on He initialiation, it seems like almost every step in this logic breaks down. First of all, the math, if I understand correctly, shows that He initialization can control the pre-activations, not the activations. Therefore, the logic from step (2) above that this tells us something about the gradients with respect to the weights fails. Second of all, the activation values in a ReLU network like the authors are considering are not zero-mean, as they point out themselves, but this means that even the reasoning as to why we should care about the variances, from step (3), fails. The variance is only relevant for Xavier initialization because in that setting the mean is always zero, so the variance is a reasonable proxy for "bigness".

So while I can see how the authors show that He initialization controls the variances of the pre-activations in a ReLU network, for me the entire reason as to why we should care about doing this has fallen apart.

",1931,,,,,10/28/2020 18:26,What is the justification for Kaiming He initialization?,,0,1,,,,CC BY-SA 4.0 24285,2,,24279,10/29/2020 3:50,,0,,"

Your intuition is correct. We do use other metrics for multi-label classification. The meaning of evaluation itself changes: apart from grading the classifier on whether it classifies correctly or not, we also have to penalize it appropriately when it chooses wrong labels. You could use the following metrics (a small scikit-learn sketch follows the list):

  • micro/macro-averaged recall, precision, F1, etc.
  • Hamming Loss
  • Subset Accuracy
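
A minimal scikit-learn sketch of these metrics (the indicator matrices below are made-up data, just for illustration):

import numpy as np
from sklearn.metrics import hamming_loss, accuracy_score, f1_score

# Binary indicator matrices: rows = samples, columns = labels
y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0]])

print(hamming_loss(y_true, y_pred))                  # fraction of wrongly assigned labels
print(accuracy_score(y_true, y_pred))                # subset accuracy (exact-match ratio)
print(f1_score(y_true, y_pred, average='micro'))     # micro-averaged F1
print(f1_score(y_true, y_pred, average='macro'))     # macro-averaged F1
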
",40434,,,,,10/29/2020 3:50,,,,1,,,,CC BY-SA 4.0 24286,1,,,10/29/2020 7:50,,0,118,"

I'm working on the SLAP (storage location assignment problem) using a genetic algorithm implemented manually in C++. The problem is fairly simple: we have N products, which we want to allocate to M warehouse location slots (N may or may not be equal to M).

Let's begin with the encoding of the chromosomes. The chromosome length is equal to the number of products (i.e. each product is one gene). Each product has one integer value (allele value), representing the location it's allocated to.

Let me show you a simple example.

Products         Average picking rate        Location slots       Location number
Prod1            0.4                         Location 1, slot 1   (1)   // 3rd best
Prod2            0.3                         Location 1, slot 2   (2)   // 4th best
Prod3            0.2                         Location 2, slot 1   (3)   // The best
Prod4            0.1                         Location 2, slot 2   (4)   // 2nd best

We aim for an optimal allocation of products (Prod1-4) to location slots (1-4). The better the allocation is, the faster we can process all the products in customer orders. Now let's say Location 2 is closer to the warehouse entrance/exit, so it's more attractive, and the lower the location slot number is, the faster we can pick a product out of the location slot. So the optimal allocation should be:

Product     Location number
Prod1       3
Prod2       4
Prod3       1
Prod4       2

And expressed as the chromosome:

+---+---+---+---+
| 3 | 4 | 1 | 2 |
+---+---+---+---+

This allocation will lead to the best warehouse performance. Now let me show you my crossover operator (based on TSP crossover https://www.permutationcity.co.uk/projects/mutants/tsp.html):

void crossoverOrdered(std::vector<int32_t>& lhsInd, std::vector<int32_t>& rhsInd)
{
    int32_t a, b;
    int32_t pos =  0;
    int32_t placeholder = -1;
    int32_t placeholderCount = 0;

    std::vector<int32_t> o1, o1_missing, o1_replacements;
    std::vector<int32_t> o2, o2_missing, o2_replacements;

    while(true)
    {
        do
        {
            a = randomFromInterval(pos, constants::numberDimensions);
            b = randomFromInterval(pos, constants::numberDimensions);
        }
        while(a == b);

        if(a > b) std::swap(a, b);

        // Insert from first parent
        for(int32_t i = pos; i < a; ++i)
        {
            o1.push_back(lhsInd.at(i));
            o2.push_back(rhsInd.at(i));
        }

        // Insert placeholders
        for(int32_t i = a; i < b; ++i)
        {
            ++placeholderCount;
            o1.push_back(placeholder);
            o2.push_back(placeholder);
        }

        if(b >= constants::numberDimensions - 1)
        {
            for(int32_t i = b; i < constants::numberDimensions; ++i)
            {
                o1.push_back(lhsInd.at(i));
                o2.push_back(rhsInd.at(i));
            }

            break;
        }
        else
        {
            pos = b;
        }
    }

    // Find missing elements
    for(int32_t i = 0; i < constants::problemMax; ++i)
    {
        if(std::find(o1.begin(), o1.end(), i) == o1.end()) o1_missing.push_back(i);
        if(std::find(o2.begin(), o2.end(), i) == o2.end()) o2_missing.push_back(i);
    }

    // Filter missing elements and leave only those which are in the second parent (keep the order)
    for(int32_t i = 0; i < static_cast<int32_t>(rhsInd.size()); i++)
    {
        if(std::find(o1_missing.begin(), o1_missing.end(), rhsInd.at(i)) != o1_missing.end()) o1_replacements.push_back(rhsInd.at(i));
    }

    // Filter missing elements and leave only those which are in the second parent (keep the order)
    for(int32_t i = 0; i < static_cast<int32_t>(lhsInd.size()); i++)
    {
        if(std::find(o2_missing.begin(), o2_missing.end(), lhsInd.at(i)) != o2_missing.end()) o2_replacements.push_back(lhsInd.at(i));
    }

    // Replace placeholders in offspring 1
    for(int32_t i = 0; i < placeholderCount; ++i)
    {
            auto it = std::find(o1.begin(), o1.end(), placeholder);
            *it     = o1_replacements.at(i);
    }

    // Replace placeholders in offspring 2
    for(int32_t i = 0; i < placeholderCount; ++i)
    {
            auto it = std::find(o2.begin(), o2.end(), placeholder);
            *it     = o2_replacements.at(i);
    }

    // Assign new offsprings
    lhsInd.assign(o1.begin(), o1.end());
    rhsInd.assign(o2.begin(), o2.end());
}

My mutation operator(s):

void mutateOrdered(std::vector<int32_t>& ind)
{
    int32_t a, b;

    do
    {
        a = randomFromInterval(0, constants::numberDimensions);
        b = randomFromInterval(0, constants::numberDimensions);
    }
    while(a == b);

    std::rotate(ind.begin() + a, ind.begin() + b, ind.begin() + b + 1);
}

void mutateInverse(std::vector<int32_t>& ind)
{
    int32_t a, b;

    do
    {
        a = randomFromInterval(0, constants::numberDimensions);
        b = randomFromInterval(0, constants::numberDimensions);
    }
    while(a == b);

    if(a > b) std::swap(a, b);

    std::reverse(ind.begin() + a, ind.begin() + b);
}

I tried to use roulette, truncation, tournament and rank selection algorithms, but each gave similar results.

This is my configuration:

populationSize = 20
selectionSize = 5
eliteSize = 1
probabilityCrossover = 0.6
probabilityMutateIndividual = 0.4
probabilityMutateGene = 0.2

My fitness function is fairly simple, since it's a real number returned by a simulation program which simulates the picking of orders on the current allocation we gave it. Unfortunately, I cannot provide this program as it's confidential. It's just a real number representing how good the current allocation is: the better the allocation, the lower the number (i.e. it's a minimization problem).

The problem

This genetic algorithm can find better solutions than just random allocation. The problem is that it gets "stuck" after, let's say, a few thousand generations and fails to improve further, even though there are better solutions; it can go even 20k generations with the exact same elite chromosome (no improvement at all). I tried to increase the crossover/mutation probability and the population size, but none of it worked. Thanks for any help.

",18760,,156,,5/22/2021 19:47,5/22/2021 19:47,Genetic algorithm stuck and cannot find an optimal solution,,0,4,,,,CC BY-SA 4.0 24287,1,24288,,10/29/2020 8:49,,3,57,"

Are there any architectures of deep neural networks that connect input neurons not only with the first hidden layer but also with deeper ones (red lines on the picture)?

If so could you give some names or links to research papers?

",22659,,32410,,4/26/2021 16:32,4/26/2021 16:32,Are there deep neural networks that have inputs connected with deeper hidden layers?,,1,2,,,,CC BY-SA 4.0 24288,2,,24287,10/29/2020 9:03,,2,,"

This type of connection is called a skip or residual connection. There are numerous works which employ this type of mechanism, for example: ResNet, SkipRNN. In addition, here you can find a paper that empirically explores skip connections for sequential tagging, or this one for speech enhancement.
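
As a tiny illustration of the idea (my own sketch using the Keras functional API, not taken from any of those papers), the input can simply be concatenated into a deeper layer:

from tensorflow.keras import layers, Model

inp = layers.Input(shape=(16,))
h1 = layers.Dense(32, activation='relu')(inp)
# skip connection: feed the raw input into the second hidden layer as well
h2 = layers.Dense(32, activation='relu')(layers.Concatenate()([h1, inp]))
out = layers.Dense(1, activation='sigmoid')(h2)

model = Model(inputs=inp, outputs=out)
model.summary()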

",20430,,,,,10/29/2020 9:03,,,,0,,,,CC BY-SA 4.0 24290,1,,,10/29/2020 10:10,,0,116,"

The system I'm trying to implement is a microcontroller with a connected microphone which has to recognise single words. The feature extraction is done using MFCC (and is working).

  • the system has to recognise [predefined, up to 20] single words, each one up to 1 second in length
  • the input audio is sampled with a frequency of 10 kHz and 8 bits of resolution
  • the window is 256 samples wide (25.6 ms), Hann windowed, with a 15 ms step (overlapping windows)
  • each window is represented by about 18 MFCC features in total

I've done the above things and tested the outputs for accuracy and computation speed, so there is not much concern about the computations. Now I have to implement an HMM for word recognition. I've read about HMMs and I think these parameters need to be addressed:

  • the hidden states are the "actual" pieces of the word, each 25.6 ms long and represented by 18 MFCC features, and they count up to a maximum of 64 sets in a single word (because the maximum length for an input word is 1 sec and each window step is (25.6 - 10) ms)
  • I should use the Viterbi algorithm to find out the most probable word spoken up to the current state. So, if the user is saying "STOP", Viterbi can suggest it (with proper learning, of course) when the user has spoken "STO..". So it's some kind of prediction too.
  • I have to determine the other HMM parameters, like the emission and transition probabilities. The Wikipedia page for Viterbi, which presents the algorithm, shows the input/output as:

from the above:

  • what is the observation space? The user may say anything, so it seems unbounded to me
  • the state space is obviously the set containing all the possible MFCC feature sets used in the learned word set. How do I learn or hardcode that?

Thanks for reading this long question patiently.

",18124,,18124,,10/29/2020 11:21,10/29/2020 11:21,Determining observation and state spaces for viterbi algorithm in a simple word recognition system using HMM,,0,3,,,,CC BY-SA 4.0 24292,2,,10575,10/29/2020 11:28,,0,,"

There is also the simpler action value function $$q_*(a) = \mathbb{E} \left[ R_t \mid A_t = a\right],$$ which we try to approximate when solving context-free bandit problems. You can also similarly define the action value function for contextual bandit problems by also conditioning on the context (rather than just on the action).

See chapter 2 of the book Reinforcement Learning: An Introduction (2nd edition) by Barto and Sutton for more details.

There are also the afterstate value functions, $v(s')$. Check this question and section 6.8 of the just cited book.

Moreover, there's the state-action-goal value function, $q(s, a, g)$, as described in the paper Hindsight Experience Replay (2017), although I am not sure how we can mathematically define it (i.e. as a function of other value functions or as a Bellman equation).

",2444,,2444,,11/23/2020 14:13,11/23/2020 14:13,,,,0,,,,CC BY-SA 4.0 24295,2,,14224,10/29/2020 14:40,,0,,"

I have already given an answer and there are other good answers, but I would like to give another answer by quoting an excerpt from an old paper by Norbert Wiener, i.e. Some Moral and Technical Consequences of Automation (1960, Science)

As is now generally admitted, over a limited range of operation, machines act far more rapidly than human beings and are far more precise in performing the details of their operations. This being the case, even when machines do not in any way transcend man's intelligence, they very well may, and often do, transcend man in the performance of tasks. An intelligent understanding of their mode of performance may be delayed until long after the task which they have been set has been completed.

This means that though machines are theoretically subject to human criticism, such criticism may be ineffective until long after it is relevant. To be effective in warding off disastrous consequences, our understanding of our man-made machines should in general develop pari passu with the performance of the machine. By the very slowness of our human actions, our effective control of our machines may be nullified. By the time we are able to react to information conveyed by our senses and stop the car we are driving, it may already have run head on into a wall.

",2444,,,,,10/29/2020 14:40,,,,0,,,,CC BY-SA 4.0 24296,1,24382,,10/29/2020 15:36,,2,139,"

My implementation of NEAT consistently fails to solve XOR completely. The species converge on different sub-optimal networks which map all input examples but one correctly (most commonly (1,1,0)). Do you have any ideas as to why that is?

Some information which might be relevant:

  • I use a plain logistic activation function in each non-input node 1/(1 + exp(-x)).
  • Some of the weights seem to grow quite large in magnitude after a large number of epochs.
  • I use the sum squared error as the fitness function.
  • Anything over 0.5 is considered a 1 (for comparing the output with the expected)

Here is one example of an evolved network. Node 0 is a bias node, the other red node is the output, the green are inputs and the blue "hidden". Disregard the labels on the connections.

EDIT: following the XOR suggestions on the NEAT users page to steepen the gain of the sigmoid function, a network that solved XOR was found for the first time after ca. 50 epochs. But it still fails most of the time. Here is the network which successfully solved XOR:

",41343,,41343,,10/30/2020 11:27,11/2/2020 15:00,Evolved networks fail to solve XOR,,1,0,,,,CC BY-SA 4.0 24298,1,,,10/29/2020 17:29,,1,76,"

I am currently learning about autoencoders and I follow https://www.tensorflow.org/tutorials/generative/autoencoder

When denoising images, the authors of the tutorial add an additional axis to the data, and I cannot find any explanation why. I would appreciate any answer or suggestion :)

x_train = x_train[..., tf.newaxis]
x_test = x_test[..., tf.newaxis]

Then the encoder is built from the following layers:

 self.encoder = tf.keras.Sequential([
      layers.Input(shape=(28, 28, 1)), 
      layers.Conv2D(16, (3,3), activation='relu', padding='same', strides=2),
      layers.Conv2D(8, (3,3), activation='relu', padding='same', strides=2)])
    
 self.decoder = tf.keras.Sequential([
      layers.Conv2DTranspose(8, kernel_size=3, strides=2, activation='relu', padding='same'),
      layers.Conv2DTranspose(16, kernel_size=3, strides=2, activation='relu', padding='same'),
      layers.Conv2D(1, kernel_size=(3,3), activation='sigmoid', padding='same')])
",38252,,,,,2/21/2022 18:57,Why do we add additional axis in CNN autoencoder while denoising?,,1,1,,,,CC BY-SA 4.0 24301,1,,,10/29/2020 19:46,,2,502,"

From my understanding, in a tissue where nuclei are present and need to be detected, we need to predict bounding boxes (either rectangular/circular or in the shape of the nucleus, i.e. as in instance segmentation). However, a lot of research papers start with semantic segmentation. Again, what I understood is that semantic segmentation won't give the location, bounding box or count of nuclei. It will just tell that some stuff is probably nuclei and the rest is probably background.

So, what is the bridge that I am missing when trying to detect nuclei from semantic segmentation? I have personally done semantic segmentation, but I can't seem to count/predict bounding boxes because I can't understand how to do that (for example, if semantic segmentation gave a probable region for nuclei which is actually a mixture of 3 overlapping nuclei). Semantic segmentation (in the example) just stops right there.

  1. A thresholding algorithm like watershed might not work in some cases, as demonstrated in [Nuclei Detection][1] from 23:30 onwards.
  2. Edge detection between segmented nuclei and background would not separate overlapping nuclei.
  3. Finding local maxima and putting a dot there might give rise to false positives.
  4. Finding IoU, but what if the output of segmentation is not a region of classifications (1s and 0s) but a continuous probability map with values between 0 and 1?
  5. Isn't finding contours and getting bounding boxes from masks using OpenCV a parametric method? What I mean is that, it being an image processing technique, there is a chance it will work for some images and not for others.
",41564,,41564,,11/1/2020 5:39,11/1/2020 5:39,Getting bounding box/boundaries from segmentations in UNet Nuclei Segmentation,,0,0,,,,CC BY-SA 4.0 24302,1,,,10/29/2020 20:14,,1,77,"

In a Restricted Boltzmann Machine (RBM), the likelihood function is:

$$p(\mathbf{v};\mathbf{\theta}) = \frac{1}{Z} \sum_{\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}$$

Where $E$ is the energy function and $Z$ is the partition function:

$$Z = \sum_{\mathbf{v},\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}$$

The log-likelihood function is therefore:

$$ln(p(\mathbf{v};\mathbf{\theta})) = ln\left(\sum_{\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}\right) - ln\left(\sum_{\mathbf{v},\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}\right)$$

Since the log-likelihood function cannot be computed, its gradient is used instead with gradient descent to find the optimal parameters $\mathbf{\theta}$:

$$\frac{\partial ln(p(\mathbf{v};\mathbf{\theta}))}{\partial \mathbf{\theta}} = -\frac{1}{\sum_{\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}} \sum_{\mathbf{h}} \left[\frac{\partial E(\mathbf{v},\mathbf{h};\mathbf{\theta})}{\partial \mathbf{\theta}} \cdot e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}\right] + \frac{1}{\sum_{\mathbf{v},\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}} \sum_{\mathbf{v},\mathbf{h}} \left[\frac{\partial E(\mathbf{v},\mathbf{h};\mathbf{\theta})}{\partial \mathbf{\theta}} \cdot e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}\right]$$

Since:

$$p(\mathbf{h}|\mathbf{v}) = \frac{p(\mathbf{v},\mathbf{h})}{p(\mathbf{v})} = \frac{\frac{1}{Z} e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}}{\frac{1}{Z} \sum_{\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}} = \frac{e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}}{\sum_{\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}}$$

Then:

$$\frac{\partial ln(p(\mathbf{v};\mathbf{\theta}))}{\partial \mathbf{\theta}} = -\sum_{\mathbf{h}} \left[\frac{\partial E(\mathbf{v},\mathbf{h};\mathbf{\theta})}{\partial \mathbf{\theta}} \cdot p(\mathbf{h}|\mathbf{v}) \right] + \frac{1}{\sum_{\mathbf{v},\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}} \sum_{\mathbf{v},\mathbf{h}} \left[\frac{\partial E(\mathbf{v},\mathbf{h};\mathbf{\theta})}{\partial \mathbf{\theta}} \cdot e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}\right]$$

Also, since:

$$ \frac{e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}}{Z} = \frac{e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}}{\sum_{\mathbf{v},\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h};\mathbf{\theta})}} = p(\mathbf{v},\mathbf{h})$$

Then:

$$\begin{align} \frac{\partial ln(p(\mathbf{v};\mathbf{\theta}))}{\partial \mathbf{\theta}} &= -\sum_{\mathbf{h}} \left[\frac{\partial E(\mathbf{v},\mathbf{h};\mathbf{\theta})}{\partial \mathbf{\theta}} \cdot p(\mathbf{h}|\mathbf{v}) \right] + \sum_{\mathbf{v},\mathbf{h}} \left[\frac{\partial E(\mathbf{v},\mathbf{h};\mathbf{\theta})}{\partial \mathbf{\theta}} \cdot p(\mathbf{v},\mathbf{h})\right] \\ &= -\mathbb{E}_{p(\mathbf{h}|\mathbf{v})}\left[\frac{\partial E(\mathbf{v},\mathbf{h};\mathbf{\theta})}{\partial \mathbf{\theta}} \right] + \mathbb{E}_{p(\mathbf{v},\mathbf{h})}\left[\frac{\partial E(\mathbf{v},\mathbf{h};\mathbf{\theta})}{\partial \mathbf{\theta}} \right] \end{align}$$

Since both of these are expectations, they can be approximated using Monte Carlo integration:

$$ \frac{\partial ln(p(\mathbf{v};\mathbf{\theta}))}{\partial \mathbf{\theta}} \approx -\frac{1}{N} \sum_{i = 1}^{N} \left[\frac{\partial E(\mathbf{v},\mathbf{h}_i;\mathbf{\theta})}{\partial \mathbf{\theta}} \right] + \frac{1}{M} \sum_{j=1}^{M} \left[\frac{\partial E(\mathbf{v}_j,\mathbf{h}_j;\mathbf{\theta})}{\partial \mathbf{\theta}} \right] $$

The first term can be computed because it is easy to sample from $p(\mathbf{h}|\mathbf{v})$. However, it is difficult to sample from $p(\mathbf{v},\mathbf{h})$ directly; but, since it is also easy to sample from $p(\mathbf{v}|\mathbf{h})$, Gibbs sampling is used to alternately sample from $p(\mathbf{h}|\mathbf{v})$ and $p(\mathbf{v}|\mathbf{h})$ to approximate a sample from $p(\mathbf{v},\mathbf{h})$.
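
(To make the sampling procedure concrete, here is a small NumPy sketch I wrote of one block Gibbs step for a binary RBM with weight matrix $W$ and biases $\mathbf{b}$, $\mathbf{c}$; the sizes and values are arbitrary.)

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_v, n_h = 6, 3                                   # arbitrary sizes
W = rng.normal(scale=0.1, size=(n_v, n_h))
b = np.zeros(n_v)                                 # visible bias
c = np.zeros(n_h)                                 # hidden bias

v = rng.integers(0, 2, size=n_v)                  # some visible configuration

# one block Gibbs step: sample h ~ p(h|v), then v ~ p(v|h)
p_h = sigmoid(c + v @ W)
h = (rng.random(n_h) < p_h).astype(int)

p_v = sigmoid(b + W @ h)
v_new = (rng.random(n_v) < p_v).astype(int)

print(h, v_new)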

My questions are:

  1. Is my understanding and math correct so far?
  2. In the expression for the gradient of the log-likelihood, can expectations be interchanged with partial derivatives such that:

$$\begin{align} \frac{\partial ln(p(\mathbf{v};\mathbf{\theta}))}{\partial \mathbf{\theta}} &= -\mathbb{E}_{p(\mathbf{h}|\mathbf{v})}\left[\frac{\partial E(\mathbf{v},\mathbf{h};\mathbf{\theta})}{\partial \mathbf{\theta}} \right] + \mathbb{E}_{p(\mathbf{v},\mathbf{h})}\left[\frac{\partial E(\mathbf{v},\mathbf{h};\mathbf{\theta})}{\partial \mathbf{\theta}} \right] \\ &= - \frac{\partial}{\partial \mathbf{\theta}} \mathbb{E}_{p(\mathbf{h}|\mathbf{v})}\left[E(\mathbf{v},\mathbf{h};\mathbf{\theta}) \right] + \frac{\partial}{\partial \mathbf{\theta}} \mathbb{E}_{p(\mathbf{v},\mathbf{h})}\left[E(\mathbf{v},\mathbf{h};\mathbf{\theta}) \right] \\ &= \frac{\partial}{\partial \mathbf{\theta}} \left(\mathbb{E}_{p(\mathbf{v},\mathbf{h})}\left[E(\mathbf{v},\mathbf{h};\mathbf{\theta}) \right] - \mathbb{E}_{p(\mathbf{h}|\mathbf{v})}\left[E(\mathbf{v},\mathbf{h};\mathbf{\theta}) \right] \right) \\ &\approx \frac{\partial}{\partial \mathbf{\theta}} \left(\frac{1}{M} \sum_{j=1}^{M} \left[E(\mathbf{v}_j,\mathbf{h}_j;\mathbf{\theta}) \right] - \frac{1}{N} \sum_{i = 1}^{N} \left[E(\mathbf{v},\mathbf{h}_i;\mathbf{\theta}) \right] \right) \end{align}$$

  3. After approximating the gradient of the log-likelihood, the update rule for the parameter vector $\mathbf{\theta}$ is:

$$\mathbf{\theta}_{t+1} = \mathbf{\theta}_{t} + \epsilon \frac{\partial ln(p(\mathbf{v};\mathbf{\theta}))}{\partial \mathbf{\theta}}$$

Where $\epsilon$ is the learning rate. Is this update rule correct?

",41856,,2444,,10/31/2020 14:56,10/31/2020 14:56,How do I derive the gradient of the log-likelihood of an RBM?,,0,3,,,,CC BY-SA 4.0 24305,1,,,10/29/2020 22:38,,1,440,"

I am training a CNN with a batch size of 128, but I have some fluctuations in the validation loss, which are greater than one. I want to increase my batch size to 150 or 200, but, in the code examples I have come across, the batch size is always something like 32, 64, 128, or 256. Is it a rule? Can I use other values for it?

",33792,,2444,,10/30/2020 19:01,10/30/2020 19:01,Are there any rules for choosing batch size?,,1,1,,10/30/2020 19:02,,CC BY-SA 4.0 24306,1,24324,,10/30/2020 0:25,,0,152,"

How to manually draw a $k$-NN decision boundary with $k=1$ knowing the dataset

the labels are

and the Euclidean distance between two points is defined as

",41202,,2444,,10/30/2020 18:54,10/30/2020 19:16,How to manually draw a $k$-NN decision boundary with $k=1$ given the dataset and labels?,,1,0,,,,CC BY-SA 4.0 24307,1,,,10/30/2020 1:38,,7,558,"

What are the state-of-the-art results in OpenAI's gym environments? Is there a link to a paper/article that describes them and how these SOTA results were calculated?

",41824,,2444,,10/30/2020 19:14,7/23/2022 21:02,What are the state-of-the-art results in OpenAI's gym environments?,,1,0,,,,CC BY-SA 4.0 24308,1,,,10/30/2020 5:35,,-1,65,"

I am working on a project named "Handwritten Math Evaluation". What basically happens in it is that there are 12 classes, (0 - 9) and (+, -), each containing 50 clean handwritten samples. Then I trained a CNN model on it, with 80 % of the data used for training and 20 % for testing, which results in an accuracy of 98.83 %. Here is the code for the architecture of the CNN model:

import pandas as pd 
import numpy as np 
import pickle 
np.random.seed(1212) 
import keras 
from keras.models import Model 
from keras.layers import *
from keras import optimizers 
from keras.layers import Input, Dense 
from keras.models import Sequential 
from keras.layers import Dense 
from keras.layers import Dropout 
from keras.layers import Flatten 
from keras.layers.convolutional import Conv2D 
from keras.layers.convolutional import MaxPooling2D 
from keras.utils import np_utils 
from keras import backend as K  
from keras.utils.np_utils import to_categorical 
from keras.models import model_from_json
import matplotlib.pyplot as plt
model = Sequential() 
model.add(Conv2D(30, (5, 5), input_shape =(28,28,1), activation ='relu')) 
model.add(MaxPooling2D(pool_size =(2, 2))) 
model.add(Conv2D(15, (3, 3), activation ='relu')) 
model.add(MaxPooling2D(pool_size =(2, 2))) 
model.add(Dropout(0.2)) 
model.add(Flatten()) 
model.add(Dense(128, activation ='relu')) 
model.add(Dense(50, activation ='relu')) 
model.add(Dense(12, activation ='softmax')) 
# Compile model 
model.compile(loss ='categorical_crossentropy', 
            optimizer ='adam', metrics =['accuracy']) 
model.fit(X_train, y_train, epochs=1000)

Now each image in the dataset is preprocessed as follows:

import cv2
im = cv2.imread(path)
im_gray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
ret, im_th = cv2.threshold(im_gray, 90, 255, cv2.THRESH_BINARY_INV)
ctrs, hier = cv2.findContours(im_th.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
rects = [cv2.boundingRect(ctr) for ctr in ctrs]
rect = rects[0]
im_crop =im_th[rect[1]:rect[1]+rect[3],rect[0]:rect[0]+rect[2]]
im_resize = cv2.resize(im_crop,(28,28))
im_resize = np.array(im_resize)
im_resize=im_resize.reshape(28,28)

I have made an evaluation function which solves simple expressions like 7+8:

def evaluate(im):
    s = ''
    data = []
    im_gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    ret, im_th = cv2.threshold(im_gray, 90, 255, cv2.THRESH_BINARY_INV)
    ctrs, hier = cv2.findContours(im_th.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # sort contours left-to-right so the symbols are read in order
    sorted_ctrs = sorted(ctrs, key=lambda ctr: cv2.boundingRect(ctr)[0])
    boundingBoxes = [cv2.boundingRect(c) for c in sorted_ctrs]
    look_up = ['0','1','2','3','4','5','6','7','8','9','+','-'] 
    i=0
    for c in sorted_ctrs:
        rect = boundingBoxes[i]
        im_crop = im_th[rect[1]:rect[1]+rect[3], rect[0]:rect[0]+rect[2]]
        im_resize = cv2.resize(im_crop,(28,28))
        im_resize = np.array(im_resize)
        im_resize = im_resize.reshape(28,28,1)
        data.append(im_resize)
        i+=1
    data = np.array(data)
    predictions = model.predict(data)
    i=0
    while i<len(boundingBoxes):
        rect = boundingBoxes[i]
        print(rect[2],rect[3])
        print(predictions[i])
        s += look_up[predictions[i].argmax()]
        i+=1
    return s

I need help extending this to compound fractions, but the problem is that the vinculum / is identical to the subtraction sign - when resized to (28, 28). So I need help in distinguishing between them.

This is my first question, so please let me know if any details are missing.

",41931,,62466,,11/6/2022 15:14,12/6/2022 16:10,Distinguishing between handwritten compound fraction and subtraction,,1,5,,,,CC BY-SA 4.0 24311,1,,,10/30/2020 8:58,,2,1722,"

In object detection, we can resize images while keeping the aspect ratio the same as in the original image, which is often known as "letterbox" resize.

My questions are

  1. Why do we need to resize images? If we resize images to have all the same dimensions, given that some original images are too long vertically or horizontally, we will lose a lot of features in those images.

  2. If the "letterbox" method is better than "normal resize" (i.e. without keeping the aspect ratio, e.g. the result of the application of OpenCV's resize function with the parameter interpolation set to cv2.INTER_AREA), why don't people apply it in the classification task?

",41287,,2444,,11/7/2020 11:23,11/7/2020 11:23,Why do we resize images before using them for object detection?,,1,0,,,,CC BY-SA 4.0 24315,2,,3920,10/30/2020 11:09,,1,,"

ML, being a relatively young and fast-developing field, has numerous (near-)synonyms for many concepts.

One paradigm difference is whether a model is learned from a static, pre-defined set of data, or whether it adapts as new data is presented to it over time.

Some of the terms used to describe these two paradigms respectively (with subtle differences in meaning between authors/terms) are:

  • Offline / batch / isolated learning
  • Online / continual / continuous / incremental / lifelong learning

Further, some familiar branches of ML (like Transfer Learning and Multi-task learning) have a lot of intersection with Continual Learning.


Related q's:

",23503,,23503,,11/9/2020 11:43,11/9/2020 11:43,,,,0,,,,CC BY-SA 4.0 24316,1,,,10/30/2020 11:34,,1,32,"

I am trying to train a U-Net network with synthetic data to do binary segmentation, due to the fact that it is not easy to collect real data.

And there is something in the training process that I do not understand.

I have a gap in the IoU metrics between the training and the validation (despite having really similar data).

My training IoU is around 95 % and my validation IoU is around 70 %, and the Dice loss is around 0.007. The IoU is calculated on the inverted mask used for the loss.

So I do not understand why there is this gap, whereas the validation images have been created from the same background dataset and the same object dataset, with objects randomly placed on the background (+ randomly rotated and rescaled). The only difference is the aggressive data augmentation used for the training dataset.

In my opinion, it is not overfitting, since the loss value and behaviour are very similar for train and val. Moreover, it seems very unlikely to me that the model overfits with the same backgrounds and objects; or, at least, the model should have a very good IoU for both train and val if it were overfitting.

So could the data augmentation lead to the model learning features which correspond to the augmented data (even if the loss is similar) and not to the real data, explaining the gap in IoU between train and val?

",27718,,27718,,11/2/2020 7:37,11/2/2020 7:37,Could the data augmentation lead to the model learning features which corresponds to data augmented data and not to the real data?,,0,0,,,,CC BY-SA 4.0 24319,2,,24305,10/30/2020 15:32,,1,,"

Data that has a size that is a power of $2$ (i.e. $2^n$ for some integer $n$) allows for easier memory management, because the data can be organized contiguously (without gaps). This allows for faster memory reads and thus faster iteration time in general. From a computational point of view, this is important as it can be taken advantage of by the compiler and speed up iteration loops. This is why batch sizes are chosen as such in practice. However, this does not necessarily imply better training results.

In regards to your question of "Can I use other values for batch size?":

Yes, you can use different values, and, for the most part, you probably won't see a difference in computational performance because of the speed of modern training APIs. So, unless you are training a large model with a lot of compute, where this optimization has a larger impact, feel free to experiment :)

",4398,,2444,,10/30/2020 18:59,10/30/2020 18:59,,,,0,,,,CC BY-SA 4.0 24322,1,,,10/30/2020 16:19,,1,677,"

I'm trying to get a detected car's orientation when object detection is applied. For instance, when we apply object detection on a car and get a bounding box, are there any ways or methods to calculate where the heading is, or the orientation or direction of the car (just the 2D plane is fine)?

Any thoughts or ideas would be helpful.

",9053,,,,,10/31/2020 16:48,Get object's orientation or angle after object detection,,2,0,,,,CC BY-SA 4.0 24324,2,,24306,10/30/2020 18:00,,1,,"

This is a rather involved task. What to do from a high-level theoretical perspective might be easy to see, but it's difficult putting that into code from scratch.

Doing this in Python using existing libraries is not too complicated, though. See for example this tutorial or this StackOverflow post.
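
For example, a rough sketch along those lines (with made-up 2-D data; the grid resolution is arbitrary) could look like this:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsClassifier

# Made-up 2-D dataset with two classes
X = np.array([[1, 1], [2, 1], [1, 2], [5, 5], [6, 5], [5, 6]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)

# Evaluate the classifier on a dense grid and draw the decision regions
xx, yy = np.meshgrid(np.linspace(0, 7, 300), np.linspace(0, 7, 300))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

plt.contourf(xx, yy, Z, alpha=0.3)
plt.scatter(X[:, 0], X[:, 1], c=y)
plt.show()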

Edit:

Theoretically, I would first plot (draw) the points from your dataset in a graph and then watch out for "decision points" half-way in between any two near-by points (from the dataset) from distinct classes.

For the next step, keep in mind those decision points. Given close-by data points from classes (e.g.) A and B, imagine a straight line connecting these two points from separate classes. Next, take the point half-way along that imaginary line (i.e. your decision point) and draw a "soft" line (maybe using pencil instead of pen) orthogonal/perpendicular to that imaginary line which intersects the imaginary line in the decision point. Do that for all combinations of "reasonably" close points from different classes.

Parts of the lines you have just drawn will define the final decision boundary. Next, think of each line as consisting of multiple elements, which are separated from one another by means of the points of intersection with other drawn lines. In other words, split lines into elements wherever they intersect other lines. Now, decide which of these elements to outline as the eventual decision boundary (finally using a pen instead of pencil). This step simply involves human intelligence and is difficult to describe. Whenever a line's element accurately separates (logically speaking) two classes, indicate it using a pen. Otherwise, if it does not contribute to separating two classes, don't indicate it.

After having indicated the final decision boundary using a pen, simply erase the pencil drawings. Now, you should be left with a decision boundary.

I hope this is clear and accurate enough.

",37982,,37982,,10/30/2020 19:16,10/30/2020 19:16,,,,3,,,,CC BY-SA 4.0 24326,1,,,10/30/2020 19:40,,1,180,"

My agent uses an $\epsilon$-greedy strategy to learn. The exploration rate (i.e. $\epsilon$) decays throughout training. I've seen examples where people update $\epsilon$ every time an action is taken, while others update it at the end of the episode. If updated at every action, $\epsilon$ decays more continuously. Does it matter? Is there a standard? Is one better than the other?

",38076,,2444,,10/31/2020 22:46,10/31/2020 22:46,Should the exploration rate be updated at the end of the episode or at every step?,,0,1,,,,CC BY-SA 4.0 24327,1,24335,,10/30/2020 20:26,,1,51,"

Let's assume I have a simple feedforward neural network whose input contains binary 0/1 features and whose output is also binary (two classes).

Is it better, worse, or completely indifferent for every such binary feature to be in just one column, or would it be better to split each feature into two columns in such a way that the second column has the opposite value, like this:

feature_x (one-column scenario)
[0]
[1]
[0]

feature_x (two-column scenario)
[0, 1]
[1, 0]
[0, 1]

I know this might seem a bit weird and probably it is not necessary, but I have a feeling that there might be a difference for the network, especially for its inner workings and for how neurons in the next layers see such data. Has anyone ever researched that aspect?

",22659,,32410,,4/26/2021 16:32,4/26/2021 16:32,Should binary feature be in one or two columns in deep neural networks?,,1,0,,,,CC BY-SA 4.0 24328,2,,24322,10/30/2020 20:54,,1,,"

I think the problem can be phrased (more generally) as a Pose Estimation Problem. That term might help in obtaining better search results when searching for relevant papers.

One paper that I found on the given topic was this one. Even if it is maybe (for whatever reason) not what you are looking for precisely, it might still contain valuable references to different techniques that might be applicable to your problem at hand.

",37982,,,,,10/30/2020 20:54,,,,0,,,,CC BY-SA 4.0 24329,2,,21838,10/31/2020 1:34,,1,,"

There are two "inputs" into Wavenet:

  • the previously generated samples of the waveform, which are usually encoded into multiple channels, like into 256 channels using 8-bit mu-law encoding
  • local conditioning, which can be things like linguistic features such as phoneme classes (used in the original wavenet paper), or frequencies like mel spectrogram values (used in the Tacotron 2 paper)

The local conditioning signal usually has a much lower resolution than the waveform itself. For example, the Tacotron 2 paper mentioned using a "50 ms frame size, 12.5 ms frame hop" to derive its mel spectrogram values. At 24 kHz, each waveform sample has a duration of just 0.0416 ms. So, in order to use the spectrogram information to condition the waveform generation, it has to be upsampled to spread out along the time dimension. (If you were locally conditioning using letters, you might turn "dog" into "dddddoooooooooggg" in order to use the letter "d" to generate 5 samples in the output waveform, etc.).
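
As a rough sketch of that idea (the shapes below are made up for illustration, and real implementations often use learned/transposed-convolution upsampling instead of plain repetition):

import numpy as np

# Hypothetical local conditioning: 80 mel channels over 10 frames,
# where each frame is meant to cover 300 waveform samples.
mel = np.random.randn(80, 10)

# Naive upsampling: repeat each frame along the time axis so that there is
# one conditioning vector per generated waveform sample.
upsampled = np.repeat(mel, repeats=300, axis=1)   # shape (80, 3000)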

  1. What is the input to the WaveNet, isn't this a mel-spectrum input and not just 1 floating point value for raw audio?

In the implementation you linked to, it looks like variables that start with "c" refer to "conditioning" signals. So "hparams.cin_channels" indicates how many channels the input conditioning signal has. If that signal is a mel spectrogram with 80 channels, then it would be set to 80.

It looks like the "inference/input_convolution" layer is processing the actual waveform, not the spectrogram conditioning signal.

  1. Is there reason for upsampling stride values to be [11, 25], like are the specific numbers 11 and 25 special or relevant in affecting other shapes/dimensions?

I suspect (though I'm not too familiar with tensorflow conventions) that the second dimension is time, which is why you're seeing the model upsample the conditioning signal in that direction. Those values may be required to match the resolution of the waveform being generated.

  1. Why is the input-channels in residual_block_causal_conv 128 and residual_block_cin_conv 80? What exactly is their inputs?

The "c" in "cin_conv" appears to mean that it is for the conditioning signal, which is why is has a different dimension than the waveform's embedding size.

I hope this helps. You could also try opening up an issue on that github repo to try getting help directly from the author.

",41227,,,,,10/31/2020 1:34,,,,0,,,,CC BY-SA 4.0 24330,2,,8007,10/31/2020 2:20,,1,,"

The original WaveNet and the implementation you linked to are globally conditioned on speaker embeddings, which means that the network was given a unique identifier for the person speaking each time it trained on an audio clip. This allows the network to learn to mimic the voice of each person in the training data, but it only learns their voices, not arbitrary people's voices.

You might be able to do what you're describing by globally conditioning on different features of the speakers, such as age, etc., though this would require a lot of speakers to span those dimensions.

I think it would definitely be possible to use style transfer for this, similar to the work done on tweaking people's age in images [1]. Some similar work has been done for audio in order to change the accent of a voice [2], so I wouldn't be surprised to see more examples of this in the near future.

[1] https://www.youtube.com/watch?v=2qMw8sOsNg0 [2] https://ai.googleblog.com/2018/03/expressive-speech-synthesis-with.html

",41227,,,,,10/31/2020 2:20,,,,0,,,,CC BY-SA 4.0 24331,1,,,10/31/2020 4:41,,1,43,"

I'm trying to see how to detect the location of a soccer ball on the field using a live camera. What are some ways to achieve this?

1. Assuming we have a fixed camera with a wide shot, how can we find the ball's location on the actual field?

2. The camera is zoomed in on the ball, but we know the location of the camera and maybe its turning angle. Can we estimate the ball's location on the field using this information, or do we need additional information? Can we do it with two cameras as reference points?

Any thoughts would be helpful.

",9053,,,,,10/31/2020 4:41,Find object's location in an area using computer vision,,0,0,,,,CC BY-SA 4.0 24332,1,24384,,10/31/2020 6:23,,1,906,"

I'm facing a situation where I have to fetch probabilities from BERT's MLM for multiple words in a single sentence.

Original : "Mountain Dew is an energetic drink"
Masked : "[MASK] is an energetic drink"

But BERT's MLM task doesn't consider two tokens at a time for the [MASK]. I strongly suspect that there is some sort of workaround, other than fine-tuning, that I'm unable to find.

",25676,,,,,11/2/2020 15:25,Is there a way to provide multiple masks to BERT in MLM task?,,1,0,,,,CC BY-SA 4.0 24333,1,,,10/31/2020 8:21,,1,39,"

I have 999 signals, each with a separate day timestamp, each T = 10 s long, sampled at fs = 25 kHz. This gives N = 250,000 samples per signal.

My task was to obtain the averaged magnitude spectrum for each signal. For example, for k = 100, the signal is divided into k equal fragments, each 0.1 s and 2500 samples long. Then the FFT is computed on all fragments and the mean value is calculated for each spectral component (i.e. the mean for each frequency from DC to the Nyquist frequency). For k = 100, the averaged spectrum of each signal contains 1251 values at 1251 frequency points (0 to fs/2).

My question is: how can I prepare the training dataset for multiple machine learning models based on that data, so that I can predict the threshold time before the failure of the machine occurs? Do I treat each spectral component (frequency) as a separate feature, or is there a different approach?

",41953,,,,,10/31/2020 8:21,How can I predict an anomaly based on FFT of multiple signals?,,0,2,,,,CC BY-SA 4.0 24334,2,,4190,10/31/2020 8:32,,2,,"

This is an active research topic.

Consider reading

My hope is to get funded so that RefPerSys could (in a few years) be able to deduce some rules and metarules.

",3335,,2444,,10/31/2020 18:14,10/31/2020 18:14,,,,0,,,,CC BY-SA 4.0 24335,2,,24327,10/31/2020 9:31,,0,,"

You're simply adding a redundant feature by having it as two: $X_2 = 1 - X_1$. It would be equally useful to duplicate the first column. At best this will not improve your model, at worst it will decrease accuracy.

",34473,,,,,10/31/2020 9:31,,,,0,,,,CC BY-SA 4.0 24336,2,,24322,10/31/2020 16:48,,1,,"

There is a paper on face pose estimation.

It uses a very straightforward technique and very obvious augmentations to achieve nice results.

You could use exactly the same approach if you have a labelled dataset for cars rather than for faces.

I was able to reproduce the results myself a while back.

",21645,,,,,10/31/2020 16:48,,,,0,,,,CC BY-SA 4.0 24341,1,24365,,10/31/2020 17:35,,1,76,"

I am looking for a way to re-identify/classify/recognize x real life objects (x < 50) with a camera. Each object should be presented to the AI only once for learning and there's always only one of these objects in the query image. New objects should be addable to the list of "known" objects. The objects are not necessarily part of ImageNet nor do I have a training dataset with various instances of these objects.

Example:

In the beginning I have no "known" objects. Now I present a smartphone, a teddy bear and a pair of scissors to the system. It should learn to re-identify these three objects if presented in the future. The objects will be the exact same objects, i.e. not a different phone, but definitely in a different viewing angle, lighting etc.

My understanding is that I would have to place each object in an embedding space and do a simple nearest-neighbour lookup in that space for the queries. Maybe just use a trained ResNet, cut off the classification head and simply use the output vector for each object? I am not sure what the best way would be.

Any advice or hint to the right direction would be highly appreciated.

",41963,,,,,3/31/2021 13:03,Single-Shot Learning for Object Re-Identification,,1,3,,,,CC BY-SA 4.0 24342,1,,,10/31/2020 17:47,,3,215,"

I'm coding my own version of MuZero. However, I don't understand how it supposed to learn to play well for both players in a two-player game.

Take Go for example. If I use a single MCTS to generate an entire game (to be used in the training stage), couldn't MuZero learn to play badly for black in order to become good at predicting a win for white? What is forcing it to play well at every turn?

",10202,,10202,,11/1/2020 13:43,11/1/2020 13:43,How does MuZero learn to play well for both sides of a two-player game?,,1,1,,,,CC BY-SA 4.0 24346,2,,24307,10/31/2020 18:37,,2,,"

There is the leaderboard page at the gym GitHub repository that contains links to specific implementations that "solve" the different gym environments, where "to solve" means "to reach a certain level of performance", which, given a fixed reward function, is typically measured as the average (episodic) return/reward. For example, in the case of the CartPole environment, you solve it when you get an average reward of $195.0$ over $100$ consecutive trials.

",2444,,,,,10/31/2020 18:37,,,,0,,,,CC BY-SA 4.0 24348,2,,24342,10/31/2020 20:49,,1,,"

Both players are represented by the exact same network with the exact same weights (similar to AlphaGo, AlphaGo Zero and AlphaZero). So, they will both behave identically. Because you only have a single network, MuZero cannot learn two different policies, but only one.

You can also think of this in the following way: MuZero actually learns to play only with white (or black, but just one of them) without knowing how to play with the other color (at least, multiple implementations of the similar, earlier algorithms, like AlphaGo Zero and AlphaZero, do exactly this). So, in order to trick it into also being able to play with the other color, when the network needs to play black, you just flip the colors on the board so that black becomes white (and white becomes black) and the network knows what to do. After choosing the move, you flip the whole thing back, and that is usually how it is done. So, from the perspective of your network, it always plays white, but, because you do the flipping of the colors, you can actually make it play against itself without it knowing that.
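
As a tiny illustration of this flipping trick (assuming, hypothetically, a NumPy board where +1 marks the stones of the side the network always plays, -1 the opponent's stones, and 0 empty points):

import numpy as np

board = np.array([[ 0,  1, -1],
                  [ 0,  0,  1],
                  [-1,  0,  0]])

# When it is the "other" colour's turn, flip the board so the network still
# sees itself as playing its usual colour, pick a move, then flip back.
flipped = -board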

Even without using this trick of flipping the colors of the board, by doing the MCTS simulations you will have, for each state, the statistics for the actions, and, usually, as you do more simulations, these statistics show you which actions are the best in each state. And when you train, you try to imitate this. So, your network will learn which actions are best in each state, and this is the reason why it learns to take the best possible action in each state.

",37919,,,,,10/31/2020 20:49,,,,6,,,,CC BY-SA 4.0 24349,1,,,10/31/2020 20:49,,2,133,"

Are there any meaningful books entirely written by an artificial intelligence? I mean something with meaning, unlike random words or empty books. Something that can be characterised as fiction literature.

If yes, then I think it is also interesting to know whether any of those books are available for sale. Is there a specific name for such books? Like "robot books"?

",41409,,41409,,11/1/2020 11:45,11/2/2020 12:19,Are there any meaningful books entirely written by an artificial intelligence?,,1,5,,,,CC BY-SA 4.0 24355,2,,3502,11/1/2020 1:20,,3,,"

Reinforcement learning (and, in particular, bandit) algorithms have been and can be used to solve problems other than games, such as

In general, any problem that can be modelled as the maximization of some notion of reward, where you need to interact with some environment (with some states) by taking some actions, can, in principle, be solved by reinforcement learning. Take a look at this pre-print paper (2019) for other applications.

However, note that there are several obstacles that prevent RL algorithms from being widely adopted to solve real-world problems, such as poor sample complexity (i.e. they require many samples to reach a good performance) and the partial inability to evaluate their performance online without affecting the users.

",2444,,2444,,11/1/2020 14:50,11/1/2020 14:50,,,,4,,,,CC BY-SA 4.0 24358,1,24359,,11/1/2020 4:52,,0,66,"

(I have a very primitive understanding of neural networks, so please forgive the lack of technicality here.)

I am used to seeing a neuron in a neural network as something that-

  1. Takes the inputs and multiplies them by their weights,
  2. then sums them up,
  3. and after that it applies the activation function to the sum.

Now, what if it was "smarter"? Say, a single neuron could do the function of an entire layer in a network, could that make the network more effective? This comes from an article I was reading at Quanta, where the author says:

Later, Mel and several colleagues looked more closely at how the cell might be managing multiple inputs within its individual dendrites. What they found surprised them: The dendrites generated local spikes, had their own nonlinear input-output curves and had their own activation thresholds, distinct from those of the neuron as a whole. The dendrites themselves could act as AND gates, or as a host of other computing devices.

...realised that this meant that they could conceive of a single neuron as a two-layer network. The dendrites would serve as nonlinear computing subunits, collecting inputs and spitting out intermediate outputs. Those signals would then get combined in the cell body, which would determine how the neuron as a whole would respond.

My thoughts: I know that Backpropagation is used to "teach" the network in the normal case, and the fact that neurons are simply activation buttons is somehow related to that. So, if neurons were to be more complicated, it would reduce efficiency. However, I am not sure of this: why would complex individual components make the network less effective?

",41971,,2444,,11/1/2020 10:47,11/1/2020 10:47,"If neurons performed the operation of an entire layer, would that make the neural network more effective?",,1,0,,,,CC BY-SA 4.0 24359,2,,24358,11/1/2020 10:16,,0,,"

Say, a single neuron could do the function of an entire layer in a network, could that make the network more effective?

That depends what you mean by "more effective". In terms of number of neurons to achieve the same result, then you should need fewer units. In terms of being able to calculate an end result for any specific problem, then no, because you can generally solve a problem using simpler units by adding more of them.

If this could somehow be done using less resources, it might reduce overall costs. In a biological system, there are possibly overheads per cell in order to maintain it on top of the costs for calculation, so it may be better to do more than the simplest calculation in each cell. Further to that, there may be an optimal amount of processing that each cell could do (this is all conjecture on my part).

In an artificial neural network, the calculations are the only thing being considered; there is no separate overhead per neuron.

There are neural network architectures with complex "sub-units". Probably the most well known are the recurrent neural network designs for LSTM and gated recurrent units (GRU), plus "skip connections" in residual neural networks. For efficiency these are normally processed in groups per layer with matrix processing functions, but you can also view them as per-neuron complexities.

I am not sure of this- why would complex individual components make the network less effective?

If the complexity was not used, or not really needed, in some of the units, then it would be wasted capacity. In a biological system, this might correspond to maintaining cells larger than they needed to be. In an artificial system, it would mean using memory and CPU to calculate interim values that were not needed for the task at hand.

",1847,,,,,11/1/2020 10:16,,,,0,,,,CC BY-SA 4.0 24360,1,,,11/1/2020 10:33,,1,655,"

The target network in DQN is known to make training more stable, and the loss is like "how good am I now compared to the target". What I don't understand is: if the target network is the stable one, why do we keep using/saving the first model as the predictor instead of the target?

I see in the code everywhere:

  • Model
  • Target model
  • Train model
  • Copy to target
  • Get loss between them

At the end, the model is saved and used for prediction and not the target.

",41979,,2444,,11/1/2020 10:51,11/1/2020 12:00,Why not use the target network in DQN as the predictor after training,,2,0,,,,CC BY-SA 4.0 24361,1,24366,,11/1/2020 11:21,,2,115,"

I would like to create an AI for the 1 player version of the card game called "The Game" by Steffen Benndorf (rules here: https://nsv.de/wp-content/uploads/2018/05/the-game-english.pdf).

The game works with four rows of cards. Two rows are in ascending order (numbers 1–99), and two rows are in descending order (numbers 100–2). The goal is to lay as many cards as possible, all 98 if possible, in four rows of cards. The player can have a maximum of 8 cards in his hand and has to play at least 2 cards before drawing again. He can only play a greater value on an ascending row and a smaller value on a descending row with one single exception that lets him play in the reverse order: whenever the value of the number card is exactly 10 higher or lower.

I already implemented a very simple hard-coded AI that just picks the card with the smallest difference and prioritizes a +10/-10 play when possible. With some optimizations, I can get the AI to score 20 points (the number of cards left) on average, which is decent (less than 10 points is an excellent score), but I'm stuck there and I would like to go further.

As there is randomness because of the draw pile, I was wondering if it was possible to implement a robust and not hard-coded AI to play this game.

Currently, my AI is playing piecemeal with a very simple heuristic. I do not see how to improve this heuristic, so I am wondering if it is possible to improve the performance by having a view over several turns for example. But I don't see how to simulate the next rounds since they will depend on the cards drawn.

",41980,,2444,,12/21/2021 15:18,12/21/2021 15:18,"How can I improve the performance of my approach to solving a 1-player version of the card game ""The Game"" by Steffen Benndorf?",,1,2,,,,CC BY-SA 4.0 24362,2,,6579,11/1/2020 11:29,,0,,"

What is it?

An experience replay (ER) buffer is an array/list (or buffer) $D = [e_1, \dots, e_N ]$ where you store the transitions that the agent collects while interacting with the environment. These transitions are usually represented as tuples of the form $e_t = (s_t, a_t, r_t, s_{t+1})$, where

  • $s_t$ is the state of the agent at time step $t$,
  • $a_t$ is the action taken by the agent when in state $s_t$,
  • $r_t$ is the reward received from the environment after having taken action $a_t$ in $s_t$
  • $s_{t+1}$ is the next state the agent ended up in after that action

The agent then samples (e.g. uniformly) transitions from this ER buffer $D$ to perform an update of the value function $\hat{q}(s, a)$. The ER buffer $D$ can thus be thought of as a dataset.
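
A minimal sketch of such a buffer (independent of any particular DQN implementation; the class and method names are just illustrative) could look like this:

import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity):
        # Oldest transitions are discarded automatically once capacity is reached.
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling, as in the original DQN paper.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)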

Why do we need it?

The motivation for experience replay (in the DQN paper that popularized this technique) is that learning becomes more stable. In fact, the authors of the DQN paper write

To alleviate the problems of correlated data and non-stationary distributions, we use an experience replay mechanism [13] which randomly samples previous transitions, and thereby smooths the training distribution over many past behaviors.

or

By using experience replay the behavior distribution is averaged over many of its previous states, smoothing out learning and avoiding oscillations or divergence in the parameters.

",2444,,2444,,11/1/2020 15:16,11/1/2020 15:16,,,,1,,,,CC BY-SA 4.0 24363,2,,24360,11/1/2020 11:42,,1,,"

The learning model and the target model only differ by N steps (typically a few thousand) out of the entire training process. If the process is near complete, they will also be quite similar.

The target model is not inherently more stable in terms of producing "correct" or "better" Q values. Instead it is kept static for a period of time in order to stabilise the temporal difference (TD) target $R_{t+1} + \gamma \text{max}_{a'} \hat{q}(S_{t+1}, a', \theta^{-})$

Due to the copying stage that you listed:

  • If you return the target model, this is identical to returning the learning model from N steps beforehand.

  • Which in turn means that there was no point in doing the last N steps of training. You may as well have returned the learning model at the same point as copying.

Or another way of thinking about it: Returning the learning model at the end of the training process is identical to copying it to the target model as normal, then returning the target model immediately.
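
A minimal sketch of this loop (using a toy PyTorch network; the copy interval and the elided TD update are placeholders) makes the point explicit: the learning model is what you keep, and the target model is just a delayed copy of it.

import copy
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))  # learning model
target_net = copy.deepcopy(q_net)                                     # target model

copy_every = 1000
for step in range(10_000):
    # ... compute the TD target with target_net and update q_net by gradient descent ...
    if step % copy_every == 0:
        target_net.load_state_dict(q_net.state_dict())  # periodic hard copy

# At the end, save/return q_net: returning target_net would just give you
# q_net as it was up to `copy_every` steps ago.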

",1847,,,,,11/1/2020 11:42,,,,2,,,,CC BY-SA 4.0 24364,2,,24360,11/1/2020 11:48,,1,,"

The target network is not more stable. Both networks are the same, in the sense that neither is inherently more stable than the other. The reason for using a target network is that your current network is updated after each step. So, by not using a target network and using just the current network, after each update the predicted Q-values (which form the TD targets) for many states will be modified slightly. So, for a particular state, after each update, the target will be modified, which leads to an unstable target. This happens because, when you update the Q-value function for state S, you may also slightly change the value it predicts for state S' in the next step. So, each update to the Q-value function slightly changes the predicted values for many other states, which makes the targets in all those states slightly unstable, as they change very often and you don't have a clear target in those states.

However, by using a target network, your targets will be more stable (as you don't update your target network at each step; you only update the current network).

So, by using a target network, the updates are more stable, not the individual networks. And, because the target network is just an older version of the current network, it is sensible to use the current network, as it has been trained more than the target network.

",37919,,37919,,11/1/2020 12:00,11/1/2020 12:00,,,,3,,,,CC BY-SA 4.0 24365,2,,24341,11/1/2020 12:47,,0,,"

I have put my initial idea to a test and used a small pretrained CNN (MobileNet) to compute features for reference images and stored the feature vectors in a "database". Query images go through the exact same network and the resulting feature vector is used for nearest neighbor retrieval in the DB.

from glob import glob

import torch
from PIL import Image
from numpy.linalg import norm
from torchvision import transforms
from torchvision.models import mobilenet_v2

model = mobilenet_v2(pretrained=True)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Generate DB
db = {}
dp_paths = glob('db/*.jpg')
for path in dp_paths:
    image = preprocess(Image.open(path)).unsqueeze(0)
    with torch.no_grad():
        output = model(image)
    db[output] = path

# Query
image = preprocess(Image.open('queries/box.jpg')).unsqueeze(0)
with torch.no_grad():
    query = model(image)

# Nearest Neighbor (poor man's version)
min_distance = float('inf')
candidate = None
for k, v in db.items():
    distance = norm(k.numpy() - query.numpy())
    if distance < min_distance:
        min_distance = distance
        candidate = v

print(candidate, min_distance)

At least with my 5 test reference images and several query images it worked without a single failed "classification". However, I am not sure if it will stand up to a larger test...

",41963,,41963,,11/1/2020 13:00,11/1/2020 13:00,,,,0,,,,CC BY-SA 4.0 24366,2,,24361,11/1/2020 13:37,,1,,"

There are a few different ways to improve on your simple heuristic approach, but they mostly resolve to these three things:

  • Find a better heuristic. This could be done by calculating probabilities of results, or running loads of training simulations and somehow tuning the heuristic function.

  • Look-ahead search/planning. There are many possible search algorithms. Most rely on you being able to simulate the impact of future decisions before taking them.

  • Take account of more player knowledge. So far your simple heuristic does not take account of which cards have already been played (thus which values remain to be drawn).

Currently my AI is playing piecemeal with a very simple heuristic. I do not see how to improve this heuristic so I am wondering if it is possible to improve the performance by having a view over several turns for example. But I don't see how to simulate the next rounds since they will depend on the cards drawn.

I think the main conceptual barrier you have to improvements is how to account for the complex behaviour of probabilities for drawing specific useful cards. There are a few ways to do this, but I think the simplest would be some kind of rollout (simulated look-ahead), which might lead to a more sophisticated algorithm such as Monte Carlo Tree Search (MCTS).

Here's how a really simple variant might work:

  1. For each possible choice of play in the game that you are currently looking at:

    1. Simulate the remaining deck (shuffle a copy of the known remaining cards)

    2. Play a simulation (a "rollout") to the end of the game against the simulated deck using a simple heuristic (your current greedy choice version should be good as long as it is fast enough, but even random choices can work). Take note of the final score.

    3. Repeat 1.1 and 1.2 as many times as you can afford to (given allowed decision time). Average the result and save it as a score for the choice of play being considered.

  2. Instead of choosing the next play by your heuristic, choose the one that scores best out of all the simulations.

This statistical mean of samples works in a lot of cases because it avoids the complexity and time-consuming calculations that would be required to make a perfect decision analytically from probability theory. The important things it does in your case are look-ahead planning plus taking account of additional knowledge that the player has about the state of the game.
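
A sketch of steps 1–2 in Python might look like the following. Every game-specific piece (legal_plays, apply_play, remaining_cards, greedy_rollout, final_score) is a placeholder for your own implementation; only the sampling-and-averaging logic is shown.

import random

def choose_play(state, legal_plays, apply_play, remaining_cards,
                greedy_rollout, final_score, n_rollouts=200):
    best_play, best_mean = None, float('-inf')
    for play in legal_plays(state):                       # step 1
        total = 0.0
        for _ in range(n_rollouts):                       # steps 1.1-1.3
            deck = list(remaining_cards(state))
            random.shuffle(deck)                          # simulate one possible future deck
            end_state = greedy_rollout(apply_play(state, play), deck)
            total += final_score(end_state)               # score at end of the rollout
        mean_score = total / n_rollouts
        if mean_score > best_mean:                        # step 2
            best_play, best_mean = play, mean_score
    return best_play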

MCTS is like the above but nested so that the simulations are made from multiple starting points.

In terms of robustness, provided you run enough rollouts per decision to be confident about the mean scores, then it should be OK.

",1847,,1847,,11/1/2020 14:00,11/1/2020 14:00,,,,0,,,,CC BY-SA 4.0 24368,1,,,11/1/2020 13:58,,2,83,"

I'm working on reimplementing the MuZero paper. In the description of the MCTS (page 12), they indicate that a new node with associated state $s$ is to be initialized with $Q(s,a) = 0$, $N(s,a) = 0$ and $P(s,a) = p_a$. From this, I understand that the root node with state $s_0$ will have edges with zero visits each, zero value and policy evaluated on $s_0$ by the prediction network.

So far so good. Then they explain how actions are selected, according to the following equation (also on page 12):

$$a^k = \arg\max_a \left[ Q(s, a) + P(s, a) \cdot \frac{\sqrt{\sum_b N(s, b)}}{1 + N(s, a)} \left( c_1 + \log\left(\frac{\sum_b N(s, b) + c_2 + 1}{c_2}\right) \right) \right]$$

But for the very first action (from the root node) this will give a vector of zeros as argument to the argmax: $Q(s_0,a) = 0$ and $\sum_bN(s_0,b)=0$, so even though $P(s_0,a)$ is not zero, it will be multiplied by a zero weight.

Surely there is a mistake somewhere? Or is it that the very first action is uniformly random?

",10202,,,,,11/28/2020 14:22,How to choose the first action in a Monte Carlo Tree Search?,,1,0,,,,CC BY-SA 4.0 24369,1,,,11/1/2020 15:42,,1,148,"

I am using the default implementations of REINFORCE, DQN and c51 available from the tf.agents repo (links). As you can see, DQN manages to improve performance while REINFORCE seems to suffer from catastrophic forgetting. OTOH, c51 is not able to learn much and performs like a random policy throughout.

The environment looks like this -

  • action = [66, 1]
  • states = [20, 1]
  • max possible state value = 20
  • steps per episode = 20
  • Hidden Layer dimension = (128, 128)
  • learning rate = 0.001 (constant throughout)
  • Epsilon (exploration factor) = 0.2 with decay of 0.05 every 4000 episodes
  • Discount factor = 0.9
  • replay memory size = 10,000

Every episode runs for 20 steps and the rewards are collected for every step.

The actual episode value in the plot is the x-axis value multiplied by 50.

What could be the possible reasons for such performance of c51 and DQN? And, based on the state space, are my hyperparameters correct, or do some of them need more tuning? I will increase the replay memory size to check for catastrophic forgetting, but, other than that, I am not sure how to diagnose other issues.

",41984,,2444,,11/9/2021 23:29,11/9/2021 23:29,"If REINFORCE agent suddenly drops, how do I verify if it's due to catastrophic forgetting?",,0,0,,,,CC BY-SA 4.0 24374,1,,,11/1/2020 20:29,,0,30,"

Say I want to build a detection model that detects the existence of X or NO X.

The only piece of information I have, though, is a high-resolution RGB image, say 100k pixels (width) x ~1000 pixels (height).

Let's also assume I cannot browse the internet to grab more data. I am stuck with this High resolution image. Can I somehow "slice" this image into multiple images and use said images as input data for my CNN?

How would I do so?

",41990,,41990,,11/2/2020 6:53,11/2/2020 6:53,Generating data from a High-Res. RGB image for a CNN,,0,3,,,,CC BY-SA 4.0 24375,1,,,11/1/2020 23:09,,6,1482,"

If we shift the rewards by any constant (which is a type of reward shaping), the optimal state-action value function (and so optimal policy) does not change. The proof of this fact can be found here.

If that's the case, then why does a negative reward for every step encourage the agent to quickly reach the goal (which is a specific type of behavior/policy), given that such a reward function has the same optimal policy as the shifted reward function where all rewards are positive (or non-negative)?

More precisely, let $s^*$ be the goal state, then consider the following reward function

$$ r_1(s, a)= \begin{cases} -1, & \text{ if } s \neq s^*\\ 0, & \text{ otherwise} \end{cases} $$

This reward function $r_1$ is supposed to encourage the agent to reach $s^*$ as quickly as possible, so as to avoid being penalized.

Let us now define a second reward function as follows

\begin{align} r_2(s, a) &\triangleq r_1(s, a) + 1\\ &= \begin{cases} 0, & \text{ if } s \neq s^*\\ 1, & \text{ otherwise} \end{cases} \end{align}

This reward function has the same optimal policy as $r_1$, but does not incentivize the agent to reach $s^*$ as quickly as possible, given that the agent does not get penalized for every step. So, in theory, $r_1$ and $r_2$ lead to the same behavior. If that's the case, then why do people say that $r_1$ encourage the agents to reach $s^*$ as quickly as possible? Is there a proof that shows that $r_1$ encourages a different type of behaviour than $r_2$ (and how is that even possible given what I have just said)?

",2444,,2444,,11/1/2020 23:21,12/5/2020 20:01,Why does a negative reward for every step really encourage the agent to reach the goal as quickly as possible?,,1,5,,,,CC BY-SA 4.0 24379,2,,24349,11/2/2020 10:48,,3,,"

James Ryan has done a lot of 'archaeological' work on this; you can find references to his work on his website.

Story generation has been a dream for a long time (in computing terms), and various genres have been explored, with not that much success. There have been episodes of a Western written by a computer (and actually filmed and acted out by human actors, see summary here), and various books, but the technology is nowhere near good enough to produce something worthwhile reading or watching without heavy editing.

So far it's only good for curiosity value.

There is NaNoGenMo, where since 2013 people work on programs generating novels. But most of them — again — are more interesting for curiosity. They either take an existing work and modify it procedurally, or generate random templated text (e.g. travel reports through a fictitious, auto-generated landscape). I don't think anyone has met the target of creating a 50k-word novel by computer yet.

Modern approaches with deep learning generators can produce reams of more or less well-formed text, but that doesn't make a compelling story. It's the meaning that is currently the problem, and as AI researchers keep finding out, it's hard.

",2193,,2193,,11/2/2020 12:19,11/2/2020 12:19,,,,0,,,,CC BY-SA 4.0 24382,2,,24296,11/2/2020 15:00,,1,,"

The problem was due to the following issues in my implementation:

  • The offspring generated in the crossover was not mutated (!)
  • The mutations did not occur with the expected frequencies (too few links and weight mutations)
  • The sigmoid activation had to be steepened

Another thing that previously caused issues was the network.activate function. Make sure that you wait for the network to stabilize when doing classification tasks, so all signals have time to propagate through the network.

",41343,,,,,11/2/2020 15:00,,,,0,,,,CC BY-SA 4.0 24383,2,,11000,11/2/2020 15:12,,1,,"

Especially for problems related to PDEs, you can find a relatively new article that uses a new approach to solve complex problems and improves on the performance of the classical approaches.

Examples arise in molecular dynamics, micro-mechanics, and turbulent flows. The paper is called Fourier Neural Operator for Parametric PDEs (pdf, site).

You can find a more detailed view of this new concept here, and more in the paper's references.

",41965,,41965,,11/2/2020 15:19,11/2/2020 15:19,,,,1,,,,CC BY-SA 4.0 24384,2,,24332,11/2/2020 15:25,,0,,"

I've found the answer in the original BERT GitHub repo:

***** New May 31st, 2019: Whole Word Masking Models *****

This is a release of several new models which were the result of an improvement the pre-processing code.

In the original pre-processing code, we randomly select WordPiece tokens to mask. For example:

Input Text: the man jumped up , put his basket on phil ##am ##mon ' s head
Original Masked Input: [MASK] man [MASK] up , put his [MASK] on phil [MASK] ##mon ' s head

The new technique is called Whole Word Masking. In this case, we always mask all of the tokens corresponding to a word at once. The overall masking rate remains the same.

Whole Word Masked Input: the man [MASK] up , put his basket on [MASK] [MASK] [MASK] ' s head

The training is identical -- we still predict each masked WordPiece token independently. The improvement comes from the fact that the original prediction task was too 'easy' for words that had been split into multiple WordPieces.

This can be enabled during data generation by passing the flag

--do_whole_word_mask=True  

to create_pretraining_data.py.

",25676,,,,,11/2/2020 15:25,,,,0,,,,CC BY-SA 4.0 24387,1,,,11/2/2020 20:53,,3,137,"

I've read that decision trees are able to solve the XOR operation, so I conclude that the XGBoost algorithm can solve it as well.

But my tests on the datasets (datasets that should be highly "xor-ish") do not produce good results, so I wanted to ask whether XGBoost is able to solve this type of problem at all, or maybe I should use a different algorithm like ANN?

EDIT: I found a similar question with a negative answer here.

Could someone please confirm whether XGBoost cannot learn the XOR operation due to its "greedy approach", and whether this can maybe be changed via its parameters?

",22659,,32410,,4/26/2021 16:31,4/26/2021 16:31,Can XGBoost solve XOR problem?,,0,0,,,,CC BY-SA 4.0 24390,1,,,11/3/2020 2:18,,1,92,"

As I read online, the following areas of mathematics come into play in ML research:

  • Linear Algebra
  • Calculus
  • Differential Equations
  • Probability
  • Statistics
  • Discrete Mathematics
  • Optimization
  • Analytic Geometry
  • Topology
  • Numerical and Real Analysis

Can/are any other areas of math used in ML research? If so, which other areas (e.g. number theory)?

",42015,,18758,,5/5/2022 6:23,5/5/2022 6:23,Can any area of math come into play in Machine Learning Research?,,0,3,,,,CC BY-SA 4.0 24391,1,24401,,11/3/2020 4:45,,2,3890,"

When training machine learning models (e.g. neural networks) with stochastic gradient descent, it is common practice to (uniformly) shuffle the training data into batches/sets of different samples from different classes. Should we also shuffle the test dataset?

",32621,,2444,,11/3/2020 11:08,11/3/2020 13:57,Should we also shuffle the test dataset when training with SGD?,,1,1,,,,CC BY-SA 4.0 24397,1,,,11/3/2020 9:01,,1,57,"

Consider equation 4.57 (p. 108) from section 4.6 of the book Machine Learning: An Algorithmic Perspective, where the derivative of the softmax function is explained:

$$\delta_o(\kappa) = (y_\kappa - t_\kappa)y_\kappa(\delta_{\kappa K} - y_K),$$

which is derived from equation 4.55 (p. 107)

$$y_{\kappa}(1 - y_{\kappa}),$$

which is to compute the diagonal of the Jacobian, and equation 4.56 (p. 107)

$$-y_{\kappa}y_K$$

In the book, it is not explained how they go from 4.55 and 4.56 to 4.57, it is just given, but I cannot follow how it is derived.

Moreover, in equation 4.57, the Kronecker delta function is used, but how would one handle the cases $i=j$ and $i \neq j$? Must we then have some for loop? Does having an $i$ and a $j$ imply we need a nested for loop?

Also, I have tried to just compute the derivative of the softmax according to the $i=j$ case only, and my model was faster (since we're not computing the Jacobian) and accurate, but this assumes the error function is logarithmic, whereas I would like to code the general case.

",41915,,41915,,11/4/2020 7:16,11/4/2020 7:16,"How am I supposed to code equation 4.57 from the book ""Machine Learning: An Algorithmic Perspective""?",,0,1,,,,CC BY-SA 4.0 24400,1,,,11/3/2020 11:19,,1,109,"

For my school project, I have to develop an agent to play my game.

The base I have is a 'GameManager' which calls 2 AIs, each making a random move.

To make my AI perform well, I decided to implement a deep RL algorithm.

Here is how I've designed my solution.

1st: the board is an 8x8 board, making 112 possible lines to draw. 2nd: on each decision, my Agent has to choose 1 line among the remaining ones. 3rd: each decision the Agent takes is one of the 112 possibilities.

I read some code on the internet; the most relevant for me was a 'CartPole' example, where a cart has to be slid to prevent a pole from falling.

I made an architecture which works like this: a game is simulated; the board is clean, making all 112 possibilities available. Our Agent is queried by the GameManager to make a move, passing it the actual state of the game (the state shape is a 112x1 vector of Boolean values, where 1 means a line can be drawn and 0 means there is already a line at this position; the action shape is a 112x1 vector of Boolean values, where all values are set to 'False' except the line we want to draw). So, our Agent returns its move decision.

Each time our agent performs a move, I store the initial state, the action we take, the reward we get for performing the action, the state we reach, and a boolean to know whether the game is done or not.

The rewards I chose are: +1 if our action makes us close a box, -1 if our action lets the opponent close a box, +10 if our action makes us win the game, -10 if our action makes us lose the game.

The point is, this is my first deep learning project and I'm not sure about the mechanism I'm using. When I launch a simulation, the neural network runs, but the moves it makes do not seem to get better and better.

Here is the code I've written:

Here is the gameManager code:

while True:
hasMadeABox = False
gameIsEnd = False
rewardFCB = 0
doneFCB = False


if GRAPHIC_MODE:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            sys.exit()
    disp_board()

if playerTurns=="1":
    stateFCB = possibilitiesToBoolList(possible_moves[0])

boolArrayPossibleMoves = possibilitiesToBoolList(possible_moves[0])

theAction = players[playerTurns].play(boxes, possible_moves[0], boolArrayPossibleMoves, False)
#print(possible_moves[0])
#print(theAction)
if playerTurns =="1":
    actionFCB = theAction

if playerTurns=="1":
    is_box = move(True, theAction)

elif playerTurns=="2":
    is_box = move(False, theAction)

if is_box:
    if playerTurns =="1":
        #rewardFCB = 1
        rewardFCB = 1
        pass
    else:
        rewardFCB = -1
    hasMadeABox = True

if check_complete():
    gameIsEnd = True
    rewardFCB += 10 if score[0]>score[1] else -10 # should losing give a zero or a negative reward?
    queueOfLastGame.pop(0)

    # quick hack to display the win rate
    isWin = 1 if score[0]>score[1] else -1
    queueOfLastGame.append(isWin)
    if queueOfLastGame.count(-1)+queueOfLastGame.count(1) > 0:
        print(queueOfLastGame.count(1)/(queueOfLastGame.count(-1)+queueOfLastGame.count(1)) * 100 , " % Winrate")

    doneFCB = True


if playerTurns=="1" and hasMadeABox:
    # if our AI has just completed a box,
    # we directly know the state that follows
    nextStateFCB = possibilitiesToBoolList(possible_moves[0])

if playerTurns=="2":
    nextStateFCB = possibilitiesToBoolList(possible_moves[0])


if nextStateFCB is not None:
    bufferSARS.append([stateFCB, actionFCB, rewardFCB, nextStateFCB, doneFCB])
    #ai_player_1.remember(stateFCB, actionFCB, rewardFCB, nextStateFCB, doneFCB)
    rewardFCB = 0
    nextStateFCB = None

if gameIsEnd:
    flushBufferSARS()
    reset()
    continue

#switch user to play if game is not end
if not hasMadeABox:
    playerTurns="1" if playerTurns == "2" else "2"

And here's my code about the Agent:

class Agent:
def __init__(self, name, possibleActions, stateSize, actionSize, isHuman=False, alpha=0.001, alphaDecay=0.01, batchSize=2048, learningRate=0.1, epsilon= 0.9, gamma = 0.996, hasToTrain=True):

    self._memory = deque(maxlen=100000)
    self._actualEpisode=1
    self._episodes=7000
    self._name=name
    self._possibleAction=possibleActions
    self._isHuman=isHuman
    self._epsilon=epsilon
    self._epsilonDecay = 0.99
    self._epsilonMin = 0.05
    self._gamma=gamma
    self._stateSize=stateSize
    self._actionSize=actionSize
    self._alpha=alpha
    self._alphaDecay=alphaDecay
    self._hasToTrain=hasToTrain
    self._batchSize=batchSize

    self._totalIllegalMove = 0
    self._totalLegalMove = 0

    self._path = "./modelWeightSave/"

    self._model = self._buildModel()


def save_model(self):
    self._model.save(self._path)

def getName(self):
    return self._name

def _buildModel(self):
    model = Sequential()

    model.add(Dense(128, input_dim=self._stateSize, activation='relu'))
    model.add(Dense(256, kernel_initializer='normal', activation='relu'))
    model.add(Dense(256, kernel_initializer='normal', activation='relu'))
    model.add(Dense(256, kernel_initializer='normal', activation='relu'))
    model.add(Dense(128, kernel_initializer='normal', activation='relu'))
    model.add(Dense(self._actionSize, kernel_initializer='normal', activation='relu'))

    model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=self._alpha), metrics=['accuracy'])
    #if os.path.isfile(self._path):
        #model.load_weights(self._path)
    return model

def act(self, UNUSED_state, stateAsBool):
    playableIndexes = []
    for i in range(len(stateAsBool[0])):
        if stateAsBool[0][i] == 1:
            playableIndexes.append(i)
    indexForRand = playableIndexes[random.randint(0, len(playableIndexes) - 1)]

    if np.random.random() <= self._epsilon:
        action= [0]*self._actionSize
        action[indexForRand]=1

    else:
        arrayState = np.array(stateAsBool)

        action = self._model.predict(arrayState)
        # Set the index of the max expected value to 1; we play this line.
        tmp=[0]*self._actionSize
        tmp[np.argmax(action)] = 1
        action = tmp

        isLegalMove = True
        if sum(action) != 1:
            isLegalMove = False
        for i in range(len(action)):
            if action[i] == 1:
                if stateAsBool[0][i] == 0:
                    isLegalMove = False
                    break

        if isLegalMove:
            pass
            #print("Legal move")
        else:
            #print("Illegal move")
            #AI try to play on an already draw line, we choose a random line in remainings
            self._totalIllegalMove+=1
            action = [0] * self._actionSize
            action[indexForRand] = 1

    #print("My AI took action : ",action)
    return action

def remember(self, state, action, reward, nextState, done):
    self._memory.append((state.copy(), action, reward, nextState, done))

    self._actualEpisode+=1
    if self._actualEpisode > self._episodes:
        self._actualEpisode = 0
        self.replay(self._batchSize)

def replay(self, batchSize):
    x_batch, y_batch = [], []
    minibatch = random.sample(self._memory, min(len(self._memory), self._batchSize))
    for state, action, reward, next_state, done in minibatch:
        actionIndex = np.argmax(action)
        y_target = self._model.predict(state)
        y_target[0][actionIndex] = reward if done else reward + self._gamma * np.max(self._model.predict(next_state)[0])
        x_batch.append(state[0])
        y_batch.append(y_target[0])
    self._model.fit(np.array(x_batch), np.array(y_batch),epochs=10, batch_size=len(x_batch), verbose=1)

    if self._epsilon > self._epsilonMin:
        self._epsilon *= self._epsilonDecay

    self.save_model()

def play(self, board, state, statesAsBool, player):
    actionTaken= self.act(state, statesAsBool)
    return actionTaken

def callBackOnPreviousMove(self, state, action, reward, nextState, done):
    self.remember(state, action, reward, nextState, done)

Example of the output I get during the fit method:

Epoch 1/10 

1/1 [==============================] - 0s 0s/step - loss: 109.9612 - accuracy: 0.8867
 
Epoch 2/10 

1/1 [==============================] - 0s 998us/step - loss: 109.9467 - accuracy: 0.8867 

Epoch 3/10 

1/1 [==============================] - 0s 0s/step - loss: 109.9456 - accuracy: 0.8867 

Epoch 4/10 

1/1 [==============================] - 0s 0s/step - loss: 109.9332 - accuracy: 0.8867 

Epoch 5/10 

1/1 [==============================] - 0s 998us/step - loss: 109.9339 - accuracy: 0.8867 

Epoch 6/10 

1/1 [==============================] - 0s 0s/step - loss: 109.9337 - accuracy: 0.8867 

Epoch 7/10 

1/1 [==============================] - 0s 997us/step - loss: 109.9305 - accuracy: 0.8867 

Epoch 8/10 

1/1 [==============================] - 0s 0s/step - loss: 109.9314 - accuracy: 0.8867 

Epoch 9/10 

1/1 [==============================] - 0s 0s/step - loss: 109.9306 - accuracy: 0.8867 

Epoch 10/10

1/1 [==============================] - 0s 0s/step - loss: 109.9301 - accuracy: 0.8867

My questions are:

  1. Is my architecture good (inputs = [0,0,1,1,0,0,1,0.....,1,0] (112x1 shape) to represent the state, and
    output = [0,0,0,0,0,0,0,0,0,1,0,0,0...0,0,0,0] (112x1 shape with only one '1') ) to represent an action ?

  2. How can I nicely choose the architecture of the neural network model (self._model)? (I only know the basics of neural networks, so I don't really know all the activation functions, how to design the hidden layers, how to choose a loss...)

  3. To train my NN, is it good to call the 'fit' function with (state, action) as parameter to make it learn?

  4. Is there something really important I am forgetting in my design to make it work?

",42025,,42025,,11/4/2020 18:12,11/4/2020 18:12,How to design my Neural Network for Game AI,,0,2,,,,CC BY-SA 4.0 24401,2,,24391,11/3/2020 12:02,,5,,"

Short answer

Shuffling affects learning (i.e. the updates of the parameters of the model), but, during testing or validation, you are not learning. So, it should not make any difference whether you shuffle or not the test or validation data (unless you are computing some metric that depends on the order of the samples), given that you will not be computing any gradient, but just the loss or some metric/measure like the accuracy, which is not sensitive to the order or the samples you use to compute it. However, the specific samples that you use affects the computation of the loss and these quality metrics. So, how you split your original data into training, validation and test datasets affects the computation of the loss and metrics during validation and testing.

Long answer

Let me describe how gradient descent (GD) and stochastic gradient descent (SGD) are used to train machine learning models and, in particular, neural networks.

Gradient descent (GD)

When training ML models with GD, you have a loss (aka cost) function $L(\theta; D)$ (e.g. the cross-entropy or mean squared error) that you are trying to minimize, where $\theta \in \mathbb{R}^m$ is a vector of parameters of your model and $D$ is your labeled training dataset.

To minimize this function using GD, you compute the gradient of your loss function $L(\theta; D)$ with respect to the parameters of your model $\theta$ given the training samples. Let's denote this gradient by $\nabla_\theta L(\theta; D) \in \mathbb{R}^m$. Then we perform a step of gradient descent

$$ \theta \leftarrow \theta - \alpha \nabla_\theta L(\theta; D) \label{1}\tag{1} $$

Stochastic gradient descent (SGD)

You can also minimize $L$ using stochastic gradient descent, i.e. you compute an approximate (or stochastic) version of $ \nabla_\theta L(\theta; D)$, which we can denote as $\tilde{\nabla}_\theta L(\theta; B) \approx \nabla_\theta L(\theta; D)$, which is typically computed with a subset of $B$ of your training dataset $D$, i.e. $B \subset D$ and $|B| < |D|$. The step of SGD is exactly the same as the step of GD, but we use $\tilde{\nabla}_\theta L(\theta; B)$

$$ \theta \leftarrow \theta - \alpha \tilde{\nabla}_\theta L(\theta; B) \label{2}\tag{2} $$ If we split $D$ into $k$ subsets (or batches) $B_i$, for $i=1, \dots, k$ (and these subsets usually have the same size, i.e. $|B_i| = |B_j|, \forall i$, apart from one of them, which may contain fewer samples), then the SGD step needs to be performed $k$ times, in order to go through all training samples.

Sampling, shuffling, and convergence

Given that $\tilde{\nabla}_\theta L(\theta; B_i) \approx \nabla_\theta L(\theta; D), \forall i$, it should be clear that the way you split the samples into batches can affect learning (i.e. the updates of the parameters).

For instance, you could consider your dataset $D$ as an ordered sequence/list, and just split it into $k$ sub-sequences. Without shuffling this ordered sequence before splitting, you will always get the same batches, which means that, if there's some information associated with the specific ordering of this sequence, then it may bias the learning process. That's one of the reasons why you may want to shuffle the data.
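
As a minimal sketch (plain NumPy; a framework's data loader does essentially the same thing), shuffling before splitting into batches just means permuting the indices once per epoch:

import numpy as np

X = np.random.randn(1000, 10)                 # toy inputs
y = np.random.randint(0, 2, size=1000)        # toy labels
batch_size = 32

perm = np.random.permutation(len(X))          # shuffle once per epoch
X_shuffled, y_shuffled = X[perm], y[perm]

batches = [(X_shuffled[i:i + batch_size], y_shuffled[i:i + batch_size])
           for i in range(0, len(X_shuffled), batch_size)]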

So, you could uniformly choose samples from $D$ to create your batches $B_i$ (and this is a way of shuffling, in the sense that you will be uniformly building these batches at random), but you can also sample differently and you could also re-use the same samples in different batches (i.e. sampling with replacement). Of course, all these approaches can affect how learning proceeds.

Typically, when analyzing the convergence properties of SGD, you require that your samples are i.i.d. and that the learning rate $\alpha$ satisfies some conditions (the Robbins–Monro conditions). If that's not the case, then SGD may not converge to the correct answer. That's why sampling or shuffling can play an important role in SGD.

Testing and validation

During testing or validation, you are just computing the loss or some metric (like the accuracy) and not a stochastic gradient (i.e. you are not updating the parameters, by definition: you just do it during training). The way you compute the loss or accuracy should not be sensitive to the order of the samples, so shuffling should not affect the computation of the loss or accuracy. For instance, if you use the mean squared error, then you will need to compute

\begin{align} L(\theta; D_\text{test}) &= \operatorname {MSE} \\ &= {\frac {1}{n}}\sum _{i=1}^{n}(f_\theta(x_i)-{\hat {y_{i}}})^{2}\\ &= {\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-{\hat {y_{i}}})^{2} \end{align}

where

  • $f_\theta$ is your ML model
  • $x_i$ is the $i$th input
  • $y_{i}$ is the true label for input $x_i$
  • $\hat {y_{i}}$ is the output of the model
  • $n$ is the number of samples you use to compute the MSE

This is an average, so it doesn't really matter whether you shuffle or not. Of course, it matters which samples you use though!

Further reading

Here you can find some informal answers to the question "Why do we shuffle the training data while training a neural network?". There are other papers that partially answer this and/or other related questions more formally, such as this or this.

",2444,,2444,,11/3/2020 13:57,11/3/2020 13:57,,,,0,,,,CC BY-SA 4.0 24402,1,24403,,11/3/2020 14:18,,-1,156,"

I have read what the loss function is, but I am not sure if I have understood it. For each neuron in the output layer, the loss function is usually equal to the square of the difference between the neuron's output and the result we want. Is that correct?

",42011,,2444,,12/9/2021 9:33,12/9/2021 9:33,What is the definition of a loss function in the context of neural networks?,,1,0,,,,CC BY-SA 4.0 24403,2,,24402,11/3/2020 14:26,,1,,"

A loss function is what helps you "train" your neural network to do what you want it to do. A better way to word it to begin with would be an "objective" function. This function describes what objective you'd like your neural network to fit to (or to be good at).

The loss function that you've described is "squared error", which, as the name suggests, is the squared difference between the expected output and the output from the neural network. This trains the network to match the expected output value.

Other loss (or "objective") functions could train your network to look for different things. For example, training on cross entropy loss helps your network learn certain probabilities. That's why it's usually used for classification, like when you want to determine which digit from 0-9 was fed into your MNIST classifier.

",5240,,,,,11/3/2020 14:26,,,,0,,,,CC BY-SA 4.0 24406,1,,,11/3/2020 15:54,,1,640,"

I am trying to create my own gym environment for the A3C algorithm (one implementation is here). The custom environment is a simple login form for any site. I want to create an environment from an image: the idea is to take a screenshot of the web page and create an environment from this screenshot for the A3C algorithm. I know the docs and the protocol for creating a custom environment, but I don't understand how, exactly, to create an environment based on a screenshot.

If I do so

self.observation_space = gym.spaces.Box(low=0, high=255, shape=(128, 128, 3), dtype=np.uint8)

I get a new pic.
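For context, here is a minimal sketch of what such an environment could look like with the classic gym API (the class name, the discrete action space, and the _screenshot placeholder are all hypothetical; in practice _screenshot would capture and resize the real page):

import gym
import numpy as np

class LoginFormEnv(gym.Env):
    def __init__(self):
        super().__init__()
        self.observation_space = gym.spaces.Box(low=0, high=255, shape=(128, 128, 3), dtype=np.uint8)
        self.action_space = gym.spaces.Discrete(4)  # e.g. click/type actions

    def _screenshot(self):
        # placeholder: capture the web page and resize it to (128, 128, 3)
        return np.zeros((128, 128, 3), dtype=np.uint8)

    def reset(self):
        return self._screenshot()

    def step(self, action):
        # apply the action to the page, then observe the new screenshot
        obs = self._screenshot()
        reward, done, info = 0.0, False, {}
        return obs, reward, done, info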

Here's the algorithm that I am trying to implement (page 39 of the master's thesis Deep Reinforcement Learning in Automated User Interface Testing by Juha Eskonen).

",42038,,2444,,11/4/2020 11:22,11/23/2020 15:24,How do I create a custom gym environment based on an image?,,1,5,,,,CC BY-SA 4.0 24407,1,,,11/3/2020 17:03,,1,180,"

I've been trying to find the optimal number of epochs that I should train my neural network (that I just implemented) for.

The visualizations below show the neural network being run with a variable number of epochs. It is quite obvious that the accuracy increases with the number of epochs. However, at 75 epochs, we see a dip before the accuracy continues to rise. What is the cause of this?

",42039,,2444,,11/5/2020 11:28,11/5/2020 11:28,"Why does the accuracy drop while the loss decrease, as the number of epochs increases?",,1,1,,,,CC BY-SA 4.0 24408,2,,24407,11/3/2020 17:55,,1,,"

A decrease in loss does not necessarily lead to an increase in accuracy (most of the time it does, but sometimes it may not). To see why, you can have a look at this question. The network cares about decreasing the loss; it does not care about the accuracy at all. So it's no surprise to see what you presented.

Additional note: If you use mini-batch approaches to train your network, or if you choose a large step size, you may also see the loss increase at times; in the case of mini-batching, the hope is that the overall trend is still downward.
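To see how this can happen concretely, here is a tiny made-up example where the average cross-entropy decreases while the accuracy also decreases:

import math

def avg_ce(probs_true_class):
    # mean cross-entropy, given the probability assigned to the true class
    return -sum(math.log(p) for p in probs_true_class) / len(probs_true_class)

def accuracy(probs_true_class):
    # a sample counts as correct if the true class gets probability > 0.5
    return sum(p > 0.5 for p in probs_true_class) / len(probs_true_class)

epoch_a = [0.6, 0.6, 0.51]    # modestly confident, all three correct
epoch_b = [0.99, 0.99, 0.49]  # very confident on two, slightly wrong on one

print(avg_ce(epoch_a), accuracy(epoch_a))  # ~0.57 loss, 100% accuracy
print(avg_ce(epoch_b), accuracy(epoch_b))  # ~0.24 loss, ~67% accuracy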

",41547,,,,,11/3/2020 17:55,,,,0,,,,CC BY-SA 4.0 24409,1,,,11/3/2020 22:44,,4,318,"

I know that several tokenization methods are used for transformer models, like WordPiece for BERT and BPE for RoBERTa and others. What I was wondering is whether there is also a transformer that uses a tokenization method similar to the embeddings used in the fastText library, i.e. where the word(piece) embeddings are based on the summation of the embeddings of the n-grams the words are made of.

To me it seems weird that this way of creating word(piece) embeddings that can function as the input of a transformer isn't used in these new transformer architectures. Is there a reason why this hasn't been tried yet? Or is this question just a result of my inability to find the right papers/repos?
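For reference, here is a rough sketch of the fastText idea I am referring to (the hashing scheme and table size are simplified; real fastText uses FNV hashing and also adds a vector for the whole word):

import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    token = f"<{word}>"  # fastText-style word boundary markers
    return [token[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(token) - n + 1)]

rng = np.random.default_rng(0)
bucket_count, dim = 10_000, 100
table = rng.normal(size=(bucket_count, dim)).astype(np.float32)  # hashed n-gram embeddings

def word_vector(word):
    # the word vector is the sum of the embeddings of its character n-grams
    ids = [hash(g) % bucket_count for g in char_ngrams(word)]
    return table[ids].sum(axis=0)

print(word_vector("where").shape)  # (100,)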

",42045,,,,,9/12/2021 15:02,Is there a pretrained (NLP) transformer that uses subword n-gram embeddings for tokenization like fasttext?,,1,0,,,,CC BY-SA 4.0 24410,1,,,11/4/2020 0:01,,0,92,"

I'm trying to do some research about semantic segmentation for webpages, in particular e-commerce webpages. I found some articles which provide solutions based on very old datasets, and those solutions, in my opinion, can't be effective for modern websites, in particular e-commerce ones. I would like to semantically infer the images' bounding boxes, text, price, etc.

Another problem is related to the size of webpage screenshots, which are huge. I resized them to 1024x512, but I think I can't resize the image any more, otherwise I lose quality.

I built a very complex neural network in order to semantically infer text, images and background (not classification, just segmentation), and the results are not so bad, but they are far from my expectations, which seems strange to me, as we have many DNNs able to do semantic segmentation of roads, buildings, cars, etc. One problem is for sure the lack of a dataset with detailed labels. I didn't find any dataset that satisfies my requirements.

QUESTION: Any ideas to help the network better learn the structure of a webpage from just a screenshot?

My DNN is essentially built as an auto-encoder architecture based on SegNet, with some modifications (skip connections, unpooling, etc.); I think it is a good network.

references: https://clgiles.ist.psu.edu/pubs/CVPR2017-connets.pdf https://link.springer.com/chapter/10.1007/978-981-13-0020-2_33

",32694,,32694,,11/4/2020 12:50,11/4/2020 12:50,Is Webpage Semantic Segmentation possible nowadays?,,0,4,,,,CC BY-SA 4.0 24411,2,,8190,11/4/2020 7:59,,4,,"

Warren McCulloch and Walter Pitts talk about recurrent neural nets in their paper McCulloch, W.S., Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics 5, 115–133 (1943). https://doi.org/10.1007/BF02478259.

They finish their introduction with the paragraph:

The nervous system contains many circular paths, whose activity so regenerates the excitation of any participant neuron that reference to time past becomes indefinite, although it still implies that afferent activity has realized one of a certain class of configurations over time. Precise specification of these implications by means of recursive functions, and determination of those that can be embodied in the activity of nervous nets, completes the theory.

Their paper contains a section titled:

  "The Theory: Nets Without Circles"

in which they introduce feed-forward (nets without cycles) and recurrent (nets with cycles) networks, and the next section, titled

  "The Theory: Nets with Circles"

in which they prove a few theorems about recurrent neural networks.

Marvin Minsky quotes them, and discusses recurrent neural networks extensively throughout his book, Computation: Finite and Infinite Machines (1967). Prentice Hall, ISBN: 0131655639,9780131655638

I am not sure whether there are earlier references.

",15524,,,,,11/4/2020 7:59,,,,0,,,,CC BY-SA 4.0 24413,1,24415,,11/4/2020 8:19,,0,112,"

The theory of evolution seems to be intelligent as it creates life

The mechanism of evolutionary theory consists of mutation, recombination, and natural selection like a genetic algorithm.

Isn't this evolutionary mechanism itself the same as the essence of human intelligence?

",23500,,1847,,11/4/2020 9:16,11/4/2020 12:26,Isn't evolutionary theory the essence of intelligence after all?,,1,3,,12/15/2021 10:22,,CC BY-SA 4.0 24415,2,,24413,11/4/2020 10:50,,0,,"

The theory of evolution seems to be intelligent as it creates life

When you say "seems to be intelligent" that begs the question: How are you defining "intelligent"? Which of course is still one of the big issues in AI research.

I think there are some flaws with the argument that "creates life" = "intelligent":

  • Evolution does not create life. It operates on entities where there is a copy mechanism which is not 100% reliable, plus a selective environment that also impacts likelihood of further copies being made. Some form of proto-life (an initial Darwinian ancestor or Ida) needed to exist before evolution started.

  • The process of creating the first proto-life capable of undergoing evolution is generally thought to be a large semi-random search through chemical combinations. Random search is sometimes used in optimisation problems, and might be studied as part of AI search topics. However, it would normally be considered something of a baseline algorithm, and definitely not tick the boxes for all the general traits of intelligence.

Isn't this evolutionary mechanism itself the same as the essence of human intelligence?

The Wikipedia article on artificial intelligence lists challenges faced by researchers and developers in AI. The categories chosen there are:

Reasoning, problem solving; Knowledge representation; Planning; Learning; Natural language processing; Perception; Motion and manipulation; Social intelligence; General intelligence

Together, these are mainly traits of mammalian, avian and a few other multicellular species, with a few traits such as language heavily focused on humans.

I think it is important to separate out the mechanism whereby these traits arose naturally - which is generally agreed to be via an evolutionary process - from how those traits function. Artifical intelligence may use a little bit of reverse engineering from the natural traits in order to inspire design, but most AI systems do not use theory of evolution directly.

When used directly, evolutionary algorithms can be used to solve search and optimisation problems. Also they can be used to solve simplified problems in perception and motion/manipulation. However, we are not able to scale up such algorithms to solve all aspects of general intelligence. Instead, systems like machine learning are designed to work from analysis of the problem, inspired in part by working natural systems. These work far more efficiently than evolutionary algorithms. There are no competitive evolutionary variants of AlphaZero, Watson, GPT-3 or neural-networks used in image processing.

Evolutionary algorithms have their place in AI in practice and research. However, they do not define or encapsulate a form of general intelligence.

",1847,,1847,,11/4/2020 12:26,11/4/2020 12:26,,,,3,,,,CC BY-SA 4.0 24416,1,,,11/4/2020 12:49,,1,92,"

I have to write the formalization of the loss function of my network, built following the WGAN-GP model. The discriminator takes 3 consecutive images as input (such as 3 consecutive frames of a video) and must evaluate if the intermediate image is a possible image between the first and the third.

I thought of something like this, but is it correct to identify x1, x2 and x3 as coming from Pr even if they are 3 consecutive images? Only the first is chosen randomly; the others are simply the next two.

EDIT:

EDIT 2:

I replaced Pr with p_r(x1, x3) and p_r(x1, x2, x3) to reinforce the fact that x2 and x3 are taken after x1, so they depend on the choice of x1. Is it more correct this way?

",40372,,2444,,1/25/2021 18:56,1/25/2021 18:56,WGAN-GP Loss formalization,,0,3,,,,CC BY-SA 4.0 24417,1,,,11/4/2020 17:16,,0,19,"

I have two sound datasets and each one has 80% normal and 20% anomalous data points. The first one is a rock song and the second one is a mellow indie song. I use half of the normal data as a baseline in each dataset. I identify anomalies using isolation forest in each dataset and found 25 anomalies in the rock song dataset and 12 in the mellow indie one. Now my question is: how can I classify an anomaly as a rock-song-specific one? Do you think building a simple linear regression classifier would work?

",42060,,,,,11/4/2020 17:16,How to classify anomalies between two sound datasets?,,0,2,,,,CC BY-SA 4.0 24418,1,,,11/5/2020 7:56,,4,1494,"

Variational autoencoders have two components in their loss function. The first component is the reconstruction loss, which for image data, is the pixel-wise difference between the input image and output image. The second component is the Kullback–Leibler divergence which is introduced in order to make image encodings in the latent space more 'smooth'. Here is the loss function:

\begin{align} \text { loss } &= \|x-\hat{x}\|^{2}+\operatorname{KL}\left[N\left(\mu_{x}, \sigma_{x}\right), \mathrm{N}(0,1)\right] \\ &= \|x-\mathrm{d}(z)\|^{2}+\operatorname{KL}\left[N\left(\mu_{x}, \sigma_{x}\right), \mathrm{N}(0,1)\right] \end{align}

I am running some experiments on a dataset of famous artworks using Variational Autoencoders. My question concerns scaling the two components of the loss function in order to manipulate the training procedure to achieve better results.

I present two scenarios. The first scenario does not scale the loss components.

Here you can see the two components of the loss function. Observe that the order of magnitude of the Kullback–Leibler divergence is significantly smaller than that of the reconstruction loss. Also observe that 'my famous' paintings have become unrecognisable. The image shows the reconstructions of the input data.

In the second scenario I have scaled the KL term with 0.1. Now we can see that the reconstructions are looking much better.

Question

  1. Is it mathematically sound to train the network by scaling the components of the loss function? Or am I effectively excluding the KL term in the optimisation?

  2. How to understand this in terms of gradient descent?

  3. Is it fair to say that we are telling the model "we care more about the image reconstructions than 'smoothing' the latent space"?

I am confident that my network design (convolutional layers, latent vector size) has the capacity to learn parameters that create proper reconstructions, since a convolutional autoencoder with the same parameters is able to reconstruct perfectly.

Here is a similar question.

Image Reference: https://towardsdatascience.com/understanding-variational-autoencoders-vaes-f70510919f73

",34180,,2444,,11/6/2020 1:42,8/3/2021 3:04,What is the impact of scaling the KL divergence and reconstruction loss in the VAE objective function?,,1,0,,,,CC BY-SA 4.0 24420,2,,8168,11/5/2020 12:27,,0,,"

People usually say that genetic algorithms are used to solve optimization problems, but when it comes to optimizing a specific function given in an analytic form (i.e. when it comes to finding a maximum or minimum of such a function), it may not be clear how to proceed. I have created a complete but simple implementation and explanation of how to solve this problem here, but let me also describe here the main idea behind the approach. Before that, let's briefly review genetic algorithms (GAs).

Genetic algorithms

Genetic algorithms are composed of

  • a population (i.e. a set) of individuals (also known as chromosomes or genotypes), which represent the solutions to some problem

  • a fitness function that evaluates each individual (i.e. how "good" it is, maybe compared to other individuals, where "good" depends on the problem)

  • genetic operations to stochastically change the individuals in the population: typically, these operations are the mutation and cross-over

  • a method to select individuals for the cross-over (where you combine 2 or more individuals to produce other individuals); the selection is also a genetic operation

How to solve your problem?

To solve any problem with genetic algorithms, you first need to address all the four points above, i.e. define what your individuals (i.e. solutions) are, how to compute the fitness of a solution (i.e. how good it is), and define the specific evolutionary operations (specifically, mutation, cross-over and selection).

In your specific problem, the solutions are $\hat{x} \in \mathbb{R}$, such that

$$f(x)=\frac{-x^{2}}{10}+3 x$$

is a (local or global) maximum, i.e. $f(\hat{x}) \geq f(x)$, for all $x \in \mathbb{R}$ in a neighbourhood of $\hat{x}$.

(Note that this is just the definition of the problem of function maximization: if you are not familiar with it, you should probably get familiar with it before trying to understand this answer or even trying to solve this problem with GAs).

Therefore, in this case, the individuals are real numbers (which are the inputs to $f$).

The fitness function can be a function that computes $f(x_i)$, for all $x_i$ in your population, then compares $f(x_i)$ to $f(x_j)$ for all $i \neq j$. The higher the $f(x_i)$, the closer it is to a maximum.

The genetic operations can be implemented in different ways. You should think about it. If you are familiar with GAs and you know now that solutions are real numbers, at least one way of implementing these genetic operations should come to your mind at this point. Keep in mind that your solutions should be in the range $[0, 32]$, i.e. this is a constrained optimization problem. If you do not have any idea on how to implement them, take a look at my implementation/explanation.
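For illustration only (this is just one simple way these pieces could fit together for this particular function and constraint, not the implementation linked above):

import random

def f(x):
    return -x**2 / 10 + 3 * x

POP_SIZE, GENERATIONS, BOUNDS = 20, 100, (0.0, 32.0)
population = [random.uniform(*BOUNDS) for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # selection: keep the fittest half (the fitness is just f itself)
    population.sort(key=f, reverse=True)
    parents = population[:POP_SIZE // 2]

    children = []
    while len(children) < POP_SIZE - len(parents):
        a, b = random.sample(parents, 2)
        child = (a + b) / 2                # cross-over: average of two parents
        child += random.gauss(0, 0.5)      # mutation: small Gaussian noise
        child = min(max(child, BOUNDS[0]), BOUNDS[1])  # respect the [0, 32] constraint
        children.append(child)

    population = parents + children

best = max(population, key=f)
print(best, f(best))  # should approach x = 15, where f(15) = 22.5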

",2444,,2444,,11/5/2020 14:58,11/5/2020 14:58,,,,0,,,,CC BY-SA 4.0 24421,2,,5716,11/5/2020 13:43,,1,,"

Don't know if you have this doubt anymore, but this would be helpful for those who are facing similar problems-

You will need to find the correct weights with which you add these two losses by hyperparameter search. That is, find the best $\lambda$ for the loss-

$$ L = Loss_1 + \lambda(Loss_2) $$

Here, $Loss_1$ and $Loss_2$ can be any losses. In this case, we take them as the SSIM and L1-regularization losses, respectively. You can keep the gradient contribution of the regularization loss below some percentage of that of $Loss_1$ by choosing the value of the hyperparameter appropriately. Note that by setting this hyperparameter too low, you may even impede its performance (the exact opposite of the case mentioned here). For this specific case, L1 regularization has a constant gradient, equal to the hyperparameter itself. So, by keeping it around 10% of the max gradient (or 10% of the max loss generally works as well), we should not face these kinds of problems.
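As a small PyTorch-style sketch of the combined loss above (the value of lam is hypothetical and should come from the hyperparameter search):

import torch

lam = 0.1  # hyperparameter lambda, to be tuned

def total_loss(loss_1, model):
    # Loss_2 here is L1 regularization over all model parameters
    loss_2 = sum(p.abs().sum() for p in model.parameters())
    return loss_1 + lam * loss_2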

",40843,,,,,11/5/2020 13:43,,,,0,,,,CC BY-SA 4.0 24422,1,,,11/5/2020 15:47,,1,395,"

I came across the concept of "deep learning primitives" from the Nvidia talk Jetson AGX Xavier New Era Autonomous Machines (on slide 44).

There doesn't seem to be a lot of articles in the community on this concept. I was able to find one definition from here, where it defined deep learning primitives as the "fundamental building blocks of deep networks" like fully connected layers, convolutions layers, etc.

I was curious to find out if a self-attention layer is a primitive, I came across this OpenDNN issue and one person explained that self-attention layers can be built by other primitives like inner product, concat, etc.

So my question is what exactly are primitives in deep learning? What makes a convolution layer a primitive and a self-attention layer not a primitive?

",42082,,,,,11/5/2020 15:47,What exactly are deep learning primitives?,,0,2,,,,CC BY-SA 4.0 24423,2,,24418,11/5/2020 17:28,,1,,"

Ans 1.

The motive of variational inference (on which the VAE is based) is to decrease $KL(q(z|x)\,||\,p(z|x))$, i.e. the divergence between our approximate posterior $q(z|x)$ and the true posterior $p(z|x)$ of the hidden variable $z$. After doing some math, we can write this expression as-

$ KL(q(z|x)\,||\,p(z|x)) = log(p(x)) - \Sigma_z q(z|x)log(\frac{p(x,z)}{q(z|x)}) $

For a given x, the first term of the RHS is constant. So we maximise the second term so that the KL divergence goes to zero.

We can write the second term as

$E_{q(z|x)}[log(p(x|z))] - KL(q(z|x)\,||\,p(z))$

(try writing $p(x,z)$ as $p(x|z)\,p(z)$ and then expand. Here, $p(z)$ is the prior distribution of our choice, i.e. a Gaussian distribution). If we further assume that the decoder $p(x|z)$ is a Gaussian with fixed variance, then $log(p(x|z))$ is, up to constants, $-||x-\hat{x}||^2$, where $\hat{x}$ is the (deterministic) decoder output for $z$. So we have-

$ maximize(-||x-\hat{x}||^2 - KL(q(z|x)\,||\,p(z))) $

and we get our loss function.
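As a sketch of how this loss is usually implemented with a weighting factor (PyTorch-style; the names are illustrative, beta = 1 recovers the standard VAE loss, and beta = 0.1 corresponds to the question's second scenario):

import torch
import torch.nn.functional as F

def vae_loss(x, x_hat, mu, log_var, beta=1.0):
    # reconstruction term: squared error between input and reconstruction
    recon = F.mse_loss(x_hat, x, reduction="sum")
    # closed-form KL divergence between N(mu, sigma) and N(0, 1)
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + beta * kl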

We also know that variational autoencoders almost never find the optimal solution, so I am not sure how playing around with the weights would affect it (nor do I know if it makes sense mathematically).

Ans 2.

We can say that the KL divergence has a regularising effect.

This page has some nice experiments which will help you understand what happens to the latent space when you decrease the KL divergence part.

Ans 3.

Yes, you can say that. You are fixing the dimensions, but are lenient on the distribution. In fact, you are approaching autoencoders by doing this.

Separate-

I want to point you towards this article. It explains why we choose to minimise $KL(q(z|x)\,||\,p(z|x))$ instead of $KL(p(z|x)\,||\,q(z|x))$ (the latter is intractable) and what would happen if we choose fewer independent variables for our estimator q(z).

Also, have you tried increasing the dimensions of the latent space? It can also have a 'de-regularizing' effect. It seems that the model is underfitting the data- the reconstruction loss is high with the normal loss, compared to when you decrease the regularizing term.

Hope it helps.

",40843,,40843,,11/5/2020 17:55,11/5/2020 17:55,,,,1,,,,CC BY-SA 4.0 24425,2,,24181,11/5/2020 17:56,,1,,"

Some sources say MCTS (or planning in general) increases the sample efficiency.

If we're thinking purely about experiments run in simulations, then I'd estimate there may be cases where a combination of pure learning + MCTS (or some other form of planning / model-based aspect) may be more efficient, and there may be different cases where only a single one of those techniques on its own may perform better. So then I wouldn't say this is always necessarily true.

Often though, when saying "sample efficiency", we would only count the steps actually taken in a "real" environment, but maybe not count steps taken in a lookahead search or planning algorithm. In pure simulations this may seem like a weird distinction to make, but it's more sensible when you consider that we often use simulations just because they're convenient ways to evaluate and compare new algorithms, but often not the end goal. Often, the end goal would be to apply something in the "real world", for example on a robot or something. In such a situation, collecting pure learning experiences can be very expensive (time-consuming, maybe also risky because the robot may fall over and break, etc.). But you may be able to also provide that robot with a learned model, or simulator, and have the robot use that to also run its own search or planning algorithms on an approximated version of the real world. This is a clear case where performing such is much faster and cheaper and less risky than collecting true experiences in the real world for pure RL.


Assumed the transition model is known and the computational cost of interacting through planning is the same as interacting with the environment, I do not see the difference between playing many games versus playing a single game, but plan at each step.

Jumping back to the case where we actually are working purely in simulation, and where there's no difference in computational cost between steps taken for pure learning vs. steps taken in search/planning, there absolutely are still some differences. If you run search, you use extra time and have temporary extra memory usage (which frees up again after completing your search algorithm) to make one really good decision in the "main" environment. You could view the steps taken in MCTS simulations as a form of learning as well, but they have a different purpose from the steps taken in a pure RL setting; these steps are taken with the sole goal of learning how to act well in the root state. All search time and memory usage is dedicated to that single decision. This can enable smarter decision-making than if you're going purely off of what you learned through pure RL, due to 1) focusing more effort on just a single decision, and 2) not being constrained by the capacity of your learning algorithm to actually learn a good decision (simple function approximators may simply not be capable of accurately representing strong policies, and more complex function approximators will take a huge amount of time to learn). This smarter decision-making thanks to search can in turn also improve the quality of the experience used by your pure RL component.


Finally, since you started out by mentioning AlphaGo Zero, I'd like to emphasise that AlphaGo Zero and similar approaches are typically used in multi-agent adversarial domains (like zero-sum games). Pure RL approaches for such multi-agent settings do exist, but there has been significantly less research towards them than pure RL approaches for single-agent settings. Most of these pure RL approaches are really only applicable to single-agent settings, and can easily have poor performance when applied to these kinds of multi-agent settings. Search algorithms like MCTS on the other hand are very well-established techniques in these adversarial domains like games, and the combination of them with learning approaches appears to allow even for learning approaches to be used which are not explicitly "aware" of the fact that they're operating in such a multi-agent domain.

",1641,,,,,11/5/2020 17:56,,,,2,,,,CC BY-SA 4.0 24428,2,,24375,11/5/2020 18:51,,1,,"

Your examples are equivalent. But it is possible to find a constant yielding a different optimal policy.

Your examples are absolutely equivalent. The agent maximizes the reward, and only way to do so is by reaching $s^*$.

Consider $r_3$ :

$$ r_3(s, a)= \begin{cases} 1, & \text{ if } s \neq s^*\\ 2, & \text{ otherwise} \end{cases} $$

With a sufficiently large $\gamma$, moving infinitely without reaching $s^*$ is now the optimal solution.

For the generic case

$$ r_4(s, a)= \begin{cases} \alpha, & \text{ if } s \neq s^*\\ \beta, & \text{ otherwise} \end{cases} $$

the threshold is found by comparing the discounted sums $\alpha + \gamma\alpha + \gamma^2\alpha + \dots + \gamma^{t_m - 1}\alpha$, where $t_m$ is the maximum episode length, and $\alpha + \gamma\alpha + \dots + \gamma^{t^* - 2}\alpha + \gamma^{t^* - 1}\beta$ (the same sum truncated at the step that reaches the goal, which pays $\beta$ instead of $\alpha$), where $t^*$ is the length of the episode under the fastest policy.

In the example of $r_3$, it is trivial to find cases where the fastest policy isn't optimal. Imagine a race track: the agent starts on the left and gets either $\alpha$ or $\beta$ points, depending on where it is. With $\gamma = 0.9$ and no time limit (infinite episodes), the optimal policy is to move randomly but, in the square next to the goal, avoid entering the goal state. With $\gamma = 0.1$, the optimal policy is to move randomly (not really: there would probably be a slight advantage in moving right) but, in the square next to the goal, enter the goal.
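To make this concrete, a quick back-of-the-envelope check from the square next to the goal, using $r_3$:

$$\underbrace{1 + \gamma + \gamma^2 + \dots}_{\text{keep avoiding } s^*} = \frac{1}{1-\gamma} \qquad \text{vs.} \qquad \underbrace{2}_{\text{enter } s^* \text{ now}}.$$

With $\gamma = 0.9$, the left-hand side is $10 > 2$, so avoiding the goal wins; with $\gamma = 0.1$, it is roughly $1.11 < 2$, so entering the goal wins.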

",7496,,7496,,11/5/2020 19:37,11/5/2020 19:37,,,,3,,,,CC BY-SA 4.0 24429,2,,22627,11/5/2020 20:45,,3,,"

I believe Graph Representation Learning book by William L. Hamilton is a great resource to start

",11303,,2444,,1/15/2021 11:24,1/15/2021 11:24,,,,1,,,,CC BY-SA 4.0 24430,2,,15397,11/5/2020 22:25,,-1,,"

Yes, it's called hypothesis testing but normally you need a little bit more than pure MLE.

",32390,,,,,11/5/2020 22:25,,,,0,,,,CC BY-SA 4.0 24433,2,,22270,11/5/2020 23:40,,1,,"

The authors of the paper Learning Robust Rewards with Adversarial Inverse Reinforcement Learning (2018, published at ICLR), which introduced the inverse RL technique AIRL, argue that GAIL fails to generalize to different environment dynamics. Specifically, in section 7.2 (p. 7), they describe an experiment where they disable and shrink the two front legs of the ant, then, based on the results, they conclude

GAIL learns successfully in the training domain, but does not acquire a representation that is suitable for transfer to test domains.

On the other hand, according to their experiments, AIRL is more robust to changes in the environment's dynamics.

",2444,,2444,,11/5/2020 23:45,11/5/2020 23:45,,,,0,,,,CC BY-SA 4.0 24434,1,24435,,11/6/2020 3:02,,0,217,"

I am training a convolutional neural network to detect objects (weeds amongst crops, in my case) using TensorFlow. The original dimensions of the raw training photos are 4000 x 3000 pixels, which must be resized to become workable. The idea here is to label objects in the training images (using Label-Img), train the model, and use it to detect weeds in certain situations.

According to TensorFlow 2 Detection Model Zoo, there are algorithms designed for different speeds, which involves initially resizing the images to a specified dimension. Although this is not a coding question, here is an example of SSD ResNet-50, which initially resizes the input images to 1024 x 1024 pixels:

model {
  ssd {
    num_classes: 1
    image_resizer {
      fixed_shape_resizer {
        height: 1024
        width: 1024
      }
    }
    feature_extractor {
      type: "ssd_resnet50_v1_fpn_keras"
      depth_multiplier: 1.0
      min_depth: 16
      conv_hyperparams {
        regularizer {
          l2_regularizer {
            weight: 0.00039999998989515007
          }
        }
        initializer {
          truncated_normal_initializer {
            mean: 0.0
            stddev: 0.029999999329447746
          }
        }
        activation: RELU_6
        batch_norm {
          decay: 0.996999979019165
          scale: true
          epsilon: 0.0010000000474974513
        }
      }
      override_base_feature_extractor_hyperparams: true
      fpn {
        min_level: 3
        max_level: 7
      }
    }
    box_coder {
      faster_rcnn_box_coder {
        y_scale: 10.0
        x_scale: 10.0
        height_scale: 5.0
        width_scale: 5.0
      }
    }
    matcher {
      argmax_matcher {
        matched_threshold: 0.5
        unmatched_threshold: 0.5
        ignore_thresholds: false
        negatives_lower_than_unmatched: true
        force_match_for_each_row: true
        use_matmul_gather: true
      }
    }
    similarity_calculator {
      iou_similarity {
      }
    }
    box_predictor {
      weight_shared_convolutional_box_predictor {
        conv_hyperparams {
          regularizer {
            l2_regularizer {
              weight: 0.00039999998989515007
            }
          }
          initializer {
            random_normal_initializer {
              mean: 0.0
              stddev: 0.009999999776482582
            }
          }
          activation: RELU_6
          batch_norm {
            decay: 0.996999979019165
            scale: true
            epsilon: 0.0010000000474974513
          }
        }
        depth: 256
        num_layers_before_predictor: 4
        kernel_size: 3
        class_prediction_bias_init: -4.599999904632568
      }
    }
    anchor_generator {
      multiscale_anchor_generator {
        min_level: 3
        max_level: 7
        anchor_scale: 4.0
        aspect_ratios: 1.0
        aspect_ratios: 2.0
        aspect_ratios: 0.5
        scales_per_octave: 2
      }
    }
    post_processing {
      batch_non_max_suppression {
        score_threshold: 9.99999993922529e-09
        iou_threshold: 0.6000000238418579
        max_detections_per_class: 100
        max_total_detections: 100
        use_static_shapes: false
      }
      score_converter: SIGMOID
    }
    normalize_loss_by_num_matches: true
    loss {
      localization_loss {
        weighted_smooth_l1 {
        }
      }
      classification_loss {
        weighted_sigmoid_focal {
          gamma: 2.0
          alpha: 0.25
        }
      }
      classification_weight: 1.0
      localization_weight: 1.0
    }
    encode_background_as_zeros: true
    normalize_loc_loss_by_codesize: true
    inplace_batchnorm_update: true
    freeze_batchnorm: false
  }
}
train_config {
  batch_size: 64
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    random_crop_image {
      min_object_covered: 0.0
      min_aspect_ratio: 0.75
      max_aspect_ratio: 3.0
      min_area: 0.75
      max_area: 1.0
      overlap_thresh: 0.0
    }
  }
  sync_replicas: true
  optimizer {
    momentum_optimizer {
      learning_rate {
        cosine_decay_learning_rate {
          learning_rate_base: 0.03999999910593033
          total_steps: 100000
          warmup_learning_rate: 0.013333000242710114
          warmup_steps: 2000
        }
      }
      momentum_optimizer_value: 0.8999999761581421
    }
    use_moving_average: false
  }
  fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED"
  num_steps: 100000
  startup_delay_steps: 0.0
  replicas_to_aggregate: 8
  max_number_of_boxes: 100
  unpad_groundtruth_tensors: false
  fine_tune_checkpoint_type: "classification"
  use_bfloat16: true
  fine_tune_checkpoint_version: V2
}
train_input_reader {
  label_map_path: "PATH_TO_BE_CONFIGURED"
  tf_record_input_reader {
    input_path: "PATH_TO_BE_CONFIGURED"
  }
}
eval_config {
  metrics_set: "coco_detection_metrics"
  use_moving_averages: false
}
eval_input_reader {
  label_map_path: "PATH_TO_BE_CONFIGURED"
  shuffle: false
  num_epochs: 1
  tf_record_input_reader {
    input_path: "PATH_TO_BE_CONFIGURED"
  }
}

Because I will be labeling many pictures in the future, I need to decide on a dimension to resize my original ones to (literature review says 1100 x 1100 has been used in previous projects).

If I were to change the image resizer in the code above to 1100 x 1100, for example, would that have any effect on model accuracy/training loss? Would it even run? I'm fairly new to this, so any insights on this would be greatly appreciated!
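For reference, the only block that would need to change for 1100 x 1100 inputs is the resizer at the top of the model definition; everything else can stay as in the example above:

image_resizer {
  fixed_shape_resizer {
    height: 1100
    width: 1100
  }
}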

Note: I am using a NVIDIA GPU, so that helps speed the process quite a bit. Google Colab also can be used.

",32750,,,,,11/6/2020 5:41,Will changing the dimension reduction size of a neural network (i.e. SSD ResNet-50) change the overall outcome and accuracy of the model?,,1,2,,,,CC BY-SA 4.0 24435,2,,24434,11/6/2020 5:41,,1,,"

That depends! You can try it. But if you change the sizes, you have to ensure that you do not mismatch the shapes. As far as size is concerned it won't affect the accuracy much unless you are significantly changing it.

Since the default is 1024x1024 and you are making it 1100x1100, there won't be any issues.

Remember, there is a tradeoff in terms of speed and the amount of information you can derive from the image. The larger the image size, the higher the computation time and the image information you have.

",35791,,,,,11/6/2020 5:41,,,,1,,,,CC BY-SA 4.0 24437,1,24589,,11/6/2020 6:19,,2,156,"

I am working on a problem that involves two tasks - detection and classification. There is no single dataset for both tasks. I am training two models, one on the detection dataset and another on the classification dataset. I use the images from the detection dataset as input and get classification predictions on top of the detected bounding boxes.

Dataset description :

  1. Classification - Image of the single object (E.g. Car) in the center with a classification label.
  2. Detection - Image with multiple objects (E.g. 4 Cars) with bounding box annotations.

Task - Detect objects(e.g. cars) from detection datasets and classify them into various categories.

How do I verify whether the classification model trained on the classification dataset is working on images from detection dataset? (In terms of classification accuracy)

I cannot manually label the images from the detection dataset for individual class labels. (Need expert domain knowledge)

How do I verify my classification model?

Is there any technique to do this ? Like domain transfer or any weakly-supervised method ?

",35791,,,,,11/16/2020 22:19,How to verify classification model trained on classification dataset on a detection dataset for classification purpose?,,2,5,,,,CC BY-SA 4.0 24438,1,,,11/6/2020 7:25,,1,46,"

I have seen that a center loss is beneficial in computer vision, especially in face recognition. I have tried to understand this concept from the following material

  1. A Discriminative Feature Learning Approach for Deep Face Recognition
  2. https://www.slideshare.net/JisungDavidKim/center-loss-for-face-recognition

However, I could not understand the concept clearly. If someone can explain with the help of an example, that would be appreciated.

",41756,,2444,,11/6/2020 11:58,11/6/2020 11:58,"What is a ""center loss""?",,0,0,,,,CC BY-SA 4.0 24440,1,26001,,11/6/2020 11:10,,4,665,"

A few years ago when I was in university, I had implemented (for my final year project) an Itinerary Planning System, which incorporates an AI technique called "case-based reasoning".

Is case-based reasoning a machine learning technique or an AI technique (that is not machine learning)?

",41843,,2444,,1/26/2021 22:43,2/1/2021 1:53,Is case-based reasoning a machine learning technique?,,1,0,,,,CC BY-SA 4.0 24443,1,24444,,11/6/2020 11:58,,1,327,"

I am learning PyTorch on Udacity. In lesson 8, section 11: Training the Model, the instructor writes:

Then I have my embedding and hidden dimension. The embedding dimension is just a smaller representation of my vocabulary of 70k words and I think any value between like 200 and 500 or so would work, here. I've chosen 400. Similarly, for our hidden dimension, I think 256 hidden features should be enough to distinguish between positive and negative reviews.

There are more than 70000 different words. How could those more than 70000 unique words be represented by just 400 embeddings? What does an embedding look like? Is it a number?

Moreover, why would 256 hidden features be enough?

",27988,,2444,,11/7/2020 13:40,11/7/2020 13:40,Why is an embedding of dimension 400 enough to represent 70000 words?,,2,0,,,,CC BY-SA 4.0 24444,2,,24443,11/6/2020 12:50,,1,,"

The specific term you are looking for is "word embedding" and not just "embedding".

How to numerically represent textual data?

Neural networks (typically) require as inputs (and produce as outputs) numerical data (i.e. numbers, vectors, matrices, or higher-dimensional arrays). So, when processing textual data, we first need to encode (or convert) the text into a numerical representation. There are different ways to do it, such as

  • one-hot encoding (in that case, if you have 70000 words, you would have sparse vectors with 70000 entries where only one of those entries is equal to $1$ and all other entries are $0$: see this article for more info)

  • map each word to a number (in this case, you would have 70000 numbers, one for each word)

  • word embeddings

Each of these representations has different benefits and drawbacks. For instance, if you map each word to a number, then you just need to keep track of $70000$ numbers. In the case of one-hot encoding or word embeddings, you will need more memory. However, nowadays, word embeddings are widely used in natural language processing/understanding/generation tasks (and given that your question is about word embeddings), so let me briefly describe them.

Word embeddings

There are different word embedding techniques (such as word2vec). However, they are all based on the same ideas

  1. Words that are similar (or related) in meaning should be mapped to vectors (i.e. the "word embeddings") that are also similar in some sense (for instance, their cosine similarity should be high). For instance, the words "man" and "boy" should be mapped to vectors that are similar.

  2. These word embeddings are learned (rather than hard-coded or manually specified) given the data

  3. The size of the word embeddings is a hyper-parameter (this should answer your question!)
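As a concrete illustration in PyTorch (which the course uses), the embedding layer is just a learnable lookup table of shape vocabulary size x embedding dimension; the word indices below are hypothetical:

import torch
import torch.nn as nn

embedding = nn.Embedding(num_embeddings=70_000, embedding_dim=400)

word_indices = torch.tensor([12, 5031, 69_999])  # three word ids from the vocabulary
vectors = embedding(word_indices)
print(vectors.shape)  # torch.Size([3, 400]): one 400-dimensional vector per word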

Hyper-parameters

To answer your question(s) more directly, the choice of the dimension of the embeddings or the number of "hidden features" (which are both hyper-parameters) was probably more or less arbitrary or based on the instructor's experience. In general, it is difficult to determine the optimal choice of any hyper-parameter. Sometimes you can just use numbers that other people have used in the past and have noticed that work "well enough". If you really want to find more appropriate values of the hyper-parameters, you could use some hyper-parameter optimization technique, such as Bayesian optimization or a simple grid search.

Further reading

You can find many resources online that explain the concept of "word embeddings" more in detail. For instance

",2444,,2444,,11/6/2020 13:01,11/6/2020 13:01,,,,4,,,,CC BY-SA 4.0 24445,2,,24311,11/6/2020 13:51,,3,,"

There are different questions and even different lines of thought here. Let's go through them

On resizing

  • Why do we need to resize? To fit the network input, which is fixed when nets are not Fully Convolutional Networks (FCNs)
  • What if my net is FCN? Still makes sense to resize to bound the dimension of the input features you want to detect (a person on a small image VS big image). Take into account that the kernel sizes do not vary although the image size does.

On keeping aspect ratio (or letterbox as some people like to say)

  • Why to keep aspect ratio? This is more of a philosophical question. It is believed that keeping the aspect ratio helps the nets to learn the natural variability in object sizes (say, a person's bounding box cannot be super tall and super thin, because that would be a street light).

  • Why not to keep aspect ratio? If you resize without keeping the aspect ratio and the aspect ratio distortion is not mega super very huge, the networks will still learn. In other words, if your input images don't have crazy aspect ratios, then there is no difference between adding or not a bit of distortion. In fact, sometimes it will even act as a regularization or augmentation.

Conclusion

As long as your application is not too specific and your input images aspect ratios are bounded (this is, if you train with images from any regular camera), you should not worry too much about this.

When to worry about this? When you train with huge vertical or horizontal images, or if you train with images taken from very specific devices like geophysical, radio, or optical sensors. In these cases you should pay special attention to how you resize or split an image. For example, with a recording from a radio sensor, if you resize with aspect-ratio deformation, a wave at a specific frequency would be transformed to another frequency because of the warping of the sine wave.

",26882,,,,,11/6/2020 13:51,,,,3,,,,CC BY-SA 4.0 24447,1,,,11/6/2020 14:27,,0,22,"

Here's my problem: I work with medical image classification, and currently I have 3 classes:

  • class A: images with lesion 1 only; and images with lesion 1 and N other lesions
  • class B: images with 2 other lesions (no lesion 1)
  • class C: images with no lesion

The goal is to classify into "lesion 1", "other lesion", "no lesion". I'd like to know some approach/method/paper/clue for this classification. I think the presence of other lesions on both class A and B is confusing the model (the validation accuracy and f1-score are very low). Thanks in advance.

",42100,,,,,11/6/2020 14:27,CNN to detect presence/absense of label on images with mixed labels,,0,2,,,,CC BY-SA 4.0 24448,1,,,11/6/2020 14:32,,1,135,"

I'm trying to understand if a 3D convolution of the sort performed in a convolutional layer of a CNN is associative. Specifically, is the following true:

$$ X \otimes(W \cdot Q)=(X \otimes W) \cdot Q, $$

where

  • $\otimes$ is a convolution,
  • $X$ is a 3D input to a convolution layer,
  • $W$ is a 4D weights matrix reshaped into 2 dimensions,
  • and $Q$ is a PCA transformation matrix.

To elaborate: say I take my 512 convolutional filters of shape ($3 \times 3 \times 512$), flatten across these three dimensions to give a ($4608 \times 512$) matrix $W$, and perform PCA on that matrix, reducing it to say dimensions of ($4608 \times 400$), before reshaping back into ($400$) 3D filters and performing convolution.

Is this the same as when I convolve $X$ with $W$, and then perform PCA on that output using the same transformation matrix as before?

I know that matrix multiplication is associative i.e. $A(BC)=(AB)C$, and I have found that convolution operations can be rewritten as matrix multiplication.

So my question is, if I rewrite the convolution as matrix multiplication, is it associative with respect to the PCA transformation (another matrix multiplication)?

For example, does $X' \cdot (W' \cdot Q) = (X' \cdot W') \cdot Q$, where $X'$ and $W'$ represent the matrices necessary to compute the convolution in matrix multiplication form?

To try and figure it out, I looked to see how convolutions could be represented as matrix multiplications, since I know matrix multiplications are associative. I've seen a few posts/sites explaining how 2D convolutions can be rewritten as matrix multiplication using Toeplitz matrices (e.g. in this Github repository or this AI SE post), however, I'm having trouble expanding on it for my question.

I've also coded out simple convolutions with a $W$ matrix of $4 \times 3$, an $X$ matrix of $4 \times 2$, and using sklearn's PCA to reduce $W$ to $4 \times 2$. If I do this both ways, the output is not the same, leading me to think this kind of associativity does not exist. But how can I explain this with linear algebra?

Can anyone explain whether this is or is not the case, with a linear algebra explanation?

",42102,,2444,,11/7/2020 1:23,11/7/2020 1:23,Is the 3d convolution associative given that it can be represented as matrix multiplication?,,0,0,0,,,CC BY-SA 4.0 24450,1,24452,,11/6/2020 16:34,,2,65,"

The SARSA update uses the target $R + \gamma Q(S',A')$, i.e. $Q(S,A) \leftarrow Q(S,A) + \alpha \left[R + \gamma Q(S',A') - Q(S,A)\right]$. Consider this: I take an action $A$ that leads to the terminal state. Now my $S'$ would be one of the terminal states. So...

  1. Intuitively, how does it make sense to take an action $A'$ when the environment already ended? Or is this something you just do anyway?

  2. Once a terminal state-action pair is reached, you update the previous state-action pair and then start the game loop all over again. But this means that the terminal state-action pair ($Q(S',A')$ in my example) is never updated. So, if your initial estimate of $Q(S',A')$ was wrong, you would never be able to fix it which would be very problematic. (And you can't set all the terminal values to zero because you are using function approximators)

So, how do I resolve these issues?

",42103,,2444,,11/6/2020 17:50,11/6/2020 17:50,"Intuitively, how does it make sense to take an action $A'$ when the environment already ended?",,1,0,,11/6/2020 17:59,,CC BY-SA 4.0 24451,1,,,11/6/2020 16:51,,1,36,"

I am training a deep learning model for object detection. The consensus is that the more images that you have, the better the results will be. All the tutorials that I have seen say that more images are key.

I am labeling objects in my images with Label-Img, which provides the algorithm with specific training samples on the images. For my images, I am using photos with dimensions of 1100 x 1100 pixels. In my case, I could generate anywhere between 50-100 high-quality training samples per image. For example:

In cases such as this where large numbers of training samples can be generated from a single image, do you really need several hundred images? Or can you lessen the number of images because of the number of training samples?

",32750,,,,,11/7/2020 4:45,"When training deep learning models for object detection in images, do you need a large number of images, or a large number of training samples?",,0,3,,,,CC BY-SA 4.0 24452,2,,24450,11/6/2020 17:13,,1,,"
  1. Intuitively, how does it make sense to take an action A' when the environment already ended?

It doesn't make sense, in that nothing can happen once the agent reaches a terminal state. However, it is often modelled as an "absorbing state" where the action is unimportant (either null or value ignored) with value by definition of $0$.

And you can't set all the terminal values to zero because you are using function approximators

The value is zero by definition. There is no need to approximate it. So don't use function approximators for action values in terminal states. When $S'$ is terminal, the update becomes:

$Q(S,A) \leftarrow Q(S,A) + \alpha(R - Q(S,A))$

Look at any implementation of Q learning and you will see a conditional calculation for the update value, that uses some variant of the above logic when $S'$ is terminal. For OpenAI Gym environments for instance, it will use the done flag.
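For instance, a sketch of such a conditional SARSA update (variable names are illustrative; done plays the role of Gym's flag):

if done:
    td_target = R                      # the terminal state's value is 0 by definition
else:
    td_target = R + gamma * Q[S_next, A_next]

Q[S, A] += alpha * (td_target - Q[S, A])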

",1847,,,,,11/6/2020 17:13,,,,0,,,,CC BY-SA 4.0 24453,1,24457,,11/6/2020 17:30,,2,118,"

When we find the eigenvectors of a graph (say in the context of spectral clustering), what exactly is the vector space involved here? Of what vector space (or eigenspace) are we finding the eigenvalues of?

",35576,,2444,,11/7/2020 0:49,11/8/2020 19:01,What exactly is the eigenspace of a graph (in spectral clustering)?,,1,2,,,,CC BY-SA 4.0 24454,1,,,11/6/2020 18:04,,1,198,"

I am studying RL. I was wondering whether the new state or observation is provided by the environment before the agent actually takes the action.

Take the maze problem as an example. Each state consists of all the available cells' information, provided by the environment. But what if the environment is unknown? For example, there is a maze with an unknown destination cell. The agent needs to find the destination cell. The state is 1 or 0, meaning the destination has or has not been reached. But the environment, which is the maze, can only provide the state at cell $i$ (0 or 1) once the agent actually reaches cell $i$.

Can this still be solved by RL? I am confused about the environment setup.

",42107,,2444,,11/6/2020 22:06,4/1/2022 12:06,How can reinforcement learning be applied when the goal location or environment is unknown?,,1,1,,,,CC BY-SA 4.0 24456,1,,,11/6/2020 19:42,,1,64,"

I was told that AlphaGo (or some related program) was not explicitly taught even the rules of Go -- if it was "just given the rulebook", what does this mean? Literally, a book written in English to read?

",42111,,2444,,11/7/2020 0:44,11/7/2020 0:44,"Is which sense was AlphaGo ""just given a rule book""?",,1,3,,,,CC BY-SA 4.0 24457,2,,24453,11/6/2020 19:43,,1,,"

In spectral clustering we do not find the eigenvectors of a graph (a graph is not a matrix) but the eigenvalues/eigenvectors of the Laplacian matrix related to the adjacency matrix of the graph:

graph => adjacency matrix => Laplacian matrix => eigenvalues (spectrum).

The adjacency matrix describes the "similarity" between two graph vertices. In the simplest case (undirected, unweighted, simple graph), a value "1" in the matrix means two vertices are joined by an edge, while a value "0" means there is no edge between those vertices.

So, the space on which the adjacency matrix acts is the space of connectivity: row "i" of the resulting column vector is a measure of the connectivity with vertex "i". In other words, the adjacency and Laplacian matrices map from vertices to vertex connectivity.

Example

Assume a simple graph with 3 vertices {1,2,3} and edges (1,2) and (2,3). The corresponding Laplacian matrix is:

$$ A=\begin{pmatrix} 1 & -1 & 0\\ -1 & 2 & -1\\ 0 & -1 & 1 \end{pmatrix} $$

a) vertex 1, which in vertex space is (1,0,0), maps to:

$$ A\begin{pmatrix} 1\\ 0\\ 0 \end{pmatrix} = \begin{pmatrix} 1\\ -1\\ 0 \end{pmatrix} $$

if we analyze the product result, component by component, it means:

  • vertex 1 is connected to 1 node.
  • vertex 2 is connected to vertex 1
  • vertex 3 is not connected to vertex 1.

b) the set of vertices 1 and 2, which is represented in vertex space as (1,1,0), maps to:

$$ A\begin{pmatrix} 1\\ 1\\ 0 \end{pmatrix} = \begin{pmatrix} 0\\ 1\\ -1 \end{pmatrix} $$

meaning that:

  • vertex 1 is internal or external to the set {1,2}, not a frontier vertex (in this concrete case, it is internal: it belongs to the set and has no edge to any node outside the set).
  • vertex 2 is a vertex in the set that is connected to one vertex outside the set (internal frontier).
  • vertex 3 is a vertex not in the set but connected to it (external frontier).

Finally, see what happens if we multiply (inner/scalar product) the previous result by the vertex vector again:

$$ \begin{pmatrix} 1 & 1 & 0 \end{pmatrix} A\begin{pmatrix} 1\\ 1\\ 0 \end{pmatrix} = 1 $$

it gives the number of edges that connect the set of nodes {1,2} with the rest of the graph (the size of the cut).
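To connect this with spectral clustering, here is a minimal NumPy sketch computing the spectrum of this Laplacian (the eigenvector for eigenvalue 0 is constant; the next eigenvectors are the ones used for clustering):

import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])          # adjacency matrix of the path 1-2-3
L = np.diag(A.sum(axis=1)) - A     # Laplacian = degree matrix - adjacency matrix

eigenvalues, eigenvectors = np.linalg.eigh(L)
print(eigenvalues)                 # approximately [0, 1, 3]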

",12630,,12630,,11/8/2020 19:01,11/8/2020 19:01,,,,2,,,,CC BY-SA 4.0 24459,2,,24456,11/6/2020 23:32,,2,,"

it was "just given the rulebook", what does this mean? Literally a book written in English to read?

The program was not given a natural language version of the rules to interpret. That might be an interesting AI challenge in its own way, but none of the current cutting-edge game playing reinforcement learning systems do much in the way of natural language processing.

Instead, "just given the rulebook" is a rough metaphor for what actually happened: The rules of Go were implemented as functions that the game-playing agent could query. The functions can answer things such as "when the board looks like this, what are my valid actions?" and "if the board looks like this, have I won yet?". The board state might be represented by a matrix with stone positions encoded using numbers. Outputs might be a similar matrix of numbers for valid action choices (where the player is allowed to put stones in Go) or perhaps a single number, $0$ for not won yet, $1$ for a move that wins the game.

There may even be further helper functions that help assess moves (e.g. a value for how many enemy stones would be captured if a piece was played in a specific location), but the bare minimum needed is "what moves are valid?" and "has anyone won?". A very common third function, useful for look-ahead planning is "if the board starts like this, and I take that action, what will the board look like next" - with this function, an agent can look ahead to future positions to help search for winning moves.
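For illustration, such a "rulebook" could be exposed to the agent as an interface like the following hypothetical sketch (names and representations are made up):

class GoRules:
    def legal_moves(self, board):
        """Return the set of valid actions for the given board matrix."""
        ...

    def winner(self, board):
        """Return 0 if the game is not over, otherwise the winning player."""
        ...

    def next_board(self, board, move):
        """Return the board that results from playing `move` on `board`."""
        ...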

This approach is common in game playing agents. A learning agent can in theory also learn the rules of the game through trial and error, as long as it receives some feedback when it has broken the rules. However, more often the goal of training the agent is only to play as well as possible - learning the rules from scratch would be extra work for the agent, and maybe not an interesting problem to solve. So the agent is given helper functions by the developers that allow it to explore only valid moves according to the rules.

",1847,,1847,,11/6/2020 23:39,11/6/2020 23:39,,,,2,,,,CC BY-SA 4.0 24464,2,,24454,11/7/2020 1:39,,1,,"

When reading about RL and RL agents/algorithms, you always need to keep in mind that, typically, the RL agent/algorithm is trying to maximize the reward (or something equivalent, such as minimizing the regret) in the long run (i.e also the reward that you may receive in the future): that's its (mathematical) goal. Whether that also corresponds to the high-level goal (e.g. reaching some physical location in some world) that the (human) designer of the so-called reward function (i.e. the function that gives the reward to RL agent) had in mind is a different story.

To maximize the reward, the RL agent interacts with the environment by taking actions, receiving rewards, and moving to other states. Initially, the RL agent does not know which actions lead to more rewards, so it may take random actions (this is known as exploration). Once it starts to understand the dynamics of the environment, it may start to take only the actions that lead to high reward (this is known as exploitation).

To answer your question directly, the RL agent can indeed take actions without knowing the dynamics of the environment. However, initially, it may need to take some random actions (which may lead to low rewards), so that to get more rewards in the long run.

",2444,,40671,,11/7/2020 10:41,11/7/2020 10:41,,,,0,,,,CC BY-SA 4.0 24465,1,24481,,11/7/2020 1:46,,2,81,"

One can easily retrieve the optimal policy from the action value function but how about obtaining it from the state value function?

",32517,,,,,11/7/2020 13:09,Is it possible to retrieve the optimal policy from the state value function?,,1,1,,,,CC BY-SA 4.0 24468,1,24477,,11/7/2020 5:48,,0,514,"

How significant is adding a ReLU to fully connected (FC) layers? Is it necessary, or how is the performance of a model affected by adding ReLU to FC layers?

",41585,,2444,,11/7/2020 10:55,11/7/2020 11:22,How is the performance of a model affected by adding a ReLU to fully connected layers?,,1,0,0,,,CC BY-SA 4.0 24469,2,,19891,11/7/2020 5:53,,1,,"

For me, this worked perfectly. I encode with Conv2D and Dense layers, then I flatten, and in the decoder I reshape after the Dense layer, so the encoder and decoder are symmetrical. The only difference is that in my case I use images of shape (224, 224, 1).

# create encoder
# 224x224x1 input -> Conv2D layers (3x3 filters, ReLU activation, 'same' padding, stride 2 halves the spatial size each time)
self.encoder = tf.keras.Sequential([layers.Input(shape=(224,224,1)),
                                    layers.Conv2D(16,kernel_size=3,activation='relu',padding='same',strides=2),                
                                    layers.Conv2D(8,kernel_size=3,activation='relu',padding='same',strides=2),
                                    layers.Conv2D(4,kernel_size=3,activation='relu',padding='same',strides=2),
                                    layers.Flatten(),
                                    layers.Dense(units=3136,activation='sigmoid')]) # 3136 = 28*28*4, the flattened (28,28,4) encoder output

# deconvolution -> decoding 
self.decoder = tf.keras.Sequential([layers.Input(shape=(3136)),
                                    layers.Dense(units=3136,activation='sigmoid'),
                                    layers.Reshape((28,28,4)),
                                    layers.Conv2DTranspose(4,kernel_size=3,activation='relu',padding='same',strides=2),
                                    layers.Conv2DTranspose(8,kernel_size=3,strides=2,activation='relu',padding='same'),
                                    layers.Conv2DTranspose(16,kernel_size=3,strides=2,activation='relu',padding='same'),
                                    layers.Conv2D(1,kernel_size=(3,3),activation='sigmoid',padding='same')])
",38252,,2444,,12/2/2021 10:31,12/2/2021 10:31,,,,0,,,,CC BY-SA 4.0 24471,1,24993,,11/7/2020 7:09,,0,205,"

In a Convolutional Neural Network, unlike the fully connected layers, the same filter is used multiple times on the input while convolving - so during backpropagation, we get multiple derivatives for the filter parameters w.r.t the loss function. My question is, why do we sum all the derivatives to get the final gradient? Because, we don't sum the output of the convolution during forward pass. So, isn't it more sensible to average them? What is the intuition behind this?
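
To make the setup concrete, here is a small 1-D NumPy sketch of what I mean (simplified: stride 1, no padding): the same filter is applied at several positions in the forward pass, and during backpropagation we get one gradient contribution per position and sum them:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])   # input
w = np.array([0.5, -0.5])            # shared filter, used at 3 positions
y = np.array([x[i:i + 2] @ w for i in range(3)])   # forward pass (correlation)

dL_dy = np.ones(3)                   # upstream gradient from the loss
dL_dw = sum(dL_dy[i] * x[i:i + 2] for i in range(3))   # summed, not averaged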

PS: although I said CNN, what I'm actually doing is correlation for simplicity of learning.

",42117,,,,,1/3/2021 12:00,"In CNNs, why do we sum the filter derivatives w.r.t the loss function to get the final gradient?",,1,2,,,,CC BY-SA 4.0 24472,2,,18783,11/7/2020 7:42,,0,,"

I solved my problem with data augmentation, a smaller learning rate, and more epochs.

",33792,,,,,11/7/2020 7:42,,,,0,,,,CC BY-SA 4.0 24474,2,,12786,11/7/2020 7:54,,0,,"

This question was asked a year ago; when I faced this problem, I searched and found no answers, so I tried different approaches and, finally, data augmentation helped me. I used data augmentation and a very small learning rate. If the fluctuations are big, the batch size should be increased and the learning rate decreased. Finally, using more epochs helps you get an almost smooth plot.

",33792,,,,,11/7/2020 7:54,,,,0,,,,CC BY-SA 4.0 24477,2,,24468,11/7/2020 11:22,,2,,"

ReLU is a piecewise linear function that outputs the input directly if it is positive, and zero otherwise, i.e. $\max(0, x)$.

How significant is adding ReLU to fully connected layers?

ReLU, being an activation function, will determine what the outputs of the nodes in your FC layers are. Since it's a non-linear function, one significance is that it allows the nodes in your model to learn complex mappings between the inputs and the outputs. At the same time, it has a simple (sub)derivative, so back-propagation works, and the network actually benefits from stacking multiple FC layers.

Is it necessary?

ReLU (and non-linear activation functions in general) will introduce non-linear properties in a neural network that enable it to learn more complex, arbitrary structures in the inputs. Without activation functions between the layers, your neural network will simply be a linear function, regardless of the number of layers it has. Why? Composing linear functions gives another linear function. Additionally, see this answer.

How is the performance affected by adding ReLU?

Compared to using no activation function at all, training may take a bit longer, but a network without any non-linearity only behaves like a linear regression model, so ReLU increases the power of the model by making it non-linear. However, compared to other non-linear activation functions like tanh, ReLU speeds up training because (1) its computation step is cheaper, i.e. $0.0$ or $x$ without additional operations, and (2) its gradient just depends on the sign of the input $x$. See this answer.
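
As an illustrative sketch (not tied to any particular framework), ReLU and its gradient are as cheap as this:

import numpy as np

def relu(x):
    # Element-wise max(0, x)
    return np.maximum(0.0, x)

def relu_grad(x):
    # The gradient only depends on the sign of the input: 1 if x > 0, else 0
    return (x > 0).astype(float)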

",40671,,,,,11/7/2020 11:22,,,,0,,,,CC BY-SA 4.0 24478,1,24504,,11/7/2020 11:55,,1,1194,"

When using iterative deepening, is the space complexity, $O(d)$, where $b$ is the branching factor and $d$ the length of the optimal path (assuming that there is indeed one)?

",,user42125,2444,,11/9/2020 19:10,11/9/2020 19:27,What is the space complexity of iterative deepening search?,,1,0,,,,CC BY-SA 4.0 24480,2,,24443,11/7/2020 13:03,,1,,"

I finally grasped the concept of word embeddings, thanks to @nbro, after reading the 2 articles s/he recommended:

  1. What Are Word Embeddings for Text? and

  2. Word embeddings

The 1st article gave me a good idea about the big picture of word embeddings, whereas the 2nd article is actually the one that cleared my mind.

I am a visual person; I understand things better when I can see what they, in this case the word embeddings, look like (this answered my 2nd question).

After seeing this image, my 1st question was answered: I realized that the word embeddings form a 2-dimensional array, where the number of rows is determined by the number of unique words in your vocabulary, and the number of columns (the width) is chosen by yourself, normally between 8 and 1024 according to the 2nd article.

The column count/width is called embedding_dim in the course I am learning from, a name I found hard to comprehend. Since each word embedding is a vector (this answered my 3rd question), for example "cat" could be [1.2, -0.1, 4.3, 3.2], and a vector is a concept that is easy for me to understand, I would rather call embedding_dim something like embedding_vector_width or embedding_vector_length.

As for the 256 hidden features and how many of them would be enough, I think it is the same kind of question as figuring out how large the embedding_vector_width should be.
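
For anyone else who is a visual/code person, here is a minimal Keras sketch (the numbers are just examples I chose) showing that the embedding layer is essentially that 2-dimensional array of learnable weights:

import tensorflow as tf

vocab_size = 10000     # number of unique words in the vocabulary (rows)
embedding_dim = 16     # the width/length of each embedding vector (columns)

embedding_layer = tf.keras.layers.Embedding(vocab_size, embedding_dim)
word_ids = tf.constant([[1, 2, 3]])    # a batch with one sequence of 3 word ids
vectors = embedding_layer(word_ids)    # shape (1, 3, 16): one vector per word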

",27988,,,,,11/7/2020 13:03,,,,0,,,,CC BY-SA 4.0 24481,2,,24465,11/7/2020 13:09,,2,,"

You can obtain the optimal policy from the optimal state value function if you also have the state transition and reward model for the environment $p(s',r|s,a)$ - the probability of receiving reward $r$ and arriving in state $s'$ when starting in state $s$ and taking action $a$.

This looks like:

$$\pi^*(s) = \text{argmax}_a [\sum_{s',r} p(s',r|s,a)(r + \gamma v^*(s'))]$$

There are variations of this function, depending on how you represent knowledge of the environment. For instance, you don't actually need the full distribution model for the reward: an expected reward function and a separate distribution model for the state transitions would also work.

Without at least an approximate model of the environment, you cannot derive a policy from state values. If all you have is state values, then to pick an optimal action, you absolutely need the ability to look ahead a time step at what the next state might be for each action choice.
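
As a minimal sketch (with hypothetical data structures) of the equation above, assuming the model is available as a dictionary mapping (state, action) pairs to lists of (probability, next state, reward) tuples:

def greedy_policy(states, actions, p, v, gamma):
    # p[(s, a)] = list of (probability, next_state, reward) tuples
    # v[s]      = (optimal) state value estimate
    policy = {}
    for s in states:
        def q(a):
            return sum(prob * (r + gamma * v[s_next]) for prob, s_next, r in p[(s, a)])
        policy[s] = max(actions, key=q)   # argmax over actions
    return policy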

",1847,,,,,11/7/2020 13:09,,,,1,,,,CC BY-SA 4.0 24483,1,24492,,11/7/2020 15:29,,1,3764,"

When using the breadth-first search algorithm, is the space complexity $O(b^d)$, where $b$ is the branching factor and $d$ the length of the optimal path (assuming that there is indeed one)?

",,user42125,2444,,11/9/2020 19:06,11/9/2020 19:09,What is the space complexity of breadth-first search?,,1,0,,,,CC BY-SA 4.0 24484,1,24491,,11/7/2020 16:51,,1,91,"

I am currently studying the textbook Neural Networks and Deep Learning by Charu C. Aggarwal. In chapter 1.2.1 Single Computational Layer: The Perceptron, the author says the following:

Different choices of activation functions can be used to simulate different types of models used in machine learning, like least-squares regression with numeric targets, the support vector machine, or a logistic regression classifier. Most of the basic machine learning models can be easily represented as simple neural network architectures.

I remember reading something about it being mathematically proven that neural networks can approximate any function, and therefore any machine learning method, or something along these lines. Am I remembering this correctly? Would someone please clarify my thoughts?

",16521,,16521,,11/7/2020 18:10,11/7/2020 19:52,Can most of the basic machine learning models be easily represented as simple neural network architectures?,,1,3,,,,CC BY-SA 4.0 24487,2,,50,11/7/2020 17:29,,-1,,"

It's basically not possible to test, beyond running some empirical experiments. All the generalization bounds only apply if your process actually follows the model assumptions, which you don't actually know to be true.

",32390,,,user9947,11/8/2020 13:07,11/8/2020 13:07,,,,3,,,,CC BY-SA 4.0 24488,1,,,11/7/2020 17:59,,0,50,"

Let's say we are in an environment where a random agent can easily explore all the states (for example, tic-tac-toe).

In those environments, when using an off-policy algorithm, is it good practice to train using exclusively random actions, instead of epsilon-greedy, Boltzmann exploration, or anything else?

To my mind, it seems logical, but I have never heard about it before.

",23818,,23818,,11/8/2020 0:03,11/8/2020 0:03,Off-policy full-random training in easy-to-explore environment,,0,20,,,,CC BY-SA 4.0 24489,1,24503,,11/7/2020 18:00,,1,833,"

Is the space complexity of the bidirectional search, where the breadth-first search is used for both the forward and backward search, $O(b^{d/2})$, where $b$ is the branching factor and $d$ the length of the optimal path (assuming that there is indeed one)?

",,user42125,2444,,11/9/2020 19:03,11/9/2020 19:25,What is the space complexity of bidirectional search?,,1,0,,,,CC BY-SA 4.0 24490,1,,,11/7/2020 18:52,,1,128,"

In the paper Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm (2015, published in Knowledge-Based Systems)

The test functions are divided to three groups: unimodal, multi-modal, and composite. The unimodal functions ($F1 - F7$) are suitable for benchmarking the exploitation of algorithms since they have one global optimum and no local optima. In contrary, multi-modal functions ($F8 - F13$) have a massive number of local optima and are helpful to examine exploration and local optima avoidance of algorithms

I imagine that exploration means the algorithm goes searching for something in unknown regions from a starting point, whereas exploitation would search more around the starting (or current) point.

Is it more or less that? What else differentiates the two concepts?

",41573,,2444,,11/8/2020 11:30,11/8/2020 11:30,What is the difference between exploitation and exploration in the context of optimization?,,0,2,,,,CC BY-SA 4.0 24491,2,,24484,11/7/2020 19:52,,1,,"

I think the author refers to both different choices of activation function and loss. It is explained in more detail in chapter 2. In particular, section 2.3 is illustrative of this point.

I don't think there is a relation between this argument and universal approximation theorems, which state that certain classes of neural networks can approximate any function in certain domains, rather than any learning algorithm.

",36115,,,,,11/7/2020 19:52,,,,6,,,,CC BY-SA 4.0 24492,2,,24483,11/7/2020 21:29,,0,,"

The space complexity of the breadth-first search algorithm is $O(b^d)$ in the worst case, and it corresponds to the largest possible number of nodes that may be stored in the frontier at once, where the frontier is the set of nodes (or states) that you are currently considering for expansion.

You can take a look at section 3.5 (page 74) of the book Artificial Intelligence: A Modern Approach (3rd edition, by Norvig and Russell) for more info about the time and space complexity of BFS.

",2444,,2444,,11/9/2020 19:09,11/9/2020 19:09,,,,0,,,,CC BY-SA 4.0 24493,1,26887,,11/8/2020 5:30,,2,58,"

While reading the Mutual Information Neural Estimation (MINE) paper [1] I came across section 3.2 Correcting the bias from the stochastic gradients. The proposed method requires the computation of the gradient

$$\hat{G}_B = \mathbb{E}_B[\nabla_{\theta}T_{\theta}] - \frac{\mathbb{E_B}[\nabla_{\theta}T_{\theta}e^{T_{\theta}}]}{\mathbb{E}_B[e^{T_{\theta}}]},$$

where $\mathbb{E}_B$ denotes the expectation operation w.r.t. a minibatch $B$, and $T_{\theta}$ is a neural network parameterized by $\theta$. The authors claim that this gradient estimation is biased, and that the bias can be reduced by simply applying an exponential moving average filter.

Can someone give me a hint to understand these two points:

  1. Why is $\hat{G}_B$ biased, and
  2. How does the exponential moving average reduce the bias?
",42139,,2444,,11/8/2020 12:09,3/18/2021 7:20,"In the MINE paper, why is $\hat{G}_B$ biased, and how does the exponential moving average reduce the bias?",,1,0,,,,CC BY-SA 4.0 24494,1,24499,,11/8/2020 10:49,,3,2503,"

If uniform cost search is used for both the forward and backward search in bidirectional search, is it guaranteed the solution is optimal?

",,user42125,,user42125,11/20/2020 15:18,11/20/2020 15:18,"If uniform cost search is used for bidirectional search, is it guaranteed the solution is optimal?",,2,0,,,,CC BY-SA 4.0 24499,2,,24494,11/8/2020 12:35,,2,,"

UCS is optimal (but not necessarily complete)

Let's first recall that the uniform-cost search (UCS) is optimal (i.e. if it finds a solution, which is not guaranteed unless the costs on the edges are big enough, that solution is optimal) and it expands nodes with the smallest value of the evaluation function $f(n) = g(n)$, where $g(n)$ is the length/cost of the path from the goal/start node to $n$.

Is bidirectional search with UCS optimal?

The problem with bidirectional search when UCS is used for the forward and backward searches is that UCS does not proceed layer-by-layer (as breadth-first search does, which ensures that, when the forward and backward searches meet, the optimal path has been found, assuming they both expand one level at each iteration). So the forward search may explore one part of the search space while the backward search explores a different part, and it could happen (although I don't have a proof: I need to think about it a little bit more!) that these searches do not meet. So, I will consider both cases:

  • when the forward and backward searches do not "meet" (the worst case, in terms of time and space complexity)

  • when they meet (the non-degenerate case)

Degenerate case

Let's consider the case when the forward search does not meet the backward search (the worst/degenerate case).

If we assume that the costs on the edges are big enough and the start node $s$ is reachable from $g$ (or vice-versa), then bidirectional search eventually degenerates to two independent uniform-cost searches, which are optimal, which makes BS optimal too.

Non-degenerate case

Let's consider the case when the forward search meets the backward search.

To ensure optimality, we cannot just stop searching when we take the same node $n$ off both frontiers. To see why, consider this example. We take off the first frontier the node $n_1$ with cost $N$, then we take off the same frontier the node $n_2$ with cost $N+10$. Meanwhile, we take off the other frontier the node $n_2$ with cost $K$ and then the node $n_1$ with cost $K + 1$. So, we have two paths: one with cost $N+(K + 1)$ and one with cost $(N+10)+K$, which is bigger than $N+(K + 1)$, even though $n_2$ was the first node taken off both frontiers.

See the other answer for more details and resources that could be helpful to understand the appropriate stopping condition for the BS.

",2444,,2444,,11/11/2020 21:34,11/11/2020 21:34,,,,0,,,,CC BY-SA 4.0 24500,1,,,11/8/2020 14:20,,1,38,"

Agrawal and Goyal (http://proceedings.mlr.press/v23/agrawal12/agrawal12.pdf page 3) discussed how we can extend Thompson sampling for bernoulli bandits to Thompson sampling for stochastic bandits in general by simply Bernoulli sampling with the received reward $r_t \in [0,1]$.

My question is whether such extension from Bernoulli bandits to general stochastic bandits hold in general and not only for Thompson sampling. E.g. can I prove properties such as lower bounds on regret for Bernoulli bandits and always transfer these results to general stochastic bandits?

",36978,,,,,11/8/2020 14:20,Multi-armed bandits: reducing stochastic multi-armed bandits to bernoulli bandits,,0,0,,,,CC BY-SA 4.0 24501,1,,,11/8/2020 14:21,,1,88,"

My goal is to create an ML model to be able to classify different game stages, e.g., dialog with a non-player character, exploration, combat with enemy, in-game menu etc.

In order to do that, I am looking for an agent pre-trained on such a game. I am intending to develop a model using this pre-trained agent to produce a data set (frames-labels) and finally I will use that data set to train a model to classify those different stages.

I could only find a pre-trained model for Doom; however, it is not really appropriate for my case, because it does not have different game stages (it is merely based on running & shooting). Training my own reinforcement learning agent is a whole other workload in terms of both the time and the GPU resources such a game needs.

Any single idea could help me a lot. Thanks!

",41691,,32410,,11/10/2020 13:10,11/10/2020 13:10,"Where can I find pre-trained agents able to play games with multiple stages like exploration, dialog, combat?",,0,3,,,,CC BY-SA 4.0 24503,2,,24489,11/8/2020 22:49,,0,,"

Norvig & Russell's book (section 3.5) states that the space complexity of bidirectional search (which corresponds to the largest possible number of nodes that you save in the frontier) is

$$O(2b^{d/2}) = O(b^{d/2}).$$

The intuition behind this result is that (as opposed to e.g. uniform-cost search or breadth-first search, which have space (and time) complexity of $O(b^{d})$) the forward and backward searches only have to go half way, so you will not eventually need to expand all $b^{d}$ leaves, but only the nodes up to depth $d/2$ from each side.

However, this space complexity is correct if you use a breadth-first search for the forward and backward searches (which is your scenario!), given that breadth-first search, assuming a finite branching factor, expands one level at a time, so it's guaranteed that both the forward and backward searches meet in the middle. This can be seen in figure 3.17 of the same book, where you can see that both searches have the same "radius". Moreover, the only nodes that you need to store in the frontier are the ones on the circumference (not all nodes that you see in the image)

However, if you used another search algorithm to perform the forward and backward searches, the space complexity may be different. This is true if e.g. the searches do not meet and then they end up exploring all the state space.

",2444,,2444,,11/9/2020 19:25,11/9/2020 19:25,,,,0,,,,CC BY-SA 4.0 24504,2,,24478,11/8/2020 23:21,,0,,"

As stated in my other answers here and here, the space complexity of these search algorithms is calculated by looking at the largest possible number of nodes that you may need to save in the frontier during the search.

Iterative deepening search (IDS) is a search algorithm that repeatedly runs depth-first search (DFS), which has a space complexity of $O(bm)$ (where $m$ is the maximum depth), with progressively bigger depth limits, and it turns out that its space complexity is $O(bd)$.

(Note that there is also the informed version of IDS, which uses the informed search algorithm A*, rather than DFS, known as iterative deepening A*).

",2444,,2444,,11/9/2020 19:27,11/9/2020 19:27,,,,0,,,,CC BY-SA 4.0 24508,1,,,11/9/2020 7:33,,4,214,"

I had a question today that I feel it must have an answer already, so I'm shopping around.

If we ask a model to learn the binary OR function, we get perfect accuracy with every model (as far as I know).

If we ask a model to learn the XOR function we get perfect accuracy with some models and an approximation with others (e.g. perceptrons).

This is due to the way perceptrons are designed: XOR is a decision surface the algorithm can't learn. But again, with a multi-layered neural network, we can get 100% accuracy.

So can we perfectly learn a solved game as well?

Tic-tac-toe is a solved game; an optimal move exists for both players in every state of the game. So in theory our model could learn tic-tac-toe as well as it could a logic function, right?

",17612,,36578,,11/9/2020 23:27,12/3/2022 16:05,Can models get 100% accuracy on solved games?,,1,1,,,,CC BY-SA 4.0 24511,2,,10620,11/9/2020 12:08,,1,,"

The original question about both the estimation of the transition model, often denoted as $T$, and the reward function, sometimes denoted as $R$, arose because I was thinking about the probability distribution often denoted as

$$\color{red}{p}\left(s^{\prime}, r \mid s, a\right) \doteq \operatorname{Pr}\left\{S_{t}=s^{\prime}, R_{t}=r \mid S_{t-1}=s, A_{t-1}=a\right\},$$

which is often called the model (and Sutton & Barto also call it the dynamics function, given that it defines the dynamics of the environment), which incorporates both the transition model and reward function. In fact, both the transition probability distribution (of the next state $s'$ given $s$ and $a$) and different types of reward functions can be written as a function of this dynamics function. More precisely, we have the following results

  1. $\color{orange}{p}\left(s^{\prime} \mid s, a\right) \doteq \operatorname{Pr}\left\{S_{t}=s^{\prime} \mid S_{t-1}=s, A_{t-1}=a\right\}=\sum_{r \in \mathcal{R}} \color{red}{p}\left(s^{\prime}, r \mid s, a\right)$

  2. $\color{blue}{r}(s, a) \doteq \mathbb{E}\left[R_{t} \mid S_{t-1}=s, A_{t-1}=a\right]=\sum_{r \in \mathcal{R}} r \sum_{s^{\prime} \in \mathcal{S}} \color{red}{p}\left(s^{\prime}, r \mid s, a\right)$

  3. $\color{cyan}{r}\left(s, a, s^{\prime}\right) \doteq \mathbb{E}\left[R_{t} \mid S_{t-1}=s, A_{t-1}=a, S_{t}=s^{\prime}\right]=\sum_{r \in \mathcal{R}} r \frac{\color{red}{p}\left(s^{\prime}, r \mid s, a\right)}{\color{orange}{p}\left(s^{\prime} \mid s, a\right)}$

Given that this answer already (partially) answers the question

How can we estimate the transition model?

but not the question

How can we estimate the reward function?

I will provide here an answer to this second question.

We can use inverse reinforcement learning techniques, i.e. techniques that estimate the reward function given trajectories of the form $$\tau = (s_1, a_1, r_1, s_{2}, \dots, s_{T-1}, a_{T-1}, r_{T-1}, s_{T}),$$ which are often assumed to have been generated by some optimal policy $\pi^*$, to estimate the reward function. An example of such a technique is AIRL.

",2444,,2444,,11/9/2020 12:22,11/9/2020 12:22,,,,0,,,,CC BY-SA 4.0 24516,1,24522,,11/9/2020 18:22,,4,932,"

I was reading Artificial Intelligence: A Modern Approach 3rd Edition, and I have reached to the UCS algorithm.

I was reading the proof that UCS is complete.

The book states that:

Completeness is guaranteed provided the cost of every step exceeds some small positive constant $\epsilon .$

And that's because UCS will be stuck if there is a path with an infinite sequence of zero-cost actions.

Why must the step cost exceed $\epsilon$? Isn't it enough for it to be greater than zero?

",36578,,2444,,11/9/2020 19:01,11/9/2020 23:26,Why is the completeness of UCS guaranteed only if the cost of every step exceeds some small positive constant?,,1,0,,,,CC BY-SA 4.0 24519,2,,23019,11/9/2020 22:23,,1,,"

The answer to your question can be found in the original paper that introduced the max-margin and projection imitation learning (IL) algorithms: Apprenticeship Learning via Inverse Reinforcement Learning (by Abbel and Ng, 2004, ICML). Specifically, theorem 1 (section 4, page 4) states

Let an $\text{MDP} \setminus R$, features $ \phi : S \rightarrow [0, 1]^k$, and any $\epsilon > 0$ be given. Then the apprenticeship learning algorithm (both max-margin and projection versions) will terminate with $t^{(i)} \leq \epsilon$ after at most

$$n=O\left(\frac{k}{(1-\gamma)^{2} \epsilon^{2}} \log \frac{k}{(1-\gamma) \epsilon}\right)$$ iterations.

Here $k$ is the dimension of the feature vectors, so it's clear that the number of iterations needed for these algorithms to terminate scales with $k$. The proof of this theorem can be found in appendix A of the same paper (and all other terms are defined in the paper, which you should read to understand all the details). Of course, this result holds (only) for these specific IL algorithms (which are the algorithms the author of your slides, Abbel, is referring to). See also theorem 2 and the experiments section (in particular, figure 4, which shows the performance as a function of the number of trajectories) of the same paper. These slides provide a nice overview of the contents of this paper, so I suggest that you read them too.

",2444,,2444,,11/10/2020 13:44,11/10/2020 13:44,,,,0,,,,CC BY-SA 4.0 24522,2,,24516,11/9/2020 22:58,,6,,"

Let's consider a problem where all edge costs are greater than zero, but not above some $\epsilon$:

Imagine a problem with an infinite path where the first edge has cost $\frac{1}{2}$, the next $\frac{1}{4}$, the following $\frac{1}{8}$, and so on forever. Every edge cost is greater than zero, meeting the condition proposed in the question. However, this path overall has finite cost (at most 1), even though there is an infinite number of states on that path. So, on this problem, UCS will never reach paths with cost greater than 1. Thus, if the solution cost is 2, UCS will not find any solution to this problem, and hence it would not be a complete algorithm. So, all edges being greater than zero is not sufficient.
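
Concretely, the total cost of that infinite path is bounded, because the edge costs form a geometric series:

$$\sum_{k=1}^{\infty} \frac{1}{2^k} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \dots = 1.$$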

For most search algorithms to be complete, there must be a finite number of states with any given cost. (To be slightly more precise, there must exist some fixed $\epsilon$ such that each cost range of size $\epsilon$ contains only a finite number of states.)

",17493,,17493,,11/9/2020 23:26,11/9/2020 23:26,,,,0,,,,CC BY-SA 4.0 24523,2,,24508,11/10/2020 2:42,,0,,"

So can we perfectly learn a solved game as well?

The short answer is yes. If your model has enough complexity it can theoretically learn any behavior you want.

So in theory our model could learn tic-tac-toe

Tic Tac Toe has already been solved. Another popular game that has been solved is Checkers, by the algorithm Chinook.

To be more specific, in Reinforcement Learning we make the assumption that any decision making process can be modeled as an MDP (Markov Decision Process). Once there, there is a host of different methods, like Q-learning and TD learning, that theoretically converge towards the optimal policy - the one that plays perfectly.
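
As a rough sketch (not from any particular library), the tabular Q-learning update that, under the usual conditions, converges to those optimal values looks like this; the encoding of tic-tac-toe states and the handling of illegal moves are left out:

def q_learning_update(Q, s, a, r, s_next, terminal, alpha=0.1, gamma=0.99):
    # Q is a dict mapping (state, action) pairs to value estimates;
    # tic-tac-toe has at most 9 candidate actions (one per cell)
    best_next = 0.0 if terminal else max(Q.get((s_next, a2), 0.0) for a2 in range(9))
    td_target = r + gamma * best_next
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (td_target - Q.get((s, a), 0.0))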

Now, just because it is theoretically possible doesn't mean it will always work empirically. Games that are very complex and have a large state space are extremely difficult to solve perfectly. This is because the only feasible way to tackle them is to approximate, and getting perfect play even in small edge cases becomes much more difficult as a result.

If you want to learn more about this topic I would highly recommend this series RL Course by David Silver

",42103,,1847,,11/13/2020 7:41,11/13/2020 7:41,,,,3,,,,CC BY-SA 4.0 24524,1,,,11/10/2020 5:38,,2,383,"

According to my experience with TensorFlow and many other frameworks, neural networks have to have a fixed shape for any output, but how does Google Translate convert texts of different lengths?

",42182,,2444,,11/10/2020 9:55,11/10/2020 11:38,How is Google Translate able to convert texts of different lengths?,,1,0,,,,CC BY-SA 4.0 24525,2,,24524,11/10/2020 9:39,,4,,"

Usually, in natural language processing (NLP), they are using Sequence to Sequence Learning (Seq2Seq) with Neural Networks, such as Recurrent Neural Networks or more recently the Transformer (you can find two very good papers here, and here).

During training, to ensure that the inputs (and outputs) all have the same size, they can just find the longest sentence in the dataset, or pick a number that is high enough, and pad all the other sentences with 0. In addition, they add a stop token where the sentence ends, so that the model is aware of it. When decoding (inference), the decoder predicts one word at a time until it predicts the stop token, which signals that the translation is done.
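
For example, a minimal padding sketch (the maximum length of 10 is an arbitrary choice here):

from tensorflow.keras.preprocessing.sequence import pad_sequences

sequences = [[5, 27, 3], [12, 4, 91, 8, 2]]          # tokenized sentences of different lengths
padded = pad_sequences(sequences, maxlen=10, padding='post')
# [[ 5 27  3  0  0  0  0  0  0  0]
#  [12  4 91  8  2  0  0  0  0  0]]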

If you're interested in seeing an actual implementation, I would recommend looking at this tutorial, which does a good job of explaining the code and how it works.

",20430,,20430,,11/10/2020 11:38,11/10/2020 11:38,,,,1,,,,CC BY-SA 4.0 24526,1,,,11/10/2020 9:59,,2,86,"

I'm reading the book Grokking Deep Learning. Regarding weight updates during training, it has the following code and explanation:

direction_and_amount = (pred - goal_pred) * input
weight = weight - direction_and_amount

It explains the motivation behind multiplying the prediction difference with input using three cases: scaling, negative reversal and stopping.

What are scaling, negative reversal, and stopping? These three attributes have the combined effect of translating the pure error into the absolute amount you want to change weight. They do so by addressing three major edge cases where the pure error isn’t sufficient to make a good modification to weight.

These three cases are:

  1. Negative input,
  2. zero input and
  3. the value of input (scaling).

Negative and zero cases are very obvious. However, I didn't understand scaling. Regarding scaling, there's the following explanation:

Scaling is the third effect on the pure error caused by multiplying it by input. Logically, if the input is big, your weight update should also be big. This is more of a side effect, because it often goes out of control. Later, you’ll use alpha to address when that happens.

But I didn't understand it. Considering the linear regression problem, why should the weight update be big if the input is big?

",42191,,2444,,11/10/2020 14:42,11/10/2020 14:42,Why should the weight updates be proportional to input?,,0,2,,,,CC BY-SA 4.0 24527,2,,3981,11/10/2020 10:16,,1,,"

You could use transfer learning (i.e. use a pre-trained model, change its last layer to accommodate the new classes, and re-train this slightly modified model, maybe with a lower learning rate) to achieve that. However, transfer learning does not necessarily attempt to retain any of the previously acquired information (especially if you do not use very small learning rates, keep on training, and do not freeze the weights of the convolutional layers); it is only meant to speed up training, or to help when your new dataset is not big enough, by starting from a model that has already learned general features that are supposedly similar to the features needed for your specific task. There is also the related domain adaptation problem.
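
For concreteness, a minimal transfer-learning sketch in Keras (the base model, input size and number of new classes are arbitrary choices here):

import tensorflow as tf

base = tf.keras.applications.MobileNetV2(include_top=False, pooling='avg',
                                          input_shape=(224, 224, 3), weights='imagenet')
base.trainable = False                                   # freeze the convolutional features
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation='softmax')      # new last layer for the new classes
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),  # lower learning rate
              loss='sparse_categorical_crossentropy')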

There are more suitable approaches to perform incremental class learning (which is what you are asking for!), which directly address the catastrophic forgetting problem. For instance, you can take a look at this paper Class-incremental Learning via Deep Model Consolidation, which proposes the Deep Model Consolidation (DMC) approach. There are other continual/incremental learning approaches, many of them are described here or in more detail here.

",2444,,2444,,11/10/2020 12:25,11/10/2020 12:25,,,,0,,,,CC BY-SA 4.0 24529,2,,23567,11/10/2020 11:11,,3,,"

In general, is continuous learning possible with a deep convolutional neural network, without changing its topology?

Your intuition that it is possible to perform incremental (aka continual, continuous or lifelong) learning by changing the NN's topology is correct. However, dynamically adapting the NN's topology is just one approach to continual learning (a specific example of this approach is DEN). So, there are other approaches, such as regularization approaches, rehearsal (or pseudo-rehearsal) approaches, and ensemble approaches.

For more details about these and other approaches (and problems related to continual learning and catastrophic forgetting in neural networks), take a look at this very nice review of continual learning approaches in neural networks. You should also check this answer.

Are there ways to implement continuous learning in a deep neural network for image recognition?

Yes. Many of the approaches focus on image recognition and classification, and often the experiments are performed on MNIST or similar datasets (e.g. see this paper).

Does such an implementation make sense if the labels have to be specially prepared in advance?

Yes, you can prepare your dataset in advance, and then later train incrementally (in fact, in the experiments I have seen in some of these papers, they usually do this to simulate the continual learning scenario), but I am not sure about the optimality of this approach. Maybe with batch learning (i.e. the usual offline learning where you train on all data), you would achieve higher performance.

",2444,,2444,,11/10/2020 12:48,11/10/2020 12:48,,,,0,,,,CC BY-SA 4.0 24530,2,,14047,11/10/2020 11:45,,3,,"

Do you know which are the state-of-the-art approaches on this topic, and could you point me to some literature on them?

This answer already mentions some of the approaches. More concretely, currently, the most common approaches to continual learning (i.e. learning with progressively more data while attempting to address the catastrophic forgetting problem) are

  • dynamic/changing topologies approaches
  • regularization approaches
  • rehearsal (or pseudo-rehearsal) approaches
  • ensemble approaches
  • hybrid approaches

You can also take a look at this answer. If you are interested in an exhaustive overview of the state-of-the-art (at least, until 2019), you should read the paper Continual lifelong learning with neural networks: A review (2019, by Parisi et al.).

",2444,,2444,,11/10/2020 11:53,11/10/2020 11:53,,,,0,,,,CC BY-SA 4.0 24535,1,,,11/10/2020 14:44,,1,81,"

I am currently working with a categorical-binary RBM, where there are 50 categorical visible units and 25 binary hidden units. The categorical visible units are expressed in one-hot encoding format, such that, if there are 5 categories, the visible units are expressed as a $50 \times 5$ array, where each row is the one-hot encoding of a category from 1 to 5.

Ideally, the RBM should be able to reconstruct the visible units. However, since the visible units are one-hot encoded, the visible units array contains a lot of zeros. This means the RBM quickly learns to guess all zeros for the entire array to minimize the reconstruction loss. How can I force the RBM not to do this, and instead guess 1's where the category occurs and 0's otherwise?

Note that I would still have this problem with a regular autoencoder.

",41856,,41856,,11/10/2020 16:25,11/10/2020 16:25,How can I reconstruct sparse one-hot encodings using an RBM?,,0,2,,,,CC BY-SA 4.0 24538,1,,,11/10/2020 20:04,,1,39,"

I am using a simple autoencoder-based model for the task of semantic segmentation on the VOC2012 dataset. It is trained with the Adam optimizer and cross-entropy loss over 21 classes (0 - 20). You can find the code here: https://github.com/parthv21/VOC-Semantic-Segmentation

My Architecture:

   self.encoder = nn.Sequential(
        nn.Conv2d(3, 64, 3, stride=2, padding=1),
        nn.LeakyReLU(),
        nn.Conv2d(64, 128, 3, stride=2, padding=1),
        nn.LeakyReLU(),
        nn.Conv2d(128, 256, 3, stride=2, padding=1),
        nn.LeakyReLU(),
        nn.Conv2d(256, 512, 3, stride=2, padding=1),
    )

    self.decorder = nn.Sequential(
        nn.ConvTranspose2d(512, 256, 3, stride=2, padding=1, output_padding=1),
        nn.LeakyReLU(),
        nn.ConvTranspose2d(256, 128, 3, stride=2, padding=1, output_padding=1),
        nn.LeakyReLU(),
        nn.ConvTranspose2d(128, 64, 3, stride=2, padding=1, output_padding=1),
        nn.LeakyReLU(),
        nn.ConvTranspose2d(64, 21, 3, stride=2, padding=1, output_padding=1),
    )

After 200 iterations I am getting the following output

Training Data

Validation Data

Is a more complex architecture the only way I can fix this problem? Or can I fix this with a different loss function like dice or more regularization? The same issue happened after training for 100 iterations. So the model is not generalizing for some reason.

Edit

I also tried adding weights to CrossEntropy such that w_label = 1 - frequency(label). The idea was that the 0 label for the background, which is more common, would contribute less to the loss, and the other labels, which are rare, would contribute more to the loss. But that did not help:

Another thing I tried was ignoring label 0 for background in the loss. But that created horrible results even for training data:

",42204,,2444,,11/12/2020 10:51,11/12/2020 10:51,How can I improve the performance on unseen data for semantic segmentation using an auto-encoder?,,0,2,,,,CC BY-SA 4.0 24539,1,,,11/11/2020 3:03,,3,62,"

I recently came across the featuretools package, which facilitates automated feature engineering. Here's an explanation of the package:

https://towardsdatascience.com/automated-feature-engineering-in-python-99baf11cc219

Automated feature engineering aims to help the data scientist by automatically creating many candidate features out of a dataset from which the best can be selected and used for training.

I only have limited experience with ML/AI techniques, but general AI is something that I'd been thinking about for a while before exploring existing ML techniques. One idea that kept popping up was the idea of analyzing not just raw data for patterns but derivatives of data, not unlike what featuretools can do. Here's an example:

It's not especially difficult to see the figure above as two squares, one that is entirely green and one with a blue/green horizontal gradient. This is true despite the fact that the gradient square is not any one color and its edge is the same color as the green square (i.e., there is no hard boundary).

However, let's say that we calculate the difference between each pixel and the pixel to its immediate left. Ignoring for a moment that RGB is 3 separate values, let's call the difference between each pixel column in the gradient square X. The original figure is then transformed into this, essentially two homogeneous blocks of values. We could take it one step further to identify a hard boundary (applying a similar left-to-right transformation again) between the two squares.
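
In code, that transformation is just a horizontal difference, for example:

import numpy as np

img = np.array([[3, 3, 5, 7, 9],
                [3, 3, 5, 7, 9]])       # toy grayscale rows: flat region, then a gradient
diff = np.diff(img, axis=1)             # each pixel minus the pixel to its immediate left
# [[0 2 2 2]
#  [0 2 2 2]]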

Once a transformation is performed, there should be some way to assess the significance of the transformation output. This is a simple and clean example where there are two blocks of homogeneous values (i.e., the output is clearly not random). If it's true that our minds use any kind of similar transformation process, the number of transformations that we perform would likely be practically countless, even in brief instances of perception.

Ultimately, this transformation process could facilitate finding the existence of order in data. Within this framework, perhaps "intelligence" could be defined simply as the ability to detect order, which could require applying many transformations in a row, a wide variety of types of transformations, an ability to apply transformations with a high probability of finding something significant, an ability to assess significance, etc.

Just curious if anyone has thoughts on this, if there are similar ideas out there beyond simple automated feature engineering, etc.

",30154,,,,,11/11/2020 10:49,Is automated feature engineering a path to general AI?,,1,0,,,,CC BY-SA 4.0 24543,1,,,11/11/2020 8:29,,3,390,"

While reading the original paper of Soft Actor Critic, on page 5, I came across equations (5) and (6),

$$ J_{V}(\psi)=\mathbb{E}_{\mathbf{s}_{t} \sim \mathcal{D}}\left[\frac{1}{2}\left(V_{\psi}\left(\mathbf{s}_{t}\right)-\mathbb{E}_{\mathbf{a}_{t} \sim \pi_{\phi}}\left[Q_{\theta}\left(\mathbf{s}_{t}, \mathbf{a}_{t}\right)-\log \pi_{\phi}\left(\mathbf{a}_{t} \mid \mathbf{s}_{t}\right)\right]\right)^{2}\right] \tag{5}\label{5} $$

$$ \hat{\nabla}_{\psi} J_{V}(\psi)=\nabla_{\psi} V_{\psi}\left(\mathbf{s}_{t}\right)\left(V_{\psi}\left(\mathbf{s}_{t}\right)-Q_{\theta}\left(\mathbf{s}_{t}, \mathbf{a}_{t}\right)+\log \pi_{\phi}\left(\mathbf{a}_{t} \mid \mathbf{s}_{t}\right)\right) \tag{6}\label{6} $$

followed by this quote:

where the actions are sampled according to the current policy, instead of the replay buffer

This appears in the context of deriving the formulation of the (estimated) gradient for the value function's squared residual error (Equation 5 in the paper).

I'm having a hard time understanding why they use the action sampled from the current policy instead of the replay buffer. My intuition tells me that this is because SAC is an off policy Reinforcement Learning algorithm, and Q-learning uses $\max Q$ in one-step Q-value function update (to keep it off-policy), but why would sampling one action from the current policy still make it off-policy?

I first asked a friend of mine (researcher in RL) and the answer I got was

"If the action is sampled with the current policy given any state the update is on-policy."

I've checked OpenAI's Spinning Up explanation of SAC, but it only makes it clearer which action is sampled from the current policy and which one comes from the replay buffer; it does not explain why.

Does this have anything to do with the stochastic policy? Or the entropy term in the update equation?

So I'm still quite confused. Link/references to explanation are also appreciated!

",27559,,2444,,11/23/2020 1:43,11/23/2020 1:43,"In Soft Actor Critic, why is the action sampled from current policy instead of replay buffer on value function update?",,0,9,,,,CC BY-SA 4.0 24544,2,,24539,11/11/2020 10:49,,2,,"

Automated feature engineering, if it is part of any aproach towards general intelligence, cannot be the whole solution. The search for features that are meaningful, as opposed to those that simply exist with no utility, needs some guidance.

In machine learning, feature engineering is typically a search for features that improve performance at a specific task, such as classification or regression. The "intelligence" is partially in the manner of searching, and in setting the goals in the first place. Automated feature engineering typically uses fairly crude search algorithms to look for good features, such as random combination, and follows up with raw processing power in order to cover large numbers of options. Automated feature engineering also does not set the goals for any task, or feed back active behaviour from the output of ML to direct the search. An intelligent agent might actively search for data in its environment in order to test an idea (e.g. repeat an action in order to discover whether the same thing happens, or move in order to observe an interesting event better). Feature engineering is a separate issue to this.

There are some theories of general intelligence that are focused on smart pattern matching as a key component. For instance, Jürgen Schmidhuber has long been a proponent of the reward system for intelligence being compression of observations and predictions. In such a system, better pattern matching discovered by an agent is an intrinsic reward signal as it allows for better compression of world models used by the agent.

Marcus Hutter is another well-known AI researcher, who has proposed a framework called AIXI which incorporates similar ideas. An agent operating using AIXI will benefit from discovering features in its observations that improve its own predictions of what will happen next. Some form of automated feature engineering could well be core to such an agent.

",1847,,,,,11/11/2020 10:49,,,,0,,,,CC BY-SA 4.0 24545,1,,,11/11/2020 11:00,,2,26,"

... and how do I reword my question in the title?

I have a dataset where each "instance" has a "series" of multiple photos taken from different angles. I need to classify each instance as a 0 or a 1.

A little over half of the images in each series probably do not contain the information required for a classification. Only some of the images are taken from an angle where the relevant clue is visible.

For training I have many such series and they are labelled at a series level, but not at an image level.

My current approach is to use a standard architecture like ResNet. I pass each image through the CNN, then combine the features by averaging, and then put that through a sigmoid-activated layer. I'm concerned that the network won't be able to learn because the "clue" is so buried among everything else.

Questions:

  • Is there a better/standard way to do this? Would going RNN help? What if the images are not really in a meaningful sequence?

  • If my way is good, is arithmetic averaging the right way to combine the features?

  • Would it be worth spending the time to label each image as "has positive clue"/"does not have positive clue"? Should I add a "not possible to tell"? What if it is possible to tell but it's just humans that can't tell?

",16871,,16871,,11/11/2020 11:12,11/11/2020 11:12,What are ways to learn a classifier for labelling a series of images rather than individual images?,,0,0,,,,CC BY-SA 4.0 24548,1,,,11/11/2020 16:48,,3,186,"

I have a question regarding the loss function involving the target network and the current (online) network. I understand the action value function. What I am unsure about is why we seek to minimise the loss between the Q value of the next state from our target network and the Q value of the current state from the local network. The Nature paper by Mnih et al. is well explained; however, I am not getting from it the purpose of the above. Here is the training portion of a script I am running:

for state, action, reward, next_state, done in minibatch:
    target_qVal = self.model.predict(state)

    # print(target_qVal)

    if done:
        target_qVal[0][action] = reward #done
    else:
        # predicted q value for next state from target model
        pred = self.target_model.predict(next_state)[0]
        target_qVal[0][action] = reward + self.gamma * np.amax(pred)

    # indentation position?
    self.model.fit(np.array(state), 
                   np.array(target_qVal), 
                   batch_size=batch_size,
                   verbose=0, 
                   shuffle=False, 
                   epochs=1)

I understand that the expected return is the immediate reward plus the cumulative sum of discounted rewards from the next state $s'$ onwards (correct me if I'm wrong in my understanding) when following a given policy.

My fundamental misunderstanding is the loss equation:

$$L = \left[r + \gamma \max_{a'} Q(s',a'; \theta') - Q(s,a; \theta)\right]^2,$$

where $\theta'$ and $\theta$ are the weights of the target and online neural networks, respectively.

Why do we aim to minimize the difference between the Q value of the next state from the target model and the Q value of the current state from the online model?

A bonus question would be: in order to collect $Q(s,a)$ values for dimensionality reduction (as in Mnih et al.'s t-SNE plot), would I simply collect the target_qVal[0] values during training and append them to a list after each step to accumulate the Q values over time?

",34530,,2444,,11/12/2020 10:46,4/11/2021 11:07,Why do we minimise the loss between the target Q values and 'local' Q values?,,1,3,,,,CC BY-SA 4.0 24550,1,,,11/11/2020 17:27,,1,245,"

In GANs (generative adversarial networks), let us take binary cross-entropy as the loss function for the discriminator: $$\text{overall loss} = -\sum \log(D(x_i)) - \sum \log(1-D(G(z_i))),$$ where $x_i$ is a real image (pixel matrix) and $z_i$ is a vector from the latent space. Let us define the discriminator's real loss and fake loss: $$ d_{\text{fake loss}} = -\sum \log(1-D(G(z_i))), \qquad d_{\text{real loss}} = -\sum \log(D(x_i)),$$ where $d_{\text{fake loss}}$ is the discriminator's loss against fake images and $d_{\text{real loss}}$ is its loss against real images. The generator loss is $$ g_{\text{loss}} = -\sum \log(D(G(z_i))).$$ Since the functions are similar, we should expect some similarity in the graphical patterns (i.e. since none of the functions is inherently oscillatory, I expect that if one comes out oscillatory, the other one should as well). But, if you refer to chapter 10 of the book "Generative Adversarial Networks with Python" by Jason Brownlee, we find some difference. The following are the graphs published in the book.

Can anyone explain the difference in the plots between discriminator fake loss and generator loss (mathematically)?

",42181,,42181,,11/12/2020 14:25,12/8/2021 13:04,Explain the difference in graphical patterns between discriminator fake loss and generator loss in GAN,,1,1,,,,CC BY-SA 4.0 24551,1,24560,,11/11/2020 19:18,,0,97,"

Suppose we have the fuzzy membership grade for a person $x$ with a set $S = \text{set of tall people}$ be $0.9$, i.e. $\mu_S(x)=0.9$.

Does this mean that the probability of person $x$ being tall is $0.9$?

",42230,,2444,,11/12/2020 10:35,11/12/2020 10:35,How to calculate probability from fuzzy membership grade?,,1,0,,,,CC BY-SA 4.0 24552,2,,24494,11/11/2020 19:41,,1,,"

It depends on the stopping condition. If the stopping condition is "stop as soon as any vertex is encountered by both the forward and backward scan", then bidirectional uniform-cost search is not a correct algorithm -- it is not guaranteed to output the optimal path. But it is possible to adjust the stopping condition to make bidirectional uniform-cost search guaranteed to output an optimal solution.

See the following resources for details, and the correct stopping condition:

Computing Point-to-Point Shortest Paths from External Memory. Andrew V. Goldberg, Renato F. Werneck. ALENEX/ANALCO 2005.

Point-to-point shortest path algorithms with preprocessing. Andrew V. Goldberg. International Conference on Current Trends in Theory and Practice of Computer Science, 2007.

Efficient Point-to-Point Shortest Path Algorithms. Andrew V. Goldberg, Chris Harrelson, Haim Kaplan, Renato F. Wemeck.

I found these resources by looking at the Wikipedia article on bidirectional search; it mentions that the termination condition has been articulated by Andrew Goldberg et al and cites the third reference above. Then a quick search on Google Scholar immediately turned up the other papers as well.

Lesson for the future: It can be useful to spend a little time checking standard resources (such as Wikipedia and textbooks), and checking the literature (e.g., with Google Scholar). Many natural questions have already been answered in the literature.

",1794,,1794,,11/11/2020 19:53,11/11/2020 19:53,,,,1,,,,CC BY-SA 4.0 24555,1,,,11/12/2020 1:24,,0,81,"

I've been trying to learn about CNN's and reinforcement learning and I found this project to play with: https://github.com/adityajn105/flappy-bird-deep-q-learning

I've been trying to change the code to work with RGB input instead of grayscale. The pre-processing part is fine, but I'm having a problem with state and next_state, I guess because they are deques: when 4 grayscale frames are appended, the shape is (4, H, W). The problem is that when I append RGB frames to the deque, the shape becomes something like (4, H, W, 3). I tried some things that came to mind and that I googled and read about online, but I still had problems with the dimensions. What should be done so that it works with RGB instead of grayscale?

",42233,,,,,11/12/2020 4:04,DQN rgb input channels problem using pytorch,,1,1,,,,CC BY-SA 4.0 24557,2,,24548,11/12/2020 3:45,,1,,"

The loss function is designed to approximate the Bellman optimality equation for $Q^*(s,a)$. Given an optimal policy $\pi^*$, $Q^*(s,a)$ satisfies the equation $$Q^*(s,a) = r(s) + \gamma \sum_{s'}P(s'|s,a)\max_{a'}Q^*(s',a')$$

At convergence, the highest $Q$ value that I can get by taking action $a$ in state $s$ is equal to the reward I get for taking action $a$ plus the (discounted, expected) maximum $Q$ value at the next state.

You can see in the loss function that the DQN tries to attain a $Q^*(s,a)$ value that closely approximates the equation above.

On a side note, in the model-based RL setting, the Bellman optimality equation for $Q^*$ is expressed as $$Q^*(s,a) = r(s) + \gamma \sum_{s'}P(s'|s,a)V^*(s')$$ $V^*(s')$ is used in the model-based case because $V^*(s')$ by definition represents the highest possible value attainable at state $s'$ when following $\pi^*$. In the model-free setting, $V^*$ is replaced by $Q^*$ because computing $V^*$ is not useful for achieving model-free control without a transition model $P(s'|s,a)$.

",32780,,,,,11/12/2020 3:45,,,,1,,,,CC BY-SA 4.0 24558,2,,24555,11/12/2020 4:04,,1,,"

You can rearrange it into shape (12, H, W) using NumPy (e.g. a transpose followed by a reshape). By the way, this will only increase the complexity of the problem. If you want to practice RL, then just take the idea from their code and try implementing it on some other problem/game.
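
For example, a minimal NumPy sketch (the frame size is arbitrary) that turns 4 RGB frames of shape (H, W, 3) into a single (12, H, W) array, moving the channel axis first so that the 3 channels of each frame stay together instead of doing a blind reshape:

import numpy as np

frames = [np.random.rand(84, 84, 3) for _ in range(4)]   # 4 RGB frames, shape (H, W, 3)
stacked = np.concatenate([np.transpose(f, (2, 0, 1)) for f in frames], axis=0)
print(stacked.shape)   # (12, 84, 84) -> use 12 input channels in the first conv layer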

",42234,,,,,11/12/2020 4:04,,,,0,,,,CC BY-SA 4.0 24560,2,,24551,11/12/2020 9:07,,3,,"

No, you can't extract any probability from a fuzzy membership grade. The uncertainty expressed by fuzzy logic is about partial truth, not about probability. $ \mu_S(x) = 0.9 $ doesn't mean that "$ x $ is tall" is true with a probability of 0.9, but that "$ x $ is tall" is 90% true (notice the difference in semantics). You have to think about fuzzy logic as an extension of logic (as its name implies), rather than an extension of probability.

It's true, however, that fuzzy logic is flexible and lets you define how the membership grades are combined in logic formulae, to the extent that you can replicate probability theory within the fuzzy logic framework. Wikipedia has a good overview on this: https://en.wikipedia.org/wiki/Fuzzy_logic#Comparison_to_probability.

However, please understand that, in general, fuzzy membership $ \neq $ probability. How we come up with the fuzzy membership grade is subjective and application-dependent. Conversely, probabilities have a well-defined and unambiguous interpretation. The point of being fuzzy is to replicate our reasoning process which, even if it is not necessarily formal and rigorous, is often very accurate. To do so, it needs a set of (admittedly arbitrary) rules on how to calculate the "truthfulness" of logical formulae. This may turn out to be very useful in applications where manipulating probabilities, or coming up with them in the first place, is not tractable.

",37359,,,,,11/12/2020 9:07,,,,0,,,,CC BY-SA 4.0 24561,1,,,11/12/2020 11:14,,1,56,"

I am following a course on machine learning and am confused about how the bias-variance trade-off relates to learning curves in classification.

I am seeing some conflicting information online on this.

The scikit-learn learning curve looks like the top 2 curves here:
(source: scikit-learn.org)

What I don't understand is: how do we read bias from this? If we look at this image, where each blue dot is a model, I think bias would correspond to the green curve being high. But high bias indicates underfitting, right? So shouldn't the red curve be high then too?

High variance would be the gap between green and red, is this correct?

My question is how do the red and green curves relate to underfitting and overfitting, and how do learning curves fit with the figure with the concentric circles? Is bias purely related to the red curve, or is a model with a low validation score and high train score also a high bias model?

",42236,,4709,,1/4/2023 17:14,1/4/2023 17:14,Bias-variance tradeoff and learning curves for non-deep learning models,,0,3,,,,CC BY-SA 4.0 24562,2,,14047,11/12/2020 12:07,,2,,"

There are lots of different approaches that try to avoid catastrophic forgetting in neural networks. It is impossible to summarize all contributions here.

However, in addition to the already mentioned techniques, there are sparsity approaches that try to disentangle the internal representations of the network across different tasks or learning steps. Sparsity usually helps, but the network has to learn to use it: imposing structural sparsity by construction is not enough. You can also leverage Bayesian approaches, through which you can associate a confidence measure with each of your weights and use this measure to mitigate forgetting. Finally, meta-learning can be employed to meta-learn a model which is robust to forgetting on different sequences of tasks.

What I can suggest, in addition, is to take a look at the ContinualAI wiki, which maintains a list of updated publications classified by the type of continual learning strategy and tagged with additional information. (Disclaimer: I am a member of the ContinualAI association.)

",42237,,2444,,12/12/2020 12:02,12/12/2020 12:02,,,,0,,,,CC BY-SA 4.0 24564,1,24566,,11/12/2020 15:37,,2,776,"

For a lot of VAE implementations I've seen in code, it's not really obvious to me how it equates to ELBO.

$$L(X)=H(Q)-H(Q:P(X,Z))=\sum_Z Q(Z)\log P(Z,X)-\sum_Z Q(Z)\log Q(Z)$$

The above is the definition of ELBO, where $X$ is some input, $Z$ is a latent variable, $H()$ is the entropy. $Q()$ is a distribution being used to approximate distribution $P()$, which in the above case both $P()$ and $Q()$ are discrete distributions, because of the sum.

A lot of the times when VAEs are built for reconstructing discrete data types, let's say for example an image, where each pixel can be black or white or $0$ or $1$. The main steps of a VAE that I've seen in code are as follows:

  1. $\text{Encoder}(Y) \rightarrow Z_u, Z_{\sigma}$
  2. $\text{Reparameterization Trick}(Z_\mu, Z_\sigma) \rightarrow Z$
  3. $\text{Decoder}(Z) \rightarrow \hat{Y}$
  4. $L(Y)= \text{CrossEntropy}(\hat{Y}, Y) - 0.5*(1+Z_{\sigma}-Z_{\mu}^2-exp(Z_\sigma))$

where

  • $Z$ represents the latent embedding of the auto-encoder
  • $Z_\mu$ and $Z_\sigma$ represent the mean and standard deviation for sampling for $Z$ from a Gaussian distribution.
  • $Y$ represents the binary image trying to be reconstructed
  • $\hat{Y}$ represents its reconstruction from the VAE.

As we can see from the ELBO, it's the entropy of the latent distribution being learned, $Q()$, which is a Gaussian, minus the cross-entropy between the latent distribution being learned, $Q()$, and the actual joint distribution $P(Z, X)$ over $Z$ and $X$.

The main points that confuse me are

  • how $\text{CrossEntropy}(\hat{Y}, Y)$ equates to the CE of the distribution for generating latents and its Gaussian approximation, and
  • how $(0.5*(1+Z_{\sigma}-Z_{\mu}^2-exp(Z_\sigma)))$ equates to the entropy

Is it just assumed that the CE of $Y$ with $\hat{Y}$ also leads to the CE of the latent distribution with its approximation, because they're part of $\hat{Y}$'s generation? It still seems a bit off, because you're getting the cross-entropy of $Y$ with its reconstruction, not of the Gaussian distribution used for learning the latents $Z$.

Note: $Z_\sigma$ is usually not passed through a softplus to be made strictly positive, as a Gaussian standard deviation requires, so I think that's what $\exp(Z_\sigma)$ is for.

",30885,,2444,,6/5/2022 9:08,6/5/2022 9:10,How does the implementation of the VAE's objective function equate to ELBO?,,1,0,,,,CC BY-SA 4.0 24565,1,24692,,11/12/2020 16:34,,1,526,"

I have built my own RL environment, where a state is composed of two elements: the agent's position and a matrix of 0s and 1s (1 if a user has requested a service from the agent, 0 otherwise); an action is composed of 3 elements: the movement the agent chooses (up, down, left or right), a matrix of 0s and 1s (1 if a resource has been allocated to a user, 0 otherwise), and a vector representing the allocation of another type of resource (the vector contains the values allocated to the users).

I am currently trying to build a Deep Q-Learning agent. However, I am a bit confused as to what model (e.g. Sequential), what type of layers (e.g. Dense layers), how many layers, and what activation function I should use, and what the state and action sizes are (taking this code as a reference: cartpole dqn agent).

I also do not know what my inputs and outputs should be.

The examples I have come across are rather simple and I don't know how to approach setting it all up for my agent.

",42372,,42372,,11/12/2020 17:55,11/18/2020 11:56,DQN layers when state space and action space are multi dimensional,,1,3,,,,CC BY-SA 4.0 24566,2,,24564,11/12/2020 22:15,,3,,"

I don't want to think about the correctness of your supposed ELBO equation now. Nevertheless, it's true that the ELBO can be rewritten in different ways (e.g. if you expand the KL divergence below, by applying its definition, you will end up with a different but equivalent version of the ELBO). I will use the most common (and definitely most intuitive, at least to me) one, which you can find in the paper that originally introduced the VAE, which you should use as a reference, when you are confused (although that paper may require at least 2 readings before you fully understand it).

Here's the most common form of the ELBO (for the VAE), which immediately explains the common VAE implementations (including yours):

$$\mathcal{L}\left(\boldsymbol{\theta}, \boldsymbol{\phi} ; \mathbf{x}^{(i)}\right)= \underbrace{ \color{red}{ -D_{K L}\left(q_{\boldsymbol{\phi}}\left(\mathbf{z} \mid \mathbf{x}^{(i)}\right) \| p_{\boldsymbol{\theta}}(\mathbf{z})\right) } }_{\text{KL divergence}} + \underbrace{ \color{green}{ \mathbb{E}_{q_{\boldsymbol{\phi}}\left(\mathbf{z} \mid \mathbf{x}^{(i)}\right)}\left[\color{blue}{ \log p_{\boldsymbol{\theta}}\left(\mathbf{x}^{(i)} \mid \mathbf{z}\right)}\right] } }_{\text{Expected log-likelihood}}, $$

where

  • $p_{\boldsymbol{\theta}}(\mathbf{z})$ is the prior over $\mathbf{z}$ (i.e. the latent variable); in practice, the prior is not actually parametrized by $\boldsymbol{\theta}$ (i.e. it's just a Gaussian with mean zero and variance one, or whatever, depending on your assumptions about $\mathbf{z}$!), but in the paper they assume that $\mathbf{z}$ depends on $\boldsymbol{\theta}$ (see figure 1).

  • $q_{\boldsymbol{\phi}}\left(\mathbf{z} \mid \mathbf{x}^{(i)}\right)$ is the encoder parametrized by $\boldsymbol{\phi}$

  • $p_{\boldsymbol{\theta}}$ is the decoder, parametrized by $\boldsymbol{\theta}$

If you assume that $p_{\boldsymbol{\theta}}(\mathbf{z})$ and $q_{\boldsymbol{\phi}}\left(\mathbf{z} \mid \mathbf{x}^{(i)}\right) $ are Gaussian distributions, then it turns out that the KL divergence has an analytical form, which is also derived in appendix B of the VAE paper (here is a detailed derivation)

$$ \color{red}{ -D_{K L}\left(q_{\boldsymbol{\phi}}\left(\mathbf{z} \mid \mathbf{x}^{(i)}\right) \| p_{\boldsymbol{\theta}}(\mathbf{z})\right)} = \color{red}{ \frac{1}{2} \sum_{j=1}^{J}\left(1+\log \left(\left(\sigma_{j}\right)^{2}\right)-\left(\mu_{j}\right)^{2}-\left(\sigma_{j}\right)^{2}\right) } $$

Hence this implementation, which should be equivalent to your term $0.5*(1+Z_{\sigma}-Z_{\mu}^2-exp(Z_\sigma))$
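
For reference, here is a minimal NumPy sketch of that closed-form term (assuming, as in most implementations, that the encoder outputs the mean and the log-variance of the diagonal Gaussian):

import numpy as np

def neg_kl_term(z_mean, z_log_var):
    # -D_KL(q(z|x) || N(0, I)) for a diagonal Gaussian encoder, summed over latent dims
    return 0.5 * np.sum(1.0 + z_log_var - z_mean**2 - np.exp(z_log_var), axis=-1)

print(neg_kl_term(np.zeros(4), np.zeros(4)))  # 0.0 when q(z|x) is exactly N(0, I)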

The expected log-likelihood is actually an expectation, so you cannot compute it exactly, in general. Hence, you can approximate it with Monte Carlo sampling (aka sampling averages: remember the law of large numbers?). More concretely, if you assume that you have a Bernoulli likelihood, i.e. $p_{\boldsymbol{\theta}}\left(\mathbf{x}^{(i)} \mid \mathbf{z}\right)$ is a Bernoulli, then its definition is (again from the VAE paper, Appendix C.1)

$$ \color{blue}{ \log p(\mathbf{x} \mid \mathbf{z})}= \color{blue}{ \sum_{i=1}^{D} x_{i} \log y_{i}+\left(1-x_{i}\right) \cdot \log \left(1-y_{i}\right) }\tag{1}\label{1}, $$ where $\mathbf{y}$ is the output of the decoder (i.e. the reconstruction/generation of the original input).

This formula should be very familiar to you if you are familiar with the cross-entropy. In fact, minimizing the cross-entropy is equivalent to maximizing the log-likelihood (this may still be confusing because of the flipped signs in the ELBO above, but just remember that maximizing a function is equivalent to minimizing its negative!). Hence this loss.
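
As a minimal sketch (an illustration with made-up numbers, not taken from the paper), the term inside the expectation can be computed like this, where x is the binary input and y is the decoder's output probabilities (clipping is added only to avoid log(0)):

import numpy as np

def bernoulli_log_likelihood(x, y, eps=1e-7):
    y = np.clip(y, eps, 1.0 - eps)
    return np.sum(x * np.log(y) + (1.0 - x) * np.log(1.0 - y))

x = np.array([0.0, 1.0, 1.0, 0.0])   # binary input (e.g. pixels)
y = np.array([0.1, 0.9, 0.8, 0.2])   # reconstruction probabilities
print(bernoulli_log_likelihood(x, y))  # maximising this = minimising the binary cross-entropy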

To answer/address some of your questions/doubts directly

how $(0.5*(1+Z_{\sigma}-Z_{\mu}^2-exp(Z_\sigma)))$ equates to the entropy?

I answered this above. That's just the analytical expression for the KL divergence (by the way, the KL divergence is also known as relative entropy). See this answer for a derivation.

It still seems a bit off because you're getting the cross entropy of $Y$ with it's reconstruction, not the Gaussian distribution for learning latents $Z$.

As you can see from the definition of the ELBO above (from the VAE paper), the expected log-likelihood is, as the name suggests, an expectation, with respect to the encoder (i.e. the Gaussian, in case you choose a Gaussian). However, the equivalence is between the log-likelihood (which is the term inside the expectation) and the cross-entropy, i.e. once you have sampled from the encoder, you just need to compute the term inside the expectation (i.e. the cross-entropy). Your term $\text{CrossEntropy}(\hat{Y}, Y)$ represents the CE but after you have sampled a latent variable from the encoder (or Gaussian), otherwise, you could not have obtained the reconstruction $\hat{Y}$ (i.e. the reconstruction depends on this latent variable, see figure 1).

In equation \ref{1}, note that there is no expectation. In fact, in the implementations, you may just sample once from the encoder, and then immediately compute the ELBO. I have seen this also in the implementations of Bayesian neural networks (basically, normal neural networks with the same principles of the VAE). However, in principle, you could sample multiple times from the Gaussian encoder, compute \ref{1} multiple times, then average it, to compute a better approximation of the expected log-likelihood.
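
For concreteness, here is a sketch of that single-sample estimate (made-up encoder outputs; the reparameterisation trick $z = \mu + \sigma \epsilon$ with $\epsilon \sim \mathcal{N}(0, I)$ is assumed):

import numpy as np

rng = np.random.default_rng(0)
z_mean, z_log_var = np.zeros(4), np.zeros(4)     # encoder outputs for one input
eps = rng.standard_normal(4)
z = z_mean + np.exp(0.5 * z_log_var) * eps       # one sample z ~ q(z|x)
# Decoding this z and evaluating equation (1) once gives a 1-sample Monte Carlo
# estimate of the expected log-likelihood; averaging over several draws of eps
# would give a lower-variance estimate.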

Hopefully, some of the information in this answer is clear. Honestly, I don't have much time to write a better answer now (maybe I will come back later). In any case, I think you can find all answers to your questions/doubts in the VAE paper (although, as I said, you may need to read it at least twice to understand it).

By the way, the simplest/cleanest implementation of the VAE that I have found so far is this one.

",2444,,2444,,6/5/2022 9:10,6/5/2022 9:10,,,,2,,,,CC BY-SA 4.0 24568,2,,24550,11/13/2020 9:18,,1,,"

Your statement that we should be expecting some similarity in graphical patterns is not correct. The GAN loss takes the following form:

$$ \min_G \max_D V(D,G) = \mathbb{E}_{x \sim p_{data}(x)} [\log D(x)] + \mathbb{E}_{z \sim p_z(z)} [\log (1 - D(G(z)))] $$

We want to maximize this loss w.r.t. D in order to distinguish between real and fake samples ($D(x)\rightarrow 1$ and $D(G(z))\rightarrow 0$), whereas the task of G is exactly the opposite. We want to minimize the function w.r.t. G so that the difference between real and generated data will be minimal. Thus, the problem becomes a minimax non-cooperative game. Here is a good explanation of the loss.

Training GANs requires finding a Nash equilibrium of a non-convex game with continuous, high-dimensional parameters. GANs are typically trained using gradient descent techniques that are designed to find a low value of a cost function, rather than to find the Nash equilibrium of a game. The two models are trained simultaneously, and each model updates its cost independently, with no regard for the other player in the game.

In other words, as soon as $D$ becomes better, $G$ must also become better, and each player's update changes the objective the other player faces; updating the gradients of both models concurrently therefore cannot guarantee convergence. For more information, see Improved Techniques for Training GANs and Wasserstein GAN.

Moreover, since $D$ and $G$ play a non-cooperative game, it can be shown that there are cases in which simultaneous gradient updates fail to find a Nash equilibrium at all, as the sketch below illustrates.
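
A minimal sketch, assuming the standard bilinear example $V(x, y) = xy$ that is commonly used to show this failure mode: simultaneous gradient updates orbit/spiral away from the equilibrium at $(0, 0)$ instead of converging to it.

import numpy as np

# Simultaneous gradient updates on the bilinear game V(x, y) = x * y.
# x plays the minimiser (like G), y plays the maximiser (like D).
x, y, lr = 0.5, 0.5, 0.1
for t in range(200):
    gx, gy = y, x                       # dV/dx = y, dV/dy = x
    x, y = x - lr * gx, y + lr * gy     # both players update at the same time
print(x, y)                             # the iterates have spiralled away from (0, 0)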

",12841,,12841,,11/13/2020 10:27,11/13/2020 10:27,,,,0,,,,CC BY-SA 4.0 24569,1,,,11/13/2020 11:49,,1,242,"

I tried different values of genetic algorithm operators:

  • many crossover rates from 20% to 80%
  • many mutation rates from 1% to 20%
  • varying the population size

The study of different parameter values is called quantitative parameter tuning or sensitivity analysis. What is the difference between the two terms?

",41713,,2444,,11/13/2020 12:22,11/13/2020 14:01,What is the difference between sensitivity analysis and parameter tuning?,,1,0,,,,CC BY-SA 4.0 24570,2,,24569,11/13/2020 14:01,,1,,"

I would generally assume that parameter tuning is the process of finding the combination of hyperparameters (e.g., population size, crossover and mutation operators and rates, etc.) that yield the best performance on your problem. When you're thinking about the way that performance varies with parameter choice, this is the "what". What is the best choice of parameters.

Sensitivity analysis is the "how" or "how much". If I change my crossover rate from 0.9 to 1.0, how significant is the change in the performance of my algorithm? Is my performance more or less stable across a wide range of choices (good), or is it highly dependent on finding this one little peak in the parameter space, with every other choice being much worse (bad)?

",3365,,,,,11/13/2020 14:01,,,,1,,,,CC BY-SA 4.0 24571,1,,,11/13/2020 19:31,,1,24,"

What is the consensus regarding NN "capacity" or expressive power? I remember reading somewhere that expressive power grows exponentially with depth, but I cannot seem to find that exact paper.

If I have some dataset and some neural network, is there some heuristic, theorem, or result that may help me to decide whether this particular neural network I've chosen has enough expressive power (or capacity) to learn that data?

Something similar to VC-dimension and VC-inequality, but regarding more complex models such as NNs.

I suspect that there is no simple answer, but, generally, what would be the answer to this question?

Overfitting on some subset might be a start, but that doesn't really tell me how the model behaves when there's more data; it only tells me that it's not fundamentally broken and can learn something.

I know it's a complex matter, but I'll be grateful for any help, be it practical advice or references, papers, etc. Of course, I googled some papers, but if you have something particularly interesting, please share.

",40977,,2444,,11/13/2020 20:30,11/13/2020 20:30,"Given a dataset and a neural network, is there some heuristic or theorem to determine whether this neural network has enough capacity?",,0,1,,11/14/2020 1:32,,CC BY-SA 4.0 24572,1,,,11/13/2020 20:39,,1,40,"

I started studying ML just a short while ago, so my questions will be very elementary. That being so, if they are not welcome, just tell me and I'll stop asking them.

I gave myself a homework project which is to make an ML algorithm be able to learn that, if the last digit of a number $n$ is $0, 2, 4, 5, 6$ or $8$, then it cannot be a prime, provided $n > 5$. Note that, if a number $n$ ends with $0, 2, 4, 6, 8$, then it is even, so it is divisible by $2$, hence not prime. Similarly, numbers ending in $5$ are divisible by $5$, so they cannot be prime.

Which ML approach should I choose to solve this problem? I know that I don't need ML to solve this problem, but I am just trying to understand which ML approach I could use to solve this problem.

So far, I have only learned about two ML approaches, namely linear regression (LR) and $k$-nearest neighbours (KNN), but they both seem inappropriate in this case, since LR seems to be a good choice for finding numerical relations between input and output data, KNN seems to be good at finding clusters, and "primality" has neither of these characteristics.

",42266,,2444,,11/14/2020 18:42,11/14/2020 18:42,"Which ML approach could determine that a number greater than 5 is not prime, knowing that a number is not prime if it ends with an even digit or 5?",,0,6,,,,CC BY-SA 4.0 24574,1,,,11/14/2020 4:35,,3,166,"

From the MuZero paper (Appendix E, page 13):

In chess, 8 planes are used to encode the action. The first one-hot plane encodes which position the piece was moved from. The next two planes encode which position the piece was moved to: a one-hot plane to encode the target position, if on the board, and a second binary plane to indicate whether the target was valid (on the board) or not. This is necessary because for simplicity our policy action space enumerates a superset of all possible actions, not all of which are legal, and we use the same action space for policy prediction and to encode the dynamics function input. The remaining five binary planes are used to indicate the type of promotion, if any (queen, knight, bishop, rook, none).

Is the second binary plane all zeros or all ones? Or, something else? How is it known if the move is off the board? For my game, I know if it is a legal move on the board, but do not know if the move is off the board.

",42271,,2444,,11/14/2020 15:23,12/16/2020 3:04,How is MuZero's second binary plane for chess defined?,,1,0,,,,CC BY-SA 4.0 24575,1,,,11/14/2020 5:06,,0,436,"

For example, if we set the random seed to be 0, will we run into any problems? Maybe, for seed 0, we can only reach a certain training error, while other seeds would converge to a much lower error.

I'm specifically concerned about supervised learning on point cloud data, but curious about whether it matters in general whenever you use a neural network.

",21158,,2444,,11/14/2020 14:05,11/14/2020 17:21,Are there any downsides of using a fixed seed for a neural network's weight initialization?,,1,1,,,,CC BY-SA 4.0 24576,1,24630,,11/14/2020 5:57,,1,193,"

It seems to me that average pooling can be replaced by a strided convolution with a constant kernel. For instance, a 3x3 pooling would be equivalent to a strided convolution (of stride $3$) with a $3 \times 3$ matrix of constants, with each entry being $\frac{1}{9}$.

However, I haven't found any mention of this fact online (perhaps it's too trivial of an observation)? Why then are explicit pooling layers needed if they can be realized by convolutions?

",23416,,2444,,11/16/2020 0:02,11/16/2020 0:16,Is average pooling equivalent to a strided convolution with a specific constant kernel?,,1,0,,,,CC BY-SA 4.0 24579,1,24592,,11/14/2020 10:52,,1,177,"

In AlphaGo, the authors initialised a policy gradient network with weights trained from imitation learning. I believe this gives it a very good starting policy for the policy gradient network. The imitation network was trained on labelled data of (state, expert action) pairs to output a softmax policy denoting the probability of actions for each state.

In my case, I would like to use the weights learnt from the imitation network as the initial starting weights for a DQN. Initial rewards from running the policy are high, but start to decrease (for a while) as the DQN trains, and later increase again.

This could suggest that initialising the weights from the imitation network has little effect, since training kind of undoes that initialisation.

",32780,,32780,,11/14/2020 14:46,11/14/2020 17:53,Initialising DQN with weights from imitation learning rather than policy gradient network,,1,9,,,,CC BY-SA 4.0 24580,1,,,11/14/2020 11:27,,0,32,"

Consider an environment where there are 2 outcomes (e.g. dead and alive) and a discrete set of actions. For example, a game where the agent has 2 guns $A$ and $B$ to shoot a monster (the monster dies only if the correct gun is used).

Let's say we store the experience $e_1 = (s,a_1, r_1, s'_1)$ in the replay buffer $D$, where

  • $s$: we have the monster to kill
  • $a_1$: choose and use gun $A$
  • $r_1$: $-1$ (the wrong gun)
  • $s'_1$: monster is alive

But we also save the alternative situation $e_2 = (s, a_2, r_2, s'_2)$ in the replay buffer $D$, where

  • $s$: the same state (we have the monster)
  • $a_2$: choose and use gun $B$
  • $r_2$: $(-1) * r = 1$
  • $s'_2$: the monster is dead

I can't find a topic about this technique, or don't know what to look for.

",41979,,2444,,11/14/2020 14:38,11/14/2020 14:38,What are the implications of storing the alternative situation (that could have been experienced) in the replay buffer?,,0,6,,,,CC BY-SA 4.0 24582,2,,24437,11/14/2020 12:03,,2,,"

To verify the accuracy of the classification stage, you will need labeled images with a single car.

To train and verify accuracy of the detection stage and full system, you can:

  1. In the datasets with images of multiple cars, manually mark the image rectangles that each contain one car.
  2. From the previous step, split each image into one or more crops, each containing a single car.
  3. Pass each of the previous single-car crops to the classification stage (that means assuming the classification stage has 100% accuracy). Record its outputs (labeled cars).
  4. Now, from the outputs of steps 1) and 3), you can produce labeled images with multiple cars. Use them to train the detector and verify the full-system accuracy.
",12630,,,,,11/14/2020 12:03,,,,2,,,,CC BY-SA 4.0 24583,2,,18648,11/14/2020 12:37,,0,,"

I interpret this as follows; I will use the uppercase matrix notation $\mathbf{K_{*}}$, etc.

The covariance matrix $\mathbf{K_{xx}}$ summarizes everything we know about the input feature space. I think of it as a unique signature that describes the data we have in $\mathbb{R}^{d_x}$. Along with the example training data, we have the labels $\mathbf{y}$, which give us a concrete definition of what the prediction values for $\mathbf{K_{xx}}$ should be.

We can then ask the question "What can we multiply the unique signature $\mathbf{K_{xx}}$ by in order to get the training outputs?" This takes the familiar form of $\mathbf{A}\mathbf{x} = \mathbf{b}$...

$$ \begin{aligned} \mathbf{K}_{xx}\mathbf{z} &= \mathbf{y} \\ \mathbf{z} &= \mathbf{K}_{xx}^{-1}\mathbf{y} \end{aligned} $$

$\mathbf{z}$ then represents the vector that the covariance matrix transforms into the outputs. Then all we have to do is multiply the new covariance of the train/test points by that same vector to get the predictions...

$$ \mathbf{\hat{y}} = \mathbf{K_{*x}}\mathbf{K_{xx}^{-1}}\mathbf{y} $$
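
To make this concrete, here is a minimal NumPy sketch of that mean prediction (the RBF kernel, made-up 1D data, and the small jitter term are my own assumptions):

import numpy as np

def rbf(a, b):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)

x = np.array([-2.0, 0.0, 1.5])            # training inputs
y = np.sin(x)                             # training targets
x_star = np.linspace(-3.0, 3.0, 5)        # test inputs

K_xx = rbf(x, x) + 1e-8 * np.eye(len(x))  # jitter for numerical stability
K_sx = rbf(x_star, x)

z = np.linalg.solve(K_xx, y)              # the vector that K_xx transforms into y
y_hat = K_sx @ z                          # mean prediction K_*x K_xx^{-1} y
print(y_hat)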

I should add that I am relatively certain about this, but I am still learning this stuff myself, so I am not 100% sure.

",42278,,,,,11/14/2020 12:37,,,,0,,,,CC BY-SA 4.0 24584,2,,3817,11/14/2020 12:44,,0,,"

Yes, you are trying to predict a real number output, so this is a regression problem. To know what kind of algorithm would be best I think you have to ask how much data you have and what you know already about the relationships of the numbers. If you try simple linear regression, what kind of error will you get?

If you were to try linear regression and you get an error that is acceptable, then it may be a very simple problem. Beyond linear regression you can look to more advanced things such as Gaussian processes and neural networks which will all make the kinds of predictions you are seeking.

",42278,,,,,11/14/2020 12:44,,,,0,,,,CC BY-SA 4.0 24585,1,,,11/14/2020 12:59,,2,29,"

I answered another question here about the mean prediction of a GP, but I have a hard time coming up with an intuitive explanation of the variance prediction of a GP.

The specific equation that I am speaking of is equation 2.26 in the Gaussian process book...

$$ \mathbb{V}[f_*] = k(x_*, x_*) - k_{x*}^TK_{xx}^{-1}k_{x*} $$

I have a number of questions about this...

  1. If $k(x_*, x_*)$ is the result of the kernel function with a single point $x_*$, then won't this value always be 1 (assuming an RBF kernel), since the kernel will give 1 for the covariance of a point with itself ($k(x, x) =\exp\{-\frac{1}{2}|| x - x ||^2\} = 1$)?

  2. If the kernel value $k(x_*, x_*)$ is indeed one for any single arbitrary point, then how can I interpret the last multiplication on the RHS? $K_{xx}^{-1}k_{x*}$ is the solution to $Ax = b$, which is the vector which $K_{xx}$ projects into $k_{x*}$, but then my intuition breaks down and I cannot explain anymore.

  3. If the kernel value $k(x_*, x_*)$ is indeed one for any single arbitrary point, then can we view the whole term as the prior variance being reduced by some sort of similarity between the test point and the training points?

  4. Is it ever possible for this variance to be greater than 1? Or is the prior variance of 1 seen as the maximum, which can only be reduced by observing more data?

",42278,,,,,11/14/2020 12:59,How to interpret the variance calculation in a Guassian process,,0,0,,,,CC BY-SA 4.0 24589,2,,24437,11/14/2020 16:39,,1,,"

The Problem

We can see from the question that existing information on detection and classification in the small automotive vehicle domain has been located (in the form of two independent sets of vectors usable for machine training), and there is no already existing mapping or other correspondence between the elements of one set and the elements of the other. They were obtained independently, remain independent, and are linked only by the conventions of the domain (today's aesthetically acceptable and thermodynamically workable forms of small vehicles).

The goal stated in the question is to create a computer vision system that both detects cars and classifies them leveraging the information contained in the two distinct sets.

In the vision systems of mammals, there are also two distinct equivalences of sets; one arising from a genetic algorithm, the DNA that is expressed during the formation of the neural net geometry and bio-electro-chemistry of the visual system in early development; and the cognitive and coordinative pathways in the cerebrum and cerebellum.

If a robot, wheelchair, or other vehicle is to avoid traffic, we must produce a system that in some way matches or exceeds the collision avoidance performance of mammals. In crime prevention, toll collection, sales lot inventory, county traffic analysis, and other like applications, performance will again be expected to match or exceed the performance of biological systems. If a person can record the make, model, year, color, and license plate strings, so should the machine we employ in these capacities.

Consequently, this question is pertinent beyond academic curiosity, as it is applicable in current research and development of products.

That the question's author notices the lack of a unified data set that can be used to train a single network to both detect and characterize the objects of interest is apropos and key to the challenge of finding a solution.

Approach

The simplest approach would be to compose the system of two functions.

  1. $\quad\mathcal{D}: \mathbb{I}^4 \to {(\mathbb{I}^2, \mathbb{I}^2)}_1, \; {(\mathbb{I}^2, \mathbb{I}^2)}_2, \; ... $
  2. $\quad\mathcal{C}: {(\mathbb{I}^2, \mathbb{I}^2)}_i \to {(\mathbb{I})}_i$

The four dimensions of input for $\mathcal{D}$, the detector, are horizontal position, vertical position, rgb index, and brightness to describe the pixelized image; and the outputs are bounding boxes, given as two "corner" coordinates corresponding to each identified vehicle, the second coordinate being either relative to the first or to a specific corner of the entire frame. The categorizer, $\mathcal{C}$, receives as input bounding boxes and produces as output the index or code that maps to the categories corresponding to the labels of the training set available for categorization. The system can then be described as follows.

$\quad\quad\mathcal{S}: \mathcal{C} \circ \mathcal{D}$

If the system is not color, subtract one from the above dimensionality of the input. If the system processes video, add one to the dimensionality of the input and consider using LSTM or GRU cell types.

The above substitution represented by "$\circ$" appears to be what is meant by, "I use the images from the detection dataset as input and get classification predictions on top of detected bounding boxes."
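
As a rough sketch only (detect_cars and classify_car are hypothetical stand-ins for the trained networks realizing $\mathcal{D}$ and $\mathcal{C}$, not part of the original question), the composition could be wired up as follows:

from typing import List, Tuple

BBox = Tuple[int, int, int, int]   # x1, y1, x2, y2

def detect_cars(image) -> List[BBox]:
    # Hypothetical detector D: returns one bounding box per car in the frame
    raise NotImplementedError

def classify_car(crop) -> int:
    # Hypothetical classifier C: returns a category index for a single-car crop
    raise NotImplementedError

def system(image) -> List[Tuple[BBox, int]]:
    # S = C composed with D: detect, crop, then classify each detection
    results = []
    for (x1, y1, x2, y2) in detect_cars(image):
        crop = image[y1:y2, x1:x2]              # assumes a NumPy-like image array
        results.append(((x1, y1, x2, y2), classify_car(crop)))
    return results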

The interrogative, "How do I verify whether the classification model trained on the classification dataset is working on images from detection dataset? (In terms of classification accuracy)," appears to refer to the fact that labels do not exist for the second set that correspond to input elements of the first set, so an accuracy metric cannot be directly obtained. Since there is no obvious automatic way of generating labels for the vehicles in the pre-detected images containing potentially multiple vehicles, there is no way to check actual results against expected results. Composing multiple vehicle images from the categorization set to use as test input to the entire system $\mathcal{S}$ will only be useful in evaluating an aspect of the performance of $\mathcal{D}$, not $\mathcal{C}$.

Solution

The only way to evaluate the accuracy and reliability of $\mathcal{C}$ is with portions of the set used to train it that were excluded from the training and trust that the vehicles depicted in those images were sufficiently representative of the concept "car" to provide consistency of accuracy and reliability across the range of those detected by $\mathcal{D}$ in the application of $\mathcal{S}$. This means that the leveraging of the information, even if optimized to the degree possible by any arbitrary algorithm or parallelism in the set of all possible algorithms or parallelisms, is limited by the categorization training set. The number of set elements and the comprehensiveness and distribution of categories within that set must be sufficient to achieve an approximate equality between these two accuracy metrics.

  1. Categorizing a test sample from the labeled set for $\mathcal{C}$ excluded from the training
  2. Categorizing the vehicles isolated by $\mathcal{D}$ from its training input

With Additional Resources

Of course this discussion is in a particular environment, that of the system defined as the two artificial networks, one involving convolution based recognition and the other involving feature extraction, and the two training sets. What is needed is a wider environment where known vehicles are in view so that performance data of $\mathcal{S}$ is evaluated and a tap on the transfer of information between $\mathcal{D}$ and $\mathcal{C}$ can be used to differentiate between mistakes made on either side of the tap point.

Unsupervised Approach

Another course of action could be to not use the categorization training set for the training of $\mathcal{C}$ at all, but rather use feature extraction and auto-correlation in an "unsupervised" approach, and then evaluate the results on the basis of the final convergence metrics at the point when stability in categorization is detected. In this case, the images in the bounding boxes output by $\mathcal{D}$ would be used as training data.

The auto-trained network realizing $\mathcal{C}$ can then be further evaluated using the entire categorization training set.

Further Research

Hybrids of these two approaches are possible. Also, the independent training only in the rarest of cases leads to optimal performance. Understanding feedback as originally treated with rigor by MacColl in chapter 8 of his Fundamental Theory of Servomechanisms, later applied to the problem of linearity and stability of analog circuitry, and then to training, first in the case of GANs, may lead to effective methods to bi-train the two networks.

That evolved biological networks are trained in situ is an indicator that the most optimal performance can be gained by finding training architectures and information flow strategies that create optimality in both components simultaneously. No biological niche has ever been filled by a neural component that is first optimized and then inserted or copied in some way to a larger brain system. That is no proof that such component-ware can be optimal, but there is also no proof that the DNA driven systems that have emerged are not nearly optimized for the majority of terrestrial conditions.

",4302,,4302,,11/16/2020 22:19,11/16/2020 22:19,,,,1,,,,CC BY-SA 4.0 24590,2,,17971,11/14/2020 16:50,,1,,"

I was struggling with the same question. This is what I came up with after thinking it through.

With depth-first-search, you backtrack to a node that is a non-expanded child of your parent (or of the parent of the parent when your parent has no more non-expanded children, and so on going up the tree). So the space complexity is limited by your ancestors and the children of these ancestors, which translates into m*b, where m is the max path length (so the max number of ancestors) and b is the branching factor (the number of children per ancestor).

With greedy search, when you backtrack, you can jump to any evaluated but unexpanded node that you passed while going down paths earlier. So the algorithm, when backtracking, can make pretty random jumps throughout the tree, leaving lots of sibling nodes unexpanded. You will have to remember the value of the evaluation function for all these unexpanded nodes, though, because they could possibly be next up when backtracking occurs. So, in a theoretical very-worst-case scenario, that could mean that almost the whole tree needs to be remembered. Hence O(b^m).

I know there are still gaps in the reasoning but intuitively this makes me understand it best.

",42281,,,,,11/14/2020 16:50,,,,0,,,,CC BY-SA 4.0 24591,2,,24575,11/14/2020 17:21,,1,,"

When you use a particular seed, it actually ceases to be a random initialization and is instead fixed. I believe the only reason to actually do this would be for reliable reproduction in research, and not as a method of training production models.

",42278,,,,,11/14/2020 17:21,,,,4,,,,CC BY-SA 4.0 24592,2,,24579,11/14/2020 17:53,,1,,"

My understanding is that you are first training a policy network using imitation learning. Then you are adjusting that trained network in some way to be a value network for DQN. The most obvious change would be to remove softmax activation whilst keeping the network layer sizes identical. This would then present Q estimates for all actions from any given state.

The initial estimates would not be trained Q values though; they would be the "preferences" or the logits for probabilities that support a near-optimal action choice. The most likely property of the new network is that, for the one near-optimal action choice, it would predict the highest action value. As you derive the target policy by taking the maximising action, this initially looks good. However, the problem is that the Q values that this network predicts can have little to no relation to the real expected returns experienced by the agent under the target policy.

Initial rewards from running the policy are high but start to decrease (for a while) as the DQN trains and later increases again.

I think what is happening is that initially the greedy policy derived from your Q network is very similar to the policy learned during imitation learning. However, the value estimates are very wrong. This leads to large error values, large corrections needed, and radical changes to network weights throughout in order to change the network from an approximate policy function to an approximate action value function. The loss of performance occurs because there is not a smooth transition between the two very different functions that also maintains correct maximising actions.

I don't think this can be completely fixed. However you might get some insight into potential work-arounds by considering that you are not just doing imitation learning here. Instead you are performing both imitation learning (to copy a near optimal policy) and transfer learning (to re-use network weights on a related task).

Approaches that help with transfer learning may also help here. For instance, you could freeze the layers closer to input features, or reduce the learning rate for those layers. You do either of these things on the assumption that the low-level derived features (in the hidden layers) that the first network has learned are still useful for the new task.
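
In Keras terms, a rough sketch of the first idea might look like this (the toy network below is a stand-in for the network initialised from the imitation-learning weights, and how many layers to freeze is an assumption):

from tensorflow import keras

# Toy stand-in for the network initialised from imitation-learning weights
model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(8,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(4),                     # one Q value per action
])

# Freeze the layers closest to the input; only the Q head keeps adapting
for layer in model.layers[:-1]:
    layer.trainable = False
model.compile(optimizer=keras.optimizers.Adam(1e-4), loss="mse")  # MSE on TD targets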

",1847,,,,,11/14/2020 17:53,,,,3,,,,CC BY-SA 4.0 24593,1,24607,,11/14/2020 18:23,,0,586,"

I am trying to create a DQN agent where I have 2 inputs: the agent's position and a matrix of 0s and 1s. The output is composed of the agent's new chosen position, a matrix of 0s and 1s (different from the input matrix), and a vector of values.

The first input is fed to an MLP network, the second input (the matrix) is fed to a convolutional layer, and then their outputs are fed to an FC network, or at least that's the idea.

This is my attempt so far, having this tutorial as a reference.

Here is the code:

First, create the MLP network

def create_mlp(self, arr, regress=False): # for the position input
        # define MLP network
        print("Array", arr)
        model = Sequential()
        model.add(Dense(env.rows * env.cols, input_shape=(len(arr)//2, len(arr)), activation="relu"))
        model.add(Dense((env.rows * env.cols)//2, activation="relu"))
        
        # check to see if the regression node should be added
        if regress:
            model.add(Dense(1, activation="linear"))
            
        # return our model
        return model

Then, the CNN

def create_cnn(self, width, height, depth=1, regress=False): # for the matrix
        # initialize the input shape and channel dimension
        inputShape = (height, width, depth)
        output_nodes = 6e2
        
        # define the model input
        inputs = Input(shape=inputShape)

        # if this is the first CONV layer then set the input
        # appropriately
        x = inputs
        
        input_layer = Input(shape=(width, height, depth))
        conv1 = Conv2D(100, 3, padding="same", activation="relu", input_shape=inputShape) (input_layer)
        pool1 = MaxPooling2D(pool_size=(2,2), padding="same")(conv1)
        flat = Flatten()(pool1)
        hidden1 = Dense(200, activation='softmax')(flat) #relu

        batchnorm1 = BatchNormalization()(hidden1) 
        output_layer = Dense(output_nodes, activation="softmax")(batchnorm1) 
        output_layer2 = Dense(output_nodes, activation="relu")(output_layer) 
        output_reshape = Reshape((int(output_nodes), 1))(output_layer2)
        model = Model(inputs=input_layer, outputs=output_reshape)

        # return the CNN
        return model

Then, concatenate the two

def _build_model(self):
        # create the MLP and CNN models
        mlp = self.create_mlp(env.stateSpacePos)
        cnn = self.create_cnn(3, len(env.UEs))
        
        # create the input to our final set of layers as the *output* of both
        # the MLP and CNN
        combinedInput = concatenate([mlp.output, cnn.output])
        
        # our final FC layer head will have two dense layers, the final one
        # being our regression head
        x = Dense(len(env.stateSpacePos), activation="relu")(combinedInput)
        x = Dense(1, activation="linear")(x)
        
        # our final model will accept categorical/numerical data on the MLP
        # input and images on the CNN input, outputting a single value
        model = Model(inputs=[mlp.input, cnn.input], outputs=x)
        
        opt = Adam(lr=self.learning_rate, decay=self.epsilon_decay)
        model.compile(loss="mean_absolute_percentage_error", optimizer=opt)
        
        print(model.summary())
        
        return model

I have an error:

A `Concatenate` layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 32, 50), (None, 600, 1)]

The line of code that gives the error is:

combinedInput = concatenate([mlp.output, cnn.output])

This is the MLP summary

And this is the CNN summary

I'm a beginner at this, and I'm not sure where my mistakes are; the code obviously does not work, but I do not know how to correct it.

",42372,,42372,,11/14/2020 21:45,11/15/2020 13:57,Keras DQN Model with Multiple Inputs and Multiple Outputs,,1,4,,11/15/2020 18:03,,CC BY-SA 4.0 24595,1,24596,,11/14/2020 19:48,,0,1359,"

I am trying to understand machine learning inference, and I would like to know what exactly the difference is between the Google Coral USB and the Movidius Intel Neural Compute Stick 2. From what I could gather, the Google Coral USB speeds up the frame rate, but that isn't clear to me. My questions are: What exactly is the benefit of each of them, and in what units? Is it frame rate? Prediction speed? Are both visual processing units? And lastly, do I need to keep my neural network on a single computer board for training, or can I train it in the cloud?

",42284,,,,,11/15/2020 0:04,Difference between Neural Compute Stick 2 and Google Coral USB for edge computing,,1,1,,11/15/2020 0:58,,CC BY-SA 4.0 24596,2,,24595,11/14/2020 23:47,,0,,"

I do not know much about the Neural Compute Stick but I can tell you a little bit about the Coral Edge TPU, since I used it myself (which I think also applies to the Neural Compute Stick).

The Edge TPU is a specialized ASIC which is very efficient at the main calculations (convolutions, ReLUs, etc.) for neural network inferencing. That means it cannot be used for training* a neural network, but only for deploying a neural network in production after it has been trained and optimized/quantized (precision reduced from float32 to int8). But I think you already knew that, as it seems from your question.

Now to your actual question in terms of speed: you cannot really compare the speed of such a chip in terms of frame rate alone, nor is it meaningful to call it a visual processing unit or not. The Google Coral is a general ASIC that is very fast at doing convolutions/ReLUs, etc. What you are going to use your neural network for (image recognition, or maybe stock predictions) is completely up to you, and especially up to the task your neural network was trained for. The only limitation you have here concerns the supported layer operations. E.g., it is not possible to do operations like 3D convolutions or some fancy new non-linear activation functions (there was an overview of supported operations in the docs that I cannot find right now).

Furthermore, the frame rate also depends on your NN architecture, image resolution, etc., so a comparison here is completely misleading. If you want a general indication of speed, look at how many int8 operations it can handle per second (TOPS), and also under what energy consumption (Watts), if you care.

The main advantage of this unit, compared to a GPU that is usually used for training+inferencing, is much lower energy consumption and unit cost: roughly 4 Watts on the Edge TPU (as far as I can remember) compared to e.g. 250 Watts for a GPU. Energy consumption also dictates the necessary cooling solution, which can just be passive for the Edge TPU in many cases. Regarding the unit costs, I guess you can easily see that yourself, if you keep in mind that you can get "nearly" similar inferencing speeds to those of a full workstation-class GPU. Furthermore, such a unit has a much smaller form factor and weight, which makes it perfect for applications at the edge. (I should also add that the efficiency gain is also largely due to the precision reduction to int8 (quantization). Quantization is also possible on GPUs.)

And lastly, do I need to keep my neural network on a single computer board for training, or can I train it in the cloud?

I am not quite sure what you mean by this. The general workflow would look like this: you use a GPU to train your neural network; after training, you optimize/quantize the network; and lastly, you deploy it to the "computer board" of your choice, which just needs to have a CPU and the Edge TPU, and which then performs the inferencing operations/predictions for your task.
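
A rough sketch of that workflow (the toy Keras model and TensorFlow 2.x are my own assumptions; for the Edge TPU you additionally need full integer quantization with a representative dataset, which is omitted here for brevity):

import tensorflow as tf

# 1) Train on a GPU workstation (toy model shown, training omitted)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# 2) Optimize/quantize with the TFLite converter
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # post-training quantization
with open("model.tflite", "wb") as f:
    f.write(converter.convert())

# 3) Compile the .tflite file for the Edge TPU (edgetpu_compiler command-line tool)
#    and copy it to the board, where only inference runs.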

*Transfer learning of, e.g., the last layer is possible, but there is no backpropagation for full NN training.

",13104,,13104,,11/15/2020 0:04,11/15/2020 0:04,,,,0,,,,CC BY-SA 4.0 24602,1,,,11/15/2020 7:11,,1,290,"

Some RL algorithms can only be used for environments with continuous action spaces (e.g. TD3, SAC), while others can only be used for discrete action spaces (e.g. DQN), and some can be used for both.

REINFORCE and other policy gradient variants have the choice of using a categorical policy for discrete actions and a Gaussian policy for continuous action spaces, which explains how they can support both. Is that interpretation completely correct?

For algorithms that learn a Q function, or a Q function and a policy, what restricts their use to either discrete or continuous action-space environments?

In the same regard, if an algorithm suited for discrete spaces is to be adapted to handle continuous action spaces, or vice versa, what does such a modification involve?

",40671,,2444,,11/15/2020 23:37,11/15/2020 23:37,What adapts an algorithm to continuous or to discrete action spaces?,,0,4,,,,CC BY-SA 4.0 24603,1,,,11/15/2020 8:02,,2,66,"

I started researching the subject of self-replication in neural networks, and unexpectedly I saw that there is not much research on this subject. I should mention that I am new to the field of NNs.

This idea seems to be very appealing, but now I am having problems coming up with an actual use case. The paper Neural Network Quine from 2018 seems to be one of the main ones addressing this topic.

So, what are the use-cases of self-replicating neural networks? Why isn't this subject more thoroughly researched?

",42217,,2444,,10/29/2021 10:02,10/29/2021 10:02,What are the use-cases of self-replicating neural networks?,,0,9,,,,CC BY-SA 4.0 24605,1,24608,,11/15/2020 12:44,,2,213,"

In a general DQN framework, if I have an idea of some actions being better than some other actions, is it possible to make the agent select the better actions more often?

",41984,,2444,,11/9/2021 23:23,11/9/2021 23:23,"In DQN, is it possible to make some actions more likely?",,1,2,,,,CC BY-SA 4.0 24606,1,,,11/15/2020 13:42,,1,31,"

I am not sure if I really understand how anchor boxes are defined. As far as I understand, in the YOLO algorithm, you define, for each cell, a set of "good" shapes (anchor boxes) that may contain the object you are trying to detect.

However, I don't really understand how you predefine the shape of these anchor boxes. As far as I have seen, there are examples in which the algorithm outputs bx, by, bh and bw values for each anchor box. Are you actually giving the algorithm the "freedom" to define these four values, or is a fixed ratio between bh and bw somehow defined for each of the anchor boxes? And how is this ratio defined in the output?

",42296,,42296,,11/15/2020 16:49,11/15/2020 16:49,How is the shape of the anchor boxes predefined in YOLO algorithm?,,0,0,,,,CC BY-SA 4.0 24607,2,,24593,11/15/2020 13:57,,1,,"

Firstly, concatenate only works when the input shapes match on every axis except the concatenation axis; otherwise, the function will not work. Now, your outputs' shapes are (None, 32, 50) and (None, 600, 1). Here, '32' and '600' must be the same when you want to concatenate along the last axis.

I would like to offer some advice based on your problem. You can flatten both of them first and then concatenate, because you need to flatten the features to use a dense layer later anyway.

def create_mlp(self, arr, regress=False): 
        # define MLP network
        print("Array", arr)
        model = Sequential()
        model.add(Dense(env.rows * env.cols, input_shape=(len(arr)//2, len(arr)), activation="relu"))
        model.add(Dense((env.rows * env.cols)//2, activation="relu"))
        model.add(Flatten())  # shape = (None, 1600); Flatten must be imported from keras.layers
        # check to see if the regression node should be added
        if regress:
            model.add(Dense(1, activation="linear"))
        # return our model
        return model

And just remove the Reshape layer in the create_cnn function (the output shape should then be (None, 600)).

then concatenate two model

combinedInput = concatenate([mlp.output, cnn.output]) ## output shape =(None, 2200)

Later, you can just use the Dense layers as in your code. I don't see how you could use a Dense layer (right after the concatenate layer) without flattening the features in the create_mlp function.

Your code should work this way. You can read this simple example for better understanding.

",42283,,,,,11/15/2020 13:57,,,,1,,,,CC BY-SA 4.0 24608,2,,24605,11/15/2020 15:18,,4,,"

For single-step Q learning, the behaviour policy can be any stochastic policy without any further adjustment to the update rules.

You don't have to use $\epsilon$-greedy based on current Q function approximation, although that is a common choice because it works well in general cases. However, you should always allow some chance of taking all actions if you want the algorithm to converge - if you fixed things so that bad actions were never taken, the agent would never learn that they had low value.

Probably the simplest way to use your initial idea of best actions is to write a function that returns your assessment of which action to take, and use that with some probability in preference to a completely random choice. At some point you will also want to stop referencing the helper function (unless it is guaranteed perfect) and use some form of standard $\epsilon$-greedy based on current Q values.
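
For example (a rough sketch; best_action_hint stands for the hypothetical helper function mentioned above, and the mixing probabilities are arbitrary):

import random
import numpy as np

def choose_action(state, q_values, best_action_hint, epsilon=0.1, hint_prob=0.5):
    # Behaviour policy: explore with probability epsilon, otherwise sometimes
    # trust the domain-knowledge hint, otherwise act greedily w.r.t. current Q
    if random.random() < epsilon:
        return random.randrange(len(q_values))   # keep some chance of every action
    if random.random() < hint_prob:
        return best_action_hint(state)           # your "better action" heuristic
    return int(np.argmax(q_values))              # standard greedy choice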

I have done similar with a DQN learning to play Connect 4, where the agent would use a look-ahead search function that could see e.g. 7 steps ahead. If that was inconclusive it would use argmax of current Q values. Both these fixed action choices could be replaced, with probability $\epsilon$, with a random action choice to ensure exploration. It worked very well. You could replace the look-ahead search in my example with any function that returned "best" actions for any reason.

There are some other ways you can skew action selection towards better looking action choices. You could look into Boltzmann exploration or upper confidence bounds (UCB) as other ways to create behaviour policies for DQN.

",1847,,,,,11/15/2020 15:18,,,,6,,,,CC BY-SA 4.0 24609,1,,,11/15/2020 15:25,,2,95,"

I noticed that SARSA has been rarely used in the deep RL setting. Usually, the training for DQN is done off-policy. I think one of the major reasons for this is the greater sample efficiency that comes from reusing experiences when training off-policy. For SARSA, I would think that, at every time step of an update, a stochastic gradient update on that sample would have to be performed, and then the sample would have to be thrown away.

The approach, while it might take longer to train, might still allow the agent to do relatively well. Would a deep SARSA implementation perform as well as DQN in terms of final performance (given that SARSA would definitely take longer to train)?

",32780,,2444,,11/15/2020 23:29,11/15/2020 23:29,Is Deep SARSA learning a feasible approach?,,0,0,,,,CC BY-SA 4.0 24611,2,,21926,11/15/2020 17:26,,0,,"

I don't know exactly what your dataset looks like, but, based on some assumptions, I would like to suggest something --

You can think of your MDP environment this way:

action = {stay, go}

reward = {something based on the visitor's satisfaction, e.g. a rating}

state = {current money in hand, current city, other variables that are key features for deciding the next action}

I am working on a project based on stock market trading (sorry, I cannot share details). In the stock market trading problem, we need to decide on actions sequentially (e.g. every hour or maybe every day). Your problem (and your data, as I assume) is likewise a sequential action-selection problem.

More detail: on the first day, you visited NY, enjoyed your time, and it cost $800. Now, the next day, you want to either continue the tour in NY or go to Washington D.C. or Miami. You need to take an action based on several considerations (transportation, time spent in the transport vehicle, etc.). So, the action is either stay or go. What could the state be? state = {NY, $1200, tired or not, etc...}

Your reward function may be trickier to design.

I suggest studying some RL-based real-life problems, which will improve your understanding. You can study this one and try to relate it to your problem.

",42283,,,,,11/15/2020 17:26,,,,0,,,,CC BY-SA 4.0 24612,1,24614,,11/15/2020 18:36,,2,261,"

For policy evaluation purposes, can we use the Q-learning algorithm even though, technically, it is meant for control?

Maybe like this:

  1. Have the policy to be evaluated as the behaviour policy.
  2. Update the Q value conventionally (i.e. updating $Q(s,a)$ using the action $a'$ giving highest $Q(s',a')$ value)
  3. The final $Q(s,a)$ values will reflect the values for the policy being evaluated.

Am I missing something here, given that I have not seen Q-learning being used anywhere for evaluation purposes?

",38385,,2444,,11/15/2020 23:13,11/15/2020 23:31,Can we use Q-learning update for policy evaluation (not control)?,,1,0,,,,CC BY-SA 4.0 24613,1,24631,,11/15/2020 19:10,,3,276,"

Why aren't exploration techniques that are typically used in bandit problems, such as UCB or Thompson sampling, used in full RL problems?

Monte Carlo Tree Search may use the above-mentioned methods in its selection step, but why do value-based and policy gradient methods not use these techniques?

",32517,,2444,,11/16/2020 23:18,2/11/2021 19:24,"Why aren't exploration techniques, such as UCB or Thompson sampling, used in full RL problems?",,3,0,,,,CC BY-SA 4.0 24614,2,,24612,11/15/2020 19:43,,1,,"

For off-policy learning you must have two policies - a behaviour policy and a target policy. If the two policies are the same, then you end up with SARSA, not Q learning.

You cannot use Q learning directly for evaluating a fixed target policy, because it directly learns the optimal value function as its target policy, regardless of the behaviour policy. Instead, you must use another variant of off-policy learning that can evaluate an arbitrary target policy.

Your suggested algorithm is:

  1. Have the policy to be evaluated as the behaviour policy.
  2. Update the Q value conventionally (i.e. updating $Q(s,a)$ using the action $a'$ giving highest $Q(s',a')$ value)
  3. The final $Q(s,a)$ values will reflect the values for the policy being evaluated.

This will not work for evaluating the behaviour policy. If the behaviour policy was stochastic and covered all possible state/action choices, then it will still be Q learning and converge on the optimal value function - maybe very slowly if the behaviour policy did not get to important states very often.

The "trick" to off-policy is that the environment interaction part uses the behaviour policy to collect data, and the update step uses the target policy to calculate estimated returns. In general for off-policy updates, there can be corrections required to re-weight the estimated returns. However, one nice thing about single-step TD methods is that there are no such additional corrections needed.

So this gives a way to do off-policy TD learning, using an approach called Expected SARSA. To use Expected SARSA, you will need to know the distribution of action choices i.e. know $\pi(a|s)$ for the target policy.

This is the variant of your description that will work to evaluate your target policy $\pi(a|s)$:

  1. Have any stochastic policy that "covers" the target policy as the behaviour policy.
  2. Update the Q value using Expected SARSA $Q(s,a) = Q(s,a) + \alpha(r + \gamma [\sum_{a'} \pi(a'|s')Q(s',a')] - Q(s,a))$
  3. The final $Q(s,a)$ values will reflect the values for the policy being evaluated.

Worth noting that Expected SARSA with a target policy of $\pi(s) = \text{argmax}_a Q(s,a)$ is exactly Q learning. Expected SARSA is a strict generalisation of Q learning that allows for learning the value function of any target policy. You may not see it used as much as Q learning, because the goal of learning an optimal value function is more common in practice.
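
For concreteness, here is a minimal tabular sketch of this evaluation loop (the Gym-style env object, the target policy pi and the behaviour policy b are hypothetical placeholders):

import numpy as np

def expected_sarsa_evaluation(env, pi, b, n_states, n_actions,
                              episodes=1000, alpha=0.1, gamma=0.99):
    # pi[s] and b[s] are probability vectors over actions; b must cover pi
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = np.random.choice(n_actions, p=b[s])         # act with the behaviour policy
            s_next, r, done, _ = env.step(a)
            target = r + gamma * np.dot(pi[s_next], Q[s_next]) * (not done)
            Q[s, a] += alpha * (target - Q[s, a])           # Expected SARSA update
            s = s_next
    return Q   # estimates the action values of the target policy pi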

",1847,,1847,,11/15/2020 23:31,11/15/2020 23:31,,,,4,,,,CC BY-SA 4.0 24615,1,,,11/15/2020 21:49,,2,58,"

I'm training an RL agent using the DQN algorithm to do a specific task. The environment is represented by a list of $10$ integer numbers from $0$ to $20$. An example would be $[5, 15, 8, 8, 0, \dots]$.

Is it okay to pass the list as floats to the dense layer, or would that impede the learning process? What is the right way to go about passing integer numbers to the neural network?

",38076,,2444,,11/15/2020 22:57,11/15/2020 22:57,What is the appropriate way of passing a list of integers that represents the environment to a neural network's dense layer?,,0,2,,,,CC BY-SA 4.0 24616,1,,,11/15/2020 22:22,,3,32,"

When studying bounding box-based detectors, it's not clear to me if data augmentation includes adding random rotations.

If random rotations are added, how is the new bounding box calculated?

",32390,,2444,,11/15/2020 22:43,11/15/2020 22:43,"If random rotations are included in the data augmentation process, how are the new bounding boxes calculated?",,0,2,,,,CC BY-SA 4.0 24617,1,,,11/15/2020 22:33,,2,29,"

My agent receives $1, 0, -1$ rewards for winning, drawing, and losing the game, respectively. What would be the consequences of setting reward to $-1$ for draws? Would that encourage the agent to win more or will it have no effect at all? Is it appropriate to do so?

",38076,,2444,,11/15/2020 22:50,11/15/2020 22:50,How can I discourage the RL agent from drawing in a zero-sum game?,,0,3,,,,CC BY-SA 4.0 24630,2,,24576,11/16/2020 0:07,,1,,"

Is average pooling equivalent to a strided convolution with a specific constant kernel?

Yes.

Why then are explicit pooling layers needed if they can be realized by convolutions?

It is probably because the convolution is more expensive than the usual/natural implementation (i.e. just summing and then dividing).

To see why, let's consider your example. If you implement pooling with a convolution (or cross-correlation), we would need to perform $9$ multiplications, then $8$ summations, for a total of $17$ operations. If we implement pooling as usual, we would need to perform $8$ summations and $1$ division (or multiplication), for a total of $9$ operations.

Moreover, convolution may also be more prone to numerical instability (multiplications of numbers in the range $[0, 1]$ are not nice), but I am not completely sure about this, given that we always multiply by the same numbers.

",2444,,2444,,11/16/2020 0:16,11/16/2020 0:16,,,,0,,,,CC BY-SA 4.0 24631,2,,24613,11/16/2020 0:36,,4,,"

You can indeed use UCB in the RL setting. See e.g. section 38.5 Upper Confidence Bounds for Reinforcement Learning (page 521) of the book Bandit Algorithms by Csaba Szepesvari and Tor Lattimore for the details.

However, compared to $\epsilon$-greedy (widely used in RL), UCB1 is more computationally expensive, given that, for each action, you need to recompute this upper confidence bound for every time step (or, equivalently, action taken during learning).

To see why, let's take a look at the UCB1 formula

$$ \underbrace{\bar{x}_{j}}_{\text{value estimate}}+\underbrace{\sqrt{\frac{2 \ln n}{n_{j}}}}_{\text{UCB}}, $$ where

  • $\bar{x}_{j}$ is the value estimate for action $j$
  • $n_{j}$ is the number of times action $j$ has been taken
  • $n$ is the total number of actions taken so far

So, at each time step (or new action taken), we need to recompute that square root for each action, which depends on other factors that evolve during learning.

So, the higher time complexity than $\epsilon$-greedy's is probably the first reason why UCB1 is not used as much in RL, where interaction with the environment can already be the bottleneck. You could argue that this recomputation (for each action) also needs to be done in bandits. Yes, it's true, but, in the RL problem, you have multiple states, so you need to compute value estimates for each action in all states (i.e. the full RL problem is more complex than bandits or contextual bandits).
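To illustrate the difference in per-step work, here is a minimal NumPy sketch of both action-selection rules for a single state; the counts and value estimates are assumed to be maintained elsewhere.

import numpy as np

def ucb1_action(values, counts, c=np.sqrt(2)):
    # values[j]: current estimate for action j; counts[j]: times action j was taken
    if np.any(counts == 0):          # try each action at least once
        return int(np.argmin(counts))
    n = counts.sum()
    bonus = c * np.sqrt(np.log(n) / counts)  # recomputed for every action at every step
    return int(np.argmax(values + bonus))

def epsilon_greedy_action(values, epsilon=0.1, rng=np.random.default_rng()):
    if rng.random() < epsilon:
        return int(rng.integers(len(values)))
    return int(np.argmax(values))    # just an argmax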

Moreover, $\epsilon$-greedy is so conceptually simple that everyone can easily implement it in less than $5$ minutes (though this is not really a problem, given that both are simple to implement).

I am currently not familiar with Thompson sampling, but I guess (from some implementations I have seen) it's also not as cheap as $\epsilon$-greedy, where you just need to perform an argmax (can be done in constant time if you keep track of the highest value) or sample a random integer (it's also relatively cheap). There's a tutorial on Thompson sampling here, which also includes a section dedicated to RL, so you may want to read it.

",2444,,2444,,2/11/2021 18:02,2/11/2021 18:02,,,,0,,,,CC BY-SA 4.0 24632,2,,10839,11/16/2020 0:39,,1,,"

Short answer

One reason why we assume/require i.i.d. data is that it simplifies the computations. More specifically, if we assume the samples to be i.i.d., their joint probability is then simplified to a product of marginal probabilities.

Long answer

In a dataset $D$, suppose we have $n$ samples. We define their joint probability (i.e. the probability of these samples occurring at the same time) as follows

$$P(z_1, z_2, \dots, z_n) \tag{1}\label{1}$$

For instance, if each $z_i$ is binary (i.e. can take one of two values, e.g. $0$ or $1$). Then, to define the probability distribution over all possible values of all $z_i$, we need to compute $2^n$ probabilities (which corresponds to all combinations of the values of all $z_i$s).

More importantly, if the samples are correlated, this probability must be calculated as a product of conditional probabilities (by definition).

However, if the samples are i.i.d., then the joint probability in equation (\ref{1}) can be computed as a product of marginal probabilities

$$P(z_1, \dots, z_n)=\prod_{i}P(z_i) \tag{2}\label{2}$$

which may be simpler than calculating with conditional probabilities because marginal probabilities may be simpler to compute.

Example: binary cross-entropy

In the case of a binary classification problem, we assume to have a labelled dataset $D = \{(x_i, y_i) \}$, where $y_i$ would be the binary label ($0$ or $1$) for the corresponding input $x_i$. So, we could define our likelihood function parametrized by the parameters $w$ as follows

$$\ell(w) = P(y_1, y_2, \dots, y_n \mid x_1, x_2, \dots, x_n; w) \tag{3}\label{3}$$

If we assume $(x_i, y_i)$ to be independent of $(x_j, y_j)$, for all $i \neq j$, then this joint probability (\ref{3}) of labels given the inputs can also be written as a product of the marginals of the labels given the inputs.

$$\ell(w) = \prod_i P(y_i \mid x_i; w) \tag{4}\label{4}$$

For numerical stability (because sums are more stable than products of small numbers), rather than considering the likelihood, we can consider the log-likelihood, which is just the logarithm of $\ell$. However, this transforms the product in (\ref{4}) into a sum (note that this is just a rule of logarithms!).

$$\log \ell(w) = \sum_i \log P(y_i \mid x_i; w) \tag{5}\label{5}$$

We can also do this because the logarithm is a strictly increasing function, so the maxima/minima of $\ell$ are attained at the same parameters as the maxima/minima of $\log \ell$.

So, now, the goal is to find the parameters $w$ of the log-likelihood such that the probability of the samples is maximized. Equivalently, rather than maximizing the log-likelihood, we can minimize its negative (this is what is usually done in practice!), which leads to what people call the cross-entropy function (which is just the negative log-likelihood).

$$\text{CE}(w) = - \log \ell(w)$$

So, minimizing the cross-entropy $\text{CE}(w)$ is exactly the same thing as maximizing the $\log \ell(w)$.

Given that the labels $y_i$ are binary, we can assume that $P(y_i \mid x_i; w)$ in (\ref{5}) is a Bernoulli distribution, so we can write the probability of the observed label $y_i$ (whether it is $1$ or $0$) as follows

$$P(y_i \mid x_i; w)=\hat{p}^{y_i} (1-\hat{p})^{(1-{y_i})},$$

where $\hat{p}$ is the output of the neural network $f(x_i; w) = \hat{p}$ when fed with $x_i$, and $\hat{p}$ is just an estimate of the parameter $p$ of the Bernoulli distribution we try to learn.

So, now, our cross-entropy $\text{CE}(w)$ can be written as follows

$$\text{CE}(w) = - \sum_i \log \left( \hat{p}^{y_i} (1-\hat{p})^{(1-y_i)} \right) $$

So, now, we just need to estimate $w$ to produce $\hat{p}$. So, basically, thanks to the i.i.d. assumption, we have avoided having to compute conditional probabilities of labels given other labels.
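To make the last step concrete, here is a small NumPy sketch of the resulting binary cross-entropy; the sum over samples is exactly the consequence of the i.i.d. assumption, and the labels and predictions are arbitrary example values.

import numpy as np

y = np.array([1, 0, 1, 1])               # binary labels y_i
p_hat = np.array([0.9, 0.2, 0.7, 0.6])   # network outputs f(x_i; w) = p_hat for each x_i

# Negative log-likelihood: a sum over samples instead of a joint/conditional computation
ce = -np.sum(y * np.log(p_hat) + (1 - y) * np.log(1 - p_hat))
print(ce)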

",11599,,2444,,10/17/2021 15:15,10/17/2021 15:15,,,,0,,,10/17/2021 15:15,CC BY-SA 4.0 24633,2,,24574,11/16/2020 2:30,,2,,"

Is the second binary plane all zeros or all ones? Or, something else? How is it known if the move is off the board? For my game, I know if it is a legal move on the board, but do not know if the move is off the board.

The second binary plane is one-hot by definition: there is a single one and everything else is zero. If this definition is not met, it's no longer "one-hot".

The paper doesn't state exactly how to implement the "off the board" detection. A research paper wouldn't go into that level of coding detail. However, detecting "off the board" is not a challenging task.

https://webcache.googleusercontent.com/search?q=cache:djj-G4T_PwgJ:https://craftychess.com/hyatt/boardrep.html+&cd=2&hl=en&ct=clnk&gl=au

The next step in board representation evolution is to enclose the board inside a larger array, so that illegal squares are "off" the edge and are easily detectable.

One possibility is to add borders to your board. Crafty did that. Extend the board to 10x10, not 9x9, because you need to deal with knight jumps.

Exactly how you should do it is implementation-defined. We don't know what Google did, because AlphaZero is not open source. I'm just giving you an example here.
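As one possible illustration of the bordered-board idea (an assumption for the example, not how AlphaZero does it), squares outside the playable area hold a sentinel value, so "off the board" becomes a simple lookup:

import numpy as np

OFF_BOARD = -1   # sentinel value for border squares
EMPTY = 0

def make_padded_board(size=8, border=1):
    padded = np.full((size + 2 * border, size + 2 * border), OFF_BOARD, dtype=int)
    padded[border:border + size, border:border + size] = EMPTY
    return padded

def is_off_board(board, row, col):
    return board[row, col] == OFF_BOARD

board = make_padded_board()
print(is_off_board(board, 0, 5))   # True: border square
print(is_off_board(board, 4, 5))   # False: playable square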

",6014,,,,,11/16/2020 2:30,,,,1,,,,CC BY-SA 4.0 24634,1,,,11/16/2020 7:56,,0,48,"

I am solving a problem for which I have to select the best possible servers (level 1) to hit for a given data. These servers (level 1) in turn hit some other servers (level 2) to complete the request. The level 1 servers have the same set of level 2 servers integrated with them. For a particular request, I am getting success or failure as a response.

For this, I am using Thompson Sampling with a Bernoulli prior. On success, I am considering the reward as 1 and, for failure, it is 0. But, in case of failure, I am receiving errors as well. For some errors, it is evident that the failure is due to an issue at the server (level 1) end, and hence a reward of 0 makes sense; but some errors result from request data errors or issues at the level 2 servers. For these kinds of errors, we can't penalize the level 1 servers with a reward of 0, nor can we reward them with a value of 1.

Currently, I am using 0.5 as a reward for such cases.

Exploring over the Internet, I couldn't find any method/algorithm to calculate the reward for such cases in a proper (informed) way.

What could be the possible way to calculate reward in such cases?

",42313,,2444,,11/17/2020 10:48,11/17/2020 10:48,Thompson sampling with Bernoulli prior and non-binary reward update,,0,2,,,,CC BY-SA 4.0 24635,1,,,11/16/2020 9:11,,0,162,"

I have a database of sequential events for multiple animals. The events are represented by integers, so it looks something like:

Animal A: [1,6,4,2,5,7,8] 
Animal B: [1,6,5,4,1,6,7]
Animal C: [5,4,2,1,6,4,3]

I can see manually that, before each event 6, event 1 happens first, and that event 4 happens quickly after a 1, 6 combination. But these are easy to spot in such a small dataset; the real lists have 10000+ events per animal. Is there a way to use an algorithm or machine learning to search for these kinds of patterns?

",42315,,,,,11/16/2020 9:11,Find repeating patterns in sequence data,,0,4,,,,CC BY-SA 4.0 24636,1,,,11/16/2020 9:49,,0,101,"

In my project, I am detecting only one class, which is "airplane", using yolov5. However, in some frames, the neural network labels some of the buildings as airplanes, which they obviously are not. This noise occurs in roughly 1 frame out of 60. How should I treat this issue? Which algorithms can be applied to filter out these false positives?

",42311,,2444,,11/16/2020 12:16,11/16/2020 12:16,Object detection noise filtering,,0,11,,,,CC BY-SA 4.0 24637,1,,,11/16/2020 11:24,,1,83,"

I have come across some examples of CNNs (segmentation CNNs) that use ELU (exponential linear unit) as the activation function.

What are the benefits of this activation function over others, such as RELU or leaky RELU?

",42296,,2444,,11/16/2020 11:55,11/16/2020 11:55,What are the benefits of using ELU over other activation functions in CNNs?,,0,1,,,,CC BY-SA 4.0 24642,1,24730,,11/16/2020 11:59,,1,444,"

I am currently studying game theory based on Peter Norvig's 3rd edition introduction to artificial intelligence book. In chapter 17.5, the two-player zero-sum game can be solved by using the $\textbf{minimax}$ theorem $$\max_x \, \min_y \, x^TAy = \min_y \, \max_x \, x^TAy = v$$

where $x$ is the probability distribution over the actions of the max player and $y$ is the probability distribution over the actions of the min player.

Regarding the minimax theorem, I have 2 questions.

  1. Do both the min and the max players have the same probability distribution over actions? In the book, Norvig demonstrated that, in the game of $\textbf{Morra}$, both the min and max players had $[\frac{7}{12}:one, \frac{5}{12}:two]$ as their probability distributions.

  2. Also, regarding the minimax game tree, is the difference between the minimax game tree and the zero-sum game of section 17.5 the fact that, in the minimax game tree, the opponent can react to the first player's move, whereas in the zero-sum game defined in 17.5 both players are unaware of each other's moves?

",32780,,,,,12/19/2020 16:02,Optimal mixed strategy in two player zero sum games,,1,1,,,,CC BY-SA 4.0 24643,1,,,11/16/2020 12:07,,5,150,"

What is the reason behind the name "Transformers", for Multi Head Self-Attention-based neural networks from Attention is All You Need?

I have been googling this question for a long time, and nowhere can I find any explanation.

",26580,,26580,,11/16/2020 12:46,3/16/2021 12:58,"Why are ""Transformers"" called this way?",,1,1,,,,CC BY-SA 4.0 24647,1,,,11/16/2020 15:01,,3,117,"

When learning off-policy with multi-step returns, we want to update the value of $Q(s_1, a_1)$ using rewards from the trajectory $\tau = (s_1, a_1, r_1, s_2, a_2, r_2, \dots, s_n, a_n, r_n, s_{n+1})$. We want to learn the target policy $\pi$ while behaving according to policy $\mu$. Therefore, for each transition $(s_t, a_t, r_t, s_{t+1})$, we apply the importance ratio $\frac{\pi(a_t | s_t)}{\mu(a_t | s_t)}$.

My question is: if we are training at every step, the behavior policy may change at each step and therefore the transitions of the trajectory $\tau$ are not obtained from the current behavior policy, but from $n$ behavior policies. Why do we use the current behavior policy in the importance sampling? Should each transition use the probability of the behavior policy of the timestep at which that transition was collected? For example by storing the likelihood $\mu_t(a_t | s_t)$ along with the transition?

",32583,,2444,,11/17/2020 10:56,11/18/2020 6:29,"When learning off-policy with multi-step returns, why do we use the current behaviour policy in importance sampling?",,1,0,,,,CC BY-SA 4.0 24648,1,24684,,11/16/2020 15:03,,1,498,"

I was trying Google Cloud's Vision API and looking at how the dominant colors part works. I uploaded a sample image, and here are the results for the dominant colors. I realized it doesn't simply count pixel colors and cluster them. The background has many gray pixels which are not included.

How does it compute the dominant colors? How can I do something similar?

",9053,,,,,11/17/2020 23:53,How to compute dominant colors in an image?,,1,0,,,,CC BY-SA 4.0 24649,1,24665,,11/16/2020 15:41,,1,139,"

I have a dataset of 3000 8x8 images, and I would like to train a GAN for an image generation purpose.

I am planning to start with a simple GAN model and see if it overfits. Before training, I try to do a comparison of the discriminator model prediction using real image input against the whole GAN model prediction using random seed input. My thought process is that since this model is not trained yet, the output for real images and fake images by the discriminator should not be predictable.

However, the discriminator model prediction using real image input always returns a value very close to 1.0, and the whole GAN model prediction using random seed input always returns a value near 0.5 with a small deviation. I suspect that during training, the model would simply pull the 0.5 value near 0.0 and would never actually learn from the dataset.

I tried increasing the training parameters and using different initializers, but the output is still the same.

Ruling out the possibility of a bad dataset, what could be the reason for this situation?

Here is a sneak peek of the generator and discriminator model building: https://pastebin.com/ehMDP7k6

",32511,,32511,,11/17/2020 11:20,11/17/2020 15:42,GAN model predictions before training is predictable,,1,4,,,,CC BY-SA 4.0 24651,1,,,11/16/2020 15:58,,2,42,"

I am trying to train a neural network using reinforcement learning / policy gradient methods. The states, i.e. the inputs, as well as the actions I am trying to sample are vectors with each element being a real number like in this question: https://math.stackexchange.com/questions/3179912/policy-gradient-reinforcement-learning-for-continuous-state-and-action-space

The answer that was given there already helped me a lot. Also, I have been trying to understand Chapter 13: Policy Gradient Methods from "Reinforcement Learning: An Introduction" by Sutton et al. and in particular Section 13.7: Policy Parametrization for Continous Actions.

My current level of understanding is that I can use the weights of the network to calculate the mean(s) and the standard deviation/covariance matrix. I can then use them to define a multivariate Gaussian distribution and randomly sample an action from there.

For now, I have one main question: in the book, it says that I have to split the weights, i.e. the policy's parameter vector, into two parts: $\theta = [\theta_{\mu}, \theta_{\sigma}]$. I can then use each part together with a feature vector to calculate the means and the covariance matrix. However, I was wondering how this is usually done. Do I train two separate networks? I am not sure what such an architecture would look like. Also, I am not sure what the output nodes would be in this case. Do they have a meaning, as they do in supervised learning?

So far, I have only found papers that talk about this issue rather theoretically like it is presented in the book. I would be very happy to understand how this is actually implemented. Thank you very much!

",42323,,,,,11/16/2020 15:58,Understanding neural network achitectures in policy gradient reinforcement learning for continuous state and action space,,0,1,,,,CC BY-SA 4.0 24652,2,,12870,11/16/2020 16:56,,0,,"

Jacques Pitrat's last book, Artificial Beings: The Conscience of a Conscious Machine, has some interesting chapters (§2, §3, §5, §6, §7) related to the questions of explanations and meta-explanations in AI systems.

It describes a reflexive, expert-system-like, meta-knowledge-based approach to explainable AI.

You could also read his papers Implementation of a reflective system (1996) and A Step toward an Artificial AI Scientist online (there could be a typo in it: "pile" is the French word for stack, including the call stack).

You might also look into J.Pitrat's blog and into the ongoing RefPerSys project (as of November 2020).

PS. J.Pitrat (born in 1934) passed away in October 2019. French readers could see this. His blog might disappear in a few months.

",3335,,3335,,11/19/2020 12:26,11/19/2020 12:26,,,,0,,,,CC BY-SA 4.0 24659,1,,,11/17/2020 5:40,,1,212,"

This is the tutorial that I used to learn about GANs. In this tutorial, it taught us to intentionally provide false labels to "fool" the discriminator, but does that actually make the discriminator inaccurate? I don't quite understand the explanation; can anyone help me?

",42182,,2444,,11/17/2020 11:00,11/17/2020 11:00,Why do we need to provide false labels to the discriminator on purpose to train GANs?,,1,1,,,,CC BY-SA 4.0 24661,2,,24659,11/17/2020 8:00,,1,,"

in this tutorial, it taught us to intentionally provide false labels to "fool" the discriminator, does it make discriminator actually inaccurate?

When training GANs, the training steps for the generator and discriminator are separate:

  • There is a training stage for the discriminator, where it is presented with a mix of generated and real data, all correctly labelled. It is important at this stage to not update the generator (otherwise it will get worse by helping to make the fake data look more fake).

  • There is a training stage for the generator, where the discriminator is presented with only generated data, all labelled incorrectly as if it were real. It is important at this stage to not update the discriminator.

Typically you will alternate between the two stages frequently, making small updates to discriminator and generator separately. Some GANs use metrics to decide how much of each to do, because you don't want either the generator or discriminator to win outright and stop progress - at least at the start.

So yes there is a stage where you deliberately "fool" the discriminator, because being able to do so is the goal of the generator. However, one key detail of this stage is that the discriminator weights are not updated from that faked data. Instead the gradients from that stage are used only to update and improve the generator.

It may help if you don't think of the false labels as being "fool the discriminator", but instead they are "measure how well the generator is fooling the discriminator".
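Here is a minimal Keras sketch of that two-stage loop, just to show where the weights are frozen and where the "fake labelled as real" data is used; the tiny Dense models and the random data are placeholders, not a recommended architecture.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

latent_dim, data_dim, batch = 16, 8, 64

generator = keras.Sequential([layers.Dense(32, activation="relu", input_shape=(latent_dim,)),
                              layers.Dense(data_dim)])
discriminator = keras.Sequential([layers.Dense(32, activation="relu", input_shape=(data_dim,)),
                                  layers.Dense(1, activation="sigmoid")])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Combined model: generator followed by a frozen discriminator, used only to update the generator
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

real_batch = np.random.normal(size=(batch, data_dim))   # placeholder for a batch of real data

for step in range(100):
    # Stage 1: the discriminator sees correctly labelled real (1) and fake (0) data
    fake_batch = generator.predict(np.random.normal(size=(batch, latent_dim)), verbose=0)
    discriminator.train_on_batch(real_batch, np.ones((batch, 1)))
    discriminator.train_on_batch(fake_batch, np.zeros((batch, 1)))

    # Stage 2: fakes are labelled 1 ("real"); only the generator's weights change,
    # because the discriminator is frozen inside `gan`
    gan.train_on_batch(np.random.normal(size=(batch, latent_dim)), np.ones((batch, 1)))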

",1847,,1847,,11/17/2020 9:00,11/17/2020 9:00,,,,0,,,,CC BY-SA 4.0 24665,2,,24649,11/17/2020 12:37,,3,,"

I took a look at your model. It seems you have an incorrect architecture. The Conv2D layers in your D should have the following params: (n_filters, kernel_size=3, padding='same'), where n_filters is the number of filters, which should usually be doubled from layer to layer, as per the DCGAN architecture. You could also use strides, but, since your images are small, it won't make much sense.

The D network also should not include an UpSampling2D layer. This layer can be used in the G network instead of Conv2DTranspose.

Your images should be normalized to the range of [-1, 1] and the activation function of G should be tanh.

You also included m.add(keras.layers.Reshape((8, 8, 3))) in your G network, which is already the final size of your data. Since you have the final size in the first layers, you don't have to include upsampling Conv2DTranspose layers.

Your final model should look like

import tensorflow as tf
from tensorflow.keras import layers

def make_discriminator_model():
    model = tf.keras.Sequential()
    model.add(layers.Conv2D(8, kernel_size=3, strides=1, padding='same',
                            input_shape=[8, 8, 1]))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))

    model.add(layers.Conv2D(16, kernel_size=3, strides=1, padding='same'))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))

    model.add(layers.Flatten())
    model.add(layers.Dense(1))

    return model

def make_generator_model():
    model = tf.keras.Sequential()
    model.add(layers.Dense(8*8*16, use_bias=False, input_shape=(100,)))
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())

    model.add(layers.Reshape((8, 8, 16)))

    model.add(layers.Conv2D(8, kernel_size=3, strides=1, padding='same'))
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())

    model.add(layers.Conv2D(1, kernel_size=3, strides=1, padding='same', activation='tanh'))

    return model

Note that I did not test the model, so you should adapt it to the shape of your data.

I encourage you to follow this tutorial.

Update regarding overfitting

GANs do not have an overfitting problem in the classical sense. Instead, vanilla GANs have other problems, like vanishing gradients. This happens when the D becomes overconfident regarding fake samples. In that case, D stops providing useful information to the G, since its error becomes 0. To find out if training fell into this problem, you should plot the D and G losses. It will look as follows:

Another problem is mode collapse. In that case, the G tricks the D by producing only one type of sample, which looks realistic, but G won't represent the real data distribution. Therefore, the generated samples will be homogeneous.

For more information, see Improved Techniques for Training GANs and Wasserstein GAN.

",12841,,12841,,11/17/2020 15:42,11/17/2020 15:42,,,,2,,,,CC BY-SA 4.0 24666,1,,,11/17/2020 12:51,,0,97,"

Let us imagine a face database with several subjects, each subject having multiple face images. How do we determine which is the best face suitable for face recognition purposes?

",42346,,,,,1/24/2023 20:07,"In a face database containing multiple images per subject, how do we determine the face image which is most suited for face recognition?",,1,0,,,,CC BY-SA 4.0 24667,2,,14320,11/17/2020 12:55,,0,,"

The claim that a neural network with a single hidden layer can approximate any continuous function (on a compact domain) is proven in Cybenko's Approximation by superpositions of a sigmoidal function.

https://link.springer.com/article/10.1007/BF02551274 check also: https://en.wikipedia.org/wiki/Universal_approximation_theorem

The key point is that the neural network uses sigmoidal activation functions, which are non-linear, and it is this non-linearity that makes such approximation possible.

",35957,,,,,11/17/2020 12:55,,,,0,,,,CC BY-SA 4.0 24668,1,24669,,11/17/2020 12:56,,2,1316,"

Every computer science student (including myself, when I was doing my bachelor's in CS) probably encountered the famous single-source shortest path Dijkstra's algorithm (DA). If you also took an introductory course on artificial intelligence (as I did a few years ago, during my bachelor's), you should have also encountered some search algorithms, in particular, the uniform-cost search (UCS).

A few articles on the web (such as the Wikipedia article on DA) say that DA (or a variant of it) is equivalent to the UCS. The famous Norvig and Russell's book Artificial Intelligence: A Modern Approach (3rd edition) even states

The two-point shortest-path algorithm of Dijkstra (1959) is the origin of uniform-cost search. These works also introduced the idea of explored and frontier sets (closed and open lists).

How exactly is DA equivalent to UCS?

",2444,,2444,,11/17/2020 13:01,11/17/2020 13:01,What is the difference between the uniform-cost search and Dijkstra's algorithm?,,1,0,,,,CC BY-SA 4.0 24669,2,,24668,11/17/2020 12:56,,1,,"

The answer to my question can be found in the paper Position Paper: Dijkstra's Algorithm versus Uniform Cost Search or a Case Against Dijkstra's Algorithm (2011), in particular section Similarities of DA and UCS, so you should read this paper for all the details.

DA and UCS are logically equivalent (i.e. they process the same vertices in the same order), but they do it differently. In particular, the main practical difference between the single-source DA and UCS is that, in DA, all nodes are initially inserted in a priority queue, while in UCS nodes are inserted lazily.

Here is the pseudocode (taken from the cited paper) of DA

Here is the pseudocode of the best-first search (BFS), of which UCS is just a particular case. Actually, this is the pseudocode of UCS where $g(n)$ is the cost of the path from the source node to $n$ (although the title indicates that this is the pseudocode of BFS).
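For illustration, here is a minimal Python sketch of UCS (the graph is a made-up example), showing the lazy insertion: nodes enter the priority queue only when they are first discovered, rather than all being inserted up front as in DA.

import heapq

def uniform_cost_search(graph, source, goal):
    # graph: dict mapping node -> list of (neighbour, edge_cost) pairs
    frontier = [(0, source)]          # only the source is inserted initially (lazy insertion)
    explored = set()
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node in explored:
            continue                  # stale duplicate entry in the queue
        if node == goal:
            return cost
        explored.add(node)
        for neighbour, edge_cost in graph.get(node, []):
            if neighbour not in explored:
                heapq.heappush(frontier, (cost + edge_cost, neighbour))
    return float("inf")

graph = {"s": [("a", 1), ("b", 4)], "a": [("b", 2), ("g", 5)], "b": [("g", 1)]}
print(uniform_cost_search(graph, "s", "g"))   # 4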

",2444,,,,,11/17/2020 12:56,,,,2,,,,CC BY-SA 4.0 24671,1,,,11/17/2020 13:37,,1,154,"

Quick questions to see whether I understand GCNs correctly.

Is it correct that, if I have trained a GCN, it can take arbitrary graphs as input, assuming the feature size is the same?

I can't seem to find explicit literature on this.

",42349,,2444,,11/17/2020 14:08,11/20/2020 17:43,How does a GCN handle new input graphs?,,1,0,,,,CC BY-SA 4.0 24672,1,,,11/17/2020 16:43,,0,128,"

My idea is to model and train a neural network that receives a text version of a PDF file as the input and gives the content text as output.

Take the scenario:

  1. One prints a PDF file to a text file (the text file does not have images, but has the main text, headings, page numbers, some other footer text, and so on, and keeps the same number of columns - two for instance - of text);

  2. This text file is submitted to a tool that strips everything that is not the main content of the text in one single text column (one text stream), keeping the section titles, paragraphs, and the text in a readable form (does not mix columns);

  3. The tool generates a new version of the original text file containing only the main text portion, ready to be used for other purposes where the striped parts would be considered noise.

How to model this problem in a way a neural network can handle it?

Update 1

Here are some clarifications on the problem.

PDF file

The picture below shows two pages of a PDF version of a scientific paper. This is just to set the context; the PDF file is not the input for this problem, it is just to show where the actual input data comes from.

The color boxes show some parts of interest for this discussion. Red boxes are headers and footers. We are not interested in them. Blue and green boxes are content text blocks. Different colors were used to emphasize that the text is organized in columns, and that is part of the problem. Those blue and green boxes are what we actually want.

Text file

If I use the "save as text file" feature of my free PDF reader, I get a text file similar to the image below.

The text file is continuous, but I put the equivalent of the first two pages of the PDF file side-by-side just to make things easier to compare. We can see the very same colored boxes. In terms of words, those boxes contain the same text as in the PDF version.

Understanding the problem

When we read a paper, we are usually not very interested in footers or headers. The main content is what we actually read and that will provide us with the knowledge we are looking for. In this case, the text is inside blue and green boxes.

So, what we want here is to generate a new version of the input (text) file organized in one single text stream (one column, if you will), with the text laid out in a form someone can read, which means alternating the blue and the green boxes.

However, if the original PDF has no footers, it should work in the same way, providing the main text content. If the text comes in three or four columns, the final product must still be a text that can be read without losing any information.

Any pictures will simply be stripped out of the text version of the paper, and we are fine with that.

",42356,,32410,,9/27/2021 21:56,10/19/2021 15:17,How to extract the main text from a formatted text file?,,2,1,,,,CC BY-SA 4.0 24673,1,,,11/17/2020 17:44,,0,40,"

I have two measuring devices. Both measure the same thing. One is accurate, the other is not, but does correlate with a non-fixed offset, some outliers, and some noise.

I won't always be using the accurate device. The nonfixed offset makes things difficult, but I'm certain there is sufficient similarity to make a link using a machine learning (or AI) technique and to convert one set of numbers to a good approximation of the other.

One is a footbed power meter and gives power in Watts every second. The other is a crank-based power meter, also outputting Watts at 1Hz. The footbed power is much less than the crank (which I know to be accurate), but it does track the increases and decreases in power, just with more noise and, as I say, a non-fixed offset (and by non-fixed I mean, at low power the offset is different to that at high power, I don't mean it isn't consistent, it is consistent). Both measure cadence which may be a useful metric to help find a pattern.

I will be collecting sets of data from both and hoped to plug the footbed data in as a column of values with the crank data as another column representing the truth, so after training, the model would be able to transform the footbed data to an approximation of the crank data.

Anyway, I'm completely lost as to how to begin. I've tried searching, but, clearly, I'm using the wrong keywords. Does anyone have any pointers, please?

",42358,,2444,,11/19/2020 11:17,4/18/2021 12:49,Which machine learning technique can I use to match one set of data points to another?,,1,0,,,,CC BY-SA 4.0 24674,1,,,11/17/2020 18:00,,2,43,"

I am working on a segmentation of MRI images of the thigh. I am trying to segment the fascia; there is a slight imbalance between the background and the mask. I have about 1400 images from 30 patients for training and 200 for validation. I am working with Keras. The loss function is a combination of weighted cross-entropy and Dice loss (smooth factor of the Dice loss = 2).

def combo_loss(y_true, y_pred, alpha=0.6, beta=0.6):  # beta was 0.4 before
    return (alpha * tf.nn.weighted_cross_entropy_with_logits(y_true, y_pred, pos_weight=beta)
            + (1 - alpha) * dice_coefficient_loss(y_true, y_pred))

When I use a value of alpha greater than 0.5 (more weight on the weighted cross-entropy), the loss rapidly decreases during the first epoch. Afterwards, it slowly decreases in a linear manner. Why is this happening? What would be a reasonable approach to choose the values of alpha and beta?

",42357,,,,,11/17/2020 18:00,Loss function decays linearly in segmentation MRI fascia,,0,0,,,,CC BY-SA 4.0 24677,2,,24647,11/17/2020 20:03,,1,,"

According to my understanding, you don't use just the current behavior policy for sampling. The importance sampling ratio is calculated as the product of the probability ratios for both the target and behaviour policy throughout the trajectory.

See the calculation below, where the product runs over all the probability ratios along the trajectory. (screenshot from Chapter 5, section 5.5 (page 85) of Sutton & Barto)
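In the question's notation (with $\mu$ as the behaviour policy), that importance sampling ratio for a return starting at time $t$ in an episode ending at time $T$ is

$$\rho_{t:T-1} = \prod_{k=t}^{T-1} \frac{\pi(a_k \mid s_k)}{\mu(a_k \mid s_k)}.$$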

",42262,,42262,,11/18/2020 6:29,11/18/2020 6:29,,,,1,,,,CC BY-SA 4.0 24678,2,,23711,11/17/2020 20:37,,0,,"

I would say any normalization, such as min-max or standard deviation, is fine, as long as the scaling factor is provided as a feature, since time series of different scales might behave differently.

",42330,,,,,11/17/2020 20:37,,,,0,,,,CC BY-SA 4.0 24679,2,,23089,11/17/2020 20:41,,0,,"

I would say offsets from the previous N events are necessary in this case. You can also encode their max and average as additional features, in order to compress the irregular sampling times into a vector.

",42330,,,,,11/17/2020 20:41,,,,1,,,,CC BY-SA 4.0 24680,2,,22853,11/17/2020 20:55,,0,,"

Of course, it depends on your type of data, but Holt-Winters models can have different degrees of complexity and use moving average, trend, and seasonality. This is most useful if the data is not hierarchical, meaning that the time series are independent of each other. If the time series are related to each other, then you can also try aggregating them, predicting at the aggregate level, and then disaggregating. The following can be a good resource:

https://otexts.com/fpp2/

",42330,,,,,11/17/2020 20:55,,,,0,,,,CC BY-SA 4.0 24681,1,,,11/17/2020 21:25,,2,73,"

I might just be overthinking a very simple question but nonetheless the following has been bugging me a lot.

Given an MDP with non-trivial state and action sets, we can implement the SARSA algorithm to find the optimal policy or the optimal state-action-value function $Q^*(s,a)$ by the following iteration:

$$Q(s_t,a_t)\leftarrow Q(s_t,a_t) + \alpha(r_t + \gamma Q(s_{t+1}, a_{t+1}) - Q(s_t,a_t)).$$

Assuming each state-action pair is visited infinitely often, fix one such pair $(s,a)$ and denote the time sequence of visiting the said pair as $t_1 < t_2 < t_3 < \dots t_n\dots.$ Also, let $Q_{t_n}(s,a) = X_n$ for ease of notation and consider the sequence of random variables: $$X_0, X_1, \dots X_n,\dots $$

Can $\{X_n\}_{n\geq 0}$ be thought of as a discrete-time Markov chain on $\mathbb{R}$? My intuition says no, because the recurrence equation will look like: $$X_{n+1} = (1-\alpha)X_n + \alpha(r_{t_n} +\gamma Q_{t_n}(s',a'))$$ and that last term $Q_{t_n}(s',a')$ will be dependent on the path even if we condition on $X_n = x.$

However, I am not quite able to write a rigorous answer in either direction. I would greatly appreciate it if someone could resolve this issue in either direction.

",38234,,,,,11/17/2020 21:25,Can $Q$-learning or SARSA be thought of a Markov Chain?,,0,0,,,,CC BY-SA 4.0 24682,1,24734,,11/17/2020 22:23,,2,260,"

I was reading about IDA* and I found this link explaining IDA* and providing an animation for it.

Here is a picture of the solution.

I know what the cutoff condition is (it depends on $f$): the search behaves like DFS as long as the $f$-value of a node is less than or equal to the cutoff, and, like iterative deepening, it is iterative.

My question is:

In the animation, when the threshold is 7 and after expanding the parent of the goal (14), they stated that a solution has been found. So, if we find the goal after expanding a node whose value is <= the cutoff value, can we consider it the solution without applying any test on it (i.e. without checking that its $f$ <= threshold)? For example, suppose there was another level where there is a goal that can be found with value 13 (less than 14), like in the following picture:

When the threshold is 7, the node with value 11 will not be expanded, so we will never reach the goal with value 13.

So, what is the correct solution?

",42300,,2444,,11/19/2020 1:06,11/21/2020 11:49,When does IDA* consider the goal has been found?,,1,0,,,,CC BY-SA 4.0 24683,1,,,11/17/2020 23:45,,1,139,"

I am reading Composing Music With Recurrent Neural Networks by Daniel D. Johnson. But I am really confused about the input passed to this network. If we pass notes of music along the time axis, then what is passed along the note axis?

The author says:

If we make a stack of identical recurrent neural networks, one for each output note, and give each one a local neighborhood (for example, one octave above and below) around the note as its input, then we have a system that is invariant in both time and notes: the network can work with relative inputs in both directions.

This might mean that the inputs passed to the network along the note axis are fixed representations of notes in the vocabulary. But I am not sure.

I am also having a hard time understanding the input passed to this network as the author explains a few paragraphs below. (Position, Pitchclass, Previous Vicinity, Previous Context, Beat).

Also, at some point, the author talks about an RNN along the note axis. But, in the architecture, there only seems to be an RNN along the time axis. I would really appreciate it if anyone could give me some more information to understand how this biaxial network is set up. This article by Deep Dark Learning was a little helpful, but I am still not fully sure what is going on here.

",42204,,42204,,11/18/2020 19:39,11/18/2020 19:39,How is input defined for a biaxial lstm network for generating music?,,0,0,,,,CC BY-SA 4.0 24684,2,,24648,11/17/2020 23:53,,1,,"

You can find an explanation here (github of the googleapi):

My current understanding of a color's score is a combination of two things:

  • What is the focus of the image?
  • What is the color of that focus?

For example, given the following image:

The focus is clearly the cat, and therefore the color annotation for this image with the highest score (0.15) will be RGB = (232, 183, 135) which is the beige color:

The green of the grass (despite having more pixels in the image dedicated to it) has a much lower score by virtue of the algorithm's detection that it's the background and not the focus of the image.

In other words, higher "scores" means higher confidence that the color in question is prominent in the central focus of the image.

It is analogous to your case. Therefore, using background removal can help to find the focus of the image; you can then compute a histogram of the colors of the remaining objects. An example of background removal using deep learning can be found in this post.
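As one possible way to do something similar yourself (this is an assumption, not how the Vision API works internally), you could mask out the background and then cluster the remaining pixels, e.g. with k-means; the file name below is hypothetical.

import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

image = np.asarray(Image.open("photo.jpg").convert("RGB"))   # hypothetical input file
pixels = image.reshape(-1, 3).astype(float)

# Optionally drop background pixels here, e.g. pixels = pixels[foreground_mask.reshape(-1)]

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(pixels)
counts = np.bincount(kmeans.labels_)
dominant = kmeans.cluster_centers_[np.argsort(counts)[::-1]]   # colors sorted by pixel share
print(dominant.astype(int))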

",4446,,,,,11/17/2020 23:53,,,,0,,,,CC BY-SA 4.0 24685,1,24700,,11/17/2020 23:54,,2,1872,"

I am using the AlexNet CNN to classify my dataset, which contains 10 classes and 1000 samples per class, with 60-30-10 splits for training, validation, and test. I used different batch sizes, learning rates, activation functions, and initializers. I'm using the sparse categorical cross-entropy loss function.

However, while training, my loss value is greater than one (almost equal to 1.2) in the first epoch, but until epoch 5 it comes near 0.8. Is it normal? If not, how can I solve this?

",33792,,2444,,11/19/2020 12:07,11/19/2020 12:07,Can the (sparse) categorical cross-entropy be greater than one?,,1,0,,,,CC BY-SA 4.0 24686,1,24694,,11/18/2020 7:04,,5,756,"

If there are two different optimal policies $\pi_1, \pi_2$ in a reinforcement learning task, will the linear combination (or affine combination) of the two policies $\alpha \pi_1 + \beta \pi_2, \alpha + \beta = 1$ also be an optimal policy?

Here I give a simple demo:

In a task, there are three states $s_0, s_1, s_2$, where $s_1, s_2$ are both terminal states. The action space contains two actions $a_1, a_2$.

An agent will start from $s_0$, it can choose $a_1$, then it will arrive $s_1$,and receive a reward of $+1$. In $s_0$, it can also choose $a_2$, then it will arrive $s_2$, and receive a reward of $+1$.

In this simple demo task, we can first derive two different optimal policies $\pi_1$, $\pi_2$, where $\pi_1(a_1|s_0) = 1$, $\pi_2(a_2 | s_0) = 1$. The combination of $\pi_1$ and $\pi_2$ is $\pi: \pi(a_1|s_0) = \alpha, \pi(a_2|s_0) = \beta$. $\pi$ is an optimal policy, too, because any policy in this task is an optimal policy.

",40633,,40633,,11/18/2020 13:47,11/20/2020 14:13,"Given two optimal policies, is an affine combination of them also optimal?",,2,0,,,,CC BY-SA 4.0 24688,2,,11504,11/18/2020 7:44,,1,,"

I could give you my $0.02 on fraud detection.

  1. Read everything you can on the Equifax breach and seek to secure your data
  2. Benford's Law would be a good place to start (see the sketch after this list)
  3. Look for log activity that is inhumanly consistent: for example, if the "ip_address", "id" and/or "timestamp" fields all show a constant 3-second gap between actions, or the gap is always a random choice between 3 and 6 seconds.
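Regarding point 2, here is a minimal Python sketch of a Benford's Law check: compare the observed leading-digit frequencies of your amounts against the expected $\log_{10}(1 + 1/d)$ distribution (the sample amounts below are made up).

import numpy as np

amounts = np.array([1203.5, 18.2, 132.0, 45.9, 978.0, 2210.0, 11.1, 305.4])  # made-up data

first_digits = np.array([int(str(a).lstrip("0.")[0]) for a in amounts])
observed = np.array([(first_digits == d).mean() for d in range(1, 10)])
expected = np.log10(1 + 1 / np.arange(1, 10))

print(np.round(observed, 3))
print(np.round(expected, 3))   # large deviations can flag records worth investigating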

If you plan on investing the time and resources that ML or AI require, you will need to isolate "good data" as training data and train your model on that. Perhaps you could get the IP addresses of your known top 10 customers and include that.

Then begin training with that as your sample data, while keeping your test data separate.

I'm sure there is a lot more, but I would need to know the context of the info provided, in terms of what kind of fraud you are looking for, what you have tried, or why they think there is fraud to be found.

",34095,,,,,11/18/2020 7:44,,,,0,,,,CC BY-SA 4.0 24689,2,,24686,11/18/2020 9:06,,2,,"

Yes, in general any linear combination of probability distributions between optimal policies is also an optimal policy. In fact any combination with each state treated separately will also be an optimal policy.

This can be seen using the equation for optimal deterministic policy in terms of optimal value function:

$$\pi^*(s) = \text{argmax}_a [\sum_{r,s'}p(r,s'|s,a)(r + \gamma v^*(s'))] = \text{argmax}_a [q^*(s,a)]$$

The only way to have multiple equivalent optimal policies is when there are two or more actions tied for $\text{max}_a [q^*(s,a)]$. If that is the case, then it does not matter - in terms of expected future reward - which of those actions is taken in the affected state. It is possible to take one or the other action.

You can use any rule you wish to decide which of the tied-for-max actions to take, including a random choice. Creating a new policy that is a linear combination of optimal policies is one way to make that random choice. You could equally say that you would make one action choice on a Monday and another on Tuesday - in terms of being optimal choices it does not matter at all how you break ties.

",1847,,,,,11/18/2020 9:06,,,,0,,,,CC BY-SA 4.0 24690,1,24693,,11/18/2020 11:04,,1,45,"

I'm a beginner with a classic "racing car" sandbox and a homemade simple neural network.

My pattern:

  1. Copy the "top car" (without mutation) to the next generation

  2. If there are some cars still running (because simulation reached the 30s win condition), then copy a mutated version of them for the next generation.

  3. Fill the rest of the pool with mutation of the "top car".

But this is just some dumb intuitive pattern I made on the fly while playing with my code. Perhaps I should copy the cars that are still running as-is instead of mutating them. Or, perhaps, some selection method I don't know about.

A new random track is generated at each new generation. a "top car" may be good on a track and crash immediately on the next track. I just feel that basing everything on the top car is wrong because of the track randomness.

Is there some known pattern for selecting a batch of candidates? (paper, google-fu keyword, interesting blog, etc.)

I don't know what to search for. I don't even know the name of my network or any vocabulary related to AI.

",42373,,2444,,11/18/2020 12:40,11/18/2020 12:40,Is there some known pattern for selecting a batch of candidates for the next generation?,,1,3,,,,CC BY-SA 4.0 24692,2,,24565,11/18/2020 11:56,,1,,"

This is not necessarily the only way to do this but it would be the approach I'd take.

Assuming your agent's position is a vector in $\mathbb{R}^d$, I would have the network take this position vector as input and pass it through a fully connected layer. I would also take the matrix as input, pass it through one or more convolutional layers, and flatten the output so it is also a vector, in $\mathbb{R}^{d'}$. I would then concatenate these together, so you have a vector in $\mathbb{R}^{d + d'}$, and pass this through some fully connected layers as usual.
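A minimal Keras sketch of that layout might look as follows; the dimensions, layer sizes and number of actions are placeholders.

from tensorflow import keras
from tensorflow.keras import layers

n, m, d, num_actions = 10, 10, 4, 6   # matrix size, position dimension, action count (placeholders)

matrix_in = keras.Input(shape=(n, m, 1))
position_in = keras.Input(shape=(d,))

x = layers.Conv2D(16, 3, activation="relu")(matrix_in)
x = layers.Flatten()(x)                    # flattened convolutional features

p = layers.Dense(32, activation="relu")(position_in)

h = layers.Concatenate()([x, p])           # concatenated feature vector
h = layers.Dense(64, activation="relu")(h)
scores = layers.Dense(num_actions)(h)      # one score per action

model = keras.Model([matrix_in, position_in], scores)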

As for how many layers and which activation to use this is something you'll have to do by trial and error as this really is problem specific.

Your output would typically be a score for each of the action combinations, though as one of your outputs is a matrix this could potentially make things expensive to compute as you would need $2^{n \times m}$ binary representations just for the matrix ($n$ is the number of rows and $m$ is the number of columns).

",36821,,,,,11/18/2020 11:56,,,,0,,,,CC BY-SA 4.0 24693,2,,24690,11/18/2020 12:10,,1,,"

The most general descriptive frameworks covering what you are trying to do are:

These put some context around your problem, and might give you some pointers. For instance, reinforcement learning is an alternative approach to the evolutionary system you are trying to build.

The specific AI system you appear to be building is a genetic algorithm, and more specific still you are attempting to find a neural network that is optimal at a task by searching for the best network using a system of population generation, selection and mutation which repeats.

There are lots of ways to set up a system like this, so your approach is not necessarily wrong. However, I think there are two key things that would improve what you have built so far:

  • Use a fitness function for selection. Score each car, perhaps by how far it got before crashing when the episode ends. To reduce luck factor on random courses, you could make this score the mean result from e.g. 3 different courses (it is not necessary, but may address your concern that selection is too random in your case). Select some fraction of top scoring cars, or look into other selection approaches - e.g. weighted selection based on fitness score or ranking.

  • Add "sex", more properly known as genome crossover between selected population members. Mutating individuals alone is limiting because it silos improvements to a single line of ancestry - if there are two good mutations found at random, you rely on that single line finding both of them. Crossover, in contrast, allows sharing of good mutations between lines, making it much more likely that two good mutations will end up in the same individual. A minimal sketch of both ideas follows this list.
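As promised, here is a small Python sketch of fitness-weighted selection and single-point crossover; the genome is treated as a flat list of network weights, and the fitness values are placeholders.

import random

def select(population, fitnesses):
    # Fitness-proportionate ("roulette wheel") selection of one parent
    return random.choices(population, weights=fitnesses, k=1)[0]

def crossover(parent_a, parent_b):
    # Single-point crossover between two flat weight vectors of equal length
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def mutate(genome, rate=0.05, scale=0.1):
    return [w + random.gauss(0, scale) if random.random() < rate else w for w in genome]

population = [[random.uniform(-1, 1) for _ in range(20)] for _ in range(50)]
fitnesses = [random.uniform(0, 1) for _ in population]   # placeholder, e.g. distance travelled
next_gen = [mutate(crossover(select(population, fitnesses),
                             select(population, fitnesses))) for _ in range(50)]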

There is a framework called NEAT which covers the issues above plus has other features useful for evolving neural networks. It often does well at control scenarios like the one you are considering. You may want to look into it, if your focus is mainly on solving the control problem. However, it is relatively advanced from where you are, so if your current focus to learn by building from scratch you may get more initially from implementing fitness functions and crossover yourself.

",1847,,,,,11/18/2020 12:10,,,,2,,,,CC BY-SA 4.0 24694,2,,24686,11/18/2020 12:13,,2,,"

Short answer

Two policies are different if they take different actions in a specific state $s$ (or they give different probabilities of taking those actions in $s$). There can be more than one optimal policy for a given value function: this only happens when two actions have the same value in a given state. Nevertheless, both policies lead to the same expected return. So, although they take different actions, those actions lead to the same expected return, so it doesn't matter which one you take: both actions are optimal.

Long answer

There are a few important points that need to be understood before understanding that an affine combination of optimal policies is also optimal.

  • A policy $\pi$ is optimal if and only if $v_\pi(s) \geq v_{\pi'}(s)$, for all states $s \in S$ and $\pi' \neq \pi \in \Pi$ [1];

    • In that case, we denote $\pi$ as $\pi_*$ and $\pi_* = \pi \geq \pi'$, for all $\pi' \neq \pi \in \Pi$.

    • In simple words, a policy is optimal if it leads to more or equal expected return, in all states, with respect to all other policies

  • Optimal policies share the same state and state-action value functions [1, 2], i.e. $v_*$ and $q_*$, respectively

    • In other words, if $\pi_1$ and $\pi_2$ are optimal policies, then $v_{\pi_1}(s) = v_{\pi_2}(s) = v_{\pi_*}(s)$ and $q_{\pi_1}(s, a) = q_{\pi_2}(s, a) = q_{\pi_*}(s, a)$, for all $s \in S$ and $a \in A$
  • Consequently, two optimal policies $\pi_1$ and $\pi_2$ can differ in state $s$ (i.e. $\pi_1$ takes action $a_1$ and $\pi_2$ takes action $a_2$ and $a_1 \neq a_2$) if and only if there exist actions $a_1$ and $a_2$ in $s$ such that \begin{align} v_{*}(s) &= q_{\pi_1}(s, a_1) \\ &= q_{\pi_1}(s, a_2) \\ &= q_{\pi_2}(s, a_2) \\ &= q_{\pi_2}(s, a_1) \\ &= \max _{a \in \mathcal{A}(s)} q_{\pi_{*}}(s, a) \\ &= \max _{a \in \mathcal{A}(s)} q_{\pi_1}(s, a) \\ &= \max _{a \in \mathcal{A}(s)} q_{\pi_2}(s, a) \tag{1} \label{1} \end{align}

  • This holds for deterministic (i.e. policies that always take the same action in a given state, i.e. they give probability $1$ to one action) and stochastic (give non-zero probability only to optimal actions) optimal policies

So, two different optimal policies $\pi_1$ and $\pi_2$ lead to the same expected return, for all states. Given that optimality is defined in terms of expected return, then, if $a_1 = \pi_1(s) \neq \pi_2(s) = a_2$, for some state $s$, then, it doesn't matter whether you take $a_1$ or $a_2$, because both lead to the same expected return. So, as written in this answer, you can either take action $a_1$ or $a_2$: both are optimal in terms of expected returns and this follows from equation \ref{1} above.

In this simple demo task, we can first derive two different optimal policy $\pi_1$, $\pi_2$, where $\pi_1(a_1|s_0) = 1$, $\pi_2(a_2 | s_0) = 1$. The combination of $\pi_1$ and$\pi_2$ is $\pi: \pi(a_1|s_0) = \alpha, \pi(a_2|s_0) = \beta$. $\pi$ is an optimal policy, too. Because any policy in this task is an optimal policy.

Yes, correct. The reason is simple. In your case, $\pi_1$ and $\pi_2$ give probability $1$ to one action, $a_1$ and $a_2$ respectively, so they must give probability $0$ to any other actions. $\pi$ will give a probability $\alpha$ to action $a_2$ and probability $\beta$ to action $a_1$, but, given that $a_1$ and $a_2$ lead to the same expected return (i.e. they are both optimal), it doesn't matter whether you take $a_1$ or $a_2$, even if $\alpha \ll \beta$ (or vice-versa).

",2444,,2444,,11/20/2020 14:13,11/20/2020 14:13,,,,3,,,,CC BY-SA 4.0 24698,2,,5715,11/18/2020 17:14,,1,,"

RLS is a second-order optimizer: unlike LMS, which only uses an approximation of the gradient (the first-order derivative), RLS also takes second-order derivative information into account. You can study more about second-order methods in sub-section "8.6 Approximate Second-Order Methods" of the following book, available online:

https://www.deeplearningbook.org/contents/optimization.html

Deep Learning An MIT Press book Ian Goodfellow and Yoshua Bengio and Aaron Courville

",42330,,,,,11/18/2020 17:14,,,,0,,,,CC BY-SA 4.0 24699,1,,,11/18/2020 17:36,,1,157,"

I'm experimenting with an RL agent that interacts with the following environment. The learning algorithm is double DQN. The neural network represents the function from state to action values. It's built with the Keras sequential model and has two dense layers. The observation in the environment consists of the following features

  1. the agent's position in an N-dimensional grid,
  2. metrics that represent the hazards (temperatures, toxicity, radiation, etc.) of adjacent cells, and
  3. some parameters that represent the agent's current characteristics (health, mood, etc.).

There are patterns to the distribution of hazards and the agent's goal is to learn to navigate safely through space.

I am concatenating these features, in the aforementioned order, into a tensor, which is fed into the double DQN.

Does the order in which the features are concatenated to create the state (or observation) matter? Is it possible to group the features in some way to increase the learning speed? If I mix up the features randomly, would that have any effect or it doesn't matter to the agent?

",38076,,38076,,11/20/2020 16:58,11/20/2020 16:58,Does the order in which the features are concatenated to create the state (or observation) matter?,,0,3,,,,CC BY-SA 4.0 24700,2,,24685,11/18/2020 18:34,,2,,"

Both the sparse categorical cross-entropy (SCE) and the categorical cross-entropy (CCE) can be greater than $1$. By the way, they are the same exact loss function: the only difference is really the implementation, where the SCE assumes that the labels (or classes) are given as integers, while the CCE assumes that the labels are given as one-hot vectors.

Here is the explanation with examples.

Let $(x, y) \in D$ be an input-output pair from the labelled dataset $D$, where $x$ is the input and $y$ is the ground-truth class/label for $x$, which is an integer between $0$ and $C-1$. Let's suppose that your neural network $f$ produces a probability vector $f(x) = \hat{y} \in [0, 1]^C$ (e.g. with a softmax), where $\hat{y}_i \in [0, 1]$ is the $i$th element of $\hat{y}$.

The formula for SCE is (which is consistent with the TensorFlow implementation of this loss function)

$$ \text{SCE}(y, \hat{y}) = - \ln (\hat{y}_{y}) \label{1}\tag{1}, $$

where $\hat{y}_{y}$ is the $y$th element of the output probability vector $\hat{y}$ that corresponds to the probability that $x$ belongs to class $y$, according to $f$.

Actually, the equation \ref{1} is also the definition of the CCE with one-hot vectors as targets (which behave as indicator functions). The only difference between CCE and SSE is really just the representation of the targets, which can slightly change the implementation under the hood. Moreover, note that this is the definition of the CE for only $1$ training pair. If you have multiple pairs, you have to compute the CE for all pairs, then average these CEs (for a reference, see equation 4.108, section 4.3.4 Multiclass logistic regression of Bishop's book PRML).

Let's have a look at a concrete example with concrete numbers. Let $C=5$, $y = 3$, $\hat{y} = [0.2, 0.2, 0.1, 0.4, 0.1]$, then the SCE is

\begin{align} \text{SCE}(y, \hat{y}) &= - \ln (0.4) \approx 0.92, \end{align}

If $\hat{y} = [0.2, 0.2, 0.2, 0.1, 0.3]$, so $\hat{y}_{y} = 0.1$, and we still have $y = 3$, then the CCE is $2.3 > 1$.

You can execute this Python code to check yourself.

import numpy as np
import tensorflow as tf # Install TensorFlow 2.3!

y = 1
y_true = [3] # sparse label (integer)
y_true2 = [0, 0, 0, 1, 0] # one-hot vector

for y_y in [0.4, 0.1]:
    sce_np = -(y * np.log(y_y))
    print("SCE (NumPy) =", sce_np)

y_preds = [[0.2, 0.2, 0.1, 0.4, 0.1],
           [0.2, 0.2, 0.2, 0.1, 0.3]]
for y_pred in y_preds:
    sce_tf = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
    cce_tf = tf.keras.losses.categorical_crossentropy(y_true2, y_pred)

    print("SCE (TensorFlow) =", sce_tf)
    print("CCE (TensorFlow) =", cce_tf)

To answer the following question more directly.

However, while training, my loss value is greater than one (almost equal to 1.2) in the first epoch, but until epoch 5 it comes near 0.8. Is it normal? If not, how can I solve this?

Yes. It can happen, as explained above. (However, this does not mean that you do not have mistakes in your code.)

",2444,,2444,,11/19/2020 12:00,11/19/2020 12:00,,,,0,,,,CC BY-SA 4.0 24702,1,24703,,11/18/2020 19:52,,2,548,"

Background

From my understanding (and following along with this blog post), (deep) neural networks apply transformations to the data such that the data's representation to the next layer (or classification layer) becomes more separate. As such, we can then apply a simple classifier(s) to the representation to chop up the regions where the different classes exist (as shown by this blog post).

If this is true and say we have some noisy data where the classes are not easily separable, would it make sense to push the input to a higher dimension, so we can more easily separate it later in the network?

For example, I have some tabular data that is a bit noisy, say it has 50 dimensions (input size of 50). To me, it seems logical to project the data to a higher dimension, such that it makes it easier for the classifier to separate. In essence, I would project the data to say 60 dimensions (layer out dim = 60), so the network can represent the data with more dimensions, allowing us to linearly separate it. (I find this similar to how SVMs can classify the data by pushing it to a higher dimension).

Question

Why, if the above is correct, do we not see many neural network architectures projecting the data into higher dimensions first then reducing the size of each layer thereafter?

I learned that if we have more hidden nodes than input nodes, the network will memorize rather than generalize.

",42384,,2444,,11/19/2020 0:49,11/24/2020 15:41,"Why don't neural networks project the data into higher dimensions first, then reduce the size of each layer thereafter?",,2,0,,,,CC BY-SA 4.0 24703,2,,24702,11/18/2020 21:36,,0,,"

To better understand this you should think in terms of capacity. Capacity is a theoretical notion that shows how much information your network can model.

The capacity of a network (given sufficient training) ties in directly with the bias/variance tradeoff:

  • too little capacity and your network isn't able to learn the complex relationships in the data.
  • too much capacity and your network has the capability of learning the noise in the dataset, besides the useful relationships.

At some point, a network has a high enough capacity to memorize the whole training set!

Now, by increasing the number of hidden neurons you essentially increase the capacity of your network. If the network already has enough capacity to learn the problem, then by increasing the neurons you are giving the network the capability of overfitting more easily.
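
As a rough illustration (a pure-Python sketch with arbitrary sizes), you can see how the number of trainable parameters - one crude proxy for capacity - grows as you widen a single hidden layer:

def mlp_param_count(n_in, n_hidden, n_out):
    # weights + biases of a one-hidden-layer fully-connected network
    return (n_in * n_hidden + n_hidden) + (n_hidden * n_out + n_out)

for width in (10, 50, 100, 500):
    print(width, mlp_param_count(n_in=50, n_hidden=width, n_out=2))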

Note: all the above assume that the network is trained sufficiently (i.e. no early stopping, etc).

",26652,,,,,11/18/2020 21:36,,,,0,,,,CC BY-SA 4.0 24704,1,,,11/18/2020 21:56,,2,350,"

In Artificial Intelligence A Modern Approach, search algorithms are divided into tree-search version and graph-search version, where the graph-search version keeps an extra explored set to avoid expanding nodes repeatedly.

However, in breadth-first search or A* search, I think we still need to keep the expanded nodes in the memory so that we can track the path from the root to the goal. (The node structure contains its parent node which can be used to extract the solution path, but only if we have those nodes kept in the memory)

So, if I'm right, why do we need the tree-search version in BFS and A*, given that the expanded nodes still need to be stored? Why not just use the graph-search version then?

If I'm wrong, so how do we track the solution path given that the expanded nodes have been discarded?

",42387,,2444,,11/18/2020 22:09,11/18/2020 22:09,Why do we use the tree-search version of breadth-first search or A*?,,0,3,,,,CC BY-SA 4.0 24708,1,,,11/19/2020 2:28,,0,69,"

I consider pre-training a YOLOv5 with Google Open Images Object Detection dataset. The dataset includes general domain categories with ~15 M box samples. After the pre-training is done, I will fine-tune the model on MS COCO dataset.

I would only like to do it if I can improve AP by ~7%. Do you think that this is possible, and is my expectation reasonable? Unfortunately, I could not find anyone who has tried an Open Images pre-trained object detector with MS COCO training.

",10664,,2444,,11/19/2020 12:36,11/19/2020 12:36,Is it possible to improve the average precision of YOLO trained on Open Images Dataset by fine-tuning it with COCO?,,0,2,,,,CC BY-SA 4.0 24711,2,,24672,11/19/2020 7:38,,1,,"

How to extract the main content text from a formated text file?

I am not sure that just a neural network is the best approach to your problem.

Traditional natural language processing software uses something else, generally a complex mix of several techniques. I am assuming you are processing written text available as a file (in a file format you are very familiar with, e.g. OOXML, PDF or HTML5).

Read the wikipage on natural-language understanding and the one on parse trees (or concrete syntax trees).

BTW, you might use LaTeX or the Lout formatter to produce some PDF file. Both are open-source software (easily available on most Linux distributions, including Debian or Ubuntu). I recommend you try generating a PDF file with them and experimenting on the generated file. And a lot of AI papers are available (as preprints) in PDF form.

You could also use, as PDF inputs for experimenting with your software, this or that draft report (you might enjoy reading them too...). If in 2021 your software is capable of "understanding" and "abstracting/summarizing" these PDF files, please send me an email at basile@starynkevitch.net explaining (in written English) how you built your neural network and what the output of your software is.

There are several issues:

  • extracting the text from the non-textual markup (e.g. HTML tags in HTML input, string objects in a PDF file, or LaTeX commands).

  • detecting the human language used in your text (e.g. French or English or Russian or Chinese). N-gram based techniques come to mind.

  • having a data structure or database representing a dictionary of at least a thousand significant words (in English or Russian or whatever human language you are interested in) related to the domain you want to handle (that dictionary would be different if you want to parse weather forecasts or documentation related to the automotive industry, since the word pressure or speed relates to different concepts. Notice also that "weather" and "time" are expressed in French by the same word: "temps" - as in "le temps qu'il fait" for ongoing weather and "le temps qui passe" for the flow of time). A "Queen" is not the same for a chess player and an historian. A program translating -or just analyzing- chess comments from English won't use the same word for translating / understanding "bishop" (in chess, "fou" in French, literally the crazy guy, unrelated to religion; in Russian chess books it would be "слон", literally an elephant) as another program translating / analyzing historical comments from English (e.g. about Mary Stuart).

  • modeling inside your software some domain-specific knowledge related to your analyzed text, since you would handle weather forecast text differently from textual comments on chess competitions, or from textual exercises in a computer science or programming book (like CLRS). You could use some frame-based representations, like in RefPerSys or in CyC.

  • building a semantic network representing the input text. I believe you might need some prior one representing domain-specific knowledge in the area of the analyzed text (e.g. a program analyzing comments on chess games needs to know the rules of chess; another program analyzing StackOverflow answers probably needs to know something about operating systems in general). I think that in English "overflow" or "overheating" means very different concepts to software developers and to weather forecasters or climate experts.

Look also for inspiration in this blog by the late Jacques Pitrat. He wrote an interesting book on your topic.

You might look inside the DECODER European project, and read more about expert systems and their inference engine and knowledge bases.

Your project could give you some PhD.

You certainly need several years of work to achieve your goals. I suggest contacting some academic in your area to be your advisor.

Notice that on Linux the pdf2text software extracts text from PDF files. It is open source, but I won't say it is AI software. However, you could use it through popen(3). See also regex(7).

BTW, the PDF specification is public as ISO 32000-2:2017 (and is related to PostScript). Get it and read it, and see also this youtube video or this 978-page document. On Linux, most PDF files can usually be inspected with od(1) or less(1).

My HP Office Pro 8610 printer (connected to a Linux desktop) is capable of printing a PDF and of scanning into a PDF file. But if I print a PDF file on paper and scan it, the resulting PDF file changes a lot, even if visually it looks the same.

Notice that some drawings -or photos- could be embedded in a PDF file, and appear to a non-blind human reader as letters.

",3335,,3335,,11/19/2020 18:49,11/19/2020 18:49,,,,6,,,,CC BY-SA 4.0 24714,2,,24673,11/19/2020 10:22,,1,,"

OK, so I found the answer - it is to use multiple linear regression. I think this can be marked as solved, but I don't have enough rep to do that.

",42358,,,,,11/19/2020 10:22,,,,0,,,,CC BY-SA 4.0 24719,1,,,11/19/2020 12:51,,3,950,"

I've been training a VAE to reconstruct human names. When I train it with a batch size of 100+, after about 5 hours of training it tends to just output the same thing regardless of the input, and I'm using teacher forcing as well. When I use a very low batch size, for example 1, it heavily overfits, while a batch size of 16 tends to give much better generalization. Is there something about VAEs that would make this happen? Or is it just my specific problem?

",30885,,,,,11/19/2020 12:51,Why would a VAE train much better with batch sizes closer to 1 over batch size of 100+?,,0,0,,,,CC BY-SA 4.0 24724,1,24738,,11/19/2020 14:02,,1,312,"

Here is David Silver's lecture on that. Look at 9:30 to 10:30.

He says that, since it is model-free learning, the environment's dynamics are unknown, so the action-value function $Q$ is used.

  • But then state-values are already calculated (via first-visit or every-visit). So, why aren't these values used?

  • Secondly, even if we were to use $Q$, we have $Q^{\pi}(s,a) = R(s) + \gamma \sum_{s'}P(s'|s,a)V^{\pi}(s')$, so we still need to know the transition model, which is unknown.

What am I missing here?

",42398,,2444,,11/20/2020 2:13,11/20/2020 14:09,Why does Monte Carlo policy evaluation relies on action-value function rather than state-value function?,,1,1,,,,CC BY-SA 4.0 24730,2,,24642,11/19/2020 15:41,,1,,"

I will answer the first question based on the information I have gathered so far. The probability of each action in a $\textbf{two-player zero-sum game}$ need not be the same for both players. It just turns out that, in the game of Morra, the probability vectors are the same.

In general, to determine $\textbf{optimal mixed strategies}$ for a two-player, two-action game: suppose we have 2 actions $a_1$ and $a_2$. In a mixed strategy, we let the probability of the row player taking action $a_1$ be $p$; the probability of the row player taking $a_2$ is then $1-p$. Now, we will try to find the probability $p$ such that the column player is indifferent between actions $a_1$ and $a_2$ (i.e. the expected payoff for the column player is the same for either of its actions when the row player plays $a_1$ with probability $p$).

Solving for $p$, the column player can then play any mixed strategy, since every strategy of the column player yields the same expected payoff (it is the same for either $a_1$ or $a_2$). By symmetry, we also want to find a probability distribution over $a_1$ and $a_2$ for the column player such that the row player is indifferent between $a_1$ and $a_2$: we let the column player take action $a_1$ with probability $q$ and repeat the same process to find $q$. The game value for the row player can then be computed from $x^TAy$, and this turns out to be the same value for the column player as well (only in zero-sum games).
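
Here is a small NumPy sketch of this indifference computation for a generic 2x2 zero-sum game (the payoff matrix is a made-up example, matching pennies; the closed-form solution assumes no dominant strategy, i.e. a non-zero denominator):

import numpy as np

A = np.array([[ 1., -1.],
              [-1.,  1.]])            # row player's payoffs (column player gets -A)

denom = A[0, 0] - A[1, 0] - A[0, 1] + A[1, 1]
p = (A[1, 1] - A[1, 0]) / denom       # row player plays a_1 with probability p
q = (A[1, 1] - A[0, 1]) / denom       # column player plays a_1 with probability q

x = np.array([p, 1 - p])
y = np.array([q, 1 - q])
print(p, q, x @ A @ y)                # 0.5, 0.5, 0.0 for matching pennies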

This process yields a $\textbf{mixed strategy Nash Equilibrium}$. However, such an approach does not always work, in particular when one action choice dominates another (e.g. in the prisoner's dilemma with Alice and Bob, no matter what mixed strategy Bob tries to use, there is no mixed strategy that makes Alice indifferent between refusing and testifying: she will always testify).

I am not too sure about how to solve zero-sum games involving more than 2 actions. I think linear programming would have to be used, but I am not too familiar with how it can be applied.

",32780,,,,,11/19/2020 15:41,,,,0,,,,CC BY-SA 4.0 24731,2,,205,11/19/2020 16:51,,1,,"

Software reverse engineering is one of my hobbies.

First things first: forget about headers. All information about headers and separate C file is gone.

You're missing some crucial step, IMHO.

  • Compilation creates one or multiple object files (.o), then the linker creates an executable.

  • You should work from disassembled code. The disassembler works pretty well with some exceptions (self-modifying code, self-extracting executable, various obfuscation techniques) and will take care of a lot of work for you: identifying various sections, finding functions, guessing (fairly accurately) the calling convention.

  • Then the compiler optimizations will mess up your code in a very clever way, and some parts of your original code will never be seen again (hey, look, your 200 lines of buggy code always return 0 anyway, so I'll just replace them with "xor eax, eax").

  • Sometimes it's fine, and sometimes it produces unreadable C code (vectorizations that have no C equivalent and will be decompiled into hundreds of lines of intrinsics instead of a nice readable "for" loop).

  • I'm not done yet. You also have exceptions and interrupts, structures, unions, function pointers, function inlining, threading, system calls and signals, loop unrolling, etc.

Going down (from human-readable to binary) is relatively easy compared to going up (decompilation) because so much information is lost during the compilation process.

My best bet would be to have a bunch of disassembled functions produced by a disassembler and to produce an LLVM intermediate representation using your AI, then compare it with the LLVM IR produced by Clang (clang -S -emit-llvm foo.c).

An infinite number of different C programs can produce the exact same binary code. Therefore, I think it's meaningless to make an AI read C code for the purpose of decompilation: the information is lost forever.

Commercial and Open/Free decompilers do not produce real C code either; they produce some kind of pseudo-C full of errors, missing code, or code even less readable than the ASM.

The following code :

int main() {
    int toto = 0x0000BEEF;
    int titi = 0xDEAD0000;
    toto = toto | titi;
    return toto;
}

produces this:

int __cdecl main(int argc, const char **argv, const char **envp)
{
     return -559038737;
}

And this is the disassembled version:

mov     eax, 0DEADBEEFh
retn

Plus a few thousand lines of assembly that are unrelated to your code but are needed to make the program work.

You can't go back and you have no way of knowing this is the exact same code unless

  1. you can do static analysis (very easy in this case, but absurdly difficult in the real world)

  2. or compare the IR or ASM produced by both code with the same compiler with the same options on the same architecture and operating system.

",42373,,2444,,11/19/2020 21:34,11/19/2020 21:34,,,,2,,,,CC BY-SA 4.0 24732,1,24776,,11/19/2020 18:28,,1,1134,"

I know this is a general question, but I'm just looking for intuition. What are the characteristics of problems (in terms of state-space, action-space, environment, or anything else you can think of) that are well solvable with the family of DQN algorithms? What kind of problems are not well fit for DQNs?

",38076,,2444,,11/21/2020 21:32,11/22/2020 20:03,What kind of problems is DQN algorithm good and bad for?,,1,0,,,,CC BY-SA 4.0 24734,2,,24682,11/19/2020 23:07,,1,,"

According to Artificial Intelligence: A Modern Approach (4th edition), in IDA* the cutoff is the $f$-cost ($g+h$); at each iteration, the cutoff value is the smallest $f$-cost of any node that exceeded the cutoff on the previous iteration.

In other words, each iteration exhaustively searches an $f$-contour, finds a node just beyond that contour, and uses that node's $f$-cost as the next contour.

And we must test whether the node is a goal node when it is selected for expansion; otherwise, the algorithm is no longer optimal (the proof is similar to the one for A*).

I think the animation that the site provides is misleading, because in the code given at the end of the same page we have the following:

function search(node, g, threshold)        // recursive function
{
    f = g + heuristic(node);
    if (f > threshold)                     // greater f encountered
        return f;
    if (node == Goal)                      // goal node found
        return FOUND;
    integer min = MAX_INT;                 // minimum 'f' seen that exceeds the threshold
    foreach (tempnode in nextnodes(node))
    {
        // recursive call with the next node as the current node for the depth-first search
        integer temp = search(tempnode, g + cost(node, tempnode), threshold);
        if (temp == FOUND)                 // goal found
            return FOUND;
        if (temp < min)                    // keep the minimum of all 'f' greater than the threshold
            min = temp;
    }
    return min;                            // return the minimum 'f' encountered greater than the threshold
}

And in the previous code, we test whether the node is the goal node only when it is selected for expansion.
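
For completeness, here is a minimal Python sketch of the same idea, with the goal test done when the node is expanded; `h`, `successors` and `is_goal` are assumed to be supplied by the caller:

import math

def ida_star(start, h, successors, is_goal):
    # h(node) -> admissible heuristic estimate
    # successors(node) -> iterable of (child, step_cost)
    # is_goal(node) -> bool
    def search(node, g, threshold, path):
        f = g + h(node)
        if f > threshold:                   # f beyond the current contour
            return f, None
        if is_goal(node):                   # goal test when the node is expanded
            return g, path
        minimum = math.inf
        for child, step_cost in successors(node):
            if child in path:               # avoid trivial cycles on the current path
                continue
            t, found = search(child, g + step_cost, threshold, path + [child])
            if found is not None:
                return t, found
            minimum = min(minimum, t)
        return minimum, None                # smallest f that exceeded the threshold

    threshold = h(start)
    while True:
        t, found = search(start, 0, threshold, [start])
        if found is not None:
            return t, found                 # (cost, path to the goal)
        if t == math.inf:                   # no more nodes to explore
            return math.inf, None
        threshold = t                       # next contour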

",36578,,2444,,11/21/2020 11:49,11/21/2020 11:49,,,,1,,,,CC BY-SA 4.0 24735,1,,,11/19/2020 23:48,,1,46,"

My question may be a bit hard to explain... My neural network learns a categorical distribution, which serves as an index. This index will look up the value (= action_mean) in Input 2.

From this action_mean, I create a normal distribution where the network has to learn to adjust the standard deviation. The output of the network is a sample of this normal distribution.

Since the value of action_mean is taken directly from the input, the gradient can't be computed (or returns Nones), because the output of the net is not fully connected to the input.

Would there be a way to link my action_mean with the input value, without changing the input values themselves? To describe my problem, I attached a simplified computational graph as TensorBoard shows it.

I would be very thankful for any help!

",41715,,,,,11/19/2020 23:48,How to use unmodified input in neural network?,,0,5,,,,CC BY-SA 4.0 24736,1,,,11/20/2020 0:15,,2,231,"

I would like to use self-supervised learning (SSL) to learn features from images (the dataset consists of similar images with small differences), then use the resulting trained model to bootstrap an instance segmentation task.

I am thinking about using Faster R-CNN, Mask R-CNN, or ResNet for the instance segmentation task, which would be pre-trained in an SSL fashion by solving a pretext task, with the aim that this leads to higher accuracy and also lets the CNN learn from fewer examples during the downstream task.

Is it possible to use SSL to pre-train e.g. a faster R-CNN on a pretext task (for example, rotation), then use this pre-trained model for instance segmentation with the aim to get better accuracy?

",42034,,2444,,11/20/2020 16:50,11/20/2020 16:56,Is it possible to pre-train a CNN in a self-supervised way so that it can later be used to solve an instance segmentation task?,,1,0,,,,CC BY-SA 4.0 24738,2,,24724,11/20/2020 1:55,,3,,"

In model-based reinforcement learning, state and state-action values for all states can be calculated based on the Bellman equations. The equations are taken from Andrew Ng's Algorithms for Inverse Reinforcement Learning: $$V^{\pi}(s) = R(s) + \gamma \sum_{s'}P(s'|s,\pi(s))V^{\pi}(s') \\ Q^{\pi}(s,a) = R(s) + \gamma \sum_{s'}P(s'|s,a)V^{\pi}(s')$$

In this setting, $Q^{\pi}$ can be obtained from $V^{\pi}$ because we have access to the transition model $P(s'|s,a)$. The $Q^{\pi}$ values allow us to carry out a step in $\textbf{policy improvement}$ as in policy iteration.

To answer the first bullet point: first-visit or every-visit policy evaluation of the $\textbf{state values}$ in the model-free setting is not helpful for model-free control, because we cannot compute $Q^{\pi}(s,a)$ from $V^{\pi}$ in the model-free case.

The update for SARSA in model-free control is $$Q(s,a) \leftarrow Q(s,a) + \alpha (r(s) + \gamma Q(s',a') - Q(s,a))$$

Even though we do not know the transition model, we are essentially $\textbf{sampling}$ from $P(s'|s,a)$ by letting the environment provide the possible next states $s'$ that we may end up in. The above update for SARSA corresponds, in expectation, to computing $$Q^{\pi}(s,a) = R(s) + \gamma \mathbb{E}_{s' \sim P(s'|s,a)}[Q^{\pi}(s',a')]$$ In the limit, this should give the same $Q^{\pi}(s,a)$ values as the model-based computation with the ground-truth $P(s'|s,a)$ values.
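
For concreteness, here is a minimal tabular sketch of this sample-based update (the state/action space sizes are placeholders):

import numpy as np

n_states, n_actions = 10, 4   # placeholder sizes
alpha, gamma = 0.1, 0.9
Q = np.zeros((n_states, n_actions))

def sarsa_update(s, a, r, s_next, a_next):
    # s_next is a sample from P(s'|s,a) provided by the environment,
    # so no explicit transition model is needed
    td_target = r + gamma * Q[s_next, a_next]
    Q[s, a] += alpha * (td_target - Q[s, a])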

",32780,,32780,,11/20/2020 14:09,11/20/2020 14:09,,,,2,,,,CC BY-SA 4.0 24740,1,,,11/20/2020 8:24,,1,919,"

I am building my first ANN from scratch. I know that I need a transfer function, and I want to use the sigmoid function, as my teacher recommended it. That function outputs values between 0 and 1, but my input values for the network are between -5 and 20. Someone told me that I need to scale the function so that it is in the range of -5 to 20 instead of 0 to 1. Is this true? Why?

",42414,,2444,,11/20/2020 13:01,11/20/2020 21:04,"How to use sigmoid as transfer function when input is not (0,1) range in ANN?",,2,2,,,,CC BY-SA 4.0 24741,1,24903,,11/20/2020 9:35,,3,435,"

In the GradCAM paper section 3 they implicitly propose that two things are needed to understand which areas of an input image contribute most to the output class (in a multi-label classification problem). That is:

  • $A^k$ the final feature maps
  • $\alpha_k^c$ the average pooled partial derivatives of the output class scores $y^c$ with respect to the final feature maps $A^k$.

The second point is clear to me. The stronger the derivative, the more important the $k$th channel of the final feature maps is.

The first point is not, because the implicit assumption is that non-zero activations have more significance than activations close to zero. I know it's tempting to take that as a given, but for me it's not so obvious. After all, neurons have biases, and a bias can arbitrarily shift the reference point, and hence what 0 means. We can easily transform two neurons [0, 1] to [1, 0] with a linear transformation.

So why should it matter which regions of the final feature maps are strongly activated?


EDIT

To address a comment further down, this table explains why I'm thinking about magnitude rather than sign of the activations.

It comes from thinking about the possible variations of

$$ L_{Grad-CAM}^c = ReLU\bigl( \sum_k \alpha_k^c A^k \bigr) $$

",16871,,16871,,11/29/2020 17:32,11/29/2020 17:32,"In GradCAM, why is activation strength considered an indicator of relevant regions?",,1,0,,,,CC BY-SA 4.0 24742,1,,,11/20/2020 11:21,,2,200,"

I am trying to implement the basic RL algorithms to learn on this 10x10 GridWorld (from REINFORCEjs by Karpathy).

Currently I am stuck at TD(0). No matter how many episodes I run, when I update the policy according to the value function after all episodes are done, I don't get the optimal value function that I obtain when I toggle TD learning on the grid from the link I provided above.

The only way I get the optimal policy is when I update the policy in each iteration and then follow the updated policy when calculating the TD target. But according to the algorithm from Sutton and Barto, a given policy (which is fixed over all episodes - see line 1 below) should be evaluated.
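
For reference, this is a simplified sketch of the tabular TD(0) prediction update I am following (the environment interface and random-policy function here are placeholders, not my actual grid code):

import numpy as np

alpha, gamma = 0.1, 0.9
V = np.zeros((10, 10))

def td0_episode(env, start_state, random_action):
    s = start_state
    done = False
    while not done:
        a = random_action()                           # fixed random policy being evaluated
        s_next, r, done = env.step(s, a)              # placeholder environment call
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
        s = s_next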

Using alpha=0.1 and gamma=0.9, after 1000 episodes my TD(0) algorithm finds the following value function

[[-0.04 -0.03 -0.03 -0.05 -0.08 -0.11 -0.09 -0.06 -0.05 -0.07]
 [-0.08 -0.04 -0.04 -0.06 -0.1  -0.23 -0.11 -0.06 -0.07 -0.11]
 [-0.13  -inf  -inf  -inf  -inf -0.58  -inf  -inf  -inf -0.25]
 [-0.24 -0.52 -1.23 -2.6   -inf -1.4  -1.28 -1.12 -0.95 -0.62]
 [-0.28 -0.49 -0.87 -1.28  -inf -2.14 -2.63 -1.65 -1.38 -1.04]
 [-0.27 -0.42 -0.64 -0.94  -inf  0.97 -1.67 -2.01 -2.79 -1.62]
 [-0.26 -0.36 -0.69 -0.93  -inf -1.17 -1.72 -1.92 -2.75 -1.82]
 [-0.25 -0.38 -0.67 -2.27  -inf -2.62 -2.74 -1.55 -1.31 -1.14]
 [-0.23 -0.31 -0.66 -1.2  -0.98 -1.24 -1.48 -1.02 -0.7  -0.7 ]
 [-0.2  -0.29 -0.43 -0.62 -0.64 -0.77 -0.87 -0.67 -0.54 -0.48]]

where -inf are walls in the grid. If I update the policy according to that value function, I get the following.

[['e   ' 'e   ' 'w   ' 'w   ' 'w   ' 'w   ' 'e   ' 'e   ' 's   ' 'w   ']
 ['e   ' 'n   ' 'n   ' 'w   ' 'w   ' 'e   ' 'e   ' 'e   ' 'n   ' 'w   ']
 ['n   ' 'XXXX' 'XXXX' 'XXXX' 'XXXX' 'n   ' 'XXXX' 'XXXX' 'XXXX' 'n   ']
 ['n   ' 'w   ' 'w   ' 's   ' 'XXXX' 'n   ' 'e   ' 'e   ' 'e   ' 'n   ']
 ['n   ' 'w   ' 'w   ' 'w   ' 'XXXX' 's   ' 'n   ' 'n   ' 'n   ' 'n   ']
 ['s   ' 'w   ' 'w   ' 'w   ' 'XXXX' 's   ' 'w   ' 'n   ' 'n   ' 'n   ']
 ['s   ' 'w   ' 'w   ' 'w   ' 'XXXX' 'n   ' 'w   ' 's   ' 's   ' 's   ']
 ['s   ' 'w   ' 'w   ' 'w   ' 'XXXX' 'n   ' 'e   ' 's   ' 's   ' 's   ']
 ['s   ' 'w   ' 'w   ' 's   ' 's   ' 's   ' 's   ' 'e   ' 's   ' 's   ']
 ['n   ' 'w   ' 'w   ' 'w   ' 'w   ' 'w   ' 'e   ' 'e   ' 'e   ' 'w   ']]

where (n, w, s, e) = (north, west, south, east). According to the result from Andrej Karpathy's simulation (from here), the final policy should look like this

Notes:

  • I did not use any exploration
  • when the agent ends up in the final state [5, 5] I used the value of the starting state [0, 0] as the value of its successor state V(S_{t+1}). The episode is then finished and the agent starts again in the starting state.
  • In every state, the agent takes a random action from north, west, south or east. If it runs into a wall, the value of the next state is just the value of the state the agent is currently in; it stays in its state and takes a random action again.

I am scratching my head on this for a while now but I dont understand what I am missing.

  1. The value function has to converge, meaning my policy should be the same as on the website (picture 2)?
  2. Only the value of my final state is positive, while in the website simulation the whole optimal trajectory has positive values. I know that this is because on the website they update the policy in every step. But shouldn't it also work without updating it iteratively like I did?
  3. Since I take a random action (from n, w, s, e) at every step in every episode, a state like [6, 5] or [6, 6] (the one below the terminal state) cannot really take advantage of the positive terminal state, since it is surrounded by more negative-reward states than this single positive-reward state. This is why, after so many iterations, the values become negative.

I appreciate any help. Thanks in advance.

",29667,,29667,,11/20/2020 11:33,11/20/2020 11:33,Why is TD(0) not converging to the optimal policy?,,0,2,,,,CC BY-SA 4.0 24743,2,,24740,11/20/2020 12:26,,1,,"

There are several functions that can be denoted as sigmoid functions, such as the logistic function and the hyperbolic tangent, given that they have an $S$-shaped curve. You can find more info about them in the related Wikipedia article.

However, when people use the term sigmoid function, they typically refer to the logistic function, which is a function of the form $$\sigma: \mathbb{R} \rightarrow (0, 1).$$ So, it accepts any real number and produces a number between $0$ and $1$, which means you can pass any real number (as input) to the sigmoid.

The standard logistic function $\sigma$ is mathematically defined as follows

$$\sigma(x)=\frac{1}{1+e^{-x}} \in (0, 1)$$

Here is the graph of the logistic function. The $y$-axis contains the output values, i.e. $\sigma(x)$ for any $x \in \mathbb{R}$, while the $x$-axis contains the input values, i.e. $x \in \mathbb{R}$.
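
For instance, here is a minimal sketch that evaluates the standard logistic function on inputs in your range (the values -5 and 20 just come from your question):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Any real input is fine; the output is always squashed between 0 and 1.
print(sigmoid(np.array([-5.0, 0.0, 20.0])))   # approximately [0.0067, 0.5, 1.0]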

That function can be between 0 and 1 but my input values for the network are between -5 and 20. Someone told me that I need to scale the function so that it is in the range of -5 and 20 instead of 0 and 1. Is this true? Why?

No, in principle, you do not have to scale your inputs, for the reasons I have just explained. However, this does not mean that, in practice, it may not be a good idea to standardize or normalize your inputs before feeding them into the neural network. See the vanishing gradient problem (VGP) in this article (by J. Brownlee), or this one (by M. Nielsen), or in this video (by A. Ng). Moreover, you should probably actually use another activation function, such as the ReLU, which does not suffer from the VGP (which, nevertheless, should not occur, if your neural network does not contain more than a couple of hidden layers).

",2444,,2444,,11/20/2020 13:17,11/20/2020 13:17,,,,0,,,,CC BY-SA 4.0 24744,1,,,11/20/2020 14:01,,0,82,"

I have to do a project that detects fabric surface errors, and I will use machine learning methods to deal with it. I have a dataset that includes around six thousand fabric surface images of size 256x256. This dataset is labeled: one thousand images were labeled as NOK, which means a fabric surface with an error, and the rest were labeled as OK, which means a fabric surface without an error.
I read a lot of papers about fabric surface error detection with machine learning methods, and I saw that "autoencoders" are used to do it. But as far as I have seen, autoencoders are used in unsupervised learning models, without labels. I need to do it with supervised learning models. Is there any model that I can use for supervised fabric surface error detection with images? Can autoencoders be used for it, or is there a better model to do it?

",42421,,,,,7/2/2022 12:07,Which models can I use for supervised learning with images?,,2,2,,,,CC BY-SA 4.0 24749,1,27622,,11/20/2020 16:16,,3,148,"

I have a scenario where, in an ideal situation, the greedy approach is the best, but when non-idealities are introduced which can be learned, DQN starts doing better. So, after checking what DQN achieved, I tried C51 using the standard implementation from tf.agents (link). A very nice description is given here. But, as shown in the image, C51 performs extremely badly.

As you can see, C51 stays at the same level throughout. When learning, the loss right from the first iteration is around 10e-3 and goes on to 10e-5, which definitely impacts the change in the weights. But I am not sure how this can be solved.

The scenario is

  • 1 episode consists of 10 steps and only ends after the 10th step; the episode never ends earlier.

  • states at each step are integer values and can take values between 0 and 1. In the image, states are of shape 20*1.

  • actions have the shape 20*1

  • learning rate = 10e-3

  • exploration factor $\epsilon$ starts out at 0.2 and decays down to 0.01

C51 has 3 additional parameters, which help it to learn the distribution of q-values

num_atoms = 51 #@param {type:"integer"}
min_q_value = -20 #@param {type:"integer"}
max_q_value = 20 #@param {type:"integer"}

num_atoms is the number of support points (atoms) that the learned distribution will have, and min_q_value and max_q_value are the endpoints of the q-value distribution. I set num_atoms to 51 (the original paper and other implementations keep it at 51, hence the name C51), and the min and max are set to the minimum and maximum possible rewards.
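
To make the role of these three parameters concrete, this is roughly how the fixed support of the return distribution is built in C51 (a sketch of the standard construction from the paper, not of the tf.agents internals):

import numpy as np

num_atoms, min_q_value, max_q_value = 51, -20, 20

# Fixed support z_1, ..., z_K on which the return distribution is learned;
# the Q-value is the expectation of the predicted probabilities over this support.
support = np.linspace(min_q_value, max_q_value, num_atoms)

def q_value(probs):                  # probs: shape (num_atoms,), sums to 1
    return float(np.sum(probs * support))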

So, if anyone could help me with fine-tuning the parameters for C51 to work, I would be very grateful.

",41984,,2444,,11/21/2020 22:10,10/30/2022 5:05,"How should I change the hyper-parameters of the C51 algorithm, in order to obtain higher reward?",,1,3,,,,CC BY-SA 4.0 24750,2,,24736,11/20/2020 16:32,,2,,"

Is it possible to use SSL to pre-train e.g. a faster R-CNN on a pretext task (for example, rotation), then use this pre-trained model for instance segmentation with the aim to get better accuracy?

Yes, it's possible and this has already been done. I don't know the details (because I have not yet read those papers), but I will provide you with some links to some potentially useful papers (based on their titles and abstracts) and associated code.

You can probably find more relevant papers here, where I also found some of the just cited papers.

The pre-text tasks designed in these papers could be useful in your case, but it may also turn out that you need to develop other pre-text tasks or combine multiple of them.

Maybe you can start from some pre-trained faster R-CNN or some appropriate model for instance segmentation (that you can find on the web, for example, here), which has been pre-trained on some imagery data similar to yours (either with SSL or by other means), then try to fine-tune this model with your labeled dataset for instance segmentation, and see if you get better results than just training a faster R-CNN from scratch. Eventually, if this pre-trained model does not lead to higher performance, you could pre-train it yourself with some SSL technique that you can come up with or one that is described in the literature. Of course, you should probably use a pre-trained model that has been pre-trained with data that is relevant for your downstream task (i.e. the instance segmentation task). You didn't describe the details of your unlabelled and labeled data, so I cannot be more specific (and I wouldn't currently be able to, in any case, because I didn't fully read those papers, and my experience with SSL techniques is mostly theoretical).

For more info about SSL, take a look at this and this answers.

",2444,,2444,,11/20/2020 16:56,11/20/2020 16:56,,,,0,,,,CC BY-SA 4.0 24751,2,,22184,11/20/2020 17:15,,0,,"

Most (if not all) self-supervised learning techniques for (visual or textual) representation learning use pre-text tasks, and many pre-text tasks have been proposed in recent years.

However, as I say in my other answer (which you cite), the term SSL has also been used (at least, in robotics: for example, see this paper, which I am very familiar with) to refer to techniques that automatically (although approximately) label the unlabelled dataset for your downstream task (i.e. image recognition), i.e. they automatically create a labeled dataset of pairs $(x_i, \hat{y}_i)$, where $x_i$ is an image that contains an object and $\hat{y}_i$ is the automatically (and possibly approximately) generated label (such as "dog"). This latter use of the term SSL is closer to some weakly supervised learning (WSL) techniques. Actually, it can be considered a WSL technique.

Now, in this specific paper, they actually solve some kind of pre-text task, i.e. they exploit the relations between two different sensors to produce the labels.

To answer your question more directly: in all SSL papers that I have come across, some kind of pre-text task is always solved, i.e., in some way, you need to automatically generate the supervisory signal, and that task that we solve with the automatically generated learning signal (with the purpose of learning representations or generating a labeled dataset) can be considered the pre-text task (which may coincide with the downstream task, for example, in the case you're training an auto-encoder with an unlabelled dataset for the purposes of image denoising).

In any case, I wouldn't bother too much about it. Just keep your context in mind when reading your paper. If you're really worried about it, then you should probably read almost all SSL-related papers, but, in that case, by the end of that, you will be an expert on the topic and you will not need our help (or my help).

",2444,,2444,,11/21/2020 20:07,11/21/2020 20:07,,,,0,,,,CC BY-SA 4.0 24752,2,,24671,11/20/2020 17:43,,1,,"

Graph neural networks, of which GCNs are a specific type, are able to handle arbitrary graphs as input. GNNs operate first over "neighborhoods" of nodes to compute individual node representations and then optionally apply a pooling function to reduce these to a single graph-level representation that can be used in classification. This means that GNNs work "locally" and do not contain implicit assumptions about the graph topology.

For example, the update equation for a given node's representation in a GCN can be written as

$$h_v^{t+1} = \sigma\left({\bf W}^{t+1} \sum_{u \in \mathcal{N}_v} {\bf L}_{uv}~ h_u^t \right) $$

where $h_v^t$ is the representation of node $v$ at update $t$, $\sigma$ is an activation function, ${\bf W^{t}}$ is a weight matrix, ${\bf L_{uv}}$ is the value of the graph Laplacian (which is a matrix) at nodes $u$ and $v$, and finally $\mathcal{N}_v$ is the neighborhood of $v$. Looking at this expression it becomes clear that the value of the summation is always of the same dimension as $h$, no matter how you define the neighborhood $\mathcal{N}_v$. So you are correct that as long as the node representation size is static, the network can take arbitrary graphs as input.
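
For intuition, here is a minimal NumPy sketch of that update for a single node (the representations, weight matrix and Laplacian entries are placeholders):

import numpy as np

def gcn_node_update(h, W, L, v, neighborhood, sigma=np.tanh):
    # h: dict node -> current representation vector of size d
    # W: weight matrix of shape (d, d); L: dict (u, v) -> Laplacian entry
    agg = sum(L[(u, v)] * h[u] for u in neighborhood)  # size d regardless of |N_v|
    return sigma(W @ agg)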

I highly recommend this tutorial paper on GNNs, which I used when first learning about them. Section 2.3 specifically answers your question and discusses how things like cycles and non-positional graphs are handled.

",37972,,,,,11/20/2020 17:43,,,,0,,,,CC BY-SA 4.0 24753,1,24756,,11/20/2020 17:47,,2,142,"

Suppose I have several sequences that consist of a series of textual actions (the length of a sequence can vary). Also, I have some related reward values; however, the rewards do not appear at every step like the actions do - there are many missing values. Here is an example of the dataset.

Sequence 1        Sequence 2 .............. Sequence n
------------      ----------                -------------
Action  Reward    Action  Reward            Action  Reward
  A                 C                          D
  B       5         A                          B      6
  C                 A       7                  A       
  C       6         B       10                 D           
  A                 C                          A           
  B       2         A                          B           
  ..                ...                        ...
 ...               .....                      .....
  D       5         C      4                   D          

Now I want to predict the next action based on the reward values. The idea is that I want to predict the actions that lead to more reward. Previously, I used only the action data to predict the next action using an LSTM and a GRU. However, how could I use the reward values in this prediction? I was wondering if reinforcement learning (an MDP formulation) could solve the problem. However, as the rewards are discrete, I am not sure if RL could do that. Also, is it possible to solve this problem with inverse RL? I have some knowledge of deep learning, but I am new to reinforcement learning. If anyone could give me some suggestions or point me to useful papers regarding this problem, it would help me a lot.

",18795,,,,,11/20/2020 19:17,Predict next event based on previous events and discrete reward values,,1,0,,,,CC BY-SA 4.0 24756,2,,24753,11/20/2020 19:17,,1,,"

Your problem does look like it could be a good match to reinforcement learning, or at least the related idea of contextual bandits. Whether or not it would be a good match to the full reinforcement learning algorithm depends on whether any of the data you are processing could be considered part of an environment state, and whether or not that state evolves based on rules that an agent could learn to take advantage of.

Previously, I used only action data to predict the next action using LSTM and GRU. However, how could I use this reward value in this prediction?

There are a few different ways to do this using reinforcement learning theory. The simplest is to build a regression predictor that approximates a sum of future rewards (also known as the return or utility) depending on a proposed action from the current state. Then you could use the value function approximator (formal name for the predictor you just built) to predict results from each possible action and pick the maximising one. It is possible to learn such a value function from a historical dataset using methods such as Q learning.
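
A rough tabular sketch of that idea, learning from logged (state, action, reward, next state) tuples; the integer encodings and sizes here are assumptions:

import numpy as np

n_states, n_actions = 100, 4          # assumed discrete encodings
alpha, gamma = 0.1, 0.9
Q = np.zeros((n_states, n_actions))

def offline_q_learning(transitions, epochs=10):
    # transitions: list of (s, a, r, s_next) extracted from the historical sequences,
    # with missing rewards treated as r = 0
    for _ in range(epochs):
        for s, a, r, s_next in transitions:
            td_target = r + gamma * np.max(Q[s_next])
            Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

def best_action(s):
    return int(np.argmax(Q[s]))       # the action with the highest predicted return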

The subject is too complex to teach from scratch in a single answer here. A good learning resource is Reinforcement Learning: An Introduction by Sutton & Barto, which the authors have made available for free.

However, as the rewards are discrete, I am not sure if RL could do that.

Yes it can. Reinforcement learning just requires that rewards in each situation follow a consistent distribution of values. Always returning discrete values is not a problem, and neither is always returning the same value in the same situation. Randomness in the reward value - such as sometimes returning a discrete value and other times not in the same situation - is also OK. You can treat missing values as zero: since you are concerned only with the sum of received rewards, using zero when no value is available has no effect on what will be considered the optimal solution.

Also, is it possible to solve this problem with Inverse RL?

Probably not. Inverse RL is concerned with figuring out the parameters that an existing agent is working with by observing it. For instance you could use it to observe a creature's behaviour and figure out which rewards were more valuable to it. In your case you have the reward values, so you don't need to figure them out.

Caveat: You need to figure out what constitutes state in your environment. If there is some state that can be used for predictions, but the agent's behaviour never changes the state, then you may want to spend some time modelling your problem as a contextual bandit instead. Bandit algorithms are introduced in the same book, Reinforcement Learning: An Introduction, but only as much is needed to teach about the full RL problem - bandit solvers can get far more sophisticated than the book considers.

Note that if the history of agent actions impacts the reward (e.g. it is a matter of timing when to take the right action), then that history is part of the state, and you likely do have a full reinforcement learning problem to solve.

",1847,,,,,11/20/2020 19:17,,,,0,,,,CC BY-SA 4.0 24757,1,,,11/20/2020 19:41,,0,91,"

I'm building an agent to solve the Taxi environment. I've seen this problem solved with Q-Learning algorithms but my DQN consistently fails to learn anything. The environment has a discrete observation space, I one-hot encode the state before feeding it to the DQN. I also went ahead to implement Hindsight Experience Replay to help the learning process but the DQN still doesn't learn anything. What can I do to fix this?

I've heard that DQN doesn't excel at environments that require planning to succeed. If that's the case, which algorithms would work well for this environment?

EDIT

When I posted this question, my DQN was learning from only 2-step transitions. Since this environment can go on for many timesteps without any positive reward, I updated the agent to use transitions of 200 steps. Since I'm using Hindsight Experience Replay, my agent is sure to receive rewards within 200 timesteps even if it didn't meet the goal. I tried this and my agent still hasn't improved; it continually performs worse than the random agent baseline. I checked the contents of the buffer and observed transitions that do lead to several rewards (because their goals have been modified during HER), and yet the DQN agent doesn't learn anything.

Also, I'm using TensorFlow's tf_agents for my implementation. Here's a link to the code. I repurposed this example.

I hope this helps

",42431,,42431,,11/22/2020 19:59,11/22/2020 19:59,DQN fails to learn useful policy for the Taxi environment (Dietterich 200),,0,4,,,,CC BY-SA 4.0 24759,2,,24740,11/20/2020 21:04,,1,,"

I presume when you say input you may be referring to the target values (the things you are trying to predict). If not, then some parts of your question might not make sense, like your proposal to apply a scaling.

In any case I would consider what the target distribution is before using a sigmoid and applying a scaling. The thing about a sigmoid is that the model has to be very confident in order to predict a value close to the boundaries. So in your case, the model would have to be really confident about something for it to output a -5, and it would have to be really confident of the opposite to output a +20. I would be surprised if this is what you are really looking for in this case.

Let's say your target distribution is uniform. Then your best bet is to use a linear activation, and maybe use max/min to clip it. Then it would behave a lot like a ReLU.

If I haven't quite nailed it with this answer, can you give more info on what you are predicting?

",16871,,,,,11/20/2020 21:04,,,,0,,,,CC BY-SA 4.0 24760,1,24761,,11/21/2020 5:04,,0,221,"

I am trying to understand what is meant by the following equations in the Noise2Noise paper by NVIDIA.

What is meant by the equation in this image? What is $\mathbb{E}_y\{y\}$? And how should I try to visualize these equations?

",27601,,2444,,11/23/2020 10:56,11/23/2020 10:56,What is the meaning of these equations in Noise2Noise paper?,,1,1,0,,,CC BY-SA 4.0 24761,2,,24760,11/21/2020 11:38,,1,,"

The equation you are referring to is called Mean Squared Error (or $L_2$ loss) and it is used for regression tasks, where the goal is to predict a real value given some input.

In your case, the inputs are measurements of temperature $y$, either at a certain point in time or a point in space, or both, or neither; this is not clear from the image. Now, the goal would be to predict the temperature at a new point in space, time, or both, where we don't have access to a measurement. That is, we would like to find a function $f$ (e.g. a simple linear function) which we can use for prediction. But how can we measure which function is "best"? We introduce a loss function $L(f,y)$, another function which tells us how good our proposed function is.

Visually it looks like this (image source):

Red crossed are measurements, the black line is our function we use for prediction and the green dotted lines are the errors (the distance from our prediction to the real measurement). In this example salary depends on experience.

Now, the paper introduces the constant mean of all measurements $y$, i.e. $z = \mathbb{E}_y\{y\} = \frac{1}{N}\sum_i^N y_i$, as our function $f$, which is known to be the minimizer of the $L_2$ loss in the case where there is no dependence on other variables (e.g. time or space).
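
You can verify this numerically with a tiny sketch (the measurements here are made up):

import numpy as np

y = np.array([2.0, 3.5, 4.0, 10.0])           # made-up temperature measurements

def l2_loss(z):
    return np.mean((z - y) ** 2)

candidates = np.linspace(y.min(), y.max(), 1001)
best = candidates[np.argmin([l2_loss(z) for z in candidates])]
print(best, y.mean())                          # the best constant is (approximately) the mean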

",37120,,2444,,11/21/2020 14:18,11/21/2020 14:18,,,,3,,,,CC BY-SA 4.0 24766,1,24773,,11/21/2020 13:08,,2,114,"

I am currently studying the textbook Neural Networks and Deep Learning by Charu C. Aggarwal. Chapter 1.2.1.2 Relationship with Support Vector Machines says the following:

The perceptron criterion is a shifted version of the hinge-loss used in support vector machines (see Chapter 2). The hinge loss looks even more similar to the zero-one loss criterion of Equation 1.7, and is defined as follows: $$L_i^{svm} = \max\{ 1 - y_i(\overline{W} \cdot \overline{X}_i), 0 \} \tag{1.9}$$ Note that the perceptron does not keep the constant term of $1$ on the right-hand side of Equation 1.7, whereas the hinge loss keeps this constant within the maximization function. This change does not affect the algebraic expression for the gradient, but it does change which points are lossless and should not cause an update. The relationship between the perceptron criterion and the hinge loss is shown in Figure 1.6. This similarity becomes particularly evident when the perceptron updates of Equation 1.6 are rewritten as follows: $$\overline{W} \Leftarrow \overline{W} + \alpha \sum_{(\overline{X}, y) \in S^+} y \overline{X} \tag{1.10}$$ Here, $S^+$ is defined as the set of all misclassified training points $\overline{X} \in S$ that satisfy the condition $y(\overline{W} \cdot \overline{X}) < 0$. This update seems to look somewhat different from the perceptron, because the perceptron uses the error $E(\overline{X})$ for the update, which is replaced with $y$ in the update above. A key point is that the (integer) error value $E(X) = (y − \text{sign}\{\overline{W} \cdot \overline{X} \}) \in \{ −2, +2 \}$ can never be $0$ for misclassified points in $S^+$. Therefore, we have $E(\overline{X}) = 2y$ for misclassified points, and $E(X)$ can be replaced with $y$ in the updates after absorbing the factor of $2$ within the learning rate.

Equation 1.7 is as follows:

$$L_i^{(0/1)} = \dfrac{1}{2} (y_i - \text{sign}\{ \overline{W} \cdot \overline{X_i} \})^2 = 1 - y_i \cdot \text{sign} \{ \overline{W} \cdot \overline{X_i} \} \tag{1.7}$$

And figure 1.6 is as follows:

It is said that we are dealing with the case of binary classification, where $y \in \{ -1, +1 \}$. But the author claims that $E(X) = (y − \text{sign}\{\overline{W} \cdot \overline{X} \}) \in \{ −2, +2 \}$, which doesn't include the case of $0$. So shouldn't the $\{ -2, +2 \}$ in $E(X) = (y − \text{sign}\{\overline{W} \cdot \overline{X} \}) \in \{ −2, +2 \}$ be $\{ -2, 0, +2 \}$?

",16521,,16521,,11/21/2020 15:19,2/7/2021 16:04,"Why doesn't the set $\{ -2, +2 \}$ in $E(X) = (y − \text{sign}\{\overline{W} \cdot \overline{X} \}) \in \{ −2, +2 \}$ include $0$?",,1,2,,,,CC BY-SA 4.0 24767,1,,,11/21/2020 13:19,,6,594,"

I am currently studying the textbook Neural Networks and Deep Learning by Charu C. Aggarwal. Chapter 1.2.1.2 Relationship with Support Vector Machines says the following:

The perceptron criterion is a shifted version of the hinge-loss used in support vector machines (see Chapter 2). The hinge loss looks even more similar to the zero-one loss criterion of Equation 1.7, and is defined as follows: $$L_i^{svm} = \max\{ 1 - y_i(\overline{W} \cdot \overline{X}_i), 0 \} \tag{1.9}$$ Note that the perceptron does not keep the constant term of $1$ on the right-hand side of Equation 1.7, whereas the hinge loss keeps this constant within the maximization function. This change does not affect the algebraic expression for the gradient, but it does change which points are lossless and should not cause an update. The relationship between the perceptron criterion and the hinge loss is shown in Figure 1.6. This similarity becomes particularly evident when the perceptron updates of Equation 1.6 are rewritten as follows: $$\overline{W} \Leftarrow \overline{W} + \alpha \sum_{(\overline{X}, y) \in S^+} y \overline{X} \tag{1.10}$$ Here, $S^+$ is defined as the set of all misclassified training points $\overline{X} \in S$ that satisfy the condition $y(\overline{W} \cdot \overline{X}) < 0$. This update seems to look somewhat different from the perceptron, because the perceptron uses the error $E(\overline{X})$ for the update, which is replaced with $y$ in the update above. A key point is that the (integer) error value $E(X) = (y − \text{sign}\{\overline{W} \cdot \overline{X} \}) \in \{ −2, +2 \}$ can never be $0$ for misclassified points in $S^+$. Therefore, we have $E(\overline{X}) = 2y$ for misclassified points, and $E(X)$ can be replaced with $y$ in the updates after absorbing the factor of $2$ within the learning rate.

Equation 1.6 is as follows:

$$\overline{W} \Leftarrow \overline{W} + \alpha \sum_{\overline{X} \in S} E(\overline{X})\overline{X}, \tag{1.6}$$ where $S$ is a randomly chosen subset of training points, $\overline{X} = [x_1, \dots, x_d]$ is a data instance (vector of $d$ feature variables), $\overline{W} = [w_1, \dots, w_d]$ are the weights, $\alpha$ is the learning rate, and $E(\overline{X}) = (y - \hat{y})$ is an error value, where $\hat{y} = \text{sign}\{ \overline{W} \cdot \overline{X} \}$ is the prediction and $y$ is the observed value of the binary class variable.

Equation 1.7 is as follows:

$$L_i^{(0/1)} = \dfrac{1}{2} (y_i - \text{sign}\{ \overline{W} \cdot \overline{X_i} \})^2 = 1 - y_i \cdot \text{sign} \{ \overline{W} \cdot \overline{X_i} \} \tag{1.7}$$

And figure 1.6 is as follows:

Figure 1.6 looks unclear to me. What is figure 1.6 showing, and how is it relevant to the point that the author is trying to make?

",16521,,16521,,4/23/2021 7:53,12/5/2022 15:07,How should we interpret this figure that relates the perceptron criterion and the hinge loss?,,2,2,,,,CC BY-SA 4.0 24769,1,,,11/21/2020 17:35,,1,91,"

Communication requires energy, and using energy requires communication. According to Shannon, the entropy value of a piece of information provides an absolute limit on the shortest possible average length of a message without losing information as it is transmitted. (https://towardsdatascience.com/entropy-the-pillar-of-both-thermodynamics-and-information-theory-138d6e4872fa)

I don't know whether a neural network actually deals with information flow or not. This notion of information flow is taken from the idea of entropy, and I haven't found any paper or ideas based on the law of conservation of energy for neural networks. That law states that energy can neither be created nor destroyed. If a network is creating information (energy) (e.g. in the case of a generative model), then some information may be lost while updating the weights. How does a neural network ensure this energy conservation?

",19780,,2444,,11/22/2020 19:44,11/22/2020 19:44,How does NN follows law of energy conservation?,,0,4,,,,CC BY-SA 4.0 24773,2,,24766,11/21/2020 19:39,,2,,"
  1. It is important to note that the exact statement is that the equation given below can never be 0 for misclassified points in $S^+$: $$ E(X) = (y - \text{sign}\{\overline{W} \cdot \overline{X}\}) $$
  2. And $S^+$ is defined as the set of all misclassified training points $X \in S$ that satisfy the condition $y(\overline{W} \cdot \overline{X}) < 0$, which means that $y$ and $(\overline{W} \cdot \overline{X})$ have different signs, i.e. if $y < 0$ then $(\overline{W} \cdot \overline{X}) > 0$ and vice versa.
  3. Finally, the condition on $S^+$ constrains $E(X)$ to these values (see the small check after this list): when $y < 0$ and $(\overline{W} \cdot \overline{X}) > 0$, $$ E(X) = (-1 - \text{sign}(+ve)) = -2, $$ and when $y > 0$ and $(\overline{W} \cdot \overline{X}) < 0$, $$ E(X) = (1 - \text{sign}(-ve)) = 2. $$
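
A quick check with the possible sign combinations confirms this:

# For misclassified points, y and sign(W.X) disagree, so E(X) = y - sign(W.X)
# can only be -2 or +2, never 0.
for y in (-1, +1):
    for wx_sign in (-1, +1):
        if y * wx_sign < 0:                    # the condition defining S+
            print(y, wx_sign, y - wx_sign)     # prints (-1, 1, -2) and (1, -1, 2)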
",22540,,16521,,2/7/2021 16:04,2/7/2021 16:04,,,,0,,,,CC BY-SA 4.0 24776,2,,24732,11/21/2020 21:20,,2,,"

I don't currently have much practical experience with DQN, but I can partially answer this question also based on my theoretical knowledge and other info that I found.

DQN is typically used for

  • discrete action spaces (although there have been attempts to apply it to continuous action spaces, such as this one)

  • discrete and continuous state spaces

  • problems where the optimal policy is deterministic (an example where the optimal policy is not deterministic is rock-paper-scissors)

  • off-policy learning (Q-learning is an off-policy algorithm, but the point is that, if you have a problem/application where data can only be or has been gathered by a policy that is unrelated to the policy you want to estimate, then DQN is suitable, though there are other off-policy algorithms, such as DDPG)

This guide also states that DQN is slower to train, but more sample efficient than other approaches, due to the use of the experience replay buffer.

Moreover, if you have a small state and action space, it is probably a good idea to just use tabular Q-learning (i.e. no function approximation), given that it is guaranteed to converge to the optimal value function.

See also this and this questions and this article (which compares DQN with policy gradients).

",2444,,2444,,11/22/2020 20:03,11/22/2020 20:03,,,,0,,,,CC BY-SA 4.0 24784,1,,,11/22/2020 9:28,,1,117,"

In deep learning, using more layers in a neural network adds the capacity to capture more features. In most RL papers, the experiments use a 2-layer neural network: Learning to Reset, Constrained Policy Optimization, and Model-based RL with stability guarantees, just to name a few - these are papers I personally remember, but there are definitely many others.

I came across this question whose answers generally agree that yes, RL using a shallow network is considered deep RL, but the reason for preference of shallow networks was not part of the question.

In MetaMimic (2018), the authors trained the largest neural net RL algorithm at the time (a residual net with 20 convolution layers) for one-shot imitation learning. The paper demonstrates that larger networks as policy approximators generalize better and represent many behaviours.

So, why are shallow 2 layer networks so widely used?

",40671,,2444,,11/22/2020 11:25,11/23/2020 8:28,Why are shallow networks so prevalent in RL?,,1,2,,,,CC BY-SA 4.0 24787,2,,22329,11/22/2020 14:52,,2,,"

The purpose of the input network is to embed the input tuple into a state/task representation that can then be fed into the RNN hidden state at each time step.

$(o^a_t,m^a′_{t−1},u^a_{t−1},a)$ (input) $\rightarrow$ input network (embedding) $\rightarrow$ $z_t$ (task representation)


According to section 6.1 of the paper, the input is a tuple represented as $(o^a_t, m^{a'}_{t-1}, u^a_{t-1}, a)$. Each of these terms is described in section 3 as:

  • $o_t$ - The observation. The authors assume a POMDP

  • Two types of actions:

    • $u_t$ - An environment action selects by all the agents at each time step in exchange for a team reward
    • $m_t$ - A communication action, observed by other agent but has no direct impact on the reward or env
  • $a$ - The agent (This being a multi-agent DQN algorithm)

They form the input to the Deep Recurrent Q-Network architecture.

The purpose of the embedding network is to receive a tuple of these inputs and produce a state embedding $z$. This state embedding is then fed into a hidden state $h^a_{t-1}$ of the RNN.

Though the authors refer to it as an input network that produces embedding, in practice, they use different embedding functions for each of these inputs. The final task/state embedding $z_t^a$ is expressed as a sum:

$$ z_t^a = \text{MLP}(o_t^a) + \text{MLP}(m_{t-1}) + \text{LookupTable}(u_{t-1}) + \text{LookupTable}(a) $$

The summed embeddings are all of the same size.
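For illustration, a minimal PyTorch sketch of how such a summed input embedding could look (layer sizes and names are hypothetical and not taken from the paper's code):

import torch
import torch.nn as nn

class InputEmbedding(nn.Module):
    # Produces z_t^a by summing per-input embeddings of equal size.
    def __init__(self, obs_dim, msg_dim, n_actions, n_agents, embed_dim=128):
        super().__init__()
        self.obs_mlp = nn.Sequential(nn.Linear(obs_dim, embed_dim), nn.ReLU())
        self.msg_mlp = nn.Sequential(nn.Linear(msg_dim, embed_dim), nn.ReLU())
        self.action_lookup = nn.Embedding(n_actions, embed_dim)
        self.agent_lookup = nn.Embedding(n_agents, embed_dim)

    def forward(self, obs, prev_msg, prev_action, agent_id):
        z = (self.obs_mlp(obs)
             + self.msg_mlp(prev_msg)
             + self.action_lookup(prev_action)
             + self.agent_lookup(agent_id))
        return z  # fed into the RNN hidden state update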

This answer assumes that by "embedding layer" you meant the input embedding network, since the paper makes no reference to a single embedding layer in the model architecture.

",40671,,40671,,11/22/2020 14:58,11/22/2020 14:58,,,,0,,,,CC BY-SA 4.0 24788,1,31764,,11/22/2020 14:52,,1,114,"

I am diving into data-to-text generation for long articles (> 1000 words). After creating a template and filling it with data, I am currently going down to paragraph level and adding different paragraphs, which are randomly selected and put together. I also added, at word level, different output formats for dates, times and numbers.

The challenge I see is that, when creating large amounts of such generated texts, they become boring to read, as the uniqueness for the reader goes down.

Furthermore, I also think it's easy to detect that such texts have been autogenerated. However, I still have to validate this hypothesis.

I was wondering if there is an even better method to bring variability into such texts.

Can you suggest any methods, papers, or resources, or share your experience in this field?

I highly appreciate your replies!

",32329,,32329,,11/22/2020 19:34,9/20/2021 10:56,"Making generated texts from ""data-to-text"" more variable",,2,0,,,,CC BY-SA 4.0 24789,2,,23670,11/22/2020 17:30,,2,,"

There are many variations of the original VAE (proposed in the 2013 paper Auto-Encoding Variational Bayes), with different purposes (such as the generation of discrete data or graphs). Of course, I cannot enumerate all of them, so here I will list only the ones I am currently aware of.

VAEs have been used for drug design (see e.g. JT-VAE) and in reinforcement learning (see e.g. world models). I don't know if they have been used for self-driving cars, but it's possible.

",2444,,2444,,11/22/2020 17:35,11/22/2020 17:35,,,,1,,,,CC BY-SA 4.0 24790,1,,,11/22/2020 18:09,,1,299,"

I am training an agent with an Actor-Critic network and have updated it with TRPO so far. Now, I tried out PPO and the results are drastically different and bad. I only changed from TRPO to PPO; the rest of the environment and the rewards are the same. PPO is supposed to be a more efficient method than TRPO and has proven to be a state-of-the-art method in RL, so why shouldn't it work? I just thought to ask if someone knows roughly how to translate configuration parameters from TRPO to PPO.

Here are some more details about my configurations.

TRPO

  • Actor loss: $-\log(\pi) * A$ where $A$ are advantages
  • Critic Loss: MSE(predicted_values, discounted return)
  • Desired KL Divergence for Actor and Critic: 0.005
  • Conjugate gradient iterations: 20
  • Residual tolerance in conjugate gradient: 1e-10
  • Damping coefficient for Fisher Product: 1e-3

PPO

  • Actor and Critic optimizer and learning rate: Adam with 0.0001
  • Actor loss: the negative of the minimum of the following two terms (see the sketch after this list):
    1. $\frac{\pi}{\pi_{old}} * A$
    2. $\text{clamp}(\frac{\pi}{\pi_{old}}, 1-0.1, 1+0.1) * A$
  • Critic loss: MSE(predicted_values, discounted_rewards)
  • Optimization iterations: 10
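For reference, this is roughly how the clipped actor loss is computed (a simplified sketch, not my exact code; tensor names are illustrative):

import torch

def ppo_actor_loss(log_prob, old_log_prob, advantages, clip_eps=0.1):
    # ratio pi / pi_old, computed from log-probabilities for numerical stability
    ratio = torch.exp(log_prob - old_log_prob)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # negative sign because we minimise the loss but maximise the objective
    return -torch.min(unclipped, clipped).mean()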


The rest of my problem set-up is absolutely the same, but somehow I get completely different results while training, as you can see in the plots above. I also tried changing the learning rates and the number of optimization iterations, gradient clipping, optimizing with mini-batches, and using $-\log(\pi) * A$ as the loss for PPO, but none of these helped. Using the importance sampling ratio $\frac{\pi}{\pi_{old}} * A$ as the loss for TRPO gives the same results there.

Can someone please help me understand where the problem could be? Or which parameters I would need to change in PPO?

",41715,,2444,,11/23/2020 20:17,11/23/2020 20:17,Why does PPO lead to a worse performance than TRPO in the same task?,,0,0,,,,CC BY-SA 4.0 24793,1,,,11/22/2020 19:17,,2,110,"

I'm trying to understand how DeepFakes are generated and so far I understood that they're mostly generated through the usage of GANs and autoencoders.

The autoencoder part is understandable, but what I cannot understand is how to generate faces with GANs that match the destination face.

GANs consist of a generator and a discriminator. The generator receives noise input, randomly sampled from a normal distribution, as well as feedback from the discriminator. The discriminator is shown what the real data looks like and simply classifies whether the data fed to it is real or fake. Depending on the answer, one of them (generator/discriminator) updates its model: if the discriminator guesses right, the generator is updated; if not, then the discriminator is the one that updates its model.

So, after the training part is over, we can feed the generator more noise to obtain more fake data. In DeepFake videos, we normally try to swap the destination face with the input face. My problem with that is that the destination face has specific features: for example, it has closed eyes, smiles, or rotates its head. If we feed the generator noise, how can we control the process so that the generated face has facial features similar to those in the destination face?

I've found papers about GANs that can control some of the features of generated faces (StyleGANs). However, I'm not sure how it would be possible to extract the "special features" of the destination face and generate them with StyleGANs.

I will be extremely grateful for any help in understanding the concept of DeepFake with GANs. Thanks a lot.

",42461,,42461,,11/22/2020 19:54,11/24/2020 19:28,Generating fake faces containing specific features with GANs,,0,0,,,,CC BY-SA 4.0 24794,1,,,11/22/2020 20:58,,1,23,"

I have a regression task that I try to solve with machine learning. I have around 6M rows with about 30 columns (originally there were 100, but I reduced them with drop-column feature importance).

I understand the basic principle: check whether the model overfits or underfits and change the parameters accordingly. In theory. I would like to ask for help with two graphs:

  1. Whether I understand correctly what is going on
  2. How you would attack the situation

1. Graph

  • I use LightGBM
  • learning_rate = 1
  • max_depth = 3
  • num_leaves = 2**15,
  • number of iterations = 4000

If I understand correctly, this model is underfitting. The validation and training losses are falling, but not by much... BUT: the number of iterations is already too large and increasing it further is not an option. The learning rate is 1 (as high as it gets). Only max_depth is low, but if it is higher (I tried 30) the graph looks the same, just with worse values.

So, what can I do so that the model does not underfit?

2. Graph

  • I use Neural Nets
  • epochs=200,
  • batch_size=64

The model:

from tensorflow.keras import Model
from tensorflow.keras.layers import Input, Dense

i = Input(shape=(100,), name='input')
x = Dense(128)(i)   # first hidden layer
x = Dense(64)(x)    # second hidden layer, chained to the first
o = Dense(1, activation='relu', name='output')(x)
model = Model(inputs=i, outputs=o)

Here I am not sure. This doesn't really look like underfitting, but more like the model hasn't converged. Is that right?

So, should I create a more complex model (more neurons or more layers)?

And how many epochs do I need to see this behaviour? In the beginning, I used only 10 epochs, for faster development, and I thought the model was overfitting. Only when I used more epochs did I see that I was wrong.

How would you start to "debug" this neural net? What would be the plan of attack?

",26993,,,,,11/22/2020 20:58,React on train-validation curve after trening,,0,0,,,,CC BY-SA 4.0 24795,2,,24271,11/22/2020 21:30,,1,,"

Apart from the entropy and the cross-entropy, which are widely used in deep learning and you seem to be aware of, there is also the Kullback-Leibler divergence (also known as relative entropy), which is widely used in the context of variational Bayesian neural networks and variational auto-encoders. There, it is often part of the loss function that is minimized, i.e. the Evidence Lower BOund (ELBO), which acts as a proxy objective for the KL divergence between the approximate and the true posterior distributions (and is also related to the minimum description length needed to encode the data). See this answer for more details. There is also the mutual information, which has been used as a measure of uncertainty in the context of Bayesian neural networks.
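As a small illustration, for discrete distributions these quantities can be computed directly, e.g. with SciPy (a toy example, not tied to any particular Bayesian NN implementation):

import numpy as np
from scipy.stats import entropy

p = np.array([0.7, 0.2, 0.1])    # e.g. an approximate posterior over 3 classes
q = np.array([1/3, 1/3, 1/3])    # e.g. a uniform prior

h_p = entropy(p)                 # Shannon entropy of p (in nats)
kl_pq = entropy(p, q)            # KL(p || q), the relative entropy
cross_entropy = h_p + kl_pq      # H(p, q) = H(p) + KL(p || q)
print(h_p, kl_pq, cross_entropy)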

",2444,,2444,,11/23/2020 1:57,11/23/2020 1:57,,,,0,,,,CC BY-SA 4.0 24798,1,,,11/23/2020 4:10,,2,145,"

I'm trying to implement the logic for a Sudoku XV puzzle, which is essentially a standard sudoku with the addition of X and V markers between some pairs of squares. An X marker between an adjacent pair requires that the sum of the two values is 10. Similarly, a V marker requires that the sum of the values is 5.

(Assume that $S_{xyz}$ stands for [digit][row][column].)

I've written the following CNF formulae that handle the logic of a standard Sudoku puzzle:

There is at least one number in each entry: $$ \bigwedge_{x=1}^{9}\bigwedge_{y=1}^{9}\bigvee_{z=1}^{9}S_{xyz} $$

Each number appears at most once in each row: $$ \bigwedge_{y=1}^{9}\bigwedge_{z=1}^{9}\bigwedge_{x=1}^{8}\bigwedge_{i=x+1}^{9}(\lnot S_{xyz}\vee\lnot S_{iyz}) $$

Each number appears at most once in each column: $$ \bigwedge_{x=1}^{9}\bigwedge_{z=1}^{9}\bigwedge_{y=1}^{8}\bigwedge_{i=y+1}^{9}(\lnot S_{xyz}\vee\lnot S_{xiz}) $$

Each number appears at most once in each 3x3 sub-grid: $$ \bigwedge_{z=1}^{9}\bigwedge_{i=0}^{2}\bigwedge_{j=0}^{2}\bigwedge_{x=1}^{2}\bigwedge_{y=1}^{3}\bigwedge_{k=x+1}^{3}\bigwedge_{l=1,\ y \neq l}^{3}(\lnot S_{(3i+x)(3j+y)z}\vee\lnot S_{(3i+k)(3j+l)z}) $$
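To make this concrete, here is a minimal sketch of how such clauses can be generated as DIMACS-style integers (the variable numbering is my own, hypothetical convention for $S_{xyz}$; only the first constraint family and a generic pairwise at-most-one helper are shown):

def var(x, y, z):
    # map S_{xyz} (digit x, row y, column z) to a positive DIMACS integer
    return 81 * (x - 1) + 9 * (y - 1) + z

def at_most_one(literals):
    # pairwise encoding: for every pair, at least one of the two is false
    return [[-a, -b] for i, a in enumerate(literals) for b in literals[i + 1:]]

clauses = []
# "at least one number in each entry": one clause per cell (y, z)
for y in range(1, 10):
    for z in range(1, 10):
        clauses.append([var(x, y, z) for x in range(1, 10)])
# the "at most once" constraints above can all be built with at_most_one(...)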

Unfortunately, I'm stuck: I don't really know how to express the logic for the X and V markers, and, most importantly, how to rule out assignments where two adjacent squares with neither an X nor a V marker between them have digits summing to 5 or 10.

",42467,,2444,,11/23/2020 11:18,1/21/2023 13:02,How to translate sudoku XV boards in CNF format?,,1,0,,,,CC BY-SA 4.0 24799,1,24803,,11/23/2020 4:12,,3,160,"

Both MC and TD are model-free and they both follow a sample trajectory (in the case of TD, the trajectory is cut-short) to estimate the return (we basically are sampling Q values). Other than that, the underlying structure of both algorithms is exactly the same. However, from the blogs and texts I read, the equations are expressed in terms of V and NOT Q. Why is that?

",42398,,2444,,11/23/2020 13:25,11/23/2020 13:37,Is the expected value we sample in TD-learning action-value Q or state-value V?,,1,0,,,,CC BY-SA 4.0 24800,2,,10371,11/23/2020 6:48,,3,,"

Are there companies that still use expert systems?

There are still some expert system inference engines available in open source form, in particular CLIPS rules

A specialization of your question could be: what companies are using CLIPS in 2020 ?

I don't have any concrete examples in mind, even though I did try with https://github.com/bstarynk/clips-rules-gcc

And the RefPerSys project is right now (in November 2020) discussing the idea of incorporating such rules in it.

Of course, also have a quick read of Jacques Pitrat's blog on http://bootstrappingartificialintelligence.fr/WordPress3/ and of his last book, Artificial Beings: The Conscience of a Conscious Machine (ISBN 9781848211018), which describes the design of an ambitious symbolic artificial intelligence system, CAIA, based on expert system ideas. His CAIA system is on https://github.com/bstarynk/caia-pitrat, but there is absolutely no documentation, since Jacques Pitrat passed away in October 2019. CAIA was capable of generating all 500 KLOC of its C code from a kind of expert system rules (whose design is described in Jacques Pitrat's books and papers).

I am not a native English speaker (since I am French) but I heard that expert systems are called (in 2020) business rule management systems.

I heard that major banks in France (maybe BNP or Société Générale) are using such systems to decide whether to give a loan or some credit to persons and companies (in particular for people buying their flat, or a brand new automobile, with a bank loan repaid over dozens of years).

The French banking system is very opaque: you won't be able to understand their internal software, and banks are not publishing any document about the design of their software. At most they would publish the name of their AI systems, but nothing public about the software design.

According to rumors, Lexifi or Yseop might use some kind of very proprietary expert system technology and sell services with them. But their software tools are closed source and very proprietary.

Regarding expert systems for games, see also the recent papers by Professor Tristan Cazenave, who has used some kind of expert system technology for games.

My guess is that large Internet companies like Google or Amazon are using expert system technology inside their internal software (e.g. search engines). IBM Watson is rumored to use them also.

BTW, GNU make might be considered a very crude expert system engine, driving the building of software artifacts from source code.

",3335,,3335,,11/23/2020 7:48,11/23/2020 7:48,,,,2,,,,CC BY-SA 4.0 24801,1,,,11/23/2020 7:34,,4,138,"

I find the logistic map absolutely fascinating. Both in itself (because I love fractals) and because it is observed in nature (see: https://www.youtube.com/watch?v=ovJcsL7vyrk).

I'm wondering if anyone has tried it as an activation function, in some way or another, with any kind of success.

I like it because it has some kind of "I'm not sure what to do" behaviour above ~3.0, and the less confidence, the more chaotic the response is. It gives the possibility to explore other solutions to escape a local optimum (not sure I use this word correctly). And below 3 it's still a nice and smooth activation function like, e.g., tanh.

E.g.: the reward I got isn't the reward I expected, and the higher the difference, the more I'll explore other solutions. But it's still gradual, from 1 choice, to 2 choices, 4, 8, 16, ... until it becomes chaotic (giving the possibility to experiment with some pseudo-random choices). And below this threshold it still acts as a usable "good old" activation function.

Another good side is that it's GPU-friendly and doesn't need many iterations for this application, since a little bit of uncertainty (even below the threshold) isn't undesirable. See: https://upload.wikimedia.org/wikipedia/commons/6/63/Logistic_Map_Animation.gif

Edit: so, OK, I tested it on my extremely naive racetrack (feedforward, no feedback, no error, no fitness, only genetic selection of the cars that didn't crash). It does work, for sure. I don't see any advantage in practice, but with such a naive NN, there isn't much I can tell.

My implementation:

import random

def logi(r):
    x = .6  # the initial population doesn't matter, so I took .6
    # iterate the logistic map a random number of times
    for _ in range(random.randrange(10, 40)):
        x = r * x * (1 - x)
    return x

This activation takes 8% of my laptop's CPU (while it was invisible on my radar with leaky ReLU).

",42373,,2444,,11/24/2020 10:23,11/24/2020 10:23,Has the logistic map ever been used as an activation function?,,0,19,,,,CC BY-SA 4.0 24802,2,,24784,11/23/2020 8:23,,1,,"

Simple policy parameterizations, including linear functions in some cases, can solve continuous control tasks in RL. It's therefore not necessary to have a complex function approximator for the policy to be expressive enough to capture the desired agent behaviour in popular RL benchmarks.


Towards Generalization and Simplicity in Continuous Control tries to answer the question "What are the simplest set of ingredients needed to succeed in some of the popular benchmarks?"

Among them is the representation capacity required of the function approximators.

It shows that in popular OpenAI gym locomotion benchmarks (Swimmer 3D, 3D Humanoid, Ant, Walker), using Linear and RBF policy architectures is just as competitive in the reward attained as using neural nets. Here's a summary:

  • The more Fourier features the RBF policy has, the better the performance
  • A 2-layer neural net policy's performance is slightly short of a 500-feature RBF
  • A linear representation is unable to learn some of the locomotion behaviours, e.g. walking

I assume RBF, and its use of conjugate gradients, has some shortcomings that make shallow neural networks preferable. (I'm not familiar with RBF, and stand to be corrected.)

Overall, the authors use these results to suggest that complex policy architectures should not be a default choice in RL, unless side-by-side comparisons with simpler alternatives suggest otherwise.

This might explain the prevalence of shallow networks. It's also possible that, since most RL research (presently) doesn't try to show how we can learn better representations using complex policy architectures, but rather how to better make use of existing representation capacity by addressing issues like sample efficiency, shallow networks are expressive enough for the adopted benchmarks. As expressed in Brail's comment, this is also useful for setting up fair comparisons.

However, the effect of network architecture design in RL has not been widely pursued.

It's also worth mentioning the findings of D2RL (2020), which shows that one can gain significant performance improvements by using input concatenations in the policy and value function, and that adding more layers to a vanilla neural net decreases the agent's performance.

The effect of increasing the number of FCs in SAC. Performance drops after 2 layers (D2RL)

Further reading:

Why is everyone using shallow networks (Reddit thread)

",40671,,40671,,11/23/2020 8:28,11/23/2020 8:28,,,,1,,,,CC BY-SA 4.0 24803,2,,24799,11/23/2020 8:39,,3,,"

However, from the blogs and texts I read, the equations are expressed in terms of V and NOT Q. Why is that?

MC and TD are methods for associating value estimates with time steps, based on experience gained in later time steps. It does not matter what kind of value estimate is being associated across time, because all value functions express the same thing in general, which is the expected return conditioned on a "current position" within the MDP. In MC, the association is directly with a sampled return; in TD, with a sampled combination of immediate reward and a later value estimate - most commonly, in TD, the same kind of value estimate (e.g. matching later state value estimates to state values).

Both approaches can be analysed and used from the perspective of both state value (V) and action value (Q) functions. They also apply to other value functions - e.g. afterstate values.

It is quite common for textbooks and tutorials to use the slightly simpler state value function to explain how MC or TD learning works in general, before it is applied for any particular purpose. You can also use the state value function for model-free policy evaluation in MC and TD.

However, without a model, you cannot use state value function for control (i.e. to learn an optimal policy). To pick the best action using state values, you need to do something like this:

$$\pi(s) = \text{argmax}_a [ \sum_{r,s'} p(r,s'|s,a)(r + \gamma v(s'))]$$

The problem here is that $p(r,s'|s,a)$ is a model of the environment. So, if it is needed, the control method would not be model-free.

Hence, when you learn about MC or TD in a control scenario, i.e. as model-free methods to learn optimal policies, you generally need to use an action value function (sometimes you can use an afterstate value, if the action involves choosing the next state directly).

With an action value function, the greedy policy becomes:

$$\pi(s) = \text{argmax}_a q(s, a)$$

This does not refer to any model of the environment. So it can be used when you have none.
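As a toy illustration of this difference (array shapes, the expected-reward model $r(s,a)$ and the transition model are all hypothetical):

import numpy as np

n_states, n_actions, gamma = 5, 3, 0.9
Q = np.random.rand(n_states, n_actions)                                  # learned action values
V = np.random.rand(n_states)                                             # learned state values
P = np.random.dirichlet(np.ones(n_states), size=(n_states, n_actions))   # model p(s'|s,a)
R = np.random.rand(n_states, n_actions)                                  # expected reward r(s,a)

def greedy_from_q(s):
    # model-free: just an argmax over stored action values
    return np.argmax(Q[s])

def greedy_from_v(s):
    # requires the model (P and R) for a one-step lookahead
    return np.argmax(R[s] + gamma * P[s] @ V)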

",1847,,2444,,11/23/2020 13:37,11/23/2020 13:37,,,,0,,,,CC BY-SA 4.0 24804,1,24843,,11/23/2020 11:00,,2,46,"

I'm trying to read this paper describing Google's LSTM architecture for machine translation. It features this diagram on page 4:

I'm interested in the encoder block, on the left. Apparently, the pink and green cells are LSTMs. However, I can't tell if the x-axis is space or time. That is, are the LSTM cells on a given row all the same cell, with time flowing forward from left to right? The diagram on the next page in the paper seems to suggest that.

",1931,,2444,,11/23/2020 13:12,11/24/2020 15:39,"Does this diagram represent several LSTMs, or one through several timesteps?",,1,0,,,,CC BY-SA 4.0 24810,2,,20127,11/23/2020 12:24,,3,,"

The accepted answer does not provide a good definition of over-fitting, which actually exists and is a defined concept in reinforcement learning too. For example, the paper Quantifying Generalization in Reinforcement Learning completely focuses on this issue. Let me give you more details.

Over-fitting in supervised learning

In supervised learning (SL), over-fitting is defined as the difference (or gap) in the performance of the ML model (such as a neural network) on the training and test datasets. If the model performs significantly better on the training dataset than on the test dataset, then the ML model has over-fitted the training data. Consequently, it has not generalized (well enough) to other data other than the training data (i.e. the test data). The relationship between over-fitting and generalization should now be clearer.

Over-fitting in reinforcement learning

In reinforcement learning (RL) (you can find a brief recap of what RL is here), you want to find an optimal policy or value function (from which the policy can be derived), which can be represented by a neural network (or another model). A policy $\pi$ is optimal in environment $E$ if it leads to the highest cumulative reward in the long run in that environment $E$, which is often mathematically modelled as a (partially or fully observable) Markov decision process.

In some cases, you are also interested in knowing whether your policy $\pi$ can also be used in a different environment than the environment it has been trained in, i.e. you're interested in knowing if the knowledge acquired in that training environment $E$ can be transferred to a different (but typically related) environment (or task) $E'$. For example, you may only be able to train your policy in a simulated environment (because of resource/safety constraints), then you want to transfer this learned policy to the real world. In those cases, you can define the concept of over-fitting in a similar way to the way we define over-fitting in SL. The only difference may be that you may say that the learned policy has over-fitted the training environment (rather than saying that the ML model has over-fitted the training dataset), but, given that the environment provides the data, then you could even say in RL that your policy has over-fitted the training data.

Catastrophic forgetting

There is also the issue of catastrophic forgetting (CF) in RL, i.e., while learning, your RL agent may forget what it has previously learned, and this can even happen in the same environment. Why am I talking about CF? Because what is happening to you is probably CF, i.e., while learning, the agent performs well for a while, then its performance drops (although I have read a paper that strangely defines CF differently in RL). You could also say that over-fitting is happening in your case, but, if you are continuously training and the performance changes, then CF is probably what you need to investigate. So, you should reserve the word over-fitting in RL for the cases where you're interested in transfer learning (i.e. the training and test environments do not coincide).

",2444,,2444,,11/23/2020 15:38,11/23/2020 15:38,,,,0,,,,CC BY-SA 4.0 24811,1,,,11/23/2020 12:40,,2,45,"

I am a bit stuck trying to understand how a lane change is performed from an operational point of view.

Let's assume a self-driving car uses an occupancy grid map for local planning, this map may even have the detected lane boundaries. It's following a slow car and decides to overtake, but how does it know where the centre of the adjacent lane is? Does it use a separate map, or is there a separate data structure that is used to keep the lane information which informs the car where the centre of the adjacent lane is?

Alternatively, does the car just decide to start drifting off to the side until it picks up the lane boundaries and then centres itself?

",42480,,,,,11/23/2020 12:40,How do self-driving cars perform lane changes?,,0,1,,,,CC BY-SA 4.0 24816,1,24840,,11/23/2020 13:48,,4,255,"

In this answer, afterstate value functions are mentioned, and that temporal-difference (TD) and Monte Carlo (MC) methods can also use these value functions. Mathematically, how are these value functions defined? Yes, they are a function of the next state, but what's the Bellman equation here? Is it simply defined as $v(s') = \mathbb{E}\left[ R_t \mid S_t = s, A_t = a, S_{t+1} = s' \right]$? If yes, how can we define it in terms of the state, $v(s)$, and state-action, $q(s, a)$, value functions, or as a Bellman (recursive) equation?

Sutton & Barto's book (2nd edition) informally describe afterstate value functions in section 6.8, but they don't provide a formal definition (i.e. Bellman equation in terms of reward or other value functions), so that's why I am asking this question.

",2444,,2444,,11/23/2020 14:14,2/17/2021 14:37,How are afterstate value functions mathematically defined?,,1,0,,,,CC BY-SA 4.0 24820,2,,24406,11/23/2020 15:24,,1,,"

The question rests on a misunderstanding of the area. The idea would be to replace OpenAI Gym by something different, for example a website or a computer game, but there is no way to create an environment based only on an image. If you want to use an algorithm implemented for OpenAI Gym and swap in your own environment, you could do something like this: https://towardsdatascience.com/creating-a-custom-openai-gym-environment-for-stock-trading-be532be3910e. If you need to change the environment completely, then you have to rewrite your algorithm for your task.

",42038,,,,,11/23/2020 15:24,,,,0,,,,CC BY-SA 4.0 24822,1,,,11/23/2020 16:55,,0,38,"

I want to try a hierarchical reinforcement learning (HRL) approach on hard logical problems with combinatorial complexity, i.e. games like chess or Rubik's cube. The majority of HRL papers I have found so far either focus on training a control policy or tackle quite simple games.

By HRL I mean all methods that (among others):

  • split hard and complex problem into a series of simpler ones
  • create desired intermediate goals (or spaces of such goals)
  • somehow think in terms of 'what to achieve' rather than 'how to achieve'

Do you know any examples of solving logically hard problems with HRL or maybe just any promising approaches to such problems?

",42483,,,,,11/23/2020 16:55,Hierarchical reinforcement learning for combinatorial complexity,,0,3,,,,CC BY-SA 4.0 24823,1,24828,,11/23/2020 17:32,,5,1507,"

I am reading about local search: hill climbing, and its types, and simulated annealing

One of the hill climbing versions is "stochastic hill climbing", which has the following definition:

Stochastic hill climbing does not examine for all its neighbor before moving. Rather, this search algorithm selects one neighbor node at random and decides whether to choose it as a current state or examine another state

Some sources mentioned that it can be used to avoid local optima.

Then I was reading about simulated annealing and its definition:

At every iteration, a random move is chosen. If it improves the situation then the move is accepted, otherwise it is accepted with some probability less than 1

So, what is the main difference between the two approaches? Does stochastic hill climbing choose only a random uphill successor? If it chooses only uphill successors, then how does it avoid local optima?

",42300,,2444,,11/23/2020 20:27,11/23/2020 20:56,What is the difference between Stochastic Hill Climbing and Simulated Annealing?,,1,0,,,,CC BY-SA 4.0 24824,1,,,11/23/2020 18:48,,0,68,"

I performed semantic segmentation with U-net. My dataset consists of grayscale images of defects. After training the dataset for I got an metric accuracy of only 0.3 - 0.4 IOU. Eventhough it is merely low it performs well enough to identify instances that are huge means the prediction performs well enough in places where there is a standard intensity change(color change) and they are bigger instances. There are many other instances where there is no color change and it occoupies only few pixels in image(smaller instances) and the prediction rate is almost 0 on these instances.

I also tried Resdiual connection in the downsampling part of U-net likewise in ResNet. But still its the same and for smaller instances I used dilated convolution blocks in between the skip connections for encoder and decoder of U-net based on some papers. But still I cannot have a higher accuracy in my network and prediction rate for smaller instances are really poor. Although I use only 350 images for training with Data Augmentations. My image size is also 256,256.

Is there any other method I can try to increase the accuracy and prediction rate for smaller instances? Any suggestion would be helpful.

",41471,,,,,12/15/2022 1:07,Semantic segmentation failing in small instance detection,,1,0,,,,CC BY-SA 4.0 24825,1,,,11/23/2020 19:10,,1,48,"

In a 2-armed-bandit problem, an agent has an opportunity to see n reward for each action. Now the agent should choose actions m times and maximize the expected reward in these m decisions. but it cant see the reward of them. what is the best approach for this?

",42487,,,,,11/23/2020 19:10,Multi-armed bandit problem without getting rewards,,0,2,,,,CC BY-SA 4.0 24828,2,,24823,11/23/2020 20:56,,4,,"

Russell and Norvig's book (3rd edition) describe these two algorithms (section 4.1.1., p. 122) and this book is the reference that you should generally use when studying search algorithms in artificial intelligence. I am familiar with simulated annealing (SA), given that I implemented it in the past to solve a combinatorial problem, but I am not very familiar with stochastic hill climbing (SHC), so let me quote the parts of the book that describe SHC.

Stochastic hill climbing chooses at random from among the uphill moves; the probability of selection can vary with the steepness of the uphill move. This usually converges more slowly than steepest ascent, but in some state landscapes, it finds better solutions.

So, SHC chooses at random one "uphill move", i.e. a move that improves the objective function (for example, if you're trying to solve the travelling salesman problem, an "uphill move" could be any change to the current Hamiltonian cycle, a solution, so that the new Hamiltonian cycle has a shorter cost) among the uphill moves (so among some set of moves that improve the objective).

In simulated annealing, you perform some move. If that move leads to a better solution, you always keep the better solution. If it leads to a worse solution, you accept that worse solution with a certain probability. There are other details, such as how you accept the worse solution (which you can find in Russell and Norvig's book), but this should already clarify that SA is different from SHC: SA can accept worse solutions in order to escape from local minima, while SHC accepts only uphill moves.
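For concreteness, a minimal sketch of the SA acceptance rule for a minimisation problem (the geometric cooling schedule shown is just one common choice):

import math
import random

def sa_accept(delta, temperature):
    # delta = cost(candidate) - cost(current); minimisation problem
    if delta <= 0:
        return True                                           # better solutions are always kept
    return random.random() < math.exp(-delta / temperature)   # worse ones, sometimes

# skeleton of the main loop:
# T = T0
# while not stopping_condition:
#     candidate = random_neighbour(current)
#     if sa_accept(cost(candidate) - cost(current), T):
#         current = candidate
#     T *= 0.99  # geometric cooling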

",2444,,,,,11/23/2020 20:56,,,,2,,,,CC BY-SA 4.0 24831,1,34346,,11/23/2020 22:03,,6,899,"

I know the original Transformer and the GPT (1-3) use two slightly different positional encoding techniques.

More specifically, in GPT they say positional encoding is learned. What does that mean? OpenAI's papers don't go into detail very much.

How do they really differ, mathematically speaking?

",26580,,2444,,11/30/2021 15:20,3/1/2022 17:01,What is the difference between the positional encoding techniques of the Transformer and GPT?,,2,0,,,,CC BY-SA 4.0 24833,2,,24824,11/23/2020 23:26,,0,,"

You may find it useful to categorize these smaller defects as a separate class, and then introduce a class weights matrix to penalize incorrect classification of the smaller defects more heavily. If these small defects represent a very small portion of the total number of pixels in your training data, then the model may be stuck at a local minimum, where it just predicts them as zero, because the loss does not heavily penalize this. So you need to add an extra penalty for their incorrect classification.

Assuming you create a new class for the small defects, the class weights can be applied in Keras through a weighted loss function (or per-pixel sample weights).
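For example, a minimal sketch of a pixel-weighted binary cross-entropy in Keras/TensorFlow (assuming a binary defect mask of shape (batch, H, W, 1); the weight value is something you would tune):

import tensorflow as tf

def weighted_bce(pos_weight=10.0):
    # errors on defect (positive) pixels cost pos_weight times more than background errors
    def loss(y_true, y_pred):
        bce = tf.keras.losses.binary_crossentropy(y_true, y_pred)  # shape (batch, H, W)
        weights = y_true * (pos_weight - 1.0) + 1.0                # shape (batch, H, W, 1)
        return tf.reduce_mean(bce * tf.squeeze(weights, axis=-1))
    return loss

# model.compile(optimizer="adam", loss=weighted_bce(pos_weight=10.0))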

If you do not want to create a multiclass problem, there is also this paper: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5977596/#!po=32.4561 The multilevel U-Net architecture will not solve class imbalance, if that is truly the underlying issue, but it may make the network more aware of smaller-scale changes in the images you're working with.

",42489,,42489,,11/24/2020 18:48,11/24/2020 18:48,,,,2,,,,CC BY-SA 4.0 24834,2,,24767,11/24/2020 0:41,,-2,,"

I can't fully explain the part because I forgot what it talks about.

However, regarding the hinge loss, it basically allows your SVM to ignore points that are classified correctly with a sufficient margin (they add nothing to the cost function), while penalizing margin violations in proportion to how large they are.

For example, if you lend someone 1 dollar or 1 euro, you can forgive them; you tolerate it, and your hinge loss is 0. However, if you lend them 10 or 100 dollars, you will ask them to refund you ASAP, because you can't tolerate that much loss!
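A small numeric illustration, using the common form $\max(0, 1 - y \cdot f(x))$ with labels $y \in \{-1, +1\}$:

import numpy as np

def hinge(y, fx):
    # y in {-1, +1}, fx = raw decision value of the SVM
    return np.maximum(0.0, 1.0 - y * fx)

print(hinge(+1, 2.5))   # 0.0 -> correct side, beyond the margin: tolerated, no cost
print(hinge(+1, 0.7))   # 0.3 -> correct side, but inside the margin: small cost
print(hinge(+1, -2.0))  # 3.0 -> badly misclassified: large cost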

",42490,,,,,11/24/2020 0:41,,,,2,,,,CC BY-SA 4.0 24840,2,,24816,11/24/2020 12:59,,3,,"

Based on this and this resources, let me give an answer to my own question, but, essentially, I will just rewrite the contents of the first resource here, for reproducibility, with some minor changes to the notation (to be consistent with Sutton & Barto's book, 2nd edition). Note that I am not fully sure if this formulation is universal (i.e. maybe there are other ways of formulating it), but the contents of the first resource seem to be consistent with the contents in the second resource.

Setup

Let's assume that we have an infinite-horizon MDP

$$\mathcal{M} = (\mathcal{S}, \mathcal{Y}, \mathcal{A}, \mathcal{T}, \mathcal{R}, \gamma),$$ where

  • $\mathcal{S}$ is the set of states
  • $\mathcal{Y} \subseteq \mathcal{S}$ is the set of afterstates (aka post-decision states or "end of period" states [1], which can also be written as after-states)
  • $\mathcal{A}$ is the set of actions
  • $\mathcal{T}$ is the transition function
  • $\mathcal{R}$ is the reward function
  • $\gamma$ is a discount factor

Let

  • $y \in \mathcal{Y}$ be an afterstate
  • $f: \mathcal{S} \times \mathcal{A} \rightarrow \mathcal{Y}$ be a deterministic function (from state-action pairs to afterstates), so we have $f(s, a) = y$

The transition function $\mathcal{T}$ for $\mathcal{M}$ is defined as

\begin{align} \mathcal{T}(s, a, s^{\prime}) &\doteq P ( s^{\prime} \mid f(s, a)) \\ &= P ( s^{\prime} \mid y) \end{align}

A transition is composed of 2 steps

  1. a deterministic step, where we apply the deterministic function $f(s, a) = y$, which depends on an action $a$ taken in the state $s$, followed by
  2. a stochastic step, where we apply the probability distribution $P (s^{\prime} \mid y)$, which does not depend on the action $a$ anymore, but only on $y$

So, I have denoted afterstates with a different letter, $y$, because afterstates are reached with a deterministic function $f$, while other states, $s$ or $s'$, are reached with $P$.

After having taken the action $a$ in the state $s$, we get a reward (i.e. we get a reward in step 1), but we do not get a reward after the stochastic step (given that no action is taken).

So, we can define the reward function $\mathcal{R}$ for this MDP as follows

$$ \mathcal{R} (s, a, s^{\prime} ) \doteq \mathcal{R}(s, a) $$

The situation is illustrated by the following diagram

So, here, $P$ is the stochastic transition function (i.e. a probability distribution) as used above. Note that, here, $r_t$ is a specific realization of $R_t$ (the random variable) in the formulas below.

State value function

Let's recall the definition of the state value function $v_\pi(s)$ for a given policy $\pi$ (as defined in Sutton & Barto, section 3.5)

\begin{align} v_{\pi}(s) &\doteq \mathbb{E}_{\pi}\left[G_{t} \mid S_{t}=s\right] \\ &= \mathbb{E}_{\pi}\left[\sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1} \mid S_{t}=s\right], \end{align} for all $s \in \mathcal{S}$ and

\begin{align} G_{t} &\doteq \sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1} \\ &= R_{t+1} + \gamma R_{t+2} + \gamma^{2} R_{t+3}+ \cdots \\ &= \mathcal{R}(s_t, a_t) + \gamma \mathcal{R}(s_{t+1}, a_{t+1})+\gamma^{2} \mathcal{R}(s_{t+2}, a_{t+2}) +\cdots, \end{align} where $\pi(s_t) = a_t$ and $\mathcal{R}(s_t, a_t) = R_{t+1}$, for $t=0, 1, 2, \dots$. (So, note that $\mathcal{R} \neq R_t$: the first is the reward function, while the second is a random variable that represents the reward received after having taken action $a_t$ in step $s_t$)

The optimal state value function is defined as

$$ v_{*}(s) \doteq \max _{\pi} v_{\pi}(s) $$

Afterstate value function

Similarly, we will define the afterstate value function, but we will use the letter $w$ just to differentiate it from $v$ and $q$.

\begin{align} w_{\pi}\left(y\right) &\doteq \mathbb{E}_{\pi}\left[G_{t+1} \mid Y_{t}=y\right] \\ &= \mathbb{E}_{\pi}\left[\sum_{k=0}^{\infty} \gamma^{k} R_{t+k+2} \mid Y_{t}=y\right] \\ &= \mathbb{E}_{\pi}\left[ R_{t+2} + \gamma R_{t+3}+\gamma^{2} R_{t+4} + \cdots \mid Y_{t}=y\right] \\ &= \mathbb{E}_{\pi}\left[ \mathcal{R}(s_{t+1}, a_{t+1})+\gamma \mathcal{R}(s_{t+2}, a_{t+2}) + \gamma^{2} \mathcal{R}(s_{t+3}, a_{t+3}) + \cdots \mid Y_{t}=y\right] , \end{align} where $\mathcal{R}(s_{t+1}, a_{t+1}) = R_{t+2}$, for all $t$.

In other words, the value of an afterstate $y$ (at time step $t$, i.e. given $Y_t = y$) is defined as the expectation of the return starting from the state that you ended up in after the afterstate $y$.

This seems reasonable to me and is similar to my proposal for the definition of the afterstate value function in the question, although I was not considering any deterministic functions in a potential formulation, and I was also not thinking of afterstates as intermediate states, reached by a deterministic step, between the usual states.

Similarly to the optimal state value function, we also define the optimal afterstate value function

$$ w_{*}(y) \doteq \max _{\pi} w_{\pi}(y) $$

Afterstate value function defined in terms of the state value function

We can define the optimal afterstate value function in terms of the optimal state value function as follows

$$ w_{*}(y) = \sum_{s^{\prime}} P (s^{\prime} \mid y ) v_{*} ( s^{\prime} ) $$ In other words, $w_{*}(y)$ is defined as an expectation over the value of next possible states $s'$ from the afterstate $y$.

This seems to be correct and consistent with the above definitions.

More equations

In this and this resource, the state value function is also defined in terms of the afterstate value function as follows

$$v_{*}(s)=\max_{a}\left(\mathcal{R}(s, a)+\gamma w_{*}(f(s, a))\right)$$

The Bellman equation for afterstate value function (from which an update rule can be derived) is given by

$$ w_{*}(y) = \sum_{s^{\prime}} P(s^{\prime} \mid y ) \max_{a} ( \mathcal{R} (s^{\prime}, a) + \gamma w_{*}(f ( s^{\prime}, a ))), $$ which is really similar to the Bellman equation for the state value function.
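To make this update concrete, here is a minimal value-iteration-style sketch based on this Bellman equation (the data structures used for $f$, $P$ and $\mathcal{R}$ are hypothetical):

import numpy as np

def afterstate_value_iteration(f, P, R, n_afterstates, gamma=0.9, n_iters=100):
    # f[s][a] -> afterstate y reached deterministically from (s, a)
    # P[y]    -> dict {s_next: probability} for the stochastic step
    # R[s][a] -> immediate reward for taking a in s
    w = np.zeros(n_afterstates)
    for _ in range(n_iters):
        w_new = np.zeros_like(w)
        for y in range(n_afterstates):
            total = 0.0
            for s_next, p in P[y].items():
                # max over a of R(s', a) + gamma * w(f(s', a))
                best = max(R[s_next][a] + gamma * w[f[s_next][a]]
                           for a in range(len(R[s_next])))
                total += p * best
            w_new[y] = total
        w = w_new
    return w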

Finally, we can also express the state-action value function in terms of the afterstate value function

$$ q_\pi(s_t, a_t) = \mathcal{R}\left(s_{t}, a_{t}\right)+\gamma w_{\pi}\left(f\left(s_{t}, a_{t}\right)\right) $$

Given that this answer is already quite long, see the resource for more details (including an algorithm based on the afterstate Bellman equation).

Implementation

If you are the type of person that understands concepts by looking at code, then this Github project, which implements a Monte Carlo method that uses afterstates to play tic-tac-toe, may be useful. Afterstates are useful in tic-tac-toe because it is a 2-player game, where two agents take actions in turn, so the state you deterministically reach after your own move (as with the $f$ above) can be evaluated before the other agent takes an action (probabilistically). At least, this is my current interpretation of the usefulness of afterstates in this game (and similar games/problems).

",2444,,2444,,2/17/2021 14:37,2/17/2021 14:37,,,,2,,,,CC BY-SA 4.0 24842,2,,24702,11/24/2020 14:53,,0,,"

The accepted answer does not answer the question

Why, if the above is correct, do we not see many neural network architectures projecting the data into higher dimensions first then reducing the size of each layer thereafter?

Yes, it's true that if you increase the number of hidden neurons, you generally increase the capacity (in fact, the VC dimension of neural networks is typically expressed as a function of the number of parameters), but you're also suggesting to decrease the size afterwards.

For example, in this tutorial, they use 10 hidden neurons for the first hidden layer of an MLP, while the dataset contains only 4 features, which means that there are $4*10 = 40$ weights (aka parameters). In this other tutorial, there are fewer than 20 features (and only a few of them are used) and the first hidden layer has 128 neurons, which means that each used feature is connected to $128$ hidden neurons, so, with 4 features used, there are $4*128 = 512$ weights. So, MLPs can easily have more hidden neurons in the first layer, and more weights connecting the input(s) to the hidden neurons, than the number of features.
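As a quick sanity check of these numbers, one can build such a first layer in Keras and count its parameters (a toy example; the 4-feature input is the assumption used above, and the extra 128 parameters reported by Keras are the biases):

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(128, activation="relu", input_shape=(4,))  # first hidden layer
])
model.summary()  # 4*128 = 512 weights + 128 biases = 640 parameters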

In the case of CNNs, you may have fewer scalar parameters because of the properties of CNNs, i.e. parameter sharing. For instance, if you have $64$ filters of shape $3 \times 3$ and you want to process grayscale images of shape $32 \times 32$, then you have $3 * 3 * 64 = 576 < 1024 = 32*32$. Note that the number of parameters of the first layer does not change as a function of the input in the case of CNNs, so if that same CNN processes an image of size $5 \times 5$, then the first layer contains more scalar parameters than pixels.

So, in general, NNs can project the data to a higher-dimensional space. The typical NNs that project to a lower-dimensional space are auto-encoders or, more generally, data compressors; that's why they are called that way.

",2444,,2444,,11/24/2020 15:41,11/24/2020 15:41,,,,0,,,,CC BY-SA 4.0 24843,2,,24804,11/24/2020 15:39,,1,,"

are the LSTM cells on a given row all the same cell, with time flowing forward from left to right?

Yes, this is correct.

The x-axis on this figure is basically the time axis. Essentially, all pink boxes in the same row are the same LSTM cell, with different inputs from the same sequence. At each timestep, the cell takes an input and produces an output which is fed to the next layer. At the 8th layer, the outputs over all timesteps are fed at the same time to the attention layer.

",26652,,,,,11/24/2020 15:39,,,,0,,,,CC BY-SA 4.0 24844,1,24845,,11/24/2020 15:46,,1,141,"

I would like some guidance on how to design an Environment for a Reinforcement Learning agent where the stopping conditions and rewards for the environment change based on an initial set of input parameters.

For example, let's say that a system generated alert triggers the instantiation of the RL environment, whereby the RL agent is launched to make decisions in the environment, based on the alert. The alert has two priorities "HIGH" and "LOW", when the priority is "HIGH" the stopping condition is a reward of "100" and when the priority is "LOW", the stopping condition is a reward of "1000".

In this scenario, is it preferable to create two separate environments based on the priority (input parameter) of the alert? Or is this a common requirement that should be designed into the environment/agent? If so, how? Note that I have simplified the scenario, so there could be multiple conditions (e.g., alert, system type, etc), but I am just trying to find a basic solution for the general case.

",42508,,2444,,11/25/2020 9:10,11/25/2020 9:10,"If the reward function of an environment depends on some initial conditions, should I create a separate environment for each condition?",,1,0,,,,CC BY-SA 4.0 24845,2,,24844,11/24/2020 17:32,,1,,"

In this scenario, is it preferable to create two separate environments based on the priority (input parameter) of the alert?

It is difficult to make a hard rule here.

If the resulting environments can be cleanly sorted into a few different categories, and the ideal behaviour and/or the states visited are radically different in each scenario, then maybe a few different agents optimised for each scenario could work well.

A more general approach however, is to include the episode start data as part of the state that the agent observes on each time step. A single agent can then in theory learn the different behaviours required depending on the initial values, plus still generalise from anything shared between the multiple scenarios.

The alert has two priorities "HIGH" and "LOW", when the priority is "HIGH" the stopping condition is a reward of "100" and when the priority is "LOW", the stopping condition is a reward of "1000".

This may work against you. RL agents do not respond to the absolute values of rewards, other than how they compare to other rewards also available within the same episode (or continuing environment).

If there is only ever one issue to solve at a time, and no conflict between solving either of the "HIGH" or "LOW" priority problems (such as splitting resources or effort between them), the different reward system seems redundant. Solved is solved. You might rate the usefulness of an agent that solves the "LOW" priority issue well higher, but it seems to me that this describes what you should work on first, not the goals of the agent. To influence the goals of the agent, both rewards would need to be available within the same episode or continuing environment, requiring the agent to make a choice between them.

",1847,,,,,11/24/2020 17:32,,,,5,,,,CC BY-SA 4.0 24847,1,,,11/25/2020 3:58,,1,81,"

I'm trying to implement the minimax algorithm with alpha beta pruning on a game that works like this:

  • Player 1 plays (x1, y1).

  • Player 2 can only see the x-value (x1) that Player 1 played (and not the y-value y1). Player 2 plays (x2, y2).

  • An action event happens, which may change the heuristic of the current game state.

  • Player 2 plays (x3, y3).

  • Player 1 can only see the x-value (x3) that Player 2 played (and not the y-value y3). Player 1 plays (x4, y4).

  • Action event. The game continues with alternating starting players for a maximum depth of 10.

To do so, I have been treating each turn as you regularly would with the minimax algorithm, with each player making moves given the set of moves already played, including the possibly hidden move from the turn before. However, I've noticed that my algorithm will return moves for Player 2 that assume that Player 1 plays a certain way when, in a "real" game, it may be the case that Player 1 plays something else (and vice versa). For example, if Player 2 could guarantee a win on a given turn under all circumstances (when Player 1 plays first for that series), it might not play optimally when it assumes Player 1 will not play its maximum-strength move.

I believe it is doing this precisely because it assumes that all moves are visible (a fully visible game state). And indeed, if that were the case, the sequence of moves it returns would be optimal.

How can I remedy this?

I do not believe that a probability-based algorithm (e.g. Expectiminimax) is necessary, since the game is entirely deterministic. The partial visibility part is making things difficult, though.

Something tells me that changing the turn order in my algorithm might be a solution to this problem, since the action event is the only time the game heuristic is changed.

",42518,,,,,11/25/2020 3:58,Minimax algorithm with only partial visibility,,0,0,,,,CC BY-SA 4.0 24848,1,24871,,11/25/2020 5:40,,3,86,"

I am currently studying the paper Learning and Evaluating Classifiers under Sample Selection Bias by Bianca Zadrozny. In the introduction, the author says the following:

One of the most common assumptions in the design of learning algorithms is that the training data consist of examples drawn independently from the same underlying distribution as the examples about which the model is expected to make predictions. In many real-world applications, however, this assumption is violated because we do not have complete control over the data gathering process.

For example, suppose we are using a learning method to induce a model that predicts the side-effects of a treatment for a given patient. Because the treatment is not given randomly to individuals in the general population, the available examples are not a random sample from the population. Similarly, suppose we are learning a model to predict the presence/absence of an animal species given the characteristics of a geographical location. Since data gathering is easier in certain regions than others, we would expect to have more data about certain regions than others.

In both cases, even though the available examples are not a random sample from the true underlying distribution of examples, we would like to learn a predictor from the examples that is as accurate as possible for this distribution. Furthermore, we would like to be able to estimate its accuracy for the whole population using the available data.

It's this part that I am confused about:

In both cases, even though the available examples are not a random sample from the true underlying distribution of examples, we would like to learn a predictor from the examples that is as accurate as possible for this distribution.

What exactly is "this distribution"? Is it referring to the true underlying distribution, or the distribution of our sample (which, as was said, is not necessarily a "good" reflection of the underlying distribution, since it is not a random sample)?

",16521,,,,,11/26/2020 20:40,"Is this referring to the true underlying distribution, or the distribution of our sample?",,1,0,,,,CC BY-SA 4.0 24849,1,,,11/25/2020 8:37,,2,708,"

What is the impact of using a:

  • low crossover rate

  • high crossover rate

  • low mutation rate

  • high mutation rate

",41713,,2444,,11/28/2020 3:10,11/28/2020 3:10,What is the impact of changing the crossover and mutation rates?,,1,0,,,,CC BY-SA 4.0 24851,1,24865,,11/25/2020 12:14,,0,131,"

I would be grateful for some guidance on a RL problem I am trying to solve where multiple RL agents use a common/global policy at the initial state of an episode in the RL Environment, and then update this common/shared policy once the episode is completed.

Below is an example of the problem scenario:

  • An alert triggers an RL agent to execute an "episode" in the Environment
  • Multiple alerts (i.e., episodes) can occur at the same time, or one alert may still be being processed (i.e., the episode has not finished) when another alert is triggered (i.e., another episode begins).

Below are the conditions of the Environment and desired behaviour of the RL Agent:

  • Multiple episodes can run at once (e.g., another episode starts before another finishes).
  • For each episode, an "instance" of the RL agent uses the latest version of a common policy.
  • After each episode the RL agent updates the common policy.
  • Common policy updates are "queued" using versioning in code to prevent race conditions.

Q: How can multiple RL agents in this case use a common policy at the beginning of an episode and then update that common policy after completing it? All I have found are discussions related to Q-learning, where agents can update a shared Q-table, or later update a "global" Q-table, without any examples of how this can be achieved, and without any indication of whether there are also methods for other approaches, such as TD methods, rather than only Q-learning.

Q: Does this sound like a traditional multi-agent scenario, at least conceptually? If so, how might one go about implementing this? Any examples would be really helpful.

Any help on this is greatly appreciated!

EDIT:

Since doing more investigation I have found this reference on Mathworks: Link, which is similar to the above problem, but not exact.

",42508,,42508,,11/25/2020 18:26,11/26/2020 7:23,How to use and update a shared/global policy between Reinforcement Learning Agents,,1,0,,,,CC BY-SA 4.0 24852,1,,,11/25/2020 13:07,,1,27,"

I am trying to set up a neural network architecture that is able to learn the points of one function (blue curves) from the points of another one (red curves). I think it could be somewhat similar to the problem of learning a functional, as described in this question here. I don't know at all what this (let's call it) functional looks like; I just see the 'blue' response to the 'red' input.

The inputs of the network would be the (e.g. 100) points of a red curve and the outputs would probably be the (e.g. 50) points of the blue curve. This is where my problem begins. I tried to implement a simple dense network with two hidden layers and around 200-300 neurons each. Obviously, it didn't learn much.

I have the feeling that I somehow need to tell the network that points next to each other (e.g. input points $x_0$ and $x_1$) are correlated and that the function they belong to is differentiable. For the inputs, this could be achieved by using convolutional layers, I suppose. But I don't really know how to specify that the output nodes are correlated with each other as well.

At the beginning, I had high hopes for the approach using Generalized Regression Neural Networks as presented here, where a lowpass filter is implemented with NNs. However, as I understood it, only the filter coefficients are predicted. As I don't know anything about the general structure of my functional, this will not help me here ...

Do you have any other suggestions for NN architectures that could be helpful for this problem? Any hint is appreciated, thank you!

",42323,,,,,11/25/2020 13:07,Neural network architecture with inputs and outputs being an unkown function each,,0,0,,,,CC BY-SA 4.0 24856,1,,,11/25/2020 22:18,,2,45,"

When using an RNN to encode a sentence, one normally takes each word, passes it through an embedding layer, and then uses the dense embedding as the input into the RNN.

Let's say that, instead of using dense embeddings, I used a one-hot representation for each word and fed that sequence into the RNN. My question is which of these two outcomes is correct:

  1. Due to the way in which an RNN combines inputs, since these vectors are all orthogonal, absolutely nothing can be combined, and the entire setup does not make sense.

  2. The setup does make sense and it will still work, but not be as effective as using a dense embedding.

I know I could run an experiment and see what happens, but this is fundamentally a theoretical question, and I would appreciate if someone could clarify so that I have a better understanding of how RNNs combine inputs. I suspect that the answer to this question would be the same regardless of whether we are discussing a vanilla RNN or an LSTM or GRU, but if that is not the case, please explain why.

Thank you.

",12201,,12201,,11/26/2020 2:01,11/26/2020 2:01,Can One-Hot Vectors be used as Inputs for Recurrent Neural Networks?,,0,1,,,,CC BY-SA 4.0 24857,1,,,11/26/2020 0:29,,2,1847,"

I am trying to understand Reinforcement Learning and already explored different Youtube videos, blog posts, and Wikipedia articles.

What I don't understand is the impact of $\epsilon$. What value should it take? $0.5$, $0.6$, or $0.7$?

What does it mean when $\epsilon = 0$ and $\epsilon = 1$? If $\epsilon = 1$, does it mean that the agent explores randomly? If this intuition is right, then it will not learn anything - right? On the other hand, if I set $\epsilon = 0$, does this imply that the agent doesn't explore?

For a typical problem, what is the recommended value for this parameter?

",41187,,2444,,11/26/2020 1:03,11/26/2020 19:02,What should the value of epsilon be in the Q-learning?,,1,0,,,,CC BY-SA 4.0 24859,2,,12144,11/26/2020 2:06,,1,,"

It appears that the rank-based method would be slightly better in terms of time complexity, because you sort only once for every k sampling operations. This article (not mine) explains it clearly and in detail: https://towardsdatascience.com/how-to-implement-prioritized-experience-replay-for-a-deep-q-network-a710beecd77b

",21352,,,,,11/26/2020 2:06,,,,0,,,,CC BY-SA 4.0 24860,2,,24857,11/26/2020 2:09,,2,,"

What does it mean when ϵ=0 and ϵ=1? If ϵ=1, does it mean that the agent explores randomly? If this intuition is right, then it will not learn anything - right? On the other hand, if I set ϵ=0, does this imply that the agent doesn't explore?

You are correct, when ϵ=1 the agent acts randomly. When ϵ=0, the agent always takes the current greedy actions. Both of these scenarios are not ideal. Always acting greedily will prevent the agent from exploring possibly better parts of the state space, and instead the agent may get stuck in a local optimum. And always exploring randomly is obviously not ideal as well. Thus, we need to balance between these two. This is often called the balance between exploration and exploitation.

For a typical problem, what is the recommended value for this parameter?

ϵ is a hyper parameter. It is impossible to know in advance what the ideal value is, and it is highly dependent on the problem at hand. There is no general answer to this question.

That being said, the most common values that I have seen typically range between 0.01 and 0.1. But I want to stress, there is no ideal value that works for all problems. A typical strategy is to try several values and see which one works best. For more information, you might want to look up hyper parameter tuning.

Another common practice is slowly decaying epsilon over time (often this is called "annealing" or "simulated annealing"). Depending on the algorithm, decaying epsilon to zero may be a requirement for convergence. In some contexts, an algorithm that decays epsilon over time is called a GLIE algorithm. For example, see this.
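As a rough illustration, here is a minimal sketch of $\epsilon$-greedy action selection with a simple decay schedule (the Q-table, decay rate and episode count are just placeholders, not recommendations):

import numpy as np

def epsilon_greedy(Q, state, n_actions, epsilon):
    # with probability epsilon explore randomly, otherwise exploit the current greedy action
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[state]))

epsilon, epsilon_min, decay = 1.0, 0.01, 0.995
for episode in range(1000):
    # ... run one episode, selecting actions with epsilon_greedy(...) ...
    epsilon = max(epsilon_min, epsilon * decay)   # anneal epsilon towards epsilon_min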

",12201,,12201,,11/26/2020 19:02,11/26/2020 19:02,,,,4,,,,CC BY-SA 4.0 24861,1,,,11/26/2020 3:14,,5,157,"

I am self-studying about Reinforcement Learning using different online resources. I now have a basic understanding of how RL works.

I saw this in a book:

Q-learning is an off-policy learner. An off-policy learner learns the value of an optimal policy independently of the agent’s actions, as long as it explores enough.

An on-policy learner learns the value of the policy being carried out by the agent, including the exploration steps.

However, I am not quite understanding the difference. Secondly, I came across the claim that an off-policy learner works better than an on-policy agent. I don't understand why that would be, i.e. why off-policy would be better than on-policy.

",41187,,2444,,11/26/2020 10:39,11/28/2020 2:01,Why does off-policy learning outperform on-policy learning?,,1,2,,,,CC BY-SA 4.0 24862,1,,,11/26/2020 4:51,,1,44,"

I have 2 photos, and my goal is to detect the face in one and place it on the face of the person in the other photo, i.e. basically face detection and replacement. It's not deep fakes. It's more of a computer vision-based approach for smoothing the edges and stitching.

How can we achieve that? What are the approaches and techniques to do that? Some tutorials or code or github repos would be very helpful.

",9053,,9053,,11/26/2020 5:04,11/26/2020 20:05,Face detection and replacement in photos,,0,0,,,,CC BY-SA 4.0 24863,2,,24861,11/26/2020 5:23,,5,,"

This post contains many answers that describe the difference between on-policy vs. off-policy.

Your book may be referring to how the current (DQN-based) state-of-the-art (SOTA) algorithms, such as Ape-X, R2D2 and Agent57, are technically "off-policy", since they use a (very large!) replay buffer, often filled in a distributed manner. This has a number of benefits, such as reusing experience and not forgetting important experiences.

Another benefit is that you can collect a lot of experience in a distributed fashion. Since RL is typically not bottlenecked by the computation for training, but rather by collecting experiences, the distributed replay buffer in Ape-X can enable much faster training in terms of wall-clock time, but not in terms of sample complexity.

However, it's important to emphasize that these replay-buffer approaches are almost on-policy, in the sense that the replay buffer is constantly updated with new experiences. So, the policy in the replay buffer is "not too different" from your current policy (just a few gradient steps away). Most importantly, this allows the policy to learn from its own mistakes if it makes any...

Off-policy learning, in general, can also refer to batch RL (a.k.a. offline RL), where you're provided a dataset of experiences from another behavior policy, and your goal is to improve over it. Notably, you don't get to rollout your current policy in any way! In this case, algorithms that worked well with a replay-buffer (like DQN, SAC) fail miserably, since they over-estimate the value of actions when they extrapolate outside the "support" of the dataset. See the BCQ paper which illustrates how a lot of "off-policy" algorithms like DQN fail when the "distance between the two policies is large". For this task, SOTA is a form of weighted behavioral cloning called Critic Regularized Regression (CRR).

It's also worth noting that importance sampling can correct off-policy gradients to be on-policy; but the farther away your target policy is, the larger the variance. This is especially deadly for long horizon tasks (often called curse of horizon).

To summarize, using replay-buffer (which makes the algorithm off-policy), especially a distributed one, can offer a lot of benefits over pure on-policy algorithms. However, this is a very special class of off-policy algorithms, where the behavioral policy is close to your policy.

But in general, off-policy is a lot harder than on-policy; you'll suffer from extrapolation bias if you use DQN-based approaches, and exponential variance blow-up if you use importance sampling to correct for it.

",36922,,2444,,11/28/2020 2:01,11/28/2020 2:01,,,,10,,,,CC BY-SA 4.0 24864,2,,21398,11/26/2020 5:33,,2,,"

Let's fix some notation: we're collecting data from behavior policy $\pi_0$ and we want to evaluate a policy $\pi$. Of course, if we had plenty of data from policy $\pi$ that would be the best way to evaluate $\pi$ as we just take the empirical average (without any importance sampling) and CLT gives us confidence intervals that shrink at $\frac{1}{\sqrt n}$ rates.

However, collecting data from $\pi$ is often time-consuming and costly: you may need to productionize it at a company, and if $\pi$ were dangerous, some damage could be done during rollouts. So how can we make the best use of our data from any policies, not necessarily $\pi$, to evaluate $\pi$? This is the question of off-policy evaluation, and you're right that IS is one approach.

This picture from a great talk by Thorsten provides nice intuition on why the weighting is unbiased.
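To make the weighting concrete, here is a minimal sketch of the ordinary importance sampling estimator in a one-step (bandit-style) setting; the toy policies and reward below are purely illustrative, not from the talk:

import numpy as np

def is_estimate(actions, rewards, pi, pi0):
    # actions, rewards: data logged under the behaviour policy pi0
    # pi, pi0: arrays of action probabilities for the target and behaviour policy
    weights = pi[actions] / pi0[actions]        # importance weights pi(a) / pi0(a)
    return np.mean(weights * rewards)           # unbiased estimate of E_pi[reward]

pi0 = np.array([0.5, 0.5])                      # behaviour policy: uniform
pi = np.array([0.9, 0.1])                       # target policy we want to evaluate
actions = np.random.choice(2, size=100000, p=pi0)
rewards = np.where(actions == 0, 1.0, 0.0)      # toy reward: action 0 pays 1, action 1 pays 0
print(is_estimate(actions, rewards, pi, pi0))   # should be close to 0.9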

",36922,,,,,11/26/2020 5:33,,,,0,,,,CC BY-SA 4.0 24865,2,,24851,11/26/2020 7:23,,0,,"

This sounds like distributed RL, and most of the work goes into building the distributed system; the actual RL part is just a DQN (plus some tricks from Rainbow DQN). NB: multi-agent RL arises when the agents interact with each other in the same environment (like the card game Hanabi), while in this case we have multiple agents that collect experiences in parallel. Here's a possible design from the Ape-X paper:

",36922,,,,,11/26/2020 7:23,,,,3,,,,CC BY-SA 4.0 24866,1,,,11/26/2020 14:39,,1,58,"

Is it possible to train an object detector (e.g. SSD) to detect when something is not in the image? Imagine an assembly line that transports some objects. Each object needs to have 5 screws. If the object detector detects 4 screws, we know that one is missing, hence there is an anomaly.

Actually, this is an anomaly detection task where there is something other than a screw (e.g. a hole), but unsupervised anomaly detectors are hard to train and not as stable as object detectors.

Is my assumption correct, that even though it is not really an object detection task, one can use such methods?

",23063,,,,,11/26/2020 14:39,Object Detection as a means of Anomaly Detection,,0,1,,,,CC BY-SA 4.0 24867,1,24915,,11/26/2020 16:26,,2,213,"

Given an alphabet $I=\left\{i_1,i_2,\dots,i_n\right\}$ and a sequence $S=[e_1,e_2,\dots,e_m]$, where items $e_j \in I$, I am interested in finding every single pattern (subsequence of $S$) that appears in $S$ more than $N$ times, and that has a length $L$ between two limits: $m<L<M$. In case of overlapping between patterns, only the longest pattern should be considered.

For example, given $N=3$, $m=3$ and $M=6$, an alphabet $I=\left\{A,B,C,D,E,F\right\}$, and the following sequence $S$ (the asterisks simply mark the positions of the patterns in this example): $$ S=[A,A,*F,F,A,A*,B,D,E,F,E,D,*F,F,A,A*,F,C,*C,A,B,D,D*,C,C,*C,A,B,D,D*,C,A,C,B,E,A,B,C,*F,F,A,A*,E,A,B,C,A,D,E,*F,F,A,A*,B,C,D,A,E,A,B,*C,A,B,D,D*,] $$ The sought algorithm should be able to return the following patterns: $$ [C,A,B,D,D], [F,F,A,A] $$ together with their respective positions within $S$.

An available Python implementation would be very desirable (and a parallel implementation, even more so).

I was reading about BIDE algorithms, but I think this is not the correct approach here. Any ideas? Thanks in advance!

",41986,,41986,,11/26/2020 16:43,11/30/2020 12:37,Mining repeated subsequences in a given sequence,,1,1,,,,CC BY-SA 4.0 24868,1,24869,,11/26/2020 18:30,,1,325,"

I'm trying to wrap my mind around the concepts of regularisation and optimisation in neural nets, especially around their differences.

In my current understanding, regularisation is intended to tackle overfitting whereas optimisation is about convergence.

However, even though regularisation adds terms to the loss function, both approaches seem to do most of their work during the update phase, i.e. they act directly on how the weights are updated.

If both concepts are focused on updating weights,

  1. what are the conceptual differences, or why aren't both L2 and Adam, for example, called either optimisers or regularisers?

  2. Can/should I use them together?

",37735,,2444,,11/26/2020 18:34,11/26/2020 19:05,What are the conceptual differences between regularisation and optimisation in deep neural nets?,,1,0,,,,CC BY-SA 4.0 24869,2,,24868,11/26/2020 18:51,,2,,"

You are correct.

The main conceptual difference is that optimization is about finding the set of parameters/weights that maximizes/minimizes some objective function (which can also include a regularization term), while regularization is about limiting the values that your parameters can take during the optimization/learning/training. Optimization with regularisation (especially with $L_1$ and $L_2$ regularization) can therefore be thought of as constrained optimization but, in some cases, such as dropout, it can also be thought of as a way of introducing noise into the training process.

You should use regularisation when your neural network is big (where big, of course, is not well-defined) and you have little data (where little is also not well-defined). Why do you want to use regularisation in this case? Because big neural networks have more capacity, so they can memorize the training data more easily (i.e. they can over-fit the training data). If the training data is not representative of the whole data distribution (that you are trying to capture with your neural network), then your neural network may fail on other data from that data distribution, i.e. it may not generalize. Regularization techniques, such as $L_1$ and $L_2$ penalties and dropout, limit the complexity of the functions that can be represented by your neural network (remember that, for a specific set of weights $\theta$, a neural network represents a specific function $f_\theta$), so they prevent your neural network from learning a complicated function that would just over-fit the training data.
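For instance, here is a minimal, illustrative tf.keras sketch of adding an $L_2$ weight penalty and dropout to a small classifier (the layer sizes and the factor 1e-4 are arbitrary choices, not recommendations):

from tensorflow import keras
from tensorflow.keras import layers, regularizers

model = keras.Sequential([
    layers.Dense(128, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4),   # L2 penalty added to the loss
                 input_shape=(784,)),
    layers.Dropout(0.5),                                     # randomly drops half the units during training
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])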

Of course, you can also use regularisation when you have a very large dataset or your neural network is not that big. However, in principle, the case mentioned above is the case where you will likely need some kind of regularisation. So, as a rule of thumb, the bigger your NN is and the smaller your training dataset is, the more likely you will need some kind of regularisation.

The typical way to visualize that your neural network is over-fitting is to look at the evolution of the loss function (a type of objective function, which you want to minimize) for the training dataset and the validation dataset (a dataset that you do not use for training, i.e. for updating the weights of the neural network). If the training loss becomes very small while the validation loss remains bigger (stays constant or even increases) as you train the neural network for longer, that's a good sign of over-fitting, and it suggests that you may need some kind of regularisation. However, note that, even with regularisation, it is not guaranteed that you will find a good set of weights that is able to generalize to all data (not seen during training), but it's more likely.

There are other forms of regularization, such as the KL divergence in the context of Bayesian neural networks or variational auto-encoders, but these are more advanced topics that you don't need to know now. In any case, the KL divergence has the same role as the other regularisation techniques that I mentioned above, i.e., in some way, it restricts/limits the possible functions that you can learn.

",2444,,2444,,11/26/2020 19:05,11/26/2020 19:05,,,,3,,,,CC BY-SA 4.0 24871,2,,24848,11/26/2020 20:40,,0,,"

In both cases, even though the available examples are not a random sample from the true underlying distribution of examples, we would like to learn a predictor from the examples that is as accurate as possible for this distribution.

"The true underlying distribution" is the closest "distribution" that is explicitly mentioned as such in the part of the text preceding the phrase "this distribution", so that's what it's referring to. For clarity, I've put the two things that are "the same" in bold in the above quote.

",1641,,,,,11/26/2020 20:40,,,,2,,,,CC BY-SA 4.0 24872,1,24878,,11/26/2020 21:53,,1,762,"

I am using Keras (on top of TF 2.3) to train an image classifier. In some cases I have more than two classes, but often there are just two classes (either "good" or "bad"). I am using the tensorflow.keras.applications.VGG16 class as base model with a custom classifier on top, like this:

input_layer = layers.Input(shape=(self.image_size, self.image_size, 3), name="model_input")
base_model = VGG16(weights="imagenet", include_top=False, input_tensor=input_layer)
model_head = base_model.output
model_head = layers.AveragePooling2D(pool_size=(4, 4))(model_head)
model_head = layers.Flatten()(model_head)
model_head = layers.Dense(256, activation="relu")(model_head)
model_head = layers.Dropout(0.5)(model_head)
model_head = layers.Dense(len(self.image_classes), activation="softmax")(model_head)

As you can see in the last (output) layer I am using a softmax activation function. Then I compile the whole model with the categorical_crossentropy loss function and train with one-hot-encoded image data (labels).

All in all, the model performs quite well and I am happy with the results: I achieve over 99% test and validation accuracy with our data set. There is one thing I don't understand, though:

When I call predict() on the Keras model and look at the prediction results, then these are always either 0 or 1 (or at least very, very close to that, like 0.000001 and 0.999999). So my classifier seems to be quite sure whether an image belongs to either class "good" or "bad" (for example, if I am using only two classes). I was under the assumption, however, that usually these predictions are not that clear, more in terms of "the model thinks with a probability of 80% that this image belongs to class A" - but as said in my case it's always 100% sure.

Any ideas why this might be the case?

",14504,,2444,,11/28/2020 2:23,11/29/2020 19:07,Why is my Keras prediction always close to 100% for one image class?,,2,0,,,,CC BY-SA 4.0 24873,2,,24872,11/26/2020 23:13,,1,,"

Without more details about the nature of the dataset, it is impossible to know for sure. However, here are a few likely causes:

  1. You were calling predict on training data, not testing data. The network will be a lot more sure about images that it trained on than on images it has never seen before.

  2. Your model overfit the data. This can happen when you use an overly complex model on a small dataset. You may want to experiment with regularization.

  3. You were looking at too small a sample of images. Did you run predict on every image, or just a few? If the latter, it is possible you just picked a sample that the network is very confident about.

",12201,,,,,11/26/2020 23:13,,,,0,,,,CC BY-SA 4.0 24874,1,,,11/26/2020 23:43,,2,64,"

I'm working on a classifier for the famous MNIST handwritten data set.

I want to create a few features on my own, and I want to be able to estimate which feature might perform better before actually training the classifier. Let's say that I create a feature that calculates the ratio of ink used between two halves of a digit. Note that by ink I mean how much white is used (which ranges from 0-255 per pixel).

For example, I would calculate the ratio between the total amount of white in the left and right halves (separated by the red line). I could also do the same with the top and bottom halves, or separate the digits diagonally. With this I can calculate the mean and standard deviation.
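A minimal sketch of the kind of feature computation I have in mind (assuming each digit is a 28x28 numpy array of pixel intensities):

import numpy as np

def ink_ratio_left_right(image):
    # image: 28x28 array with values in [0, 255]
    left = image[:, :14].sum()
    right = image[:, 14:].sum()
    return left / (right + 1e-8)   # small constant avoids division by zero

# mean and standard deviation of the feature over a set of digits:
# ratios = np.array([ink_ratio_left_right(img) for img in images])
# print(ratios.mean(), ratios.std())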

I imagine that the left/right ratio might show some differences between digits, but the ratios might also all end up close to the average.

Is there some method for estimating which feature might perform better compared to others? I.e., is there a method which gives a numerical value on how "separable" a data set is?

",42556,,,,,11/26/2020 23:43,Statistical method for selecting features for classification,,0,0,,,,CC BY-SA 4.0 24875,1,,,11/27/2020 15:47,,1,50,"

I am implementing the A3C algorithm and I want to add off-policy training using Retrace but I am having some trouble understanding how to compute the retrace target. Retrace is used in combination with A3C for example in the Reactor.

I often see the retrace update written as

\begin{equation} \Delta Q(s, a) = \sum_{t' = t}^{T} \gamma^{t'-t}\left(\prod_{j=t+1}^{t'}c_j\right) \delta_{t'} \end{equation}

with $\delta_{t'} = r(s_{t'}, a_{t'}) + \gamma \mathbb{E}[Q(s_{t'+1}, a_{t'+1})] - Q(s_{t'}, a_{t'})$ and $c_j$ being the Retrace factors $c_j = \lambda \min(c, \frac{\pi(a_j|s_j)}{b(a_j|s_j)})$.

Now, when employing neural networks to approximate $Q_{\theta}(s, a)$ it is often easier to define a loss \begin{equation} \mathcal{L}_{\theta} = \left(G_t - Q(s, a)\right)^2 \end{equation} and let the backward function and the optimizer do the update. How can I write the Retrace target $G_t$ to use in such a setup?

Is it correct to write it as follows? \begin{equation} G_t = \sum_{t'=t}^T \gamma^{t'-t} \left(\prod_{j=t+1}^{t'}c_j\right) (r_{t'} + \gamma Q(s_{t'+1}, a_{t'+1}) - Q(s_{t'}, a_{t'})) \end{equation}

and then compute $\mathcal{L}$ as above, take the gradient $\nabla\mathcal{L}_{\theta}$ and perform the update step $Q(s_t, a_t) = Q(s_t, a_t) + \alpha \nabla\mathcal{L}_{\theta}$ ?

",32583,,32583,,11/28/2020 17:05,11/28/2020 17:05,How to compute the Retrace target for multi-step off-policy Reinforcement Learning?,,0,0,,,,CC BY-SA 4.0 24876,2,,24849,11/27/2020 16:54,,3,,"

The crossover rate, $p_c \in [0, 1]$, is a hyper-parameter that controls the rate at which solutions are subjected to crossover. So, the higher $p_c$, the more crossovers you perform, so the more diversity (in terms of solutions/chromosomes) you may introduce in the population. Typical values of $p_c$ are in the range $[0.5, 1.0]$. For example, in this simple implementation, I used $p_c = 0.8$. In this case, I manually searched for this value (and it didn't take me much time to make the GA find the solution I was looking for), but in other cases/problems you may need to perform some hyper-parameter optimization (e.g. with grid search).

The mutation rate, $p_m \in [0, 1]$ (also a hyper-parameter) controls the rate at which you mutate single chromosomes or the genes of single chromosomes. For example, if you have a binary chromosome of $n$ bits, you could flip the $i$th bit with probability $p_m$, for all $i$ independently. Mutation can prevent premature convergence, but, if $p_m$ is very high, the genetic algorithm becomes a random search, so the value of $p_m$ is typically not high. This paper (which you probably should read because it contains more information about the mutation and crossover operations, although it is a bit old) states that typical values are in the range $[0.005, 0.05]$, but, in this implementation, I used $p_m = 0.5$.

So, in general, higher values of these rates promote exploration at the expense of exploitation. In this sense, this is similar to the $\epsilon$ in the $\epsilon$-greedy behaviour policy in Q-learning (a famous reinforcement learning algorithm).

However, note that each of these operations may undo the other operation. For instance, let's suppose that we have the binary chromosomes $p_1 = [0, 1, 0]$ and $p_2 = [0, 0, 1]$, which we can combine by replacing (with probability $p_c$) the bit at position $i$ of $p_1$ with the bit of the chromosome $p_2$ at the same position $i$. So, after the crossover, you could get the children $c_1 = [0, 0, 0]$ and $c_2 = [0, 1, 1]$. If you perform a mutation by flipping the bits, then you could end up with the same chromosomes as the original ones, that's also one of the reasons why you may not want to have a very high mutation rate. So, it's not completely true that, the higher the mutation rate, the more you explore, because you can end up with the same solutions as before, so you have not explored anything.
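A minimal sketch of the crossover and mutation operations described above for binary chromosomes (the concrete rate values below are just examples):

import random

p_c = 0.8   # crossover rate
p_m = 0.05  # mutation rate

def crossover(p1, p2):
    # replace the i-th gene of p1 with the i-th gene of p2 with probability p_c
    return [b2 if random.random() < p_c else b1 for b1, b2 in zip(p1, p2)]

def mutate(chromosome):
    # flip each bit independently with probability p_m
    return [1 - b if random.random() < p_m else b for b in chromosome]

child = mutate(crossover([0, 1, 0], [0, 0, 1]))
print(child)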

You could also have an adaptive value for these parameters. See e.g. this paper for an example of such an adaptive schedule.

",2444,,2444,,11/28/2020 2:55,11/28/2020 2:55,,,,0,,,,CC BY-SA 4.0 24878,2,,24872,11/28/2020 2:20,,2,,"

Traditional neural networks can be over-confident (i.e. give a probability close to $0$ or $1$) even when they are wrong, so you should not interpret the probability that they produce as a measure of uncertainty (i.e. as a measure of how confident the model is that the associated predicted class is the correct one), as that interpretation is essentially wrong. See this and this answers for more details about this.

Given that this overconfidence is not desirable in many scenarios (such as healthcare, where doctors also want to know how confident the model is about its predictions, in order to decide whether to give a certain medication to the patient or not), the ML community has been trying to incorporate uncertainty quantification/estimation in neural networks. If you are interested in this topic, you could read the paper Weight Uncertainty in Neural Network (2015) by Blundell et al., which proposes a specific type of Bayesian neural network, i.e. a neural network that models the uncertainty over the actual values of the weights, from which we may also quantify/estimate the uncertainty about the inputs. This paper should not be too difficult to read if you are already familiar with the details of variational-autoencoders.

So, the answer to your question is: yes, it's possible that the output probability is close to $1$ because neural networks can be over-confident. (I am assuming that the values returned by tf.keras's predict method are probabilities: I don't remember anymore, so I assumed that you did not make any mistake).

A similar question was already asked in the past here. The accepted answer should provide more details about different types of uncertainty and solutions.

",2444,,2444,,11/29/2020 19:07,11/29/2020 19:07,,,,0,,,,CC BY-SA 4.0 24881,2,,20735,11/28/2020 2:46,,0,,"

Okay, so your CNN model takes an image and outputs the bounding boxes for it. That means the last layer of the CNN model must have four outputs which generate real numbers. This is a regression problem.

In that case, you can take the L1 loss (mean absolute error) or the L2 loss (mean squared error) as your loss function. I have created a similar project, in which I used the L1 loss.

Suppose your input image is x and predicted_bb is the model output and real_bb is your original bounding box for image x. Then you should proceed as follows

import torch

predicted_bb = model(x)

# CALCULATE LOSS BETWEEN THE ACTUAL & PREDICTED BB COORDINATES
loss_bb = torch.nn.functional.l1_loss(predicted_bb, real_bb, reduction="none").sum(1)

# SET GRADIENTS TO ZERO
optimizer.zero_grad()

# BACKPROPAGATE THE LOSS (reduce the per-sample losses to a scalar first)
loss_bb.mean().backward()

# UPDATE THE WEIGHTS
optimizer.step()

For TensorFlow Keras:

predicted_bb = Dense(4, activation='relu')(x)

model = Model(inputs=image_input, outputs=[predicted_bb])
model.compile(loss=['mae'], optimizer='adam', metrics=['accuracy'])
",42583,,,,,11/28/2020 2:46,,,,1,,,,CC BY-SA 4.0 24882,2,,20118,11/28/2020 4:10,,1,,"

I will give one perspective on this from the domain of robotics. You are right that most RL agents are trained in simulation particularly for research papers, because it allows researchers to in theory benchmark their approaches in a common environment. Many of the environments exist strictly as a test bed for new algorithms and are not even physically realizable, e.g. HalfCheetah. You could in theory have a separate simulator say running in another process that you use as your planning model, and the "real" simulator is then your environment. But really that's just a mocked setup for what you really want in the end, which is having a real-world agent in a real-world environment.

What you describe could be very useful, with one important caveat: the simulator needs to in fact be a good model of the real environment. For robotics and many other interesting domains, this is a tall order. Getting a physics simulator that faithfully replicates the real-world environment can be tricky, as one may need accurate friction coefficients, mass and center of mass, restitution coefficients, material properties, contact models, and so on. Oftentimes the simulator is too crude an approximation of the real-world environment to be useful as a planner.

That doesn't mean we're completely hosed though. This paper uses highly parallelized simulators to search for simulation parameters that approximate the real-world well. What's interesting is it's not even necessarily finding the correct real-world values for e.g. friction coefficients and such, but it finds values for parameters that, taken together, produces simulations that match the real-world experience. The better the simulation gets at approximating what's going on in the real world, the more viable it is to use the simulator for task planning. I think with the advent of GPU-optimized physics simulators we will see simulators be a more useful tool even for real-world agents, as you can try many different things in parallel to get a sense of what is the likely outcome of a planned action sequence.

",20955,,,,,11/28/2020 4:10,,,,0,,,,CC BY-SA 4.0 24883,1,,,11/28/2020 4:37,,2,937,"

I know this might be specific to different problems, but does anyone know if there is any rule of thumb or references on what constitutes a large state space?

I know that, according to multiple papers, tabular Q-learning is not suitable for problems with large state spaces, but I can't find any indication of what would constitute a large state space.

I've been working on a problem where my state space is about 30x30 and updating my Q-learning with the tabular method runs and works well. Would 100x100 start to become too big or 400x400?

",42586,,2444,,11/28/2020 11:56,11/28/2020 12:51,What constitutes a large space state (in Q-learning)?,,1,1,,,,CC BY-SA 4.0 24884,1,24894,,11/28/2020 8:27,,1,82,"

A dataset is a collection of data points. It is known that the data points in the dataset can repeat. And the repetition does matter for building AI models.

So, why does the word dataset contain the word set? Does it have any relation with the mathematical set, where order and repetition do not matter?

",18758,,18758,,6/10/2021 6:15,6/10/2021 6:15,Is it okay to think of any dataset in artificial intelligence as a mathematical set?,,1,1,,,,CC BY-SA 4.0 24886,2,,24883,11/28/2020 10:55,,5,,"

I know this might be specific to different problems but does anyone know if there is any rule of thumb or references on what constitutes a large state space?

Not really, it is all relative. There are two main ways in which the scale of a value table might be too much:

  • Memory required to represent the table. This is relatively simple to calculate for any size.

  • Time required to populate the table with accurate estimates. This depends on how you are collecting that data, and how much variance there is given the same state, action choices.

If you are using action values you need to allow for the fact that the table size is not just $|\mathcal{S}|$, but $|\mathcal{S} \times \mathcal{A}|$.

If you are running fast, local simulations of the environment, then table sizes of a million or even a hundred million are not unreasonable. Software language or library choice for the simulation and agent can have a significant impact at these larger scales.

If any of the state description is a continuous variable, then the table size would become infinite in theory, so no finite table that could actually be realised on a computer could fully capture it. You must use some form of approximation to get practical results. However, even then you can still use tabular approaches when approximation involves discretising the state variable directly - there are a few different methods used in machine learning to do that - e.g. tile coding. You can also use tile coding in large discrete spaces.

I've been working on a problem where my state space is about 30x30 and updating my Q-learning with the tabular method runs and works well. Would 100x100 start to become too big or 400x400?

It depends on the action space too, but 400x400 gives a state space of 160,000. Assuming < 10 actions, this is still well within the realm of tabular methods. You probably have less than 1,000,000 parameters to handle for a tabular method, which compares well to a moderately-sized neural network, with the advantage of better stability and the possibility to create a fully optimal agent. If simulating the environment is fast, something at that scale may only take a few minutes to a few hours to fully optimise in tabular form.
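As a rough back-of-the-envelope check (assuming, say, 8 actions and 64-bit floats; these numbers are just an illustration):

import numpy as np

n_states = 400 * 400                      # 160,000 states
n_actions = 8
Q = np.zeros((n_states, n_actions), dtype=np.float64)
print(Q.nbytes / 1e6, "MB")               # roughly 10 MB -- easily manageable as a table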

As with most machine learning though, if you care about some optimisation, such as memory size of the agent, speed at which it solves the problem, or some other metric, then you will need to try experiments with different approaches. Then you will have some knowledge about best approach for your problem, given your definition of "best agent". The experience you gain with that might extend and apply to similar problems in future.

",1847,,1847,,11/28/2020 12:51,11/28/2020 12:51,,,,0,,,,CC BY-SA 4.0 24887,1,25016,,11/28/2020 12:34,,2,146,"

In Q-learning, is it mandatory to know all possible states that can the agent may end up in?

I have a network with 4 source nodes, 3 sink nodes, and 4 main links. The initial state is the network status where the sink nodes have their resources at their maximum. In a random manner, I generate service requests from the source nodes to the sink nodes. These service requests are generated at random timesteps, which means that, from state to state, the network status may stay the same.

When a service request is launched, the resources from the sink nodes change, and the network status changes.

The aim of the agent is to balance the network by associating each service request to a sink node along with the path.

I know that in an MDP you are supposed to have a finite set of states. My question is: is that finite set of states supposed to contain all possible states that can happen, or is it just a number of states that you consider enough to optimize the Q-table?

",42591,,2444,,11/30/2020 1:33,12/6/2020 10:17,Do I need to know in advance all possible number of states in Q-Learning?,,1,0,,,,CC BY-SA 4.0 24889,1,,,11/28/2020 13:38,,2,31,"

I have an input-output system, which is fully determined by 256 parameters, of which I know a significant number are of little importance to the input-output pattern.

The data I have consists of some (64k in total) input-parameter-output matches.

My goal is to compress these 256 parameters to a smaller scale (like 32) using an encoder of some kind while being able to preserve the response pattern.

But I can't seem to find a proper network for this particular problem, because I'm not trying to fit these parameters (they all have a mean of one and a variance of 1/4), but rather their influence on the output, so traditional data-specific operations will not work in this case.

",42507,,11539,,11/29/2020 19:16,11/29/2020 19:16,Compressing Parameters of an Response System,,0,0,,,,CC BY-SA 4.0 24890,2,,24368,11/28/2020 14:22,,1,,"

Yeah, it seems that you're right, and based on the description of the paper it would indeed behave uniformly at random at the very first iteration (or maybe just always deterministically pick whichever action happens to be the first one in the list). I can't find anything that would suggest otherwise in the paper, and the pseudocode they put on arXiv also suggests the same behaviour.

I can't really think of any reason why this would be better than playing according to $P(s, a)$ in the very first iteration though. Maybe it creates a tiny little bit more variety in the tree search results you get? I suppose all the stochasticity that is normally inherent to a vanilla MCTS is no longer present in MuZero (or AlphaZero), because they always run for exactly the same number of iterations, and don't have any sort of random rollouts anymore; this would at least introduce a tiny bit of variation.

On the other hand, I also can't imagine this choice really making a meaningful difference in practice, except maybe if you go down to extremely low iteration counts (like if you run fewer iterations than there are legal actions in the root node).

",1641,,,,,11/28/2020 14:22,,,,2,,,,CC BY-SA 4.0 24891,1,,,11/28/2020 17:07,,3,226,"

Convolutional Neural Networks (CNNs) operate over strict grid-like structures ($M \times N \times C$ images), whereas Graph Neural Networks (GNNs) can operate over all-flexible graphs, with an undefined number of neighbors and edges.

On the face of it, GNNs appear to be neural architectures that can subsume CNNs. Are GNNs really generalized architectures that can apply arbitrary functions over arbitrary graph structures?

An obvious follow-up: how can we derive a CNN out of a GNN?

Since non-spectral GNNs are based on message passing that employs permutation-invariant functions, is it possible to derive a CNN from a base GNN architecture?

",42600,,2444,,11/28/2020 18:43,5/28/2021 11:00,How can we derive a Convolution Neural Network from a more generic Graph Neural Network?,,1,7,,,,CC BY-SA 4.0 24894,2,,24884,11/28/2020 18:12,,3,,"

It's true that your original dataset can contain duplicates, so it should not be called a set, in order to be consistent with the mathematical definition of a set. There are mathematical objects known as multi-sets that can contain duplicates, but the order of the elements is still not relevant. There are also tuples and sequences, where the order of the elements matters.

If you want to get rid of the duplicate elements in your dataset, you could perform a pre-processing step where you remove them. Even if you do that, it is often the case that, if you are learning with mini-batches (i.e. using mini-batch stochastic gradient descent), these mini-batches could contain the same elements, because you may sample the same element in different batches or even in the same batch (this is known as sampling with replacement). Of course, this depends on how you sample your training dataset to build the batches (or mini-batches). So, if you do not want duplicates even in the mini-batches, you need to perform sampling without replacement.
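To illustrate the difference between the two sampling schemes, here is a tiny numpy example (the dataset is just a toy array of indices):

import numpy as np

dataset = np.arange(10)                                           # toy "dataset" of 10 examples
batch_with = np.random.choice(dataset, size=5, replace=True)      # duplicates possible within the batch
batch_without = np.random.choice(dataset, size=5, replace=False)  # no duplicates within the batch
print(batch_with, batch_without)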

Moreover, there are datasets that contain elements whose order in the dataset can be relevant for the predictions, such as datasets of time-series data, while, in mathematical sets and multi-sets, the order of the elements does not matter.

So, yes, it is often called a dataset (or data set), but it is not necessarily a set in a mathematical sense. In general, it should just be interpreted as a collection of data. In scenarios where the order of the elements or the existence of duplicates in the dataset (or any other information or property of your collection of data) is relevant, you should probably emphasize/note it.

",2444,,2444,,11/28/2020 20:10,11/28/2020 20:10,,,,0,,,,CC BY-SA 4.0 24895,1,,,11/28/2020 19:04,,2,86,"

Slightly generalizing the definition in Jaeger 2001, let's define a reservoir system to be any system of the form

$$h_{t}=f(h_{t-1}, x_t)$$ $$y_t=g(Wh_t)$$

where $f$ and $g$ are fixed and $W$ is a learnable weight matrix. The idea is that we feed a sequence of inputs $x_t$ into the system, which has some fixed initial state $h_0$, and thereby generate the sequence of outputs $y_t$. Since $f$ is fixed (for example, a randomly generated RNN), we can then attempt to learn $W$ in some way in order to get the system to have the behavior that we want.
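Concretely, the kind of system I have in mind looks like the following toy numpy sketch, with $f(h, x) = \tanh(W_{res} h + W_{in} x)$ for a fixed random reservoir and only the readout $W$ being trained (e.g. by ridge regression):

import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 100, 1
W_res = rng.normal(scale=0.1, size=(n_res, n_res))   # fixed, random recurrent weights
W_in = rng.normal(size=(n_res, n_in))                # fixed, random input weights

def run_reservoir(xs):
    h = np.zeros(n_res)                              # fixed initial state h_0
    states = []
    for x in xs:
        h = np.tanh(W_res @ h + W_in @ np.atleast_1d(x))   # h_t = f(h_{t-1}, x_t)
        states.append(h)
    return np.array(states)

# readout y_t = W h_t, with W fitted by ridge regression on the collected states:
# H = run_reservoir(inputs)
# W = np.linalg.solve(H.T @ H + 1e-3 * np.eye(n_res), H.T @ targets)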

Now we add the echo state condition: the system has the echo state condition iff for any left-infinite sequence $...x_{-3}, x_{-2}, x_{-1}, x_0$, there is only one sequence of states $h_t$ consistent with this input sequence.

Seen from this perspective, any training procedure that could be applied to an echo state system could be applied to a generic reservoir system. So what do we get out of the echo state condition? Is there some reason to think echo state systems will generalize better, or be more quickly trainable? Jaeger does not seem to attempt to argue in this direction, he just describes how to train an ESN, but as I've said, nothing about these training methods seems to require the echo state property.

",1931,,,,,12/21/2022 21:07,What do echo state networks give us over a generic RNN resevoir?,,1,0,,,,CC BY-SA 4.0 24898,1,,,11/29/2020 11:34,,1,23,"

Capsule Networks use an encoder-decoder structure, where the encoder part consists of the capsule layers (PrimaryCaps and DigitCaps) and is also the part of the capsule network which performs the actual classification. On the other hand, the decoder attempts to reconstruct the original image from the output of the correct DigitCap. The decoder in the capsule network is used as the regularizer of the network, as it helps the network learn better features.

I can see how the decoder is helpful for datasets such as MNIST, where all image classes have clear differences and the input size of the image is quite small. However, if the input has large dimensions and the differences between image classes are quite small, I see the decoder network as overkill, as it will find it hard to reconstruct images for different classes.

In my case, my dataset consists of 3D MRI images of patients which have Alzheimer's Disease and those who do not. I am down-sampling the images and producing 8 3D patches which will be used as input to the network. The patches still have high dimensions considering that these are 3D, and there are not many clear differences between patches of the two image classes.

My questions here are:

  1. How significant is the decoder part of the capsule network? CNNs that perform image classification, usually do not have a decoder part. Why does the capsule network rely on the decoder to learn better features?

  2. Are there any alternatives to the decoder within the capsule network, acting as a regularizer? Can the decoder be ignored completely?

",42613,,2444,,12/4/2020 13:51,12/4/2020 13:51,How significant is the decoder part of the capsule network?,,0,0,,,,CC BY-SA 4.0 24901,1,,,11/29/2020 14:06,,0,118,"

I am implementing an AI for a mobile checkers game, and have used alpha-beta pruning with Minimax.

Now I have the problem of the horizon effect, and need to do quiescence search to avoid it. Any advice on what makes a position volatile in a checkers game?

I want to consider as volatile the positions where the player can take a piece, and also where any of the player's pieces can be taken by the opponent, and continue searching for another depth in those cases. Anything else?

",42610,,,,,11/29/2020 15:02,What is a good way of identifying volatile positions for a checkers game?,,1,0,,,,CC BY-SA 4.0 24902,2,,24901,11/29/2020 15:02,,1,,"

When combatting the horizon effect, you want to consider any short-term actions that will greatly affect your position evaluation. Thus, in addition to captures, you will also want to include:

  1. When the opponent can make a king next move
  2. When the current player can make a king next move
  3. When the only legal moves left will lead to capture the turn after for the opponent
  4. When the only legal moves left will lead to capture the turn after for the current player

3 and 4 are commonly called Zugzwang and play a very prominent role in high-level checkers, but may be a bit more difficult to implement. However, they will contribute perhaps most of all when combatting the horizon effect.
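For reference, here is a rough sketch of how such checks might slot into a quiescence extension at the leaves of the alpha-beta search. Everything here (evaluate, is_volatile, volatile_moves, apply) is a placeholder you would implement yourself, and details such as forced captures in checkers would need adjusting:

def quiescence(position, alpha, beta, maximizing):
    stand_pat = evaluate(position)            # static evaluation of the position
    if not is_volatile(position):             # quiet position: stop extending and return the evaluation
        return stand_pat
    value = stand_pat
    if maximizing:
        for move in volatile_moves(position): # captures, imminent kings, zugzwang-type moves, ...
            value = max(value, quiescence(apply(position, move), alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break
    else:
        for move in volatile_moves(position):
            value = min(value, quiescence(apply(position, move), alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break
    return value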

",12201,,,,,11/29/2020 15:02,,,,0,,,,CC BY-SA 4.0 24903,2,,24741,11/29/2020 15:25,,1,,"

I think you are misreading the relevant passage here.

Since you do not specify exact excerpt(s), I take that by "implicit assumption" you refer to the equation (2) (application of a ReLU) and the corresponding text explanation (bold emphasis mine):

We apply a ReLU to the linear combination of maps because we are only interested in the features that have a positive influence on the class of interest, i.e. pixels whose intensity should be increased in order to increase $y^c$. Negative pixels are likely to belong to other categories in the image. As expected, without this ReLU, localization maps sometimes highlight more than just the desired class and perform worse at localization.

The first thing to notice here is that this choice is not at all about activations close to zero, as you seem to believe, but about negative ones; and since negative activations are indeed likely to belong to other categories/classes than the one being "explained" at a given trial, it is very natural to exclude them using a ReLU.
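To make the role of the ReLU explicit, the map in equation (2) is essentially the following (a schematic numpy version, where A holds the $K$ feature maps of the last convolutional layer and alpha the gradient-derived weights for the class of interest; the shapes are only indicative):

import numpy as np

def grad_cam_map(A, alpha):
    # A: feature maps of shape (K, H, W); alpha: class-specific weights of shape (K,)
    linear_combination = np.tensordot(alpha, A, axes=1)   # weighted sum over the K maps
    return np.maximum(linear_combination, 0)              # ReLU: keep only positive evidence

A = np.random.randn(512, 14, 14)
alpha = np.random.randn(512)
L = grad_cam_map(A, alpha)   # coarse localization map, non-negative everywhere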

Grad-CAM maps are essentially localization ones; this is apparent already from the paper abstract (emphasis mine):

Our approach – Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept (say ‘dog’ in a classification network or a sequence of words in captioning network) flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept.

and they are even occasionally referred to as "Grad-CAM localizations" (e.g. in the caption of Fig. 14); taking a standard example figure from the paper, e.g. this part of Fig. 1:

it is hard to see how including the negative values of the map (i.e. removing the thresholding imposed by the ReLU) would not lead to maps that include irrelevant parts of the image, hence resulting in a worse localization.


A general remark: while your claim that

After all, neurons have biases, and a bias can arbitrarily shift the reference point, and hence what 0 means

is correct as long as we treat the network as an arbitrary mathematical model, we can no longer treat a trained network as such. For a trained network (which Grad-CAM is all about), the exact values of both biases & weights matter, and we cannot transform them arbitrarily.


UPDATE (after comments):

Are you pointing out that it's called "localization" and therefore must be so?

It is called "localization" because it is localization, literally ("look, here is the "dog" in the picture, not there").

I could make a similar challenge "why does positive mean X and why does negative mean Y, and why will this always be true in any trained network?"

It is not at all like that; positive means X and negative means not-X in the presence of class X (i.e. a specific X is present), in the specific network, and all this in a localization context; notice that Grad-CAM for "dog" is different from the one for "cat" in the picture above.

Why is it that a trained network tends to make 0 mean "insignficant" in the deeper layers? [...] why is it that a network should invariably make the positive activations be the ones that support the prediction rather than the negative ones? Couldn't a network just learn it the other way around but use a negative sign on the weights of the final dense layer (so negative flips to positive and thus supports the highest scoring class)?

Again, beware of such symmetry/invariability arguments when such symmetries/invariabilities are broken; and they are indeed broken here for a very simple reason (albeit hidden in the context), i.e. the specific one-hot encoding of the labels: we have encoded "cat" and "dog" as (say) [0, 1] and [1, 0] respectively, so, since we are interested in these 1s (which indicate class presence), it makes sense to look for the positive activations of the (late) convolutional layers. This breaks the positive/negative symmetry. Had we chosen to encode them as [0, -1] and [-1, 0] respectively ("minus-one-hot encoding"), then yes, your argument would hold, and we would be interested in the negative activations. But since we take the one-hot encoding as given, the problem is no longer symmetric/invariant around zero - by using the specific label encoding, we have actually chosen a side (and thus broken the symmetry)...

",11539,,11539,,11/29/2020 17:08,11/29/2020 17:08,,,,5,,,,CC BY-SA 4.0 24905,2,,9033,11/29/2020 16:13,,1,,"

Q-learning is an off-policy learning algorithm. We are following the behaviour policy, $b$, which is $\epsilon$-greedy. This behaviour policy need not be an optimal policy; rather, it is a more exploratory policy. But we are learning the target policy, $\pi$, which is the argmax of the state-action value $Q(s,a)$. This target policy is, by definition, the optimal policy.
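A small sketch of the tabular Q-learning update that makes this explicit: the action actually taken comes from the $\epsilon$-greedy behaviour policy, while the bootstrap target uses the max over actions (the greedy target policy):

import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # the target uses max over next actions, regardless of how the action a was actually chosen
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])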

From the $\epsilon$-greedy policy improvement theorem, we can show that for any $\epsilon$-greedy policy (I think you are referring to this as a non-optimal policy) we are still making progress towards the optimal policy, and when $\pi^{'} = \pi$, that is our optimal policy (Rich Sutton's book, Chapter 5). Here $\pi^{'}$ is the new policy and $\pi$ is the previous policy.

Think of this diagram, where we are selecting action based on $\epsilon$-greedy policy but still making progress towards the optimal policy $\pi_*$.

",28048,,2444,,11/29/2020 17:51,11/29/2020 17:51,,,,5,,,,CC BY-SA 4.0 24910,1,24917,,11/29/2020 22:13,,0,189,"

I'm using tf.Keras to build a deep, fully-connected autoencoder. My input dataset is a dataframe with shape (19947,), and the purpose of the autoencoder is to predict normalized gene expression values. They are continuous values in the range [0, ~620000].

I tried different architectures and I'm using relu activation for all layers. To optimize I'm using adam with mae loss.

The problem I have is that the network trains successfully (although the training loss is still terrible), but when predicting I notice that, although the predictions do make sense for some nodes, there are always a certain number of nodes that only output 0. I've tried changing the number of nodes of my bottleneck layer (output) and it always happens, even when I decrease the number of output nodes.

Any ideas on what I'm doing wrong?

tf.Keras code:

input_layer = keras.Input(shape=(19947,))
simple_encoder = keras.models.Sequential([
    input_layer,
    keras.layers.Dense(512, activation='relu'),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(16, activation='relu')
])
simple_decoder = keras.models.Sequential([
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(512, activation='relu'),
    keras.layers.Dense(19947, activation='relu')
])
simple_ae = keras.models.Sequential([simple_encoder, simple_decoder])
simple_ae.compile(optimizer='adam', loss='mae')
simple_ae.fit(X_train, X_train,
              epochs=1000,
              validation_data=(X_valid, X_valid),
              callbacks=[early_stopping])

Output of encoder.predict with 16 nodes on the bottleneck layer. 7 nodes predict only 0's and 8 nodes predict "correctly"

",42631,,,,,11/30/2020 7:25,Autoencoder: predictions missing for nodes in the bottleneck layer,,1,2,,,,CC BY-SA 4.0 24911,1,24918,,11/29/2020 23:31,,0,275,"

Q-learning uses the maximizing value at each step, which implies that there is a probability distribution and it happens to choose the one with the highest probability. There is no direct mapping from a particular state to ONLY a particular action, but rather to a bunch of actions with varying probabilities. I don't understand.

",42398,,2444,,11/30/2020 20:40,11/30/2020 20:41,"Why is the policy implied by Q-learning deterministic, when it always chooses the action with highest probability?",,1,2,,,,CC BY-SA 4.0 24914,1,,,11/30/2020 0:11,,3,178,"

We have an image classifier that was built using a CNN with Faster R-CNN and YOLOv5.

It is designed to run on 3D objects. All of those objects have a similar "feature" structure, but the actual features of each object class differ somewhat from one another. Therefore, we strive to detect the classes based on those differences in features.

In theory, there are thousands of different classes, but for now we have trained the model to detect 4 classes, by training it on data sets that include many images from different angles for each of those 4 classes (1,000 images each).

The main problem we face is that whenever the model runs on an "unknown" object, it may still classify it as one of our 4 classes, and sometimes it will do it with a high probability score (0.95), which hinders the whole credibility of our model results.

We think it might be because we are using softmax, which seems to force the model to assign an unknown object to one of the 4 classes.

We want to know what would be the best way to overcome this issue.

We tried adding a new, fifth "trash" class, with 1,000 images of "other" objects that do not belong to our four classes, but it significantly reduced the confidence level for our test images, so we are not sure if this is progress at all.

",41590,,,,,11/30/2020 0:11,"Image classification - Need method to classify ""unknown"" objects as ""trash"" (3D objects)",,0,0,,,,CC BY-SA 4.0 24915,2,,24867,11/30/2020 0:21,,1,,"

You can do this similarly to the BIDE approach. It can be done like this:

class TreeNode:
    def __init__(self, element, depth, count=0, parent=None):
        self.count= count
        self.element= element
        self.depth= depth
        self.subnodes= dict()
        self.parent= parent
        
    def __repr__(self):
        return f'{self.__class__.__name__}({self.element}, {self.depth}, {self.count})'
        
    def get_subnode(self, element, create=True):
        if create is True:
            return self.subnodes.setdefault(element, TreeNode(
                element, 
                self.depth+1, 
                count=0, 
                parent=self
            )
        )
        else:
            return self.subnodes.get(element)

    def get_subnode_increasing_count(self, element):
        subnode= self.get_subnode(element)
        subnode.count+= 1
        return subnode
    
    def harvest(self, N, m, M, prefix=[], append_result_to=None):
        # get all sequences which occurs more than N times
        # and have a length (depth) of m < depth < M
        prefix= prefix.copy()
        prefix_len= len(prefix)
        if append_result_to is None:
            result= list()
        else:
            result= append_result_to
        if self.element is not None:
            prefix.append(self.element)
            prefix_len+= 1
        if N >= self.count or prefix_len >= M:
            return result
        if N < self.count and m < prefix_len and prefix_len < M:
            # the subsequence matches the constraints
            result.append(prefix)
        for subnode in self.subnodes.values():
            subnode.harvest(
                N, 
                m, 
                M, 
                prefix=prefix, 
                append_result_to=result
            )
        return result
        
    def get_prefix(self):
        if self.parent is None:
            prefix= []
        else:
            prefix= self.parent.get_prefix()
        if self.element is not None:
            prefix.append(self.element)
        return prefix
        
    def print_tree_counts(self, leaves_only=False, min_count=0):
        if leaves_only is False or not self.subnodes:
            if self.count >= min_count:
                print(self.get_prefix(), self.count)
        for subnode in self.subnodes.values():
            subnode.print_tree_counts(leaves_only=leaves_only, min_count=min_count)

sequential approach

def find_patterns(S, N, m, M):
    root= TreeNode(None, 0, 0)
    active_nodes= []
    for el in S:
        root.count+=1
        # append the root node
        active_nodes.append(root)
        # now replace all nodes in active nodes by the node
        # that is reached one level deeper following el
        active_nodes= [node.get_subnode_increasing_count(el) for node in active_nodes if node.depth < M]

    # now harvest the tree after applying the restrictions
    return root, root.harvest(N, m, M)

S='AAFFAABDEFEDFFAAFCCABDDCCCABDDCACBEABCFFAAEABCADEFFAABCDAEABCABDD'
root, patterns= find_patterns(S, 3, 3, 6)
patterns

The result is:

[['F', 'F', 'A', 'A']]

Your second pattern ([C,A,B,D,D]) occurs exactly 3 times and so doesn't fulfill the requirement of > 3 occurrences.

modifications for parallel processing

To make it processable in parallel, you can make a slight modification. Just create another method in TreeNode that allows merging nodes, like this:

def merge_nodes(self, other_nodes):
    # merge other_nodes into this node
    # including all subnodes
    if len(other_nodes) > 0:
        elements= set()
        for other_node in other_nodes:
            self.count+= other_node.count
            elements.update(other_node.subnodes.keys())
        # elements now contains the set of the next elements 
        # with which the sequence continues across the 
        # other nodes
        for element in elements:
            # get the node of the resulting tree that represents
            # the sequnce continuing with element, if there is
            # no such subnode, create one, since there is at least
            # one other node that counted sequence seq + element
            my_subnode= self.get_subnode(element, create=True)
            other_subnodes= list()
            for other_node in other_nodes:
                # now get the subnode for each other node, that
                # represents the same sequence (seq + element)
                other_subnode= other_node.get_subnode(element, create=False)
                if other_subnode is not None:
                    other_subnodes.append(other_subnode)
            # merge the subnode the same way
            my_subnode.merge_nodes(other_subnodes)

Now you can call find_patterns on separate fragments of the sequence (producing separate subtrees), but you need to take into account that you cannot simply split the input sequence. You need to define an overlapping sequence part, so that patterns that begin at the end of one sequence fragment can be completed, but this overlapping part needs to be counted differently (otherwise you get double counts, which lead to wrong results). So you have to make sure that only patterns which began in the sequence fragment are continued with the overlap part, and that no new patterns are started in the overlap part, because you count them elsewhere:

def find_patterns(S, N, m, M, overlap=[]):
    root= TreeNode(None, 0, count=0, parent=None)
    active_nodes= []
    def process_element(active_nodes, element, M):
        return [
            node.get_subnode_increasing_count(element) 
                for node in active_nodes 
                    if node.depth < M
        ]
    for el in S:
        root.count+=1
        # append the root node
        active_nodes.append(root)
        # now replace all nodes in active nodes by the node
        # that is reached one level deeper following el
        active_nodes= process_element(active_nodes, el, M)
    # complete the already started sequences with the
    # overlapping sequence (the sequence that may be
    # processed in another process)
    for el in overlap:
        active_nodes= process_element(active_nodes, el, M)
    # now harvest the tree after applying the restrictions
    return root, root.harvest(N, m, M)

Now you can do:

split_point= len(S) // 2
S1= S[split_point:]
S2= S[:split_point]
S1_overlap= S2[:10] 
# in fact you could have just used :5 above, 
# but that doesn't affect the result
r1, _= find_patterns(S1, 3, 3, 6, overlap=S1_overlap)
# Note: you could safely start the command in the next
# line in parallel to the previous call of find_patterns
# and of course, you could split your sequence in as
# many fragments, as you like, as long as you maintain
# the overlap correctly and call merge_nodes afterwards
r2, _= find_patterns(S2, 3, 3, 6) 
r1.merge_nodes([r2])
patterns= r1.harvest(3, 3, 6)

This yields the same result as above. To make it a bit clearer what I mean by the overlap part:

S1 is 'CBEABCFFAAEABCADEFFAABCDAEABCABDD'
S2 is 'AAFFAABDEFEDFFAAFCCABDDCCCABDDCA'
S1_overlap is 'AAFFAABDEF'

With find_patterns(S1, 3, 3, 6) you would only search for repeated patterns in S1, so it would not consider patterns which begin with the part BDD (starting at the end of S1) and are continued within S2. With find_patterns(S1, 3, 3, 6, overlap=S1_overlap) I consider these patterns as well.

",26402,,26402,,11/30/2020 12:37,11/30/2020 12:37,,,,3,,,,CC BY-SA 4.0 24916,2,,24788,11/30/2020 3:53,,1,,"

The state of the art in text generation is the GPT model. GPT-3, which was just released in summer of 2020, has been used to generate many very impressive articles, and is widely considered the best text generation model. This article and this one should give you an example of how powerful it is at text generation.

GPT is a transformer-based architecture, somewhat like BERT. The main difference is that it only takes into account left context, which is why it is so well suited for text generation.

GPT-3 is still very new and is not available for free. However, GPT-2, the previous release of the model, is available for free. While obviously not as advanced as GPT-3, it is still quite impressive in its own right, and for someone trying to generate text, it is the clear choice.

Here is a link to a tutorial explaining the basics to get you started with GPT-2.
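
If it helps, here is a minimal sketch of generating text with the freely available GPT-2 checkpoint via the Hugging Face transformers library (this assumes you have transformers installed; the prompt string is just an example, not from the tutorial above):

# Minimal sketch: text generation with the small pre-trained GPT-2 model,
# using the Hugging Face "transformers" library (pip install transformers).
from transformers import pipeline

# Load a text-generation pipeline backed by the GPT-2 checkpoint.
generator = pipeline("text-generation", model="gpt2")

# Generate a continuation of an example prompt.
result = generator("Artificial intelligence will", max_length=50, num_return_sequences=1)
print(result[0]["generated_text"])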

If you are interested in diving into the relevant research papers:

Here are the OpenAI papers on GPT-1 through GPT-3:

  1. Improving Language Understanding by Generative Pre-Training
  2. Language Models are Unsupervised Multitask Learners
  3. Language Models are Few-Shot Learners

Additionally, if you have never seen transformers before, take a look at:

  1. Attention Is All You Need
",12201,,,,,11/30/2020 3:53,,,,0,,,,CC BY-SA 4.0 24917,2,,24910,11/30/2020 7:25,,0,,"

My closest guess is that it is because you are using the ReLU activation function, whose outputs are never negative. Because of your data's nature, the autoencoder needs to output negative values to reconstruct your data, but the best a ReLU output can achieve is zero.

In the context of artificial neural networks, the rectifier is an activation function defined as the positive part of its argument: $$f(x) = x^+ = \max(0, x)$$ https://en.wikipedia.org/wiki/Rectifier_(neural_networks)
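
As a tiny illustrative sketch (not your actual model), you can see that a ReLU output layer can never reproduce negative target values:

# Minimal illustration: a ReLU output is never negative, so negative targets
# cannot be reconstructed exactly, even in the best case.
import numpy as np

def relu(x):
    return np.maximum(0, x)

targets = np.array([-2.0, -0.5, 0.3, 1.2])  # data containing negative values
outputs = relu(targets)                     # best case: the pre-activation equals the target
print(outputs)                              # [0.  0.  0.3 1.2] -> negative values are clipped to 0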

",35757,,,,,11/30/2020 7:25,,,,8,,,,CC BY-SA 4.0 24918,2,,24911,11/30/2020 7:59,,2,,"

Q-learning uses the maximizing value at each step,

Mostly true. The target policy whose action values Q-learning learns is the one that always takes the action with the maximum estimated value. While training, Q-learning selects its actions with some randomness (typically with a high probability of taking the action with the maximum value), whilst it makes updates to value estimates assuming it will always take the maximising action next.

which implies that there is a probability distribution and it happens to choose the one with the highest probability.

Not true. This is not implied at all. The action values that Q-learning estimates are not probabilities, but expected sums of future reward. The probability distribution for exploring actions is added outside of the Q table or Q function estimate. There is no implied probability distribution of actions in the target policy. When implementing the target policy, you may decide to break ties randomly, but that is not an important detail.

It is worth noting that even if you were working with a table of action probabilities, an agent that always chose the action with the maximum probability would be deterministic. A list of numbers produced by a table or a non-stochastic function (which is how a Q table or Q function is implemented, and also how policy functions are implemented for methods that do work with probabilities) cannot be random in itself, even if it represents probabilities. Instead, it must be interpreted by a process that decides how to use those numbers. A process that includes a random number generator can treat the numbers as a probability distribution and sample from it.

In Q-learning, the process that updates the Q table estimates does not include a random number generator, so as a consequence it is deterministic. However, the choice of which actions to explore is often random, and it is sometimes the case that the environment includes randomness in state transitions or reward values. So Q-learning taken as a whole is a random process.
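
As a minimal sketch (with a hypothetical Q table), the randomness lives in the behaviour policy that wraps the deterministic greedy choice:

# Minimal sketch with a hypothetical Q table: the table itself is just numbers,
# the randomness is added by the epsilon-greedy behaviour policy around it.
import numpy as np

rng = np.random.default_rng(0)
Q = {0: np.array([0.1, 0.5, 0.2])}  # hypothetical action values for state 0

def greedy(state):
    # deterministic target policy: always the maximising action
    return int(np.argmax(Q[state]))

def epsilon_greedy(state, epsilon=0.1):
    # stochastic behaviour policy used while training
    if rng.random() < epsilon:
        return int(rng.integers(len(Q[state])))
    return greedy(state)

print(greedy(0))          # always 1
print(epsilon_greedy(0))  # usually 1, occasionally a random action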

",1847,,2444,,11/30/2020 20:41,11/30/2020 20:41,,,,2,,,,CC BY-SA 4.0 24921,1,24969,,11/30/2020 12:35,,6,964,"

The Deep Learning book by Goodfellow et al. states

Convolutional networks stand out as an example of neuroscientific principles influencing deep learning.

Are convolutional neural networks (CNNs) really inspired by the human brain?

If so, how? In particular, in which structures of the brain do CNN-like neuron groupings occur?

",37533,,2444,,12/3/2020 13:10,12/4/2020 13:17,Are convolutional neural networks inspired by the human brain?,,2,0,,,,CC BY-SA 4.0 24923,1,,,11/30/2020 14:29,,1,98,"

I am trying to understand the last two lines of this math notation (from this paper).

How did the Var and double summation of the Cov come to the equation?

I understood the first two lines as an expansion of the form $(a-b)^2 = a^2 -2ab +b^2$.

",27601,,2444,,1/25/2022 10:58,2/24/2022 11:06,How did the variance and double summation of the covariance come to the L2 minimization equation?,,1,0,,,,CC BY-SA 4.0 24924,2,,24923,11/30/2020 15:05,,1,,"

In the second line, the first two terms involve $\mathbb{E}_{\hat{y}}$, which means the variable of the expectation is $\hat{y}_i$, and the $y_i$s can be taken out of the $\mathbb{E}_{\hat{y}}$s. Now we can use $\mathbb{E}\{\hat{y}_i\} = y_i$ and rewrite the terms of the second line like the following:

$$ \mathbb{E}_{\hat{y}}(\sum_i y_i)^2 = (\sum_i y_i)^2 \\ \mathbb{E}_{\hat{y}}\left[(\sum_i y_i)(\sum_i \hat{y}_i)\right] = (\sum_i y_i) \mathbb{E}_{\hat{y}}(\sum_i \hat{y}_i) = (\sum_i y_i)^2 $$

These rewrites are based on the linearity of the expectation and $\mathbb{E}\{\hat{y}_i\} = y_i$. Hence, since $\sum_i y_i = \mathbb{E}_{\hat{y}}(\sum_i \hat{y}_i)$, we can rewrite the second line as the following:

$$ \frac{1}{N^2}\left[\mathbb{E}_{\hat{y}}(\sum_i \hat{y}_i)^2 - \left(\mathbb{E}_{\hat{y}}(\sum_i \hat{y}_i)\right)^2\right] $$

And the final step is using this formula $Var(X) = E(X^2) - (E(X))^2$ and take $X = \sum_i \hat{y}_i$:

$$ \frac{1}{N^2}\left[\mathbb{E}_{\hat{y}}(\sum_i \hat{y}_i)^2 - \left(\mathbb{E}_{\hat{y}}(\sum_i \hat{y}_i)\right)^2\right] = \frac{1}{N^2} Var(\sum_i \hat{y}_i) $$

From variance to covariance, you can use this formula: $$Var(\sum_{i=1}^nX_i) = \sum_{i=1}^n\sum_{j=1}^n cov(X_i, X_j)$$

",4446,,4446,,11/30/2020 15:12,11/30/2020 15:12,,,,0,,,,CC BY-SA 4.0 24925,1,24960,,11/30/2020 16:30,,1,71,"

I was going through the TRPO paper, and there was a line under Appendix D "Approximating Factored Policies with Neural Networks" in the last paragraph which I am unable to understand

The action consists of a tuple $(a_1, a_2..... , a_K)$ of integers $a_k\in\{1, 2,......,N_k\} $ and each of these components is assumed to have a categorical distribution.

I can't seem to get how each component has a categorical distribution. I think it should be the tuple that has a categorical distribution. I think I am getting something wrong.

",42644,,2444,,12/1/2020 22:24,12/2/2020 22:24,Why does each component of the tuple that represents an action have a categorical distribution in the TRPO paper?,,1,0,,,,CC BY-SA 4.0 24927,1,24928,,11/30/2020 23:13,,2,214,"

Why is it useful to define the return as the sum of the rewards from time $t$ onward rather than up to $t$?

The return for an MDP is usually defined as

$$G_t=R_{t+1}+R_{t+2}+ \dots +R_T$$

Why is this defined as the return? Is there anything useful about this?

It seems like it's more useful to define the return as $$G_t=R_0+ \dots+R_t,$$ because your "return", so to speak, is the "profit from investment" so it seems like your return will be your accumulated reward from taking actions up to that point.

",30885,,2444,,12/2/2020 10:14,12/2/2020 10:14,Why is it useful to define the return as the sum of the rewards from time $t$ onward rather than up to $t$?,,1,0,,,,CC BY-SA 4.0 24928,2,,24927,11/30/2020 23:35,,3,,"

It wouldn't make sense to define the return as you propose, from time 0 to $t$. Once we are in a state at time $t$, we don't care what the rewards have been, rather what they will be in the future, thus returns are defined as the total sum of discounted rewards from the current time step onwards. This allows the agent to make decisions about which actions to take based on how valuable taking said action is in the current state at time $t$ -- clearly the rewards previous to this have no effect upon that.
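
One useful consequence of this forward-looking definition (with a discount factor $\gamma$) is that it satisfies a simple recursion, which is exactly what value-based methods exploit:

$$G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \dots = R_{t+1} + \gamma G_{t+1}$$

so the return from a state can be estimated from the immediate reward plus the discounted return from the next state, which is what the Bellman equations use.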

",36821,,36821,,12/1/2020 0:50,12/1/2020 0:50,,,,3,,,,CC BY-SA 4.0 24929,1,24932,,11/30/2020 23:36,,1,88,"

For episodic tasks with an absorbing state, why can't $\gamma=1$ and $T= \infty$?

In Sutton and Barto's book, they say that, for episodic tasks with absorbing states that becomes an infinite sequence, then the return is defined by:

$$G_t=\sum_{k=t+1}^{T}\gamma^{k-t-1}R_k$$

This allows the return to be the same whether the sum is over the first $T$ rewards, where $T$ is the time of termination or over the full infinite sequence, with $T=\infty$ xor $\gamma=1$.

Why can't we have both? I don't see why they can't both be set to those values. It seems like, if you have an absorbing state, the rewards from the terminal state onward will just be 0 and not be affected by $\gamma$ or $T$.

Here's the full section of the book on page 57 in the 2nd edition

I think the reasoning behind this also leads to why for policy evaluation where

$$v_\pi(s)=\sum_a\pi(a|s)\sum_{s',r}p(s',r|s,a)[r+\gamma v_\pi(s')]$$

"Has an existence and uniqueness guarantee only if $\gamma < 1$ or termination is guaranteed under $\pi$"(page 74). This part I'm also a bit confused by, but seems related.

",30885,,2444,,12/1/2020 1:21,12/1/2020 1:42,"For episodic tasks with an absorbing state, why can't we both have $\gamma=1$ and $T= \infty$ in the definition of the return?",,1,0,,,,CC BY-SA 4.0 24932,2,,24929,12/1/2020 1:42,,3,,"

$T = \infty$ and $\gamma = 1$ cannot be both true at the same time because the return defined in equation 3.11 is supposed to be a unified definition of the return for both continuing and episodic tasks. In the case of continuing tasks, $T = \infty$ and $\gamma = 1$ cannot be true at the same time, because the return may not be finite in that case (as I think you already understood).

Moreover, note that, in that specific example of the book, they assume that the agent ends up in an absorbing state, so this specific sum is finite, no matter whether $T$ is finite or $\infty$, given that, once you enter the absorbing state, you will always get a reward of $0$. Of course, if you discount those specific rewards, the sum will still be finite. However, in general, if you had a different MDP where the absorbing state is not reachable (i.e. the episode never ends), then the return could not be finite.

",2444,,,,,12/1/2020 1:42,,,,0,,,,CC BY-SA 4.0 24933,1,,,12/1/2020 2:13,,1,189,"

In a lot of explanations online for Xavier Initialization, I see the following:

With each passing layer, we want the variance to remain the same. This helps us keep the signal from exploding to a high value or vanishing to zero. In other words, we need to initialize the weights in such a way that the variance remains the same for x and y. This initialization process is known as Xavier initialization.

Source https://prateekvjoshi.com/2016/03/29/understanding-xavier-initialization-in-deep-neural-networks/

However, the intuition behind why var(output) should equal var(inputs) is never explained. Does anyone know why intuitively var(output) should equal var(inputs)?

",26159,,2444,,12/1/2020 13:10,12/1/2020 13:10,Why should variance(output) equal variance(input) in Xavier Initialisation?,,0,0,,,,CC BY-SA 4.0 24935,1,,,12/1/2020 9:53,,1,19,"

I'm building a tool that should assist a director to broadcast a racing game. I want this tool to suggest the human director which car to focus on and with which camera (among the available ones). I can access quite a lot of data about the current race so I can extrapolate some parameters(like car positions, how many cars near to each other there are, how close they are, last time the camera was switched etc) to be used in the decision making process. I would like the AI to learn from the human director in order to suggest him according to his "direction style".

My idea is to split the problem into 2 sub-problems: the first is the choice of the car to focus on, and the second is the choice of the camera to use (or cameras, since it is fairly common to switch cameras while following the same car). My plan was to use some sort of Q-learning, rewarding the AI whenever one of the generated suggestions is chosen by the director, but I guess it would be really difficult to define a set of states, and moreover it would probably take ages before it would start to give some useful suggestions.

Are there some other good approaches I could consider? I'm also thinking about using a neural network so maybe the learning process would be faster.

",42648,,,,,12/1/2020 9:53,Looking for a good approach for building an automated director for a racing game spectator mode,,0,0,,,,CC BY-SA 4.0 24937,2,,22734,12/1/2020 11:54,,1,,"

After doing many runs with GPT-2 355M, I have come to the conclusion that anything below 20k tokens yields worse and worse results the fewer tokens you have, regardless of how many steps you train for. For my application, the sweet spot is around 30-35k tokens.

",38955,,,,,12/1/2020 11:54,,,,0,,,,CC BY-SA 4.0 24938,1,,,12/1/2020 11:55,,1,186,"

I am trying to implement an Actor-Critic method that controls an RC car. For this, I have implemented a simulated environment and Actor-Critic TensorFlow.js models.

My intention is to train a model to navigate an environment without colliding with various obstacles.

For this I have the following:

State(continuous):

  • the sensors distance(left, middle, right): [0..1,0..1,0..1]

Action(discrete):

  • 4 possible actions(move forward, move back, turn left, turn right)

Reward(cumulative):

  • moving forward is encouraged
  • being close to an obstacle is penalized
  • colliding with an obstacle is penalized

The structure of the models:

buildActor() {
      const model = tf.sequential();
      model.add(tf.layers.inputLayer({inputShape: [this.stateSize],}));

      model.add(tf.layers.dense({
        units: parseInt(this.config.hiddenUnits),
        activation: 'relu',
        kernelInitializer: 'glorotUniform',
      }));

      model.add(tf.layers.dense({
        units: parseInt(this.config.hiddenUnits/2),
        activation: 'relu',
        kernelInitializer: 'glorotUniform',
      }));

      model.add(tf.layers.dense({
        units: this.actionSize,
        activation: 'softmax',
        kernelInitializer: 'glorotUniform',
      }));

      this.compile(model, this.actorLearningRate);

      return model;
    }
buildCritic() {
      const model = tf.sequential();

      model.add(tf.layers.inputLayer({inputShape: [this.stateSize],}));

      model.add(tf.layers.dense({
        units: parseInt(this.config.hiddenUnits),
        activation: 'relu',
        kernelInitializer: 'glorotUniform',
      }));

      model.add(tf.layers.dense({
        units: parseInt(this.config.hiddenUnits/2),
        activation: 'relu',
        kernelInitializer: 'glorotUniform',
      }));

      model.add(tf.layers.dense({
        units: this.valueSize,
        activation: 'linear',
        kernelInitializer: 'glorotUniform',
      }));

      this.compile(model, this.criticLearningRate);

      return model;
    }

The models are compiled with an adam optimized and huber loss:

compile(model, learningRate) {
      model.compile({
        optimizer: tf.train.adam(learningRate),
        loss: tf.losses.huberLoss,
      });
    }

Training:

trainModel(state, action, reward, nextState) {
      let advantages = new Array(this.actionSize).fill(0);

      let normalizedState = normalizer.normalizeFeatures(state);
      let tfState = tf.tensor2d(normalizedState, [1, state.length]);
      let normalizedNextState = normalizer.normalizeFeatures(nextState);
      let tfNextState = tf.tensor2d(normalizedNextState, [1, nextState.length]);

      let predictedCurrentStateValue = this.critic.predict(tfState).dataSync();
      let predictedNextStateValue = this.critic.predict(tfNextState).dataSync();

      let target = reward + this.discountFactor * predictedNextStateValue;
      let advantage = target - predictedCurrentStateValue;
      advantages[action] = advantage;
      // console.log(normalizedState, normalizedNextState, action, target, advantages);

      this.actor.fit(tfState, tf.tensor([advantages]), {
        epochs: 1,
      }).then(info => {
          this.latestActorLoss = info.history.loss[0];
          this.actorLosses.push(this.latestActorLoss);
        }
      );

      this.critic.fit(tfState, tf.tensor([target]), {
        epochs: 1,
      }).then(info => {
          this.latestCriticLoss = info.history.loss[0];
          this.criticLosses.push(this.latestCriticLoss);
        }
      );

      this.advantages.push(advantage);
      pushToEvolutionChart(this.epoch, this.latestActorLoss, this.latestCriticLoss, advantage);
      this.epoch++;
    }

You can give the simulation a spin at https://sergiuionescu.github.io/esp32-auto-car/sym/sym.html .

I found that some behaviors are being picked up - the model learns to prioritize moving forward after a few episodes, but then hits the wall and it reprioritizes spinning - but seems to completely 'forget' that moving forward was ever prioritized.

I've been trying to follow https://keras.io/examples/rl/actor_critic_cartpole/ to a certain extent, but have not found an equivalent of the way back-propagation is handled there - GradientTape.

Is it possible to perform training similar to the Keras example in Tensorflowjs?

The theory I've gone through on Actor-Critic mentions that the Critic should estimate the reward yet to be obtained over the rest of the episode, but I am training the critic with reward + this.discountFactor * predictedNextStateValue, where reward is the cumulative reward up to the current step. Should I keep track of a maximum total reward from previous episodes and subtract my reward from that instead?

When I am training the actor, I am generating a zero-filled advantages tensor:

let advantages = new Array(this.actionSize).fill(0);
let target = reward + this.discountFactor * predictedNextStateValue;
let advantage = target - predictedCurrentStateValue;
advantages[action] = advantage;

All actions other than the taken one will receive a 0 advantage. Could this discourage any previous actions that were proven beneficial? Should I average out the advantages per state and action?

Thanks for having the patience to go through all of this.

",42665,,42665,,12/5/2020 19:15,8/28/2022 0:08,Advantage Actor Critic model implementation with Tensorflowjs,,1,1,,,,CC BY-SA 4.0 24939,2,,23703,12/1/2020 14:59,,2,,"

If you're using a library such as Trax which contains great submodules for various Transformers (Skipping, BERT, Vanilla and Reformer) you can use the inbuilt trax.data.inputs.add_loss_weights() function and provide a value for the id_to_mask parameter.

Example Usage:

train_generator = trax.data.inputs.add_loss_weights(
    data_generator(batch_size, x_train, y_train, vocab['<PAD>'], True),
    id_to_mask=vocab['<PAD>'])

Here are some resources for building Transformers in Trax:

",40434,,,,,12/1/2020 14:59,,,,4,,,,CC BY-SA 4.0 24941,1,,,12/1/2020 15:34,,1,40,"

Can anyone help me understand these functions described in the paper Noise2Noise: Learning Image Restoration without Clean Data

I have read section A.4 in the appendix, but I need a more detailed and easier-to-understand explanation, especially of where the signum or sign function (sgn) comes from, along with the other steps.

Equations (10), (11) and (12) from the paper.

",27601,,2444,,12/1/2020 16:32,12/1/2020 16:32,"What is the intuition behind equations 10, 11 and 12 of the paper ""Noise2Noise: Learning Image Restoration without Clean Data""?",,0,0,,,,CC BY-SA 4.0 24942,1,,,12/1/2020 15:35,,1,243,"

I have been searching this but did not find the answer, so sorry if this is a duplicated question.

I was working with cross-validation, where some doubts came to my mind, and I am not sure which is the correct answer.

Let's say I have a mixed dataset, with numerical and categorical features. I want to perform a K-Fold Cross-Validation with it, with a K=10. Some of these numerical features are missing, so I decided that I will replace those NaNs with the average of that feature.

My steps are the following ones:

  1. Read the entire dataset
  2. Perform One Hot Encoding to categorical features.
  3. Divide my data into different folds. Let's say that I will use 90% for training, 10% for validating.
  4. For every different combination of folds, I replace the missing values of the training and validating sets separately. This means that, on one hand, I compute the averages from the training part (and use them to fill its missing values), and, on the other hand, I compute the averages from the validating part (and use them to fill its missing values).
  5. Normalize the data of the training and validating sets between [0, 1] separately, as I did before.
  6. Train the corresponding model (a rough sketch of steps 4-6 is shown right after this list).
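
Here is roughly how I picture steps 4-6 with scikit-learn (just an illustrative sketch with made-up data, not my real code; LogisticRegression is only a stand-in for the model):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Made-up data standing in for my real dataset (after one-hot encoding), with some NaNs
X, y = make_classification(n_samples=20, n_features=5, random_state=0)
X[::4, 0] = np.nan

pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),  # step 4: fill NaNs with the training-fold mean
    ("scale", MinMaxScaler()),                   # step 5: normalize to [0, 1]
    ("model", LogisticRegression()),             # step 6: stand-in model
])

# The imputer and scaler are re-fitted on the training part of every fold
scores = cross_val_score(pipeline, X, y, cv=10)

(Note that scikit-learn fills and scales the validation fold using the statistics fitted on the training fold, which is the usual convention, rather than computing a separate average on the validation part as in my step 4.)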

So let's take a simple example of a dataset of 20 rows with N columns. Once I do steps 1 and 2, in the first iteration I will select the first 18 rows as the training set, and the last two rows as the validating set. I fill the missing values within those 18 rows with the average computed from those same 18 rows. Then I do the same for the last 2 rows. Then, again, I normalize in the same way, separately. And I do this for every combination of folds.

I am doing it like this because otherwise, from my understanding, you are training your model with biased data. You should not have access to the validation data, thus you should not be able to compute the average with those numbers. Hence I am using only the numbers from the training part. If I compute the average over the entire dataset, this will make my model overfit.

I am not so sure about the normalization step, as I do not really think this will have the same impact. But here I do not really know...

Is this approach correct? Or should I do the average and normalization with the entire dataset? Why?

",42672,,18758,,10/9/2021 8:52,10/9/2021 8:52,How to fill NaNs in Cross-Validation?,,1,0,,,,CC BY-SA 4.0 24944,2,,24895,12/1/2020 16:50,,0,,"

The echo state condition states that differences in the input sequence results in separate trajectories of reservoir states.

This means that when you make the reservoir large enough, any difference in the input sequence results in a linearly separable difference in the state space.

This is quite similar to the kernel trick in e.g. Support Vector Machines, where the data becomes linearly separable in the feature space. In reservoir computing, the reservoir is in a way a random feature space.

To sum up, the echo state condition ensures that the signal becomes linearly separable.

",42601,,,,,12/1/2020 16:50,,,,3,,,,CC BY-SA 4.0 24950,1,,,12/2/2020 1:40,,0,281,"

This is a question from page 94 of Sutton and Barto's RL book 2020.

I read in someone's compiled GitHub answers to this book's exercises that their answer was: "No because each state in an episode of blackjack is unique."

I think my answer is more of a yes, but I'm thinking in terms of casino blackjack, where they have multiple decks shuffled together and add the discarded cards back into the deck only every X games in order to prevent card counting, and where 1 game can be seen as an episode. In this case, I think that first-visit MC and every-visit MC would have drastically different results, given that, at the start of a new episode, the state of the deck, which is only partially observed, will change the value of taking an action given a state (because I believe the cards left in the deck affect the value of an action, but the deck is not totally observable).

If this is blackjack, where the discarded cards are added back in and shuffled every episode, I'll agree that it shouldn't make a difference.

Are there any flaws in my conjecture?

",30885,,2444,,12/4/2020 13:42,12/4/2020 13:42,Suppose every-visit MC was used instead of first-visit MC on blackjack. Would you expect the results to be different?,,0,3,,,,CC BY-SA 4.0 24951,2,,16610,12/2/2020 6:16,,2,,"

I believe that there is no clear answer to your question. It essentially boils down to whether you are a reductionist – whether you believe that quantitative measurements can truly do justice to the complexity of the real world, and that a framework such as expectation maximization can losslessly capture what we care about as humans in the performing of tasks.

From a non-reductionist perspective, one would be aware that almost any mathematical representation of complex real-world goals will necessarily be a proxy rather than the true goal (as many goals are not mathematically formalizable, such as what we perceive as "good music" or "meaning"), and thus the reward hypothesis is at best an approximation. Based on this, a non-reductionist's reward hypothesis could be rephrased as:

that all of what we mean by goals and purposes can be approximately operationalized (albeit at a certain domain-dependent loss) as the maximization of the expected value of the cumulative sum of a received scalar signal (called reward)

Clearly the original (stricter) version of the reward hypothesis does apply to some cases, such as purely-quantitative domains (e.g. maximizing $ earned on the stock market, or maximizing score in a video game), but as soon as the problem involves enough "complexity" (e.g. humans, or wherever you think the boundary should be), a non-reductionist would say that mathematics is clearly not fit for the task of truly capturing the intended goal.

More info on the reward hypothesis (as presented by Michael Littman himself) is here. I would have added it as a comment to the question but do not have enough reputation.

",42207,,42207,,12/6/2020 21:51,12/6/2020 21:51,,,,4,,,,CC BY-SA 4.0 24952,1,24953,,12/2/2020 13:12,,3,306,"

I have a network with nodes and links, each of them with a certain amount of resources (that can take discrete values) at the initial state. At random time steps, a service is generated, and, based on the agent's action, the network status changes, reducing some of those nodes and links resources.

The number of all possible states that the network can have is too large to calculate, especially since there is the random factor when generating the services.

Let's say that I set the state space large enough (for example, 5000), and I use Q-Learning for 1000 episodes. Afterwards, when I test the agent ($\max Q(s,a)$), what could happen if the agent faces a state that it did not encounter during the training phase?

",42591,,2444,,12/4/2020 10:11,12/4/2020 10:11,What happens when the agent faces a state that never before encountered?,,2,0,,,,CC BY-SA 4.0 24953,2,,24952,12/2/2020 13:38,,2,,"

Having too many states to actually visit is a common problem in RL. This is exactly why we often use function approximation. If you replace your Q table with a good function approximator, such as a neural network, it should be able to generalize well to states it has not yet encountered.

If you do not use a function approximator but stick with a table, the agent will have no idea what to do when it encounters a new state. For more information, see Reinforcement Learning by Sutton and Barto, chapter 9.
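
To make that concrete, here is a minimal sketch (in Keras, with hypothetical state and action sizes, not a full DQN) of a network standing in for the Q table; it can output Q-value estimates even for states it has never been trained on:

# Minimal sketch of a Q-function approximator (hypothetical sizes, not a full DQN):
# the network maps a state vector to one estimated Q-value per action,
# so it can produce estimates even for states never seen during training.
import numpy as np
import tensorflow as tf

state_size, n_actions = 4, 3  # hypothetical dimensions

q_network = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(state_size,)),
    tf.keras.layers.Dense(n_actions, activation="linear"),
])

unseen_state = np.random.rand(1, state_size).astype("float32")
q_values = q_network(unseen_state)      # Q estimates for a never-seen state
greedy_action = int(np.argmax(q_values))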

",12201,,,,,12/2/2020 13:38,,,,0,,,,CC BY-SA 4.0 24954,2,,24952,12/2/2020 14:00,,1,,"

I will try to explain this problem with the very tangible example of chess. In chess, a state is any configuration that you can make with the pieces on the board. So, the starting position is a state, and after you make one move you are in a different state. The total number of chess positions is astronomically large (estimated at well over $10^{40}$). It is therefore very unlikely that a chess bot has seen all the states in training when playing a match.

So, how does the algorithm solve this? For the answer, we have to look at how an RL algorithm chooses which move is the best. This obviously depends on the implementation of the algorithm, but, generally, the calculation of 'how good' a move is relies on an approximation that takes into account the 'potential future reward'. If you capture the queen, that would probably be good (not a chess expert here), even though you have not seen this exact state before. If you go further than this, a network might be able to approximate what happens many moves in the future. The specifics come down to implementation, etc., but this is the gist of it.

",34383,,2444,,12/3/2020 0:42,12/3/2020 0:42,,,,3,,,,CC BY-SA 4.0 24955,2,,24942,12/2/2020 14:09,,1,,"

I would do the exact same thing as you are describing! One of the main reasons you would want to do cross-validation is to make sure that your model is able to generalize later. Therefore, you take out a small random subset, which will be your new small validation set, and apply to it all the 'operations' that you are also applying to your training set. This way, you can check that these 'operations' are also generalizable (if they work across all the different cross-validation folds).

",34383,,,,,12/2/2020 14:09,,,,0,,,,CC BY-SA 4.0 24957,1,,,12/2/2020 19:15,,0,564,"

Newbie to CV here, so sorry if this is basic. Here's the deal: I have a program that I run many times, and each run produces a screenshot. I need to compare the screenshots from runs N-1 and N and make sure they aren't different in any dramatic way. Of course, there are some minor changes, like logos and pictures getting updated, etc.

So far, I've used something as simple as absdiff from OpenCV to highlight the difference regions and then used some sort of threshold to determine whether something passes or not. But I want to make it slightly intelligent, and I'm not 100% sure how to proceed. Google hasn't yielded the best answers.

Essentially, I want to train the model on many different pairs of images and have the output be binary, yes or no, depending on whether it should pass or not. In theory, I should be able to plug in 2 images and, based on previous training, it should be able to tell me whether there is a significant difference or not. What are some ways I might approach this, particularly with regard to what kinds of models to use?

The requirements here might seem amorphous, but that's kind of the nature of the problem: the differences could be, in theory, anything. I am hoping that there will be patterns between different images and that a model would pick up on them, for example the name of a document being 045 instead of 056, or a logo being slightly updated.

",42641,,,,,12/24/2022 7:01,Training a model to identify certain differences between images?,,1,1,,,,CC BY-SA 4.0 24960,2,,24925,12/2/2020 22:24,,1,,"

I'm not sure specifically which Atari games present this type of action space, but you can imagine a game in which you can perform multiple different types of actions at the same timestep (i.e. the different "factors" they mention in the paper).

As an example, imagine a game in which you can both move and jump at the same time. In that case, you might have a discrete action factor with 4 options for moving (N/S/W/E), and a discrete action factor with 2 options for jumping (jump / don't jump), each of which will require a categorical distribution of the size of that factor ($N_k$ in the paper).

So in this case, you would need to have a categorical distribution for each factor, unless you were to turn these two factors into 1 joint 4*2-dimensional factor and learn a single categorical distribution on that (which would likely be less efficient).
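
A minimal sketch of what such a factored action distribution could look like (hypothetical logits, not taken from the paper's code; using PyTorch's Categorical distribution):

# Minimal sketch of a factored action distribution: one Categorical per factor,
# e.g. a 4-way "move" factor and a 2-way "jump" factor, treated as independent.
import torch
from torch.distributions import Categorical

move_logits = torch.randn(4)  # N/S/W/E (would normally come from the policy network)
jump_logits = torch.randn(2)  # jump / don't jump

move_dist = Categorical(logits=move_logits)
jump_dist = Categorical(logits=jump_logits)

action = (move_dist.sample(), jump_dist.sample())  # the tuple (a_1, a_2)
log_prob = move_dist.log_prob(action[0]) + jump_dist.log_prob(action[1])  # product of factors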

",42207,,,,,12/2/2020 22:24,,,,0,,,,CC BY-SA 4.0 24962,1,25007,,12/3/2020 1:23,,1,94,"

My environment is set up so that my self-driving agent can take a maximum of 400 steps (which is the end goal) before it resets with a completion reward. Despite attaining the end goal during the $\epsilon$-greedy stage, it still kills/crashes itself in subsequent episodes.

I would like to know if this common in RL (D3QN) scenarios.

A graph showing episodes vs steps has been placed below.

As one can see, the agent reaches 400 steps in episode 1000. But, in the subsequent episode, it falls down below 50 steps.

",31755,,31755,,12/5/2020 4:31,12/5/2020 16:47,Should my agent be taking varying number of steps?,,1,10,,,,CC BY-SA 4.0 24963,2,,18407,12/3/2020 1:31,,0,,"

An autoencoder helps you learn an embedding space that can be used with PCA or t-SNE to classify different categories of images in an unsupervised fashion. Since you are trying to reconstruct an input image, the model tries to learn an underlying distribution of patterns in your images which are useful for reconstruction, and those patterns translate to an embedding space.

For example, if you want an autoencoder to learn patterns in cat vs dog images and define the embedding space to be a 16 dimensional vector, the model will learn 16 different patterns that can help you in reconstructing the image.

In this case, it is better to balance your dataset across the classes you want to learn (have an equal number of cat vs dog images in your training set) so that you don't induce bias (i.e. so that the model doesn't favour learning more about cats than dogs).

I would argue that overfitting is not a good thing for an autoencoder, because your embedding space will then be restricted only to your training domain (for example, the whiskers of a cat in your training sample might be smaller compared to the cats in your test sample; you don't want the embedding space to learn the size of whiskers to that precision, but rather to learn that whiskers are important). It's important for any model to generalize across different circumstances. I agree that you need to train the system for more epochs to learn good representations.

I hope this gives you a better intuition on the parameters to consider to learn a good embedding space.

",30939,,,,,12/3/2020 1:31,,,,0,,,,CC BY-SA 4.0 24964,2,,23842,12/3/2020 1:52,,0,,"

Imagine a small kid who has no idea about the world around it. You teach the kid how to write the number "6" and that is the only thing that it knows.

Now, no matter what other number you show the kid, it's always going to respond with "6", because that is the only thing it knows or has learned.

You teach the kid how to write the number "9", so now it knows how to differentiate a "6" from a "9", and no matter what other number you show the kid, there is a 50% chance of it responding with a "6" or a "9", because it knows only that much.

The purpose of a neural network is to understand the underlying distribution in the data that can help it in classifying different numbers. It's important to have a classifier that understands general characteristics of numbers and helps us with our task. If you have 10 neural networks trained on 10 different digits, and you show each of these networks the number "10", each network will output the number on which it was trained, because that is all it knows (similar to the naive kid above).

I hope this answers your question!

",30939,,,,,12/3/2020 1:52,,,,4,,,,CC BY-SA 4.0 24966,1,24999,,12/3/2020 3:14,,3,161,"

AlphaGo Zero

AlphaGo Zero uses a Monte-Carlo Tree Search where the selection phase is governed by $\operatorname*{argmax}\limits_a\left( Q(s_t, a) + U(s_t, a) \right)$, where:

  1. the exploitation parameter is $Q(s_t, a) = \displaystyle \frac{\displaystyle \sum_{v_i \in (s_t, a)} v_i}{N(s_t, a)}$ (i.e. the mean of the values $v_i$ of all simulations that passes through edge $(s_t, a)$)
  2. the exploration parameter is $U(s_t, a) = c_{puct} P(s_t,a) \frac{\sqrt{\sum_b N(s_t, b)}}{1 + N(s_t, a)}$ (i.e. the prior probability $P(s_t, a)$, weighted by the constant $c_{puct}$, the number of simulations that passes through $(s_t, a)$, as well as the number of simulations that passes through $s_t$).

The prior probability $P(s_t, a)$ and simulation value $v_i$ are both outputted by the deep neural network $f_{\theta}(s_t)$:

This neural network takes as an input the raw board representation s of the position and its history, and outputs both move probabilities and a value, (p, v) = fθ(s). The vector of move probabilities p represents the probability of selecting each move a (including pass), pa = Pr(a| s). The value v is a scalar evaluation, estimating the probability of the current player winning from position s.

My confusion

My confusion is that $P(s_t, a)$ and $v_i$ are probabilities normalized to different distributions, resulting in $v_i$ being about 80x larger than $P(s_t,a)$ on average.

The neural network outputs $(p, v)$, where $p$ is a probability vector given $s_t$, normalized over all possible actions in that turn. $p_a = P(s_t, a)$ is the probability of choosing action $a$ given state $s_t$. A game of Go has about 250 moves per turn, so on average each move has probability $\frac{1}{250}$, i.e. $\mathbb{E}\left[ P(s_t, a) \right] = \frac{1}{250}$

On the other hand, $v$ is the probability of winning given state $s_t$, normalized over all possible end-game conditions (win/tie/lose). For simplicity's sake, let us assume $\mathbb{E} \left[ v_i \right] \ge \frac{1}{3}$, as if the game were played randomly and each outcome were equally likely.

This means that the expected value of $v_i$ is at least 80x larger than the expected value of $P(s_t, a)$. The consequence of this is that $Q(s_t, a)$ is at least 80x larger than $U(s_t, a)$ on average.

If the above is true, then the selection stage will be dominated by the $Q(s_t, a)$ term, so AlphaGo Zero should tend to avoid edges with no simulations in them (edges where $Q(s_t, a) = 0$) unless all existing $Q(s_t, a)$ terms are extremely small ($< \frac{1}{250}$), or the MCTS has so many simulations in them that the $\frac{\sqrt{\sum_b N(s_t, b)}}{1 + N(s_t, a)}$ term in $U(s_t, a)$ evens out the magnitudes of the two terms. The latter is not likely to happen, since I believe AlphaGo Zero only uses $1,600$ simulations per move, so $\sqrt{\sum_b N(s_t, b)}$ caps out at $40$.

Selecting only viable moves

Ideally, MCTS shouldn't select every possible move to explore. It should only select viable moves given state $s_t$, and ignore all the bad moves. Let $m_t$ be the number of viable moves for state $s_t$, and let $P(s_t, a) = 0$ for all moves $a$ that are not viable. Also, let's assume the MCTS never selects a move that is not viable.

Then the issue of the previous section is partly alleviated, because now $\mathbb{E} \left[ P(s_t, a) \right] = \frac{1}{m_t}$. As a result, $Q(s_t, a)$ should only be $\frac{m_t}{3}$ times larger than $U(s_t, a)$ on average. Assuming $m_t \le 6$, there shouldn't be too much of an issue.

However, this means that AlphaGo Zero works ideally only when the number of viable moves is small. In a game state $s_t$ where there are many viable moves ($>30$) (e.g. a difficult turn with many possible choices), the selection phase of the MCTS will deteriorate as described in the previous section.

Questions

I guess my questions are:

  1. Is my understanding correct, or have I made mistake(s) somewhere?
  2. Does $Q(s_t, a)$ usually dominate $U(s_t, a)$ by this much in practice when the game state has many viable moves? Is the selection phase usually dominated by $Q(s_t, a)$ during these game states?
  3. Does the fact that $Q(s_t, a)$ and $U(s_t, a)$ being in such different orders of magnitude (when the game state has many viable moves) affect the quality of the MCTS algorithm, or is MCTS robust to this effect and still produces high quality policies?
  4. How common is it for a game state to have many viable moves (>30) in Go?
",42699,,42699,,12/3/2020 6:42,12/4/2020 20:08,"AlphaGo Zero: does $Q(s_t, a)$ dominate $U(s_t, a)$ in difficult game states?",,1,1,,,,CC BY-SA 4.0 24968,2,,24921,12/3/2020 10:28,,1,,"

In the eye, the retinal ganglion cells have a receptive field that is equivalent to some types of convolution filters, most of them edge detectors.

The brain is a big unknown: nobody knows how it manages to organize, memorize, create concepts, learn language, and so on. Thus, it is not possible to establish a parallelism.

In particular, the brain has a capacity to handle invariance to scale and rotation that CNNs are not able to reproduce.

As a general remark about NNs and the brain: even though it is always said that "neural network cells" are "inspired" by biological neurons, there are critical differences that make this similarity only an "inspiration". Thus, the comparison of a CNN, or any other kind of NN, with the brain is always a fuzzy comparison. The biggest difference is probably the learning capacity: the human brain learns by itself, while the neural network needs an external procedure (the back-propagation algorithm) that feeds the NN with the learned parameters.

",12630,,12630,,12/3/2020 10:33,12/3/2020 10:33,,,,3,,,,CC BY-SA 4.0 24969,2,,24921,12/3/2020 10:40,,4,,"

Yes, CNNs are inspired by the human brain [1, 2, 3]. More specifically, their operations, the convolution and pooling, are inspired by the human brain. However, note that, nowadays, CNNs are mainly trained with gradient descent (GD) and back-propagation (BP), which seems not to be a biologically plausible way of learning, but, given the success of GD and BP, there have been attempts to connect GD and BP with the way humans learn [4].

The neocognitron, the first convolutional neural network [1], proposed by Kunihiko Fukushima in 1979-1980, and described in the paper Neocognitron: A Self-organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position, already uses convolutional and pooling (specifically, averaging pooling) layers [1]. The neocognitron was inspired by the work of Hubel and Wiesel described in the 1959 paper Receptive fields of single neurones in the cat's striate cortex.

Here is an excerpt from the 1980 Fukushima's paper.

The mechanism of pattern recognition in the brain is little known, and it seems to be almost impossible to reveal it only by conventional physiological experiments. So, we take a slightly different approach to this problem. If we could make a neural network model which has the same capability for pattern recognition as a human being, it would give us a powerful clue to the understanding of the neural mechanism in the brain. In this paper, we discuss how to synthesize a neural network model in order to endow it an ability of pattern recognition like a human being.

Several models were proposed with this intention (Rosenblatt, 1962; Kabrisky, 1966; Giebel, 1971; Fukushima, 1975). The response of most of these models, however, was severely affected by the shift in position and/or by the distortion in shape of the input patterns. Hence, their ability for pattern recognition was not so high.

In this paper, we propose an improved neural network model. The structure of this network has been suggested by that of the visual nervous system of the vertebrate. This network is self-organized by "learning without a teacher", and acquires an ability to recognize stimulus patterns based on the geometrical similarity (Gestalt) of their shapes without affected by their position nor by small distortion of their shapes. This network is given a nickname "neocognitron", because it is a further extention of the "cognitron", which also is a self-organizing multilayered neural network model proposed by the author before (Fukushima, 1975)

However, Fukushima did not train the neocognitron with gradient descent (and back-propagation) but with local learning rules (which are more biologically plausible), and that's probably why he doesn't get more credit, as I think he should.

You should read at least Fukushima's paper for more details, which I will not replicate here.

Section 9.4 of the Deep Learning book also contains details about how CNNs are inspired by neuroscience findings.

",2444,,2444,,12/4/2020 13:17,12/4/2020 13:17,,,,0,,,,CC BY-SA 4.0 24971,1,25009,,12/3/2020 12:14,,0,323,"

What is the impact of the number of features on the prediction power of an ANN model (in general)? Does an increase in the number of features mean a more powerful prediction model (for approximation purposes)?

I'm asking these questions because I am wondering if there is any benefit in using two variables (rather than one) to predict one output.

If there is a scientific paper that answers my question, I would thank you.

",41210,,2444,,12/4/2020 14:13,12/5/2020 13:04,What is the impact of the number of features on the prediction power of a neural network?,,1,2,,,,CC BY-SA 4.0 24972,1,,,12/3/2020 12:28,,2,103,"

I have just come across the idea of self-supervised learning. It seems that it is possible to get higher accuracies on downstream tasks when the network is trained on pretext tasks.

Suppose that I want to do image classification on my own set of images. I have limited data on these images and maybe I can use self-supervised learning to achieve better accuracies on these limited data.

Let's say that I try to train a neural network on a pretext task of predicting the patch position relative to the center patch on different images that are readily available in quantity, such as cats, dogs, etc.

If I initialise the weights of my neural network from the pretext task and then do image classification on my own images, which are vastly different from the images used in the pretext task, would self-supervised learning still work, given that the images for the pretext and downstream tasks are different?

TLDR: Must the images used in the pretext task and the downstream tasks be the same?

",32780,,2444,,12/4/2020 12:15,6/7/2021 12:10,Is it possible to use self-supervised learning on different images for the pretext and downstream tasks?,,1,1,,,,CC BY-SA 4.0 24979,2,,5322,12/3/2020 15:37,,0,,"

A simple example of this is token embeddings. If "prior knowledge" just means anything known prior to the creation of the graph, then using pretrained vector embeddings meets this criterion. This is simply a way to provide a fixed method for projecting tokens into higher-dimensional space instead of training it at the same time as the rest of the model. Given that vector embeddings are somewhat interpretable and that the same embedding can be reused across tasks and models, I'd consider pretrained embeddings to be prior knowledge being incorporated.

The embeddings could also technically be handcrafted, but I'm not aware of any work like that and am skeptical of its usefulness in deep models.

",29873,,,,,12/3/2020 15:37,,,,0,,,,CC BY-SA 4.0 24983,2,,24957,12/4/2020 1:24,,0,,"

If your image changes only slightly, all you need is a simple algorithm; just search for the keyword "k-Nearest Neighbor" or take a look at this link.

To locate where the difference is, you can subtract the two images with this script:

import cv2
import numpy as np

def extract_diff(imageA, imageB):
    '''
    Find the difference between two images:
        + Input: two RGB images of the same size
        + Output: binary mask showing where the two images differ
    Assumes a difference of at least 30 in any channel counts as a real change.
    '''
    subtract = imageB.astype(np.float32) - imageA.astype(np.float32)
    mask_motion = cv2.inRange(np.abs(subtract), (30, 30, 30), (255, 255, 255))
    # mask_motion[mask_motion == 255] = 1  # scale to 1 to reduce computation
    return mask_motion

Then locate the positions with value 255 in the resulting mask. Next time, you should post this kind of question in the Stack Overflow community; it is a more active community, and they can give you a better solution than mine.
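
For example (a rough usage sketch; the file names are placeholders), you could draw boxes around the changed regions like this:

# Rough usage sketch (file names are placeholders): draw bounding boxes around
# the connected components of the difference mask (OpenCV 4.x return signature).
imageA = cv2.imread("screenshot_run_1.png")
imageB = cv2.imread("screenshot_run_2.png")

mask = extract_diff(imageA, imageB)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(imageB, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imwrite("differences.png", imageB)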

",41287,,,,,12/4/2020 1:24,,,,0,,,,CC BY-SA 4.0 24984,1,25060,,12/4/2020 1:27,,4,1088,"

I'm learning about Actor-Critic reinforcement learning algorithms. One source I encountered mentioned that the Actor and Critic can either share one network (but use different output layers) or use two completely separate networks. In this video, he mentions that using two separate networks works for simpler problems, such as Mountain Car, while more complex problems, like Lunar Lander, work better with a shared network. Why is that? Could you explain what difference choosing one design over the other makes?

",38076,,38076,,12/4/2020 18:58,12/8/2020 4:34,What difference does it make whether Actor and Critic share the same network or not?,,1,0,,,,CC BY-SA 4.0 24986,2,,16509,12/4/2020 3:59,,0,,"

I would recommend taking a look at the Bilingual Evaluation Understudy (BLEU) score, which is commonly used to evaluate machine translation results from sequence-to-sequence models. Here is the reference: https://en.wikipedia.org/wiki/BLEU

",30939,,,,,12/4/2020 3:59,,,,0,,,,CC BY-SA 4.0 24987,2,,16172,12/4/2020 5:04,,0,,"

The newer models generally outperform older ones on the ImageNet challenge in their accuracy scores*. This does not necessarily mean that this difference in performance will be reflected in your particular classification problem.

The closer your problem is to the ImageNet one, the more likely it is that the relative model performances will be similar. However, when you perform transfer learning, you will often have to fine-tune the model to achieve stronger performance; how well you tune the model will affect performance, and there will often be a difference in which model performs best on a given task. You can see papers on various classification tasks where VGG may perform best, or Inception, or even AlexNet. I believe the simplest models (AlexNet has only 8 layers) may be the easiest to fine-tune, and may also require the smallest amount of data for good performance.

*There are exceptions: MobileNet is more recent, but its innovation is that it is a smaller model rather than the strongest model, i.e. it is designed to be usable on mobile devices rather than running on the latest GPU.

",40833,,,,,12/4/2020 5:04,,,,0,,,,CC BY-SA 4.0 24989,1,24990,,12/4/2020 8:26,,1,163,"

Q-Learning is guaranteed to converge if $\alpha$ decreases over time.

On page 161 of the RL book by Sutton and Barto, 2nd edition, section 8.1, they write that Dyna-Q is guaranteed to converge if each action-state pair is selected an infinite number of times and if $\alpha$ decreases appropriately over time.

It seems that it would be better if $\alpha$ increased over time, as it is the step size applied to the TD error ($R+\gamma\max_aQ(S',a)-Q(S,A)$); initially, the Q-values start off incredibly inaccurate, because they are initialized arbitrarily, and over time they converge to the true values, hence you'd want to weight the updates more as time increases rather than less?

Why is this a convergence criterion?

",30885,,2444,,12/21/2020 22:10,12/21/2020 22:10,"If $\alpha$ decreases over time, why is Q-learning guaranteed to converge?",,1,0,,,,CC BY-SA 4.0 24990,2,,24989,12/4/2020 9:13,,5,,"

Why is this a convergence criterion?

It is because $R$ and $S'$ are stochastic. A large learning rate applied when these values have variance would not converge to the mean, but would typically wander around within some range proportional to $\alpha\sigma$ of the true value, where $\sigma$ is the standard deviation of the term $R + \gamma\max_aQ(S',a)$. If you reduce $\alpha$ towards zero, then this expected error will also reduce to zero.

For deterministic environments, it should be possible to prove convergence with large $\alpha$.


In the special case of a static policy, tabular learning and $\alpha = \frac{1}{N(s,a)}$, where $N(s,a)$ is the number of visits to state $s$, action $a$, the expected error for each Q value is the standard error of the mean from basic stats, i.e. $\frac{\sigma_{TD}}{\sqrt{N(s,a)}}$
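
To see why that holds, with $\alpha_n = \frac{1}{n}$ the tabular update reduces to computing a running sample mean of the observed targets $x_n = R + \gamma\max_a Q(S',a)$ (treating them as i.i.d. samples):

$$Q_n = Q_{n-1} + \frac{1}{n}\left(x_n - Q_{n-1}\right) = \frac{1}{n}\sum_{i=1}^{n} x_i$$

so the estimation error shrinks like the standard error of a sample mean.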

",1847,,2444,,12/4/2020 10:16,12/4/2020 10:16,,,,0,,,,CC BY-SA 4.0 24993,2,,24471,12/4/2020 11:28,,0,,"

I really liked the question. Yes, we sum over the derivatives. First of all, think about what backpropagation is trying to do: find the effect of each parameter on the loss.

So as you said:

the same filter is used multiple times on the input while convolving

meaning that each kernel weight affects the final loss in several ways (once for each position where the filter is applied), so those effects should be summed together, not averaged.
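
A small sketch of this for a 1D convolution (cross-correlation, valid padding) makes the summation explicit; here dL_dy is just an assumed upstream gradient, not from any particular framework:

# Minimal 1D example (cross-correlation, valid padding): the gradient of each
# kernel weight is a SUM over all positions where that weight touched the input.
import numpy as np

x = np.array([1.0, 2.0, -1.0, 3.0, 0.5])  # input
w = np.array([0.2, -0.4, 0.1])            # kernel
y_len = len(x) - len(w) + 1               # output length (valid convolution)
dL_dy = np.ones(y_len)                    # assumed upstream gradient from the loss

dL_dw = np.zeros_like(w)
for i in range(y_len):        # every output position the kernel produced
    for k in range(len(w)):   # every kernel weight
        dL_dw[k] += dL_dy[i] * x[i + k]   # accumulate (sum), do not average
print(dL_dw)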

",41547,,,,,12/4/2020 11:28,,,,4,,,,CC BY-SA 4.0 24998,1,,,12/4/2020 18:04,,1,277,"

I have a Reinforcement Learning environment where the state is a 2D matrix with 0s and 1s (only one column with the value of 1 in each row).

Example:

(
 (0, 1, 0),
 (0, 0, 1),
 (1, 0, 0),
 (0, 0, 0),
 (0, 1, 0)
)

The action the agent must take is, for each row in the input, to choose one resource out of the 12 resources the agent has if there is a column with the value of 1 in that row, or to choose no resource if the row has only 0s (for example, row[3] wouldn't have any resource chosen for it by the agent). The rows correspond to the users the agent must allocate resources to.

In the step() method in the RL environment, the agent would receive a reward or a penalty depending on the action. If the reward is positive, the agent updates the state matrix, putting a 0 instead of 1 in the rows corresponding to the users that were allocated resources, which should be the next state. If the reward is negative, the episode ends, the environment resets and a new state is received by the agent

My understanding is that, in a deep learning approach, the DQN agent would receive a 2D matrix of 0s and 1s as input to its neural network (the state matrix), and output a vector with the chosen resources for each row of the input.

The network must choose a resource out of 12 resources for each row if that row has a 1 in it, and no resource is chosen if there is no column with the value of 1 in that row of the input. In other words, the network must choose an element out of 12 and output a vector with the chosen elements, depending on the input matrix.

Is there a way to do this using Deep Q-Learning and neural networks ?

",42372,,42372,,12/4/2020 19:45,12/4/2020 19:45,DQN Agent with a 2D matrix as input in Keras,,0,9,,,,CC BY-SA 4.0 24999,2,,24966,12/4/2020 20:08,,2,,"

I don't think you've necessarily made any real mistakes in your calculations or anything like that, that all seems accurate. I can't really confidently answer your questions about "Does X usually happen?" or "How common is X?", would have to experiment to make sure of that. I think we can also confidently immediately answer the question about whether MCTS is robust and can still produce high quality policies with "yes", since we've seen state-of-the-art, superhuman results in a bunch of games using these techniques.

But I do think there's a few important details that may change your perception:

  1. MCTS does not compare $Q(s, a)$ values to $U(s, a)$ values in its selection phase. It compares $Q(s, a) + U(s, a)$ expressions of actions $a$, to $Q(s, b) + U(s, b)$ expressions for different actions $b$. So, the difference in magnitudes $Q(s, a) - U(s, a)$ is not nearly as important as the difference in magnitude $Q(s, a) - Q(s, b) + U(s, a) - U(s, b)$!

  2. For any single given state $s$, it is certainly not the case that we expect the different $Q$-values to be have a nice average like $0.5$ or anything like that. There will likely be plenty of states $s$ where we're already in such a strong position that we can afford to make a mistake or two and still expect to win; all the $Q$ values here will be close to $1.0$. There will also be many states where we're in such a terrible position that we expect to lose no matter what; all the $Q$ values here will be close to $0.0$. And then there will of course be states that a network is not sure about, which will have $Q$ values somewhere in between. I suspect that "in between" won't often be a nice mix of all sorts of different values though. If it's something like $0.7$, and there's higher values that attract more attention, during training the MCTS + network will likely become very interested in learning more about that state, and very quickly learn whether that should really just be a $1.0$ or whether it should be lowered. For this reason, I imagine that in unsure states, values will have a tendency to hover around $0.5$.

  3. MCTS will only let the $Q(s, a)$ term dominate the selection phase for as long as it believes that this is actually likely to lead to a win. If this is correct and indeed leads to a win, well, that's great, no need to explore anything else! During the tree search, if further investigation of this action leads the MCTS to believe that it actually is a loss, the $Q$ value will drop (ideally towards $0$), and then it will automatically stop being a dominant term. If the tree search fails to adjust for this in time, and we end up wandering down this losing path anyway, we'll get a value signal of $0$ at the end and update our value network and in the future we'll know better than to repeat this mistake.

",1641,,,,,12/4/2020 20:08,,,,0,,,,CC BY-SA 4.0 25002,2,,1635,12/5/2020 0:39,,0,,"

The paper The First Law of Robotics (a call to arms) (AAAI-94), by Weld and Etzioni, discusses Asimov's first law, some technical issues it gives rise to (some of them are already mentioned in the other answers), and how they could be addressed (they propose a simplistic way to formalize the first law, but they don't claim it is the right way to do it). You should read it for more details.

",2444,,2444,,12/5/2020 0:50,12/5/2020 0:50,,,,0,,,,CC BY-SA 4.0 25003,1,,,12/5/2020 0:58,,4,474,"

In the news, DeepMind's AlphaFold is said to have solved the protein folding problem using neural networks, but isn't this a problem only optimised quantum computers can solve?

To my limited understanding, the issue is that there are too many variables (atomic forces) to consider when simulating how an amino acid chain would fold, in which case only a quantum computer can be used to simulate it.

Is the neural network just making a very good estimate, or is it simulating the actual protein structure?

",5708,,2444,,12/5/2020 1:10,1/6/2022 11:54,Is AlphaFold just making a good estimate of the protein structure?,,2,1,,,,CC BY-SA 4.0 25006,2,,25003,12/5/2020 2:04,,8,,"

AlphaFold (version 1 and 2) predicts (so estimates) the 3D shape of the protein from the sequence of amino acids. AlphaFold's performance is measured with the global distance test (GDT), which is a measure of similarity between two protein structures (the prediction and the ground-truth) that ranges from 0 to 100.

There is a short video and a longer one (both by DeepMind) that summarise the issue of protein folding, why it is important, and how well AlphaFold approximately solves it in the Critical Assessment of protein Structure Prediction (CASP) competition: AlphaFold 2 achieves a median GDT score of 92.4 (and 87 on the hardest proteins), which is a lot higher than AlphaFold 1's GDT score of 58 (the highest achieved score at the time). According to John Moult (president of CASP), a score around 90 is considered a satisfactory solution to the protein folding problem. You can find more details about AlphaFold 2 in this DeepMind blog post and about AlphaFold 1 in this other blog post or the associated paper published in Nature this year. You can find the code for AlphaFold 1 here, but there are other community/open-source implementations.

Despite the importance of the problem and achievement, there is clearly a lot of hype about this breakthrough (given also that it was achieved by DeepMind). This is also discussed in this video by Lex Fridman.

",2444,,2444,,12/7/2020 19:21,12/7/2020 19:21,,,,2,,,,CC BY-SA 4.0 25007,2,,24962,12/5/2020 10:35,,1,,"

Your graph looks to me like a typical learning curve plotted for training process in reinforcement learning.

Looking at it in detail I can say:

  • There is clearly some learning occurring.

  • There is a strong random element throughout. As you say, the epsilon is reduced to near zero by episode 2000, and I would assume a driving simulation is mostly deterministic but with a lot of state hidden from the agent, so the remaining variation mainly implies that episodes are started in different states.

  • There may be an effect from continued learning that causes the agent performance to vary so much. However, it is more likely towards the end that the agent is encountering new never-seen-before states and making poor decisions in them.

  • The resulting graph matches my experience of training DQN agents where some of the hyperparameters are off. Probably you need to do some kind of exploration or search through those parameters.

From your comments:

Perhaps the model is subsequently overfitting? Or maybe there is some catastrophic forgetting?

These are possibilities that I cannot rule out by looking at the graph.

However, I think it is more likely that you have some agent design or hyperparameters that are not a good fit to the problem. Those hyperparameters could be almost anything - not enough episodes to cover variation, neural network too simple, neural network too complicated, epsilon decay too fast, poor choice of regularisation, experience replay memory too small, poor choice of optimiser, and so on.

It is also possible you have limited the state model to the point that learning is hard or even impossible. A common beginner's mistake here is to use static sensor information such as a single current screenshot on each timestep, so the agent has no way to assess how fast it is going, or which way it is already turning etc (if your input includes direct knowledge of current speed, turning etc then this is probably not a problem for you, I am just raising one of many possibilities).

One thing that may help you understand the agent's learning performance better is, instead of plotting the learning episode graph, to plot a test graph, perhaps once every ~100 episodes, where you run some number of episodes without learning and with $\epsilon = 0$, and take the average of the metrics you are interested in. This will remove some of the randomness from the values you are plotting and give you a much better sense of training progress than your current plot.
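
A minimal sketch of such a periodic greedy evaluation; env and agent (and their methods greedy_action, reset, step) are placeholders for your own environment and DQN agent, not a specific library API:

def evaluate_agent(env, agent, n_episodes=10):
    """Run greedy (epsilon = 0) episodes without any learning and return the mean return."""
    returns = []
    for _ in range(n_episodes):
        state, done, total = env.reset(), False, 0.0
        while not done:
            action = agent.greedy_action(state)   # no exploration, no parameter updates
            state, reward, done = env.step(action)
            total += reward
        returns.append(total)
    return sum(returns) / len(returns)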

With Q learning, you may also get significant benefit by setting a non-zero minimum $\epsilon$, e.g. $\epsilon = \text{max}(\text{eps_decay}(n), 0.01)$ where $\text{eps_decay}(n)$ is your current epsilon decay function for step $n$. That is because the off-policy updates with exploration are still helpful at all stages of learning.
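
A tiny sketch of that schedule; eps_decay stands for whatever decay function you already use, and the exponential decay in the usage line is just a made-up example:

def epsilon(n, eps_decay, eps_min=0.01):
    """Epsilon schedule with a non-zero floor, as suggested above."""
    return max(eps_decay(n), eps_min)

# example with a hypothetical exponential decay
print(epsilon(5000, lambda n: 0.995 ** n))   # prints 0.01 once the decay has bottomed out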

",1847,,1847,,12/5/2020 16:47,12/5/2020 16:47,,,,0,,,,CC BY-SA 4.0 25009,2,,24971,12/5/2020 13:04,,0,,"

Based on the clarifications given in the comments on the original question, I will try my best to give an answer.

If the number of features increases, does the approximation capability of an ANN improve, theoretically speaking? It depends on whether the additional feature adds 'usefulness' to the data you already have. If you add a feature that is already in the dataset, it obviously will not increase the approximation capabilities. If you add a feature that is not yet in the current set of features and does influence the thing you are trying to approximate, then yes! There are some exceptions, of course, such as when features are multicollinear. This is not very important or easy to measure in ANNs, but statistics has a bunch of theory on this stuff. Simple statistical multiple regression has 4 assumptions and also checks for multicollinearity to see whether it is possible to get 'useful' results from the data. As ANNs are very general, this is not directly applicable, but you could argue that similar practices could be looked into when seeking the best possible 'theoretical' validity of the features that you are using.
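
As a rough illustration of that kind of redundancy check (not a formal statistical test; the data here is synthetic and the variable names are made up):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                                    # existing features
new_feature = X[:, 0] * 0.9 + rng.normal(scale=0.1, size=500)    # almost a copy of feature 0

corr = np.corrcoef(np.column_stack([X, new_feature]), rowvar=False)
print(corr[-1, :-1])   # high correlation with feature 0 -> little extra 'usefulness'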

Does adding more features make your ANNs produce more accurate results (in practice)? Again, it depends. If you indeed added a useful feature, then it is possible that your ANN will in practice produce more accurate results. But if your ANN is not able to converge to a solution, then your results will be gibberish. It is more likely that a network is unable to converge if you just throw data at it: more data requires bigger networks, and so on. So it is highly dependent on your training method. In other words, if your added feature is useful and the method you use is sufficient, then I'd argue that, yes, adding a feature will most likely make your ANNs produce more accurate results.

",34383,,,,,12/5/2020 13:04,,,,3,,,,CC BY-SA 4.0 25010,2,,24938,12/5/2020 19:30,,1,,"

After tinkering a bit more with my experiment, I got it to consistently manifest the intended behavior after around 200 episodes.

Changes to the model itself were minimal: I replaced the loss function on the actor with tf.losses.softmaxCrossEntropy.

Some changes to the training environment seemed to have a significant impact and improved training:

  • Ending the episode after the reward reaches a minimum threshold - this prevents the models being polluted when the car was stuck in a corner or flat against a wall.
  • Making sure that the reward used in training was in line with the action that produced that reward - the model training in my case is asynchronous to the physics of the environment - I am sampling the simulation and providing inputs, but the simulation is not interrupted by model-related processing.
",42665,,,,,12/5/2020 19:30,,,,0,,,,CC BY-SA 4.0 25011,1,,,12/6/2020 0:51,,2,351,"

I noticed that the TensorFlow library includes a use_bias parameter for the Dense layer, which is set to True by default, but allows you to disable it. At first glance, it seems unfavorable to turn off the biases, as this may negatively affect data fitting and prediction.

What is the purpose of layers without biases?

",38076,,2444,,12/7/2020 14:19,12/7/2020 14:19,What's the purpose of layers without biases?,,1,4,,,,CC BY-SA 4.0 25012,1,,,12/6/2020 3:00,,5,429,"

There are certain proteins that contain metal components, known as metalloproteins. Commonly, the metal is at the active site, which needs the most prediction precision. Typically, there is only one (or a few) metals in a protein, which contains far more other atoms. So, the structural data that could be used to train AlphaFold will contain far less information about the metal elements. Not to mention most proteins don't have metals at all (it is estimated that only 1/2-1/4 of all proteins contain metals [1]).

Given that maybe there is not enough structural data about protein local structure around metal atoms (e.g. Fe, Zn, Mg, etc.), then AlphaFold cannot predict local structure around metals well. Is that right?

I also think that the more complex electron shell of metal also makes the data less useful, since its bonding pattern is more flexible than carbon, etc.

",25322,,19524,,12/6/2020 23:01,4/27/2021 15:00,Can AlphaFold predict proteins with metals well?,,1,2,,,,CC BY-SA 4.0 25013,1,25026,,12/6/2020 8:25,,3,301,"

In the paper Attention Is All You Need, this section confuses me:

In our model, we share the same weight matrix between the two embedding layers [in the encoding section] and the pre-softmax linear transformation [output of the decoding section]

Shouldn't the weights be different, and not the same? Here is my understanding:

For simplicity, let us use the English-to-French translation task where we have $n^e$ number of English words in our dictionary and $n^f$ number of French words.

  • In the encoding layer, the input tokens are $1$ x $n^e$ one-hot vectors, and are embedded with a $n^e$ x $d^{model}$ learned embedding matrix.

  • In the output of the decoding layer, the final step is a linear transformation with weight matrix $d^{model}$ x $n^f$, and then applying softmax to get the probability of each French word, and choosing the French word with the highest probability.

How is it that the $n^e$ x $d^{model}$ input embedding matrix shares the same weights as the $d^{model}$ x $n^f$ decoding output linear matrix? To me, it seems more natural for both these matrices to be learned independently from each other via the training data, right? Or am I misinterpreting the paper?

",42699,,2444,,12/6/2020 11:21,12/6/2020 21:03,Transformers: how does the decoder final layer output the desired token?,,1,0,,,,CC BY-SA 4.0 25015,1,25030,,12/6/2020 9:23,,2,323,"

I was reading the paper Attention Is All You Need.

It seems like the last step of the encoder is a LayerNorm(relu(WX + B) + X), i.e. an add + normalization. This should result in a $n$ x $d^{model}$ matrix, where $n$ is the length of the input to the encoder.

How do we convert this $n$ x $d^{model}$ matrix into the keys $K$ and values $V$ that are fed into the decoder's encoder-decoder attention step?

Note that, if $h$ is the number of attention heads in the model, the dimensions of $K$ and $V$ should both be $n$ x $\frac{d^{model}}{h}$. For $h=8$, this means we need an $n$ x $\frac{d^{model}}{8}$ matrix.

Do we simply add an extra linear layer that learns a $d^{model}$ x $\frac{d^{model}}{8}$ weight matrix?

Or do we use the output of the final Add & Norm layer, and simply use the first $\frac{d^{model}}{8}$ columns of the matrix and discard the rest?

",42699,,42699,,12/6/2020 21:19,12/6/2020 22:29,Transformers: how to get the output (keys and values) of the encoder?,,1,0,,,,CC BY-SA 4.0 25016,2,,24887,12/6/2020 10:17,,1,,"

When you start off learning about Q-learning, you start with a simple example that has a few states. For each of the states, you try to estimate what the 'value' is of that state. Because there are so few states, it is possible to store these values in a table (it is also useful for the intuitiveness of the explanation).

However, if you start trying to solve more 'real-life' problems, the number of states can be insanely huge. The essence, however, stays the same: you are trying to estimate what you want to do next, based on an estimate of how good each state you can end up in is. However, now the values are most of the time not stored in a table anymore, as the approach will often come down to using an ANN to estimate the value function.

Answer to your question: you are going to run into a lot of problems when training a model with Q-learning if your table is not able to store the values of the possible states it can come across. In practice, most implementations do not use a Q-table and just use an ANN, which alleviates the problem of having to 'define' how many states your problem consists of.
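
For reference, this is roughly what the tabular version stores and updates; the states, actions, and step sizes here are arbitrary placeholders:

from collections import defaultdict

Q = defaultdict(float)          # maps (state, action) -> estimated value
alpha, gamma = 0.1, 0.99

def q_update(state, action, reward, next_state, actions):
    """One tabular Q-learning update for a single transition."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])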

",34383,,,,,12/6/2020 10:17,,,,1,,,,CC BY-SA 4.0 25018,1,,,12/6/2020 13:21,,1,632,"

I have a Reinforcement-Learning environment where the state is an array of 0s and 1s with length equals to the number of users the agent must satisfy (11 users).

The agent must choose one of 12 resources for the 11 users according to the state array. If state[0] == 1, that means that user0 needs a resource, so the agent must choose a resource out of the 12 resources it has. So, the action array's first element would be, for example: action[0] = 10, which means that resource 10 was allocated to user0.

If the next user (user1) is asking for a resource as well, then the number of resources to choose from is 12 - 1, in other words, because resource10 was already allocated to user0, it cannot be allocated to another user.

If state[X] == 0, it means that userX is not asking for a resource, therefore it must not be allocated any resource.

An example of a state array:

[1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0]

An example of an action array according to the state array example: (resource count starts at 0 | -1 indicates no resource was allocated)

[10, 2, -1, -1, -1, 3, 11, 5, -1, -1, -1]

I'm new to Reinforcement Learning and Deep Learning, and I have no idea how to translate that into a neural network.

",42372,,,,,12/7/2020 1:43,How to build a DQN agent with state and action being arrays?,,1,0,,,,CC BY-SA 4.0 25019,2,,25011,12/6/2020 13:53,,1,,"

The bias is a learnable parameter in neural networks that lets you shift the activation function. Disabling the bias means fixing it to zero.

Even though, in many cases, the bias is a big help for successful learning, in some cases you may want to add an extra constraint to your neural network when fitting the objective function. For example, in the paper below, a zero-bias layer as the last layer helps increase the interpretability of the output.

https://ieeexplore.ieee.org/abstract/document/9173537
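
For illustration, here is a minimal sketch of a Keras Dense layer with the bias disabled (shapes are arbitrary):

import tensorflow as tf

# A layer without a bias: the output is just a matrix multiplication, with no learnable shift.
layer = tf.keras.layers.Dense(4, activation=None, use_bias=False)
x = tf.random.normal((2, 3))
y = layer(x)
print([w.shape for w in layer.weights])   # only the kernel, no bias vector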

",35757,,,,,12/6/2020 13:53,,,,1,,,,CC BY-SA 4.0 25026,2,,25013,12/6/2020 21:03,,2,,"

I found the answer by reading the paper referenced by that section, Using the output embedding to improve language models

Based on this observation, we propose threeway weight tying (TWWT), where the input embedding of the decoder, the output embedding of the decoder and the input embedding of the encoder are all tied. The single source/target vocabulary of this model is the union of both the source and target vocabularies. In this model, both in the encoder and decoder, all subwords are embedded in the same duo-lingual space.

It seems like they learned a single embedding matrix of dimension $(n^e + n^f)$ x $d^{model}$.
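
A minimal sketch of how such weight tying can be implemented (here in PyTorch, with made-up sizes for the union vocabulary); the pre-softmax projection simply reuses the shared embedding matrix:

import torch.nn as nn

d_model, joint_vocab = 512, 37000   # hypothetical sizes

shared_embedding = nn.Embedding(joint_vocab, d_model)    # used by encoder and decoder inputs
generator = nn.Linear(d_model, joint_vocab, bias=False)  # pre-softmax projection

# tie the weights: both now point at the same (joint_vocab, d_model) matrix
generator.weight = shared_embedding.weight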

",42699,,,,,12/6/2020 21:03,,,,0,,,,CC BY-SA 4.0 25028,2,,9105,12/6/2020 21:38,,1,,"

No magical formula

As already stated in this answer, the definition of the fitness function depends on the problem, given that it essentially determines the solutions that you are looking for. It raises issues similar to the ones you would encounter while defining a reward function in reinforcement learning, such as fitness misspecification (in fact, the concepts of a reward function and a fitness function are similar, although, in some cases, a fitness function is more similar to a cost function in supervised learning problems: as an example, take a look at the fitness functions used to solve symbolic regression). The design of a fitness function is an engineering problem, in the sense that you need to think about the solutions you are looking for, and what represents a good and bad solution. So, I cannot give you the magical formula to define the fitness function for all problems, but I can give you some info that can be useful to guide the design of a fitness function.

What is a fitness function?

Let's start with the definition. A fitness function is a function that maps the chromosome representation into a scalar value (a real number) that quantifies the quality of the chromosome (i.e. solution), so it's a function of the form

$$ f : \Gamma \rightarrow \mathbb{R}, $$

where $\Gamma$ is the space of chromosomes (where a chromosome is a numerical encoding of a solution for your problem that is suitable for evaluation and modification).
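
A classic toy example of such a mapping is the OneMax fitness, sketched below purely for illustration (the chromosome is a binary list, as in a simple genetic algorithm):

import random

def onemax_fitness(chromosome):
    """Fitness of a binary chromosome: the number of 1s (to be maximised)."""
    return float(sum(chromosome))

chromosome = [random.randint(0, 1) for _ in range(10)]
print(chromosome, onemax_fitness(chromosome))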

Different types of optimization problems

There are different optimization problems that affect the definition of the fitness function [1], such as

  • Unconstrained: this is the simplest case where the fitness function corresponds to your objective function
  • Constrained: in this case, your fitness function could be composed of two terms: the original objective and a penalty term (that penalizes solutions that do not satisfy the constraints); see the sketch after this list
  • Multi-objective: where you have multiple fitness functions (one for each objective) and the final fitness function is a combination of these multiple fitness functions; in this context, you will often encounter terms like Pareto-optimality
  • Dynamic (or noisy), where the fitness of solutions can change over time or depend on some noise component (e.g. Gaussian noise)
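
One common way to build such a constrained fitness, sketched here under the assumption that the fitness is being maximised; all names (objective, constraints, penalty_weight) are illustrative:

def penalized_fitness(x, objective, constraints, penalty_weight=10.0):
    """Original objective minus a weighted penalty for violated constraints.
    Each g in `constraints` is assumed to be satisfied when g(x) <= 0."""
    violation = sum(max(0.0, g(x)) for g in constraints)
    return objective(x) - penalty_weight * violation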

Encodings and fitness functions

In genetic algorithms, a form of evolutionary algorithms, the chromosomes are often assumed to be binary (i.e. $\Gamma$ is a space of binary arrays), so this can limit the way you can evaluate them.

In other evolutionary approaches, the solutions may be encoded differently and represent something different than just a collection of numbers. In particular, in genetic programming, the solutions are programs, so the fitness should correspond to something that the program you are looking for is supposed to accomplish. For example, if you are looking for an analytical expression (e.g. a polynomial) that minimises the squared error with another expression, then the fitness would be e.g. the squared error.

See section 10.3 (p. 180) of [1] for more details.

Cheap fitness evaluation

One of the most desirable properties that you should look for while designing a fitness function is how cheap it is to evaluate the fitness of an individual. Ideally, the fitness evaluation should be quite cheap in order for the evolutionary algorithm to be feasible and practically useful. If it takes 1 year to perform an evaluation of an individual, then you don't get anything done. Alternatively, you can approximate the fitness of an individual, if the computation of the exact fitness is too expensive.

Absolute and relative measures

The fitness function can provide an absolute or relative (to other chromosomes in the population or other competing populations) measure of the quality of a chromosome (or solution). Relative fitness functions are used in co-evolutionary algorithms and are suited for situations where an absolute measure of the quality of a solution is not possible [1].

Fitness sharing

There is also the concept of fitness sharing (see section 9.6.1 of [1], p. 165), where the fitness of an individual can be adjusted based on the fitness of other individuals.

Further reading

Apart from [1], which has several chapters on evolutionary computation, you could take a look at Evolutionary Computation 2: Advanced Algorithms and Operators. The paper Fitness functions in evolutionary robotics: A survey and analysis (2009) also seems useful from the title and abstract (and other parts that I quickly read).

",2444,,2444,,1/20/2021 14:04,1/20/2021 14:04,,,,0,,,,CC BY-SA 4.0 25030,2,,25015,12/6/2020 22:29,,3,,"

I have read the OpenNMT source code (https://github.com/OpenNMT/OpenNMT-py/blob/cd29c1dbfb35f4a2701ff52a1bf4e5bdcf02802e/onmt/modules/multi_headed_attn.py).

It seems like an extra linear layer learns the weights $W^{key}$ and $W^{value}$ (plus biases), so to get the output (keys and values), you multiply the output of the encoder's final add + norm layer by $W^{key}$ to get the keys, and by $W^{value}$ to get the values.

Additionally, these weights and biases seem to be independent across each of the decoding layers. So you feed the same encoder output (the add + norm layer output), but multiply by different $W^{key}$ and $W^{value}$ matrices and add different biases for each of the decoding layers, resulting in different keys and values for each layer.
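
A minimal sketch of that idea (in PyTorch, with made-up sizes; this mirrors the description above rather than the exact OpenNMT code):

import torch
import torch.nn as nn

d_model, n = 512, 10                       # hypothetical model size and source length
encoder_output = torch.randn(n, d_model)   # output of the encoder's final Add & Norm

# per-decoder-layer projections
key_proj = nn.Linear(d_model, d_model)
value_proj = nn.Linear(d_model, d_model)

K = key_proj(encoder_output)    # keys for this decoder layer
V = value_proj(encoder_output)  # values for this decoder layer
# K and V are then split into h heads of size d_model // h inside multi-head attention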

",42699,,,,,12/6/2020 22:29,,,,0,,,,CC BY-SA 4.0 25031,2,,25012,12/6/2020 22:50,,4,,"

Let me address first some of the things you wrote in your question:

There are certain proteins that contain metal components, known as metalloproteins.

Natural proteins do not ever contain metal components as far as we know. Natural proteins are composed of natural amino acids which only contain H,C,O,N,S. Selenocysteine contains Se (also a non-metal!) but it's a proteinogenic amino acid, which means it's a precursor to proteins but doesn't typically show up in the protein itself. From the Wikipedia page that you gave us in your question: "Metalloprotein is a generic term for a protein that contains a metal ion cofactor" and "A cofactor is a non-protein chemical compound or metallic ion".

But your question still deserves an answer, because even though it would be incorrect to call a co-factor "part" of the protein, they can still affect the folding and the overall shape. But let's first address one last part of your question:

Given that maybe there is not enough structural data about protein local structure around metal atoms (e.g. Fe/Zn/Mg), then AlphaFold cannot predict local structure around metals well. Is that right?

The first sentence of the Wikipedia article that you linked in your question, says "For instance, at least 1000 human proteins (out of ~20,000) contain zinc-binding protein domains [3] although there may be up to 3000 human zinc metalloproteins [4]." Therefore, while metalloproteins might not be the majority of all proteins, there's a decent enough number of them that are of relevance to the human body, and therefore constructing training databases that contain enough metalloproteins (or even 100% metalloproteins, if desired) is not difficult.

I mentioned a bit elsewhere that AlphaFold was used to predict protein structures in the CASP competition, for which you can see for yourself that many/most of the proteins for which contestants (such as DeepMind) need to predict the structure, come from studies of proteins of relevance to humans because the CASP structures typically come from X-ray crystallography studies, which are typically done on proteins of relevance to humans.

You can also see for yourself not only the "target list" that I showed above, but also the results of the competition which will show how well AlphaFold performed in CASP13 (2018) and CASP14 (2020) for metalloproteins.

Finally:

I also think that the more complex electron shell of metal also makes the data less useful, since its bonding pattern is more flexible than carbon, etc.

It is true that metals are typically harder to model than C,H,O,N,S, and even Se, if doing ab initio calculations on the metals or metal-containing complexes. However, the purpose of machine learning in protein folding studies is to skip ab initio, statistical-dynamical and/or molecular-dynamical calculations of the relevant structures and simply use training data to predict the protein structures. That being said, there needs to be enough training data available (as you correctly pointed out) to learn what happens near the metal co-factors: the answer to this is that there are indeed enough metalloproteins to sufficiently populate a training set, but they won't contain enough of the specific metals involved in every metalloprotein. For example, lots of data will be available for proteins containing Fe since Fe is in hemoglobin (for example) which is essential to the functioning of red blood cells to absorb oxygen, but the protein vanabins contains vanadium which is much rarer and therefore training data involving it will be much less available. You're also correct that metal elements can form more bonds than typical elements found in organic compounds.

So it depends on the metal in the relevant co-factor. Fe-based co-factors will have quite a lot of training data available, as well as Mg-based ones, Zn-based ones, and a lot of other ones which contain the "more common" metals. For proteins like vanabins which contains vanadium, you are quite correct that training data will be limited, but also keep in mind that vanabins is a very rare protein found in sea squirts and we already know more about its structure (through X-ray crystallography, which means we don't need machine learning for it) than we know about what it does. The chances of other vanadium-containing co-factors in metalloproteins being very significant are too low to justify working on protein folding algorithms specifically for them.

",19524,,36737,,4/27/2021 15:00,4/27/2021 15:00,,,,7,,,,CC BY-SA 4.0 25032,2,,25018,12/7/2020 1:09,,2,,"

Let's assume you would like to work with a classic DQN. You need to train a Q-network whose inputs are the states and actions. The DQN is a function Q(state, action), and the network is supposed to predict the Q-value. The agent must pick the action that produces the highest Q-value by feeding all possible actions to the network, in your case. Let's assume the current state is

[1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0]

You must pick an item for each user whose state is 1, so you need to generate all possible actions, for example:

[1, 2, -1, -1, -1, 3, 4, 5, -1, -1, -1]
[12, 1, -1, -1, -1, 2, 3, 4, -1, -1, -1]
[11, 12, -1, -1, -1, 1, 2, 3, -1, -1, -1]
...

You would need to feed all of these possible actions into the Q-network. Your worst case would be around 479 million possible actions to evaluate with the DQN.
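
A minimal sketch of such a Q(state, action) network (assuming TensorFlow/Keras; the layer sizes are illustrative, not tuned):

import numpy as np
import tensorflow as tf

n_users = 11

state_in = tf.keras.Input(shape=(n_users,))    # 0/1 request flags
action_in = tf.keras.Input(shape=(n_users,))   # chosen resource per user (-1 if none)
x = tf.keras.layers.Concatenate()([state_in, action_in])
x = tf.keras.layers.Dense(64, activation="relu")(x)
x = tf.keras.layers.Dense(64, activation="relu")(x)
q_value = tf.keras.layers.Dense(1)(x)           # predicted Q(state, action)

q_network = tf.keras.Model([state_in, action_in], q_value)

# scoring one candidate action for one state
state = np.array([[1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0]], dtype=np.float32)
action = np.array([[1, 2, -1, -1, -1, 3, 4, 5, -1, -1, -1]], dtype=np.float32)
print(q_network([state, action]))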

If you would like to tackle this problem, I recommend checking out REINFORCE -> DQN -> Advantage Actor-Critic (A2C), which combines a policy network with a value network. With A2C, you can produce continuous actions with a policy network, e.g. actions in the range a[i] ∈ (0, 1) * 12, and then round the generated actions. The next problem is that the policy network may produce an impossible action, so you might map it to the nearest possible item.

For further reading on continuous action domains, I recommend checking out DDPG and PPO.

",42774,,42774,,12/7/2020 1:43,12/7/2020 1:43,,,,4,,,,CC BY-SA 4.0 25041,1,25054,,12/7/2020 13:46,,2,113,"

The Decoder mask, also called the "look-ahead mask", is applied on the Decoder side to prevent it from attending to future tokens. Something like this:

[0, 1, 1, 1, 1]
[0, 0, 1, 1, 1]
[0, 0, 0, 1, 1]
[0, 0, 0, 0, 1]
[0, 0, 0, 0, 0]

But is this mask applied only in the first Decoder block? Or to all its blocks?

",26580,,,,,12/8/2020 0:10,"Is the Decoder mask (triangular mask) applied only in the first decoder block, or to all blocks in Decoder?",,1,0,,,,CC BY-SA 4.0 25042,1,,,12/7/2020 14:13,,0,27,"

I'm currently starting a research project focused on NLP.

One of the steps involved in this project will be the development of a text simplification system, probably using a neural encoder-decoder architecture.

For most of the Text Simplification research available, the most commonly used dataset is one derived from pairing Wikipedia entries in both English and Simplified English. My problem arises from the fact that the focus of my research is not on the English language, but rather on Portuguese, specifically Portugal Portuguese.

There exists no Simple Portuguese Wikipedia page and it seems that there exists no publicly available text simplification dataset in Portugal Portuguese at all. Due to this fact, I'm curious whether there is any way of tackling this problem. Maybe having a dataset simply of complex Portuguese and simple Portuguese, but with no pairings, although I'm not quite sure how that could be formulated to train a NN.

So my question is whether there are any text simplification datasets in Portugal (or maybe Brazil, as a last resort) Portuguese and, if not, what would be the optimal way to build that dataset.

Thank you.

",31147,,31147,,12/7/2020 15:45,12/7/2020 15:45,Finding or creating a dataset for Neural Text Simplification,,0,3,,,,CC BY-SA 4.0 25046,1,,,12/7/2020 16:33,,2,56,"

On page 4 of the paper Old Photo Restoration via Deep Latent Space Translation, it says the encoder $E_{R,X}$ of $VAE_1$ tries to fool the discriminator with a contradictory loss to ensure that $R$ and $X$ are mapped to the same space. What do they mean by "contradictory loss"?

",42707,,2444,,6/4/2022 14:57,6/4/2022 14:57,"What is the ""contradictory loss"" in the ""Old Photo Restoration via Deep Latent Space Translation"" paper?",,1,0,,,,CC BY-SA 4.0 25047,2,,3723,12/7/2020 17:59,,3,,"

In principle, yes, you can also evolve the genetic algorithm (or, in general, evolutionary algorithm), i.e. you can evolve its operations (such as the mutation and cross-over) and hyper-parameters (such as the size of the population or mutation rate). For example, you could use genetic programming to evolve the cross-over operation of a genetic algorithm. However, these genetic operators and hyper-parameters are usually designed and determined by a human and do not change during the evolution process. Nevertheless, there is a framework for evolutionary computation in which reproduction and mutation can also evolve, known as auto-constructive evolution, and there are other examples in the literature of meta-optimization methods applied to evolutionary algorithms.

More generally, the optimization of an optimization algorithm is known as meta-optimization and/or hyper-parameter optimization/tuning, so the idea of meta-optimization is not just restricted to evolutionary algorithms, but can be applied also to e.g. deep learning.

",2444,,2444,,12/28/2020 18:41,12/28/2020 18:41,,,,0,,,,CC BY-SA 4.0 25048,2,,25046,12/7/2020 18:07,,1,,"

The contradictory loss is the same loss function that the discriminator would normally use, except with deliberately incorrect labels. That is, when you train the generator, the output of the generator is fed to the discriminator, but instead of the correct label (typically $0$ for a false image), the opposite label is applied (e.g. $1$ for a real image).

This label is contradictory in that it is incorrect and the opposite of the ground truth. However, this is useful, because reducing the loss of this signal represents the goal of the generator. It is important to note that, during this phase of the training, you should not update any parameters of the discriminator. The process is followed in order to find gradients for improving the generator only.
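
As a generic illustration (a sketch of a standard GAN generator loss in TensorFlow, not the specific implementation used in the paper; the names bce and disc_output_on_fakes are placeholders):

import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def generator_loss(disc_output_on_fakes):
    """Flipped-label ('contradictory') loss: the discriminator's usual loss,
    but with the 'real' label (1) applied to generated samples."""
    return bce(tf.ones_like(disc_output_on_fakes), disc_output_on_fakes)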

",1847,,,,,12/7/2020 18:07,,,,0,,,,CC BY-SA 4.0 25053,1,25094,,12/7/2020 23:18,,5,2321,"

The paper Attention Is All You Need describes the transformer architecture that has an encoder and a decoder.

However, I wasn't clear on what the cost function to minimize is for such an architecture.

Consider a translation task, for example, where, given an English sentence $x_{english} = [x_0, x_1, x_2, \dots, x_m]$, the transformer decodes the sentence into a French sentence $x_{french}' = [x_0', x_1', \dots, x_n']$. Let's say the true label is $y_{french} = [y_0, y_1, \dots, y_p]$.

What is the objective function of the transformer? Is it the MSE between $x_{french}'$ and $y_{french}$? And does it have any weight regularization terms?

",42699,,42699,,12/7/2020 23:27,10/19/2021 12:20,What is the cost function of a transformer?,,1,0,,,,CC BY-SA 4.0 25054,2,,25041,12/8/2020 0:10,,1,,"

The masking should be applied to all Decoder blocks, otherwise in some blocks, past words can attend to future words, which would be cheating during training.

This is reflected in The Annotated Transformer as well. Notice that in the Decoder class, the forward function applies the same mask to each layer of the decoder:

class Decoder(nn.Module):
    "Generic N layer decoder with masking."
    def __init__(self, layer, N):
        super(Decoder, self).__init__()
        self.layers = clones(layer, N)
        self.norm = LayerNorm(layer.size)
        
    def forward(self, x, memory, src_mask, tgt_mask):
        for layer in self.layers:
            x = layer(x, memory, src_mask, tgt_mask)
        return self.norm(x)
",42699,,,,,12/8/2020 0:10,,,,0,,,,CC BY-SA 4.0 25055,1,,,12/8/2020 0:49,,1,889,"

The paper Attention Is All You Need describes the Transformer architecture, which defines attention as a function of the queries $Q = x W^Q$, keys $K = x W^K$, and values $V = x W^V$:

$\text{Attention}(Q, K, V) = \text{softmax}\left( \frac{QK^T}{\sqrt{d_k}} \right) V \\ = \text{softmax}\left( \frac{x W^Q (W^K)^T x^T}{\sqrt{d_k}} \right) x W^V$

In the Transformer, there are 3 different flavors of attention:

  1. Self-attention in the Encoder, where the queries, keys, and values all come from the input to the Encoder.
  2. Encoder-Decoder attention in the Decoder, where the queries come from the input to the Decoder, and the keys and values come from the output of the Encoder
  3. Masked self-attention in the Decoder, where the queries, keys and values all come from the input to the Decoder, and, for each token, the $\text{softmax}\left( \frac{QK^T}{\sqrt{d_k}} \right)$ operation is masked out (zero'd out) for all tokens to the right of that token (to prevent look-ahead, which is cheating during training).

What is the gradient (i.e. the partial derivatives of the loss function w.r.t. $x$, $W^Q$, $W^K$, $W^V$, and any bias term(s)) of each of these attention units? I am having a difficult time wrapping my head around deriving a gradient equation because I'm not sure how the softmax function interacts with the partial derivatives, and also, for the Encoder-Decoder attention in the Decoder, I'm not clear how to incorporate the encoder output into the equation.

",42699,,42699,,12/8/2020 1:23,9/7/2022 7:23,What is the gradient of an attention unit?,,1,3,,,,CC BY-SA 4.0 25056,2,,4286,12/8/2020 1:10,,3,,"

A neural network can be reduced to a linear regression model only if we use linear activation functions (i.e. $\sigma(x) = x$), and only if we do not use any neural network specific techniques such as convolution, residuals, etc., as shown below:

$\text{neural network}(x) = \sigma_n(W_{n} \sigma_{n-1}(W_{n-1}\dots\sigma_1(W_1 x + b_1) + \dots + b_{n-1}) + b_n) \\ = W_n (W_{n-1} \dots (W_1 x + b_1) + \dots + b_{n-1}) + b_n \\ = \left( W_n W_{n-1} \dots W_1 \right) x + \left( W_n W_{n-1} \dots W_2 \right) b_1 + \left( W_n W_{n-1} \dots W_3 \right) b_2 + \dots + W_n b_{n-1} + b_n \\ = W_z x + b_z$

where $W_z = \displaystyle \prod_i W_i$ is a weight matrix and $b_z$ is some vector constant.

This follows the linear regression model form $y = Wx + b$, where $W$ is the weight matrix and $b$ is the vector constant. As a result, you can analytically solve for $W_z$ and $b_z$ using linear regression techniques, and no longer need gradient descent.
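
A quick numerical check of the collapse derived above (with arbitrary small shapes and identity activations):

import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
x = rng.normal(size=3)

# two-layer network with linear (identity) activations
deep = W2 @ (W1 @ x + b1) + b2

# collapsed single linear model
W_z = W2 @ W1
b_z = W2 @ b1 + b2
print(np.allclose(deep, W_z @ x + b_z))   # True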

Note that for this to work well as a linear regression, you need to check the OLS data assumptions, such as making sure the regressors have no collinearity, the residuals have no heteroskedasticity and no autocorrelation, and that the errors are roughly normally distributed (more info). Deep neural networks with non-linear activation functions do not require these assumptions since they are universal approximators, although checking for some conditions may help make the task easier to predict (more info*).

*Note - this link talks specifically about time series data with neural network, but the same concept applies to any task in general.

The neural-network-specific techniques, such as ReLU, convolutions, residuals, etc., are what allow the network to learn non-linear relationships, and therefore make neural networks something more than just repeated applications of linear regression.

",42699,,42699,,12/8/2020 20:19,12/8/2020 20:19,,,,2,,,,CC BY-SA 4.0 25057,2,,21237,12/8/2020 2:07,,5,,"

In statistics, if $X$ and $Y$ are independent and randomly distributed variables:

$\mathbb{E}[X + Y] = \mathbb{E}[X] + \mathbb{E}[Y] \\ Var(X + Y) = Var(X) + Var(Y) \\ \mathbb{E}[XY] = \mathbb{E}[X]\mathbb{E}[Y] \\ Var(XY) = (Var(X) + \mathbb{E}[X]^2)(Var(Y) + \mathbb{E}[Y]^2) - \mathbb{E}[X]^2\mathbb{E}[Y]^2$

Let $Q$ and $K$ be random $d_k$ x $d_k$ matrices, where each entry is drawn from some distribution with $0$ mean and $1$ variance. All entries are independent of each other.

Since each entry of $Q$ and $K$ has an identical distribution, we can focus only on the top-left-most element of $QK$ without loss of generality. The same applies to every other element.

The top-left-most element of $QK$ is $\displaystyle \sum_{i=1}^{d_k} Q_{1,i} K_{i, 1}$.

Since $Q$ and $K$ are independent:

$\mathbb{E}[Q_{1, i} K_{i, 1}] = \mathbb{E}[Q_{1, i}] \mathbb{E}[K_{i, 1}] = 0 \\ Var(Q_{1, i} K_{i, 1}) = (Var(Q_{1, i}) + \mathbb{E}[Q_{1, i}]^2)(Var(K_{i, 1}) + \mathbb{E}[K_{i, 1}]^2) - \mathbb{E}[Q_{1, i}]^2\mathbb{E}[K_{i, 1}]^2 = 1$

And so summing up $d_k$ of them:

$\mathbb{E} \left[\displaystyle \sum_{i=1}^{d_k} Q_{1,i} K_{i, 1} \right] = \displaystyle \sum_{i=1}^{d_k} \mathbb{E} \left[ Q_{1,i} K_{i, 1} \right] = 0 \\ Var\left(\displaystyle \sum_{i=1}^{d_k} Q_{1,i} K_{i, 1} \right) = \displaystyle \sum_{i=1}^{d_k} Var\left( Q_{1,i} K_{i, 1} \right) = d_k$
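
A quick numerical sanity check of this result (arbitrary $d_k$, standard normal entries):

import numpy as np

d_k = 64
rng = np.random.default_rng(0)
Q = rng.normal(size=(d_k, d_k))   # zero mean, unit variance entries
K = rng.normal(size=(d_k, d_k))

entries = (Q @ K).ravel()
print(entries.var())   # close to d_k = 64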

For your code block: np.dot on 2-D arrays is the same as matrix multiplication, so it is already computing the $QK^T$ product analysed above, and the entries should therefore have variance close to $d_k$.

",42699,,42699,,12/8/2020 20:51,12/8/2020 20:51,,,,3,,,,CC BY-SA 4.0 25058,1,,,12/8/2020 3:25,,0,83,"

I have 6600 images and I am supposed to know the rotation of the object in each image. So, given an image, I want to regress to a single value.

My attempt: I use Resnet-18 to extract a feature vector of length 1000 from an image. This is then passed to three fully-connected layers: fc(1000, 512) -> fc(512, 64) -> fc(64, 1)

The problem I am facing right now is that my training loss and validation loss immediately go down after the first 5 epochs and then they barely change. But my training and validation accuracy fluctuate wildly throughout.

I understand that I am experiencing over-fitting and I have done the following to deal with it:

  • data augmentation (Gaussian noise and color jittering)
  • L1 regularization
  • dropout

So far, nothing seems to be changing the results much. The next thing I haven't tried is reducing the size of my neural net. Will that help? If so, how should I reduce the size?

",42805,,,,,12/8/2020 3:25,How should I use deep learning to find the rotation of an object from its 2D image?,,0,2,,,,CC BY-SA 4.0 25059,1,36467,,12/8/2020 3:31,,0,263,"

I am working on LSTM and CNN to solve the time series prediction problem.

I have seen some tutorial examples of time series prediction using CNN-LSTM, but I don't know whether it is better than what I predicted using LSTM.

Could using LSTM and CNN together be better than predicting using LSTM alone?

",41045,,2444,,12/8/2020 11:24,8/23/2022 22:06,Time series prediction using LSTM and CNN-LSTM: which is better?,,3,5,,,,CC BY-SA 4.0 25060,2,,24984,12/8/2020 4:34,,5,,"

One can expect the optimal high-level features required to choose the next action and to evaluate a state to be quite similar. Because of that, it is a reasonable idea to share the same network for both the policy and the value function: you are essentially sharing the parameters of the feature-extraction part of your neural network, and fine-tuning the different heads of your network on the two different tasks, action choice and value prediction.

Using two networks vs one is mostly a question of sample efficiency: theoretically, your AC algorithm should work in both cases. In practice, however, it will usually be useful to have parameter sharing, as the representations encouraged by one of the tasks might be highly useful for the other and vice versa, enabling one task to get the other unstuck from local optima. Another reason why this might work better is simply that you do not have to learn the same (or at least similar) representations from scratch twice, leading to more sample-efficient training.
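
A minimal sketch of such a shared-trunk architecture (in Keras; sizes are arbitrary placeholders):

import tensorflow as tf

obs_dim, n_actions = 8, 4

inputs = tf.keras.Input(shape=(obs_dim,))
trunk = tf.keras.layers.Dense(64, activation="relu")(inputs)   # shared feature extractor
trunk = tf.keras.layers.Dense(64, activation="relu")(trunk)

policy_logits = tf.keras.layers.Dense(n_actions)(trunk)        # actor head
value = tf.keras.layers.Dense(1)(trunk)                        # critic head

model = tf.keras.Model(inputs, [policy_logits, value])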

",42207,,,,,12/8/2020 4:34,,,,1,,,,CC BY-SA 4.0 25061,1,,,12/8/2020 8:01,,1,391,"

In autonomous driving, we know that the behaviour prediction module is concerned with understanding how the agents in the environment will behave. Similarly, in the perception module, the tracking algorithms are responsible for getting an estimate of the object's state over time.

",42806,,2444,,12/8/2020 17:19,5/7/2021 19:03,What is the difference between object tracking and trajectory prediction?,,1,1,,,,CC BY-SA 4.0 25062,1,,,12/8/2020 9:44,,5,93,"

I'm trying to create a simple blog post on RNNs that should give better insight into how they work in Keras. Let's say:

model = keras.models.Sequential()
model.add(keras.layers.SimpleRNN(5, return_sequences=True, input_shape=[None, 1]))
model.add(keras.layers.SimpleRNN(5, return_sequences=True))
model.add(keras.layers.Dense(1))

I came up with the following visualization (this is only a sketch), which I'm quite unsure about:

The RNN architecture is comprised of 3 layers represented in the picture.

Question: is this correct? Is the input "flowing" through each layer neuron to neuron, or only through the layers, like in the picture below? Is there anything else that is not correct - any other visualizations to look into?

Update: my assumptions are based on my understanding of what I saw in Geron's book. The recurrent neurons are connected, see: https://pasteboard.co/JDXTFVw.png ... he then proceeds to talk about connections between different layers, see: https://pasteboard.co/JDXTXcz.png - did I misunderstand him, or is it just a peculiarity of the Keras framework?

",42808,,42808,,12/8/2020 11:39,12/28/2022 18:07,How to graphically represent a RNN architecture implemented in Keras?,,1,2,,,,CC BY-SA 4.0 25063,2,,25062,12/8/2020 10:41,,0,,"

The first image is correct. The information will flow from left to right in each layer and from top to bottom in between layers.

",20430,,20430,,12/8/2020 11:45,12/8/2020 11:45,,,,3,,,,CC BY-SA 4.0 25064,1,25080,,12/8/2020 10:46,,2,162,"

I am a medical doctor working on methodological aspects of health-oriented ML. Reproducibility, replicability, and generalisability are critical in this area. Among many questions, some are raised by adversarial attacks (AA).

My question is to be considered from a literature review point of view: suppose I want to check an algorithm from an AA point of view:

  • is there a systematic methodological approach to be used, relating the format of the data, the type of models, and AA? Conceptually, is there a taxonomy of AA? If so, practically, are some AA considered gold standards?
",42809,,2444,,12/9/2020 10:21,12/9/2020 10:47,Is there a taxonomy of adversarial attacks?,,1,0,,,,CC BY-SA 4.0 25065,1,,,12/8/2020 11:08,,2,32,"

Context

I was making a Transformer Model to convert English Sentences to German Sentences. But the loss stops reducing after some time.

Code

import string
import re
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Embedding, LSTM, RepeatVector, Dense, Dropout, BatchNormalization, TimeDistributed, AdditiveAttention, Input, Concatenate, Flatten
from tensorflow.keras.layers import Activation, LayerNormalization, GRU, GlobalAveragePooling1D, Attention
from tensorflow.keras.optimizers import Adam
from tensorflow.nn import tanh, softmax
import time
from tensorflow.keras.losses import SparseCategoricalCrossentropy, CategoricalCrossentropy
from numpy import array
from tensorflow.keras.utils import plot_model
from sklearn.utils import shuffle
import time
import tensorflow as tf
from numpy import array
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.datasets.imdb import load_data

def load_data(filename):
    file = open(filename, 'r')
    text = file.read()
    file.close()
    return text

def to_lines(text):
    return text.split('\n')

def clean_data(pair):
    pair = 'start_seq_ ' + pair + ' end_seq_'

    re_print = re.compile('[^%s]' % re.escape(string.printable))
    table = str.maketrans('', '', string.punctuation)
    tokens = [token.translate(table) for token in pair.split()]
    tokens = [token.lower() for token in tokens]
    tokens = [re_print.sub('', token) for token in tokens]
    tokens = [token for token in tokens if token.isalpha()]
    return tokens

lines = to_lines(load_data('/content/drive/My Drive/spa.txt'))

english_pair = []
german_pair = []
language = []
for line in lines:
    if line != '':
        pairs = line.split('\t')
        english_pair.append(clean_data(pairs[0]))
        german_pair.append(clean_data(pairs[1]))

        language.append(clean_data(pairs[0]))
        language.append(clean_data(pairs[1]))

english_pair = array(english_pair)
german_pair = array(german_pair)
language = array(language)

def create_tokenizer(data):
    tokenizer = Tokenizer()
    tokenizer.fit_on_texts(data)
    return tokenizer

def max_len(lines):
    length = []
    for line in lines:
        length.append(len(line))
    return max(length)

tokenizer = create_tokenizer(language)

vocab_size = len(tokenizer.word_index) + 1

max_len = max_len(language)

def create_sequences(sequences, max_len):
    sequences = tokenizer.texts_to_sequences(sequences)
    sequences = pad_sequences(sequences, maxlen=max_len, padding='post')
    return sequences

X1 = create_sequences(english_pair, max_len)
X2 = create_sequences(german_pair, max_len)
Y = create_sequences(german_pair, max_len)


X1, X2, Y = shuffle(X1, X2, Y)

training_samples = int(X1.shape[0] * 1.0)

train_x1, train_x2, train_y = X1[:training_samples], X2[:training_samples], Y[:training_samples]
test_x1, test_x2, test_y = X1[training_samples:], X2[training_samples:], Y[training_samples:]

train_x2 = train_x2[:, :-1]
test_x2 = test_x2[:, :-1]
train_y = train_y[:, 1:].reshape(-1, max_len-1)
test_y = test_y[:, 1:].reshape(-1, max_len-1)

train_x2 = pad_sequences(train_x2, maxlen=max_len, padding='post')
test_x2 = pad_sequences(test_x2, maxlen=max_len, padding='post')

train_y = pad_sequences(train_y, maxlen=max_len, padding='post')
test_y = pad_sequences(test_y, maxlen=max_len, padding='post')

All the code above just prepares the data, so you can skip that part if you want. The code after this starts implementing the Transformer model.

class EncoderBlock(tf.keras.layers.Layer):
    def __init__(self, mid_ffn_dim, embed_dim, num_heads, max_len, batch_size):
        super(EncoderBlock, self).__init__()
        # Variables
        self.batch_size = batch_size
        self.max_len = max_len
        self.mid_ffn_dim = mid_ffn_dim
        self.embed_dim = embed_dim
        self.num_heads = num_heads
        self.attention_vector_len = self.embed_dim // self.num_heads
        if self.embed_dim % self.num_heads != 0:
            raise ValueError('I am Batman!')

        # Trainable Layers
        self.mid_ffn = Dense(self.mid_ffn_dim, activation='relu')
        self.final_ffn = Dense(self.embed_dim)

        self.layer_norm1 = LayerNormalization(epsilon=1e-6)
        self.layer_norm2 = LayerNormalization(epsilon=1e-6)

        self.combine_heads = Dense(self.embed_dim)

        self.query_dense = Dense(self.embed_dim)
        self.key_dense = Dense(self.embed_dim)
        self.value_dense = Dense(self.embed_dim)

    def separate_heads(self, x):
        x = tf.reshape(x, (-1, self.max_len, self.num_heads, self.attention_vector_len))
        return tf.transpose(x, perm=[0, 2, 1, 3])

    def compute_self_attention(self, query, key, value):
        score = tf.matmul(query, key, transpose_b=True)
        dim_key = tf.cast(tf.shape(key)[-1], tf.float32)
        scaled_score = score / tf.math.sqrt(dim_key)
        weights = tf.nn.softmax(scaled_score, axis=-1)
        output = tf.matmul(weights, value)
        return output

    def self_attention_layer(self, x):
        query = self.query_dense(x)
        key = self.key_dense(x)
        value = self.value_dense(x)

        query_heads = self.separate_heads(query)    
        key_heads = self.separate_heads(key)
        value_heads = self.separate_heads(value)

        attention = self.compute_self_attention(query_heads, key_heads, value_heads)

        attention = tf.transpose(attention, perm=[0, 2, 1, 3]) 
        attention = tf.reshape(attention, (-1, self.max_len, self.embed_dim))

        output = self.combine_heads(attention)
        return output
        
    def get_output(self, x):
        attn_output = self.self_attention_layer(x)
        out1 = self.layer_norm1(x + attn_output)

        ffn_output = self.final_ffn(self.mid_ffn(out1))

        encoder_output = self.layer_norm2(out1 + ffn_output)
        return encoder_output

class DecoderBlock(tf.keras.layers.Layer):
    def __init__(self, mid_ffn_dim, embed_dim, num_heads, max_len, batch_size):
        super(DecoderBlock, self).__init__()
        # Variables
        self.batch_size = batch_size
        self.max_len = max_len
        self.mid_ffn_dim = mid_ffn_dim
        self.embed_dim = embed_dim
        self.num_heads = num_heads
        self.attention_vector_len = self.embed_dim // self.num_heads
        if self.embed_dim % self.num_heads != 0:
            raise ValueError('I am Batman!')

        # Trainable Layers

        self.query_dense1 = Dense(self.embed_dim, name='query_dense1')
        self.key_dense1 = Dense(self.embed_dim, name='key_dense1')
        self.value_dense1 = Dense(self.embed_dim, name='value_dense1')

        self.mid_ffn = Dense(self.mid_ffn_dim, activation='relu', name='dec_mid_ffn')
        self.final_ffn = Dense(self.embed_dim, name='dec_final_ffn')

        self.layer_norm1 = LayerNormalization(epsilon=1e-6)
        self.layer_norm2 = LayerNormalization(epsilon=1e-6)
        self.layer_norm3 = LayerNormalization(epsilon=1e-6)

        self.combine_heads = Dense(self.embed_dim, name='dec_combine_heads')

        self.query_dense2 = Dense(self.embed_dim, name='query_dense2')
        self.key_dense2 = Dense(self.embed_dim, name='key_dense2')
        self.value_dense2 = Dense(self.embed_dim, name='value_dense2')

    def separate_heads(self, x):
        x = tf.reshape(x, (-1, self.max_len, self.num_heads, self.attention_vector_len))
        return tf.transpose(x, perm=[0, 2, 1, 3])

    def compute_self_attention(self, query, key, value):
        score = tf.matmul(query, key, transpose_b=True)
        dim_key = tf.cast(tf.shape(key)[-1], tf.float32)
        scaled_score = score / tf.math.sqrt(dim_key)
        weights = tf.nn.softmax(scaled_score, axis=-1)
        output = tf.matmul(weights, value)
        return output

    def masking(self, x):
        b = []
        for batch in range(x.shape[0]):
            bat = []
            for head in range(x.shape[1]):
                headd = []
                for word in range(x.shape[2]):
                    current_word = []
                    for represented_in in range(x.shape[3]):
                        if represented_in > word:
                          current_word.append(np.NINF)
                        else:
                          current_word.append(0)
                    headd.append(current_word)
                bat.append(headd)
            b.append(bat)
        return b

    def compute_masked_self_attention(self, query, key, value):
        score = tf.matmul(query, key, transpose_b=True)
        score = score + self.masking(score)
        score = tf.convert_to_tensor(score)
                
        dim_key = tf.cast(tf.shape(key)[-1], tf.float32)
        scaled_score = score / tf.math.sqrt(dim_key)
        weights = tf.nn.softmax(scaled_score, axis=-1)
        output = tf.matmul(weights, value)
        return output

    def masked_self_attention_layer(self, x):
        query = self.query_dense1(x)
        key = self.key_dense1(x)
        value = self.value_dense1(x)

        query_heads = self.separate_heads(query)    
        key_heads = self.separate_heads(key)
        value_heads = self.separate_heads(value)

        attention = self.compute_masked_self_attention(query_heads, key_heads, value_heads)

        attention = tf.transpose(attention, perm=[0, 2, 1, 3]) 
        attention = tf.reshape(attention, (-1, self.max_len, self.embed_dim))

        output = self.combine_heads(attention)
        return output

    def second_attention_layer(self, x, encoder_output):
        query = self.query_dense2(x)
        key = self.key_dense2(encoder_output)
        value = self.value_dense2(encoder_output)

        query_heads = self.separate_heads(query)    
        key_heads = self.separate_heads(key)
        value_heads = self.separate_heads(value)

        attention = self.compute_self_attention(query_heads, key_heads, value_heads)

        attention = tf.transpose(attention, perm=[0, 2, 1, 3]) 
        attention = tf.reshape(attention, (-1, self.max_len, self.embed_dim))

        output = self.combine_heads(attention)
        return output
      
    def get_output(self, x, encoder_output):
        masked_attn_output = self.masked_self_attention_layer(x)
        out1 = self.layer_norm1(x + masked_attn_output)

        mutli_head_attn_output = self.second_attention_layer(out1, encoder_output)
        out2 = self.layer_norm2(out1 + mutli_head_attn_output)

        ffn_output = self.final_ffn(self.mid_ffn(out2))
        decoder_output = self.layer_norm3(out2 + ffn_output)
        return decoder_output

embed_dim = 512
mid_ffn_dim = 1024

num_heads = 8
max_len = max_len
batch_size = 32

encoder_block1 = EncoderBlock(mid_ffn_dim, embed_dim, num_heads, max_len, batch_size)
encoder_block2 = EncoderBlock(mid_ffn_dim, embed_dim, num_heads, max_len, batch_size)
encoder_block3 = EncoderBlock(mid_ffn_dim, embed_dim, num_heads, max_len, batch_size)

decoder_block1 = DecoderBlock(mid_ffn_dim, embed_dim, num_heads, max_len, batch_size)
decoder_block2 = DecoderBlock(mid_ffn_dim, embed_dim, num_heads, max_len, batch_size)
decoder_block3 = DecoderBlock(mid_ffn_dim, embed_dim, num_heads, max_len, batch_size)

# Define Loss and Optimizer
loss_object = SparseCategoricalCrossentropy()
optimizer = Adam()

embedding = Embedding(vocab_size, embed_dim, name='embedding')
position_embedding = Embedding(vocab_size, embed_dim)

final_transformer_layer = Dense(vocab_size, activation='softmax')

def positional_embedding(x):
    positions = tf.range(start=0, limit=max_len, delta=1)
    positions = position_embedding(positions)
    return x + positions

def train_step(english_sent, german_sent, german_trgt):
    with tf.GradientTape() as tape:
        english_embedded = embedding(english_sent)
        german_embedded = embedding(german_sent)

        english_positioned = positional_embedding(english_embedded)
        german_positioned = positional_embedding(german_embedded)

        # Encoders
        encoder_output = encoder_block1.get_output(english_positioned)
        encoder_output = encoder_block2.get_output(encoder_output)
        encoder_output = encoder_block3.get_output(encoder_output)

        # Decoders
        decoder_output = decoder_block1.get_output(german_positioned, encoder_output)
        decoder_output = decoder_block2.get_output(decoder_output, encoder_output)
        decoder_output = decoder_block3.get_output(decoder_output, encoder_output)

        # Final Output
        transformer_output = final_transformer_layer(decoder_output)

        # Compute Loss
        loss = loss_object(german_trgt, transformer_output)

    variables = embedding.trainable_variables + position_embedding.trainable_variables + encoder_block1.trainable_variables + encoder_block2.trainable_variables
    variables += encoder_block3.trainable_variables + decoder_block1.trainable_variables + decoder_block2.trainable_variables + decoder_block3.trainable_variables
    variables += final_transformer_layer.trainable_variables

    gradients = tape.gradient(loss, variables)
    optimizer.apply_gradients(zip(gradients, variables))

    return float(loss)

def train(epochs=10):
    batch_per_epoch = int(train_x1.shape[0] / batch_size)
    for epoch in range(epochs):
        for i in range(batch_per_epoch):
            english_sent_x = train_x1[i*batch_size : (i*batch_size)+batch_size].reshape(batch_size, max_len)
            german_sent_x = train_x2[i*batch_size : (i*batch_size)+batch_size].reshape(batch_size, max_len)
            german_sent_y = train_y[i*batch_size : (i*batch_size)+batch_size].reshape(batch_size, max_len, 1)

            loss = train_step(english_sent_x, german_sent_x, german_sent_y)

            print('Epoch ', epoch, 'Batch ', i, '/', batch_per_epoch, 'Loss ', loss)

train()

And the code is done! But the loss stops reducing at around a value of 1.2 after some time. Why is this happening?

Maybe Important

I tried debugging the model, by passing random input integers, and the model was still performing the same way it did when I gave real Sentences as input.

When I tried training the model with just 1 training sample, the loss stopped reducing at around 0.2. When I trained it with 2 training samples, the result was approximately the same as with 1 training sample.

When I stopped shuffling the dataset, the loss went down to around 0.7 and then again stopped decreasing.

I tried simplifying the model by removing some encoder and decoder blocks but the results were approximately the same. I even tried making the model more complex but the results were again approximately the same.

",42813,,,,,12/8/2020 11:08,Why does the loss stops reducing after a point in this Transformer Model?,,0,4,,,,CC BY-SA 4.0 25072,2,,2245,12/8/2020 16:36,,0,,"

This answer already gives the idea of what a trap function (sometimes known as a deceptive function) is. However, given that work on trap functions is not abundant in the literature (at least, this topic is not covered extensively in one of my reference books, i.e. this one, which on page 211, exercise 11.7, only mentions a specific deceptive function but does not define what a trap function is), let me also provide you with a few references, in case you are looking for more details and formulations.

The book Evolutionary Computation 1: Basic Algorithms and Operators also mentions deceptive functions and deceptive problems several times. Moreover, the Python package GA_kit includes a module to evaluate GAs with deceptive functions, in case you learn better by looking at or playing with the code.

",2444,,2444,,12/8/2020 17:52,12/8/2020 17:52,,,,0,,,,CC BY-SA 4.0 25073,2,,25061,12/8/2020 16:44,,1,,"

Although both processes might be doing estimations (because data sources aren't perfect and/or have noise), there is a key difference:

  • Object tracking cares solely about where objects are now. That means that there is actual sensor data that can support the current position. For example: from a camera and a lidar, the computer estimates where a vehicle stands.

  • Trajectory prediction is done with the main purpose of predicting where objects will be in the future, meaning that there is no sensor data yet. For example: from past data, the computer predicts where a vehicle will be after 1 second from now.

However, both processes might need each other. In order to predict a trajectory, it could be easier to work with curated positions given by an object tracker than raw sensor data. On the other hand, past predictions might be given as input to an object tracker, in order to mitigate noise in the sensor data, thus acting as a belief-based filter.

",27444,,,,,12/8/2020 16:44,,,,2,,,,CC BY-SA 4.0 25074,1,25087,,12/8/2020 21:04,,4,3543,"

I want to create a Deep Learning model that measures the distance between the camera and certain objects in an image. Is it possible? Please, let me know some resources related to this task.

",36107,,2444,,12/9/2020 22:33,12/10/2020 9:44,How to calculate the distance between the camera and an object using Computer Vision?,,2,0,,,,CC BY-SA 4.0 25075,1,,,12/9/2020 6:18,,2,32,"

Given a supervised problem with X, y input pairs, one can do two things for obtaining the function f that maps X with y with Neural Networks (and in general in machine learning):

  • Deploy directly a supervised learning algorithm that maps X to y

  • Deploy a (variational) auto-encoder for learning useful features, and then using these for training the supervised learning algorithm

I would like to be pointed to some papers/blogs that explain which technique is better and when or where they conduct empirical benchmarking experiments.

",42832,,2444,,12/9/2020 10:15,12/9/2020 10:15,Literature on the advantages of using an auto-encoder for classification,,0,2,,,,CC BY-SA 4.0 25077,1,,,12/9/2020 10:14,,2,62,"

I want to code up one time step in a LSTM. My focus is on understanding the functioning of the forget gate layer, input gate layer, candidate values, present and future cell states.

Let's assume that my hidden state at t-1 and xt are the following. For simplicity, let's assume that the weight matrices are identity matrices and all biases are zero.

htminus1 = np.array( [0, 0.5, 0.1, 0.2, 0.6] )
xt = np.array( [-0.1, 0.3, 0.1, -0.25, 0.1] )

I understand that forget state is sigmoid of htminus1 and xt

So, is it?

ft = 1 / ( 1 + np.exp( -( htminus1 + xt ) ) )

>> ft = array([0.47502081, 0.68997448, 0.549834  , 0.4875026 , 0.66818777])

I am referring to this link to implement one iteration of a one-block LSTM. The link says that ft should be 0 or 1. Am I missing something here?

How do I get the forget gate layer as per the schema given in the picture below? An example would be illustrative for me.

Along the same lines, how do I get the input gate layer, $i_t$, and the vector of new candidate values, $\tilde{C}_t$, as per the following picture?

Finally, how do I get the new hidden state ht as per the scheme given in the following picture?

A simple example will be helpful for me in understanding. Thanks in advance.

",42837,,42837,,12/9/2020 10:51,12/10/2020 11:30,Understanding LSTM through example,,1,0,,,,CC BY-SA 4.0 25080,2,,25064,12/9/2020 10:47,,2,,"

There are already a couple of papers in the literature that attempt to provide a taxonomy and survey of adversarial attacks. I will just list the two that I think are reliable enough that you can probably use as a reference.

Needless to say, there are different adversarial attacks, such as the Fast Gradient Sign Method (FGSM), and they can be classified into different categories, such as evasion attacks or poisoning attacks. You can find a lot more info in these cited papers.

",2444,,,,,12/9/2020 10:47,,,,0,,,,CC BY-SA 4.0 25081,2,,25074,12/9/2020 11:06,,4,,"

You can use OpenCV with Python to find the distance.

You can refer to this: Vehicle detection and distance estimation.

Since you didn't mention a dataset, you may consider the datasets and methods in this paper.

If your camera is fixed or has many objects in front of it, you might use a nearest-object-around-the-camera approach. This approach is extremely useful if you want to deal with latitudes and longitudes.

",41585,,,,,12/9/2020 11:06,,,,0,,,,CC BY-SA 4.0 25082,2,,25077,12/9/2020 11:52,,1,,"

This is an image to better understand the LSTM... For $f_t$, we take the sigmoid of (a weight matrix * the input at the current timestep + another weight matrix * $h_{t-1}$).

Code Sample for $f_t$:

import numpy as np

def sigmoid(values):
    # element-wise logistic function
    return 1 / (1 + np.exp(-values))

input_len = 5          # length of x_t
hidden_vector_len = 5  # length of h_{t-1}

x_t = np.array([-0.1, 0.3, 0.1, -0.25, 0.1])           # current input (xt from the question)
prev_hidden_state = np.array([0, 0.5, 0.1, 0.2, 0.6])  # h_{t-1} (htminus1 from the question)

w1 = np.random.uniform(0, 1, size=[hidden_vector_len, input_len])
w2 = np.random.uniform(0, 1, size=[hidden_vector_len, hidden_vector_len])

# f_t = sigmoid(W_f x_t + U_f h_{t-1}); it's matrix multiplication, not just simple multiplication
f_t = sigmoid(np.dot(w1, x_t) + np.dot(w2, prev_hidden_state))

Note - There is also a bias term which I haven't included here for simplicity

If you understood $f_t$, you can do the same for the other gates and states as well.
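
For completeness, continuing the snippet above with the same simplification (reusing w1 and w2 for every gate just to keep the sketch short; a real LSTM has separate weight matrices per gate, plus biases), the remaining gates and states could be sketched as:

# input gate, candidate cell state and output gate
i_t = sigmoid(np.dot(w1, x_t) + np.dot(w2, prev_hidden_state))
c_tilde_t = np.tanh(np.dot(w1, x_t) + np.dot(w2, prev_hidden_state))
o_t = sigmoid(np.dot(w1, x_t) + np.dot(w2, prev_hidden_state))

# new cell state and new hidden state (previous cell state assumed to be zeros here)
prev_cell_state = np.zeros(hidden_vector_len)
c_t = f_t * prev_cell_state + i_t * c_tilde_t
h_t = o_t * np.tanh(c_t)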

If you feel I am wrong anywhere in this post, then please do consider adding a comment

",42813,,42813,,12/10/2020 11:30,12/10/2020 11:30,,,,2,,,,CC BY-SA 4.0 25084,1,,,12/9/2020 14:37,,1,349,"

During my research on Google DeepMind's Go-playing program AlphaGo and its successor AlphaGo Zero, I discovered that the system uses a clever pipeline and an interplay of policy and value networks to play the game of Go in such a way that it is able to outperform even the best players in the world. This is particularly remarkable because the game of Go was considered to be out of reach for computer programs only a few years ago. This success gained international attention and was labeled as a breakthrough in the AI community. It is also no secret that the research team behind AlphaGo and AlphaGo Zero used a lot of computation power to create such a sophisticated system.

But, since each board configuration is considered a distinct state, where such algorithms can be applied really well, and considering AlphaGo Zero, which uses no prior knowledge and can figure out how to play the game of Go from scratch, my question is the following:

Is there any way to state (theoretically) how the performance of AlphaGo would be in continuous action spaces (e.g. self-driving cars)?

",26494,,2444,,12/10/2020 10:29,12/10/2020 10:29,What would be the AlphaGo's performance in continuous action space?,,0,3,,,,CC BY-SA 4.0 25085,1,,,12/9/2020 14:54,,0,144,"

I am trying to set up an experiment where an agent is exploring an n x n gridworld environment, of which the agent can see some fraction at any given time step. I'd like the agent to build up some internal model of this gridworld.

Now the environment is time-varying, so I figured it would be useful to try using an LSTM so the agent can learn potentially useful information about how the environment changes. However, since the agent can only see some of the environment, each observation that could be used to train this model would be incomplete (i.e. the problem is partially-observable from this perspective). Thus I imagine that training such a network would be difficult since there would be large gaps in the data - for example, it may make an observation at position [0, 0] at t = 0, and then not make another observation there until say t = 100.

My question is twofold

  1. Is there a canonical way of working around partial observability in LSTMs? Either direct advice or pointing to useful papers would both be appreciated.
  2. Can an LSTM account for gaps in time between observations?

Thanks!

",42846,,,,,12/9/2020 14:54,Using an LSTM for model-based RL in a POMDP,,0,2,,,,CC BY-SA 4.0 25086,1,25090,,12/9/2020 18:28,,8,850,"

I'm doing a project on Reinforcement Learning. I programmed an agent that uses DDQN. There are a lot of tutorials on that, so the code implementation was not that hard.

However, I have problems understanding how one should come up with this kind of algorithm by starting from the Bellman equation, and I can't find a good, understandable explanation addressing this derivation/path of reasoning.

So, my questions are:

  1. How is the loss to train the DQN derived from (or theoretically motivated by) the Bellman equation?
  2. How is it related to the usual Q-learning update?

According to my current notes, the Bellman equation looks like this

$$Q_{\pi} (s,a) = \sum_{s'} P_{ss'}^a (r_{s,a} + \gamma \sum_{a'} \pi(a'|s') Q_{\pi} (s',a')) \label{1}\tag{1} $$

which, to my understanding, is a recursive expression that says: the value of the state-action pair is equal to the sum over all possible next states $s'$ of the probability of reaching that state after taking action $a$ (denoted as $P_{ss'}^a$, which means the environment acts on the agent) times (the reward the agent got from taking action $a$ in state $s$ + the discounted sum over the different possible actions $a'$ of their probabilities under the policy times the Q-value of the state-action pair $s',a'$).

The Q-Learning iteration (intermediate step) is often denoted as:

$$Q^{new}(s,a) \leftarrow Q(s,a) + \alpha (r + \gamma \max_{a'} Q(s',a') - Q(s,a)) \label{2}\tag{2}$$

which means that the new Q-value of the state-action pair is the old Q-value + the learning rate, $\alpha$, times the temporal difference, $(r + \gamma \max_{a'} Q(s',a') - Q(s,a))$, which consists of the actual reward the agent received + a discount factor times the maximal Q-value over the actions in the new state minus the old Q-value.

The Bellman equation can be converted into an update rule because an algorithm that uses that update rule converges, as this answer states.

In the case of (D)DQN, $Q(s,a)$ is estimated by our NN that leads to an action $a$ and we receive $r$ and $s'$.

Then we feed both $s$ and $s'$ into our NN (with Double DQN we feed them into different NNs). The $\max_{a'} Q(s',a')$ is computed on the output of our target network. This q-value is then multiplied by $\gamma$ and $r$ is added to the product. Then this sum replaces the q-value from the other NN. Since this basic NN outputted $Q(s,a)$ but should have outputted $r + \gamma \max_{a'} Q(s',a')$, we train the basic NN to change the weights, so that it outputs something closer to this temporal difference target.

",42849,,2444,,12/10/2020 17:39,12/10/2020 17:39,"How is the DQN loss derived from (or theoretically motivated by) the Bellman equation, and how is it related to the Q-learning update?",,1,0,,,,CC BY-SA 4.0 25087,2,,25074,12/9/2020 19:43,,7,,"

In general, calculation of distance between camera and object is impossible if you don't have further scene dependent information.

To my knowledge you have 3 options:

Stereo Vision

If you have 2 cameras looking at the same scene from a different point of view you can calculate the distance with classical Computer Vision algorithms. This is called stereo vision, or also multiview geometry. Stereo Vision is the reason why humans can infer the distance to objects around them (because we have 2 eyes).

Structure from Motion

You move your camera and therefore change your viewpoint and can essentially do stereo mapping over time. Structure from Motion

Scene Understanding

Why is it then still possible for a one-eyed person to infer depth to some extent? Because humans have lots of scene dependent understanding. If you see a rubber duck that takes half of your field of view, you know it's pretty close because you know a rubber duck is not big. If you don't know the size of rubber ducks it is impossible to know whether you see a big rubber duck that is far away or a small rubber duck that is really close.

This is where Deep Learning based models come into play. A recent overview over monocular depth estimation can be found in Zhao2020

",42852,,42852,,12/10/2020 9:44,12/10/2020 9:44,,,,0,,,,CC BY-SA 4.0 25088,1,,,12/9/2020 23:33,,1,41,"

Previously, I have build a donkey car where the steering of the two front wheels was done using a motor servo. This project was a success and the car was able to drive autonomously after training was done.

source Donkey Car 2

Now: I have this Rc Car kit that has two motors on the back, powering two wheels and a trolley wheel in front. The steering is supposed to be done by playing around with the two back motors.

My question is: Is there any method to modify the Donkey Car code, so I can train the model?

Considering that Donkey Car uses the angle of the servo to train the model, now I just have the information of the two back wheels and no servo steering the vehicle.

I'm not sure if there is an approach that is specific to this concept.

",38666,,,,,12/9/2020 23:33,Algorithms for training a two motor powered rc car without steering servo,,0,0,,,,CC BY-SA 4.0 25089,1,,,12/9/2020 23:37,,1,84,"

(Disclaimer: I don't know much about ML/AI, besides some basic ideas behind it all.)

It seems like ML/AI models can often be boiled down to statistics, where certain levers (weights) get fine-tuned based on the specific input of a large set of training data.

Clearly, ML/AI models don't distinguish themselves by their training data alone, otherwise there would not be so many improvements happening in the field all the time. My question therefore is: what distinguishes different models of the same category?

If I have an AI that completes real-life pictures that have some missing parts, and an AI that completes a painting with missing parts, what key concepts separates the two?

If I have an AI detecting text in an image, and an AI detecting... trees in an image, what key concepts separates the two?

In other words, what is stopping me from "taking" an existing implementation of a certain AI category, and just feeding it my specific training set + rewards (i.e. judgement criteria for good vs bad output), in order to solve a specific task?

In yet again other words, if I wanted to use ML/AI to build a new model for a specific task, what concepts and topics would I need to pay extra attention to? (I guess you could say I'm trying to reverse engineer the learning process of the field here. I don't have the time to properly teach myself and become an "expert", but find it all very interesting and would still like to use some of the wonderful things people have done.)

",42857,,2444,,12/10/2020 1:35,1/1/2023 15:08,"Aside from specific training sets, what distinguishes the capabilities of different AI implementations?",,1,5,,,,CC BY-SA 4.0 25090,2,,25086,12/10/2020 0:28,,2,,"

The Bellman equation in RL is usually defined $$v_\pi(s) = \sum_a \pi(a|s) \sum_{s', r} p(s', r|s, a)\left[r + \gamma v_\pi(s')\right] = \mathbb{E}_{s' \sim p, a \sim \pi}\left[r(s, a) + \gamma v_\pi(s')\right] \; .$$ The way you have written it is correct, but I just thought I would point this out. Regardless, your intuition is correct in that it expresses a recursive relationship such that the value of your current state $s$ is equal to the expected reward from this state plus the discounted expected value of the state you transition into.

You do, in fact, implement the Q-learning update in Deep Q-Learning. The loss function that you minimise in DQN is $$ L(\theta) = \mathbb{E}_{(s,a,r,s')\sim U(D)}\left[\left( r + \gamma \max_{a'}Q(s', a'; \theta^-) - Q(s, a; \theta)\right)^2 \right]\;$$ where $U(D)$ denotes uniformly at random from replay buffer $D$ and $\theta$ are your network parameters (the network parameterises the Q-function), and $\theta^-$ are a previous iteration of the parameters that are updated every $c$ episodes to help with convergence of the network.

As you can see, the loss function is minimising the 'Bellman error' from your equation (2). Let's think about why this is.

The TD update you provide gradually shifts the Q-value for $(s, a)$ towards $r + \gamma \max_{a'} Q(s', a')$ - this is what we want after all, since it eventually converges to the optimal Q-function.

Now let's think about the Deep Q-learning case. We want our network to approximate $Q(s, a)$, so if we train the network, using the MSE loss, with $r + \gamma \max_{a'} Q(s', a')$ as our target, then our network will gradually be shifted towards predicting $r + \gamma \max_{a'} Q(s', a')$ (which again would give us optimal Q-values for state-action pairs), just like with the TD update.
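
To make this concrete, here is a minimal sketch of that loss in TensorFlow; the q_network and target_network models (mapping a batch of states to per-action Q-values) and the float 0/1 dones flags are assumptions for illustration, not part of any particular implementation:

import tensorflow as tf

gamma = 0.99

def dqn_loss(q_network, target_network, states, actions, rewards, next_states, dones):
    # Bellman target: r + gamma * max_a' Q(s', a'; theta^-), with no bootstrap on terminal states
    next_q = tf.reduce_max(target_network(next_states), axis=1)
    targets = rewards + gamma * (1.0 - dones) * next_q
    # Q(s, a; theta) for the actions that were actually taken
    q_values = q_network(states)
    action_q = tf.reduce_sum(q_values * tf.one_hot(actions, q_values.shape[-1]), axis=1)
    # mean squared Bellman error; the target is treated as a constant
    return tf.reduce_mean(tf.square(tf.stop_gradient(targets) - action_q))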

This is assuming that you know how training of neural networks works so if you don't then I would recommend asking/searching for a relevant question that explains this.

",36821,,36821,,12/10/2020 10:48,12/10/2020 10:48,,,,4,,,,CC BY-SA 4.0 25091,1,,,12/10/2020 0:43,,1,130,"

In Generative Adversarial Networks, the Generator takes noise vector as input and feeds it forward to create an image. The noise vector consists of random numbers sampled from the normal distribution. In several examples that I've encountered, the noise vector had 100 numbers (implementation 1, implementation 2). Is there a reason this number is used? How does noise size affect the generation image?

",38076,,38076,,12/10/2020 1:48,12/10/2020 1:48,How does noise input size affect fake image generation with GANs?,,0,0,,,,CC BY-SA 4.0 25094,2,,25053,12/10/2020 7:16,,5,,"

I took a look at the Tensor2Tensor's source code implementation, and it seems like the loss function is the cross-entropy between the predicted probability matrix $\|\text{sentence length}\| \times \|\text{vocab}\|$ (right before taking the argmax to find the token to output), and the $\|\text{sentence length}\|$-length vector of token IDs as the true label.
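
As an illustration, a minimal sketch of such a loss with tf.keras (the shapes and random values below are made up purely for demonstration) could be:

import tensorflow as tf

vocab_size, sentence_len = 1000, 7
# predicted probability matrix of shape [sentence length, vocab], right before the argmax
probs = tf.nn.softmax(tf.random.normal([sentence_len, vocab_size]), axis=-1)
# true label: a vector of token IDs of shape [sentence length]
token_ids = tf.random.uniform([sentence_len], maxval=vocab_size, dtype=tf.int32)

loss = tf.keras.losses.SparseCategoricalCrossentropy()(token_ids, probs)
print(float(loss))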

",42699,,2444,,10/19/2021 12:20,10/19/2021 12:20,,,,2,,,,CC BY-SA 4.0 25095,1,25104,,12/10/2020 8:59,,0,601,"

I am not familiar with deep learning and PyTorch, and I want to know how to deal, in general, with such a situation. So, I was wondering: if I use a pretrained model (EfficientNet, for example) and I want to change the _fc attribute and use conv2d in it, how can I recover a 2D structure? Because the pretrained model flattens the features just before _fc.

for example, the pretrained model outputs a flattened feature vector of 1280 elements what I did is the following:

self.efficient_net._fc = nn.Sequential(
                nn.Linear(1280, 1225),
                nn.Unflatten(dim=1, unflattened_size=(1, 35, 35)),
                nn.Conv2d(1, 35, kernel_size=1),
                ...,
                )

I didn't have a specific height and width to recover in the 2D structure, so I assumed that h = w = some size and I used a linear layer whose output is equal to the square of that size. In the example above, 35² = 1225. I am not sure if the unflatten is the correct way to do this. Then I added the conv2d. My code works, but it doesn't give good results, which probably means that the 2D structure I recovered does not capture any meaningful information. Can anyone enlighten me with general knowledge about how things are done in my situation, or give me some comments? Thank you!

",40411,,40411,,12/10/2020 10:15,12/10/2020 22:09,How to use a conv2d layer after a flatten?,,1,4,,,,CC BY-SA 4.0 25096,1,,,12/10/2020 10:14,,1,8,"

When we translate a text from one language to another, how does the frequency of various POS tags change?

So, let's say we have a text in English, with 10% nouns, 20% adjectives, 15% adverbs, 25% verbs, etc., which we now translate to German, French, or Hindi. Can we say that in these other languages the POS tag frequency will remain the same as earlier?

",42863,,2444,,11/30/2021 15:25,11/30/2021 15:25,"When we translate a text from one language to another, how does the frequency of various POS tags change?",,0,0,,,,CC BY-SA 4.0 25097,1,25116,,12/10/2020 12:45,,1,830,"

Since transformers are good at processing sequential data, can we also use them for audio classification problems (same as RNNs)?

",42865,,2444,,12/11/2020 11:06,12/11/2020 11:06,Can we use transformers for audio classification tasks?,,1,0,,,,CC BY-SA 4.0 25098,1,,,12/10/2020 13:08,,1,22,"

I am looking for some known approach, or some previous work, on the following problem:

Let $\Sigma$ be an alphabet of symbols and $\Sigma^*$ be the set of all the strings that you can compose from this alphabet. Furthermore, let $f:\Sigma^*\rightarrow2^{\Sigma^*}$ be a function that assigns a certain set of $\Sigma$-strings to each $\Sigma$-string. Suppose you have a dataset $\mathcal{D}\subseteq\Sigma^*\times2^{\Sigma^*}$ of input-output pairs.

With this data, the goal is to learn a function $f^\prime:\Sigma^*\rightarrow2^{\Sigma^*}$ that, given a string $\sigma\in\Sigma^*$, gives any superset of $f(\sigma)$, i.e. $f^\prime(\sigma)\supseteq f(\sigma)$. Of course, returning the set of all strings is not a good solution, so $f^\prime(\sigma)$ should not be much larger than $f(\sigma)$ (to give a rough idea, if $|f(\sigma)|=10$, then $|f^\prime(\sigma)|=100$ would still be ok, but $|f^\prime(\sigma)|=10000$ wouldn't). To give an intuitive reason behind this, I already have an algorithm which, given a $\sigma$ and a set $S\supseteq f(\sigma)$, returns $f(\sigma)$. However, this algorithm has an extremely high time complexity (growing with $|S|$), and I want to use this machine learning approach to narrow down the search.

I would like to use any Machine Learning approach (from Evolutionary Computing to Deep Learning) to solve this problem.

So far my only idea would be to use an encoder-decoder architecture. I construct character embeddings for all symbols in $\Sigma$, and then through some neural architecture (I was thinking about an LSTM) I aggregate them to obtain a string representation. Given this, the decoder generates in sequence all the elements of the corresponding set (in a similar, but inverse, fashion).

This is clearly not optimal, because sets lack any meaningful order, and this approach is order-dependent (by nature of LSTMs and decoders in general). Of course, I could always sort all sets, but this still imposes a structure on my problem that is not there, and I feel like this could make it harder to solve.

So, in sum, my question is: Is there any known approach to the problem of generating sets of objects from a given input in the literature? If not, how could I improve my approach?

",23527,,23527,,12/10/2020 14:06,12/10/2020 14:06,Is there any known approach to generate sets of objects?,,0,0,,,,CC BY-SA 4.0 25099,1,,,12/10/2020 13:14,,1,1428,"

Attention models/gates are used to focus/pay attention to the important regions. According to this paper, the authors describe that a model with Attention Gate (AG) can be trained from scratch. Then the AGs automatically learn to focus on the target.

What I am having trouble understanding is that, in the context of computer vision, doesn't a filter from the convolutional layers learn the region of interest?

The authors say that adding Attention Gate reduces complexity when compared with multi-stage CNNs. But the job a trained AG would do is the same as that of a filter in a convolutional layer that would lead to the correct output, right?

",41564,,2444,,12/11/2020 11:11,12/12/2020 18:48,What is the difference between Attention Gate and CNN filters?,,1,0,0,,,CC BY-SA 4.0 25101,1,,,12/10/2020 16:25,,2,223,"

From Wikipedia, in the Monte-Carlo Tree Search algorithm, you should choose the node that maximizes the value:

$$\frac{w_i}{n_i} + c\sqrt{\frac{\ln N_i}{n_i}},$$

where

  • ${w_{i}}$ stands for the number of wins for the node considered after the $i$-th move,

  • ${n_{i}}$ stands for the number of simulations for the node considered after the $i$-th move,

  • $N_{i}$ stands for the total number of simulations after the $i$-th move run by the parent node of the one considered

  • $c$ is the exploration parameter—theoretically equal to $\sqrt{2}$; in practice usually chosen empirically.

Here (and I've seen in other places as well) it claims that the theoretical ideal value for $c$ is $\sqrt{2}$. Where does this value come from?

(Note: I did post this same question on cross-validated before I knew about this (more relevant) site)

",42870,,2444,,12/10/2020 17:34,12/10/2020 17:34,Why is the ideal exploration parameter in the UCT algorithm $\sqrt{2}$?,,0,1,,,,CC BY-SA 4.0 25102,1,25115,,12/10/2020 16:27,,3,579,"

Raul Rojas's book on Neural Networks dedicates section 8.4.3 to explaining how to do second-order backpropagation, that is, computing the Hessian of the error function with respect to two weights at a time.

What problems are easier to solve using this approach rather than first-order backpropagation?

",14892,,2444,,12/12/2020 11:30,12/12/2020 11:30,Why is second-order backpropagation useful?,,1,0,0,,,CC BY-SA 4.0 25104,2,,25095,12/10/2020 22:09,,1,,"

To answer the question in the title, your enclosed method is a valid way to use 2d convs after a flattened feature vector. However, the bad results you experience could come from the structure of your model or from the way you train it. Regarding your last question, it is very hard to give you advice without knowing your intentions in detail. Regardless, here are my two cents.

First of all, in the case you actually want to use this approach, to have a pretrained model and add your layers after one of its layers, you might want to keep the parameters of the original network intact at least until your newly initialized layers get trained properly. To achieve that, you need to use the gradients for updating only the parts you added as described in this comment. (You definitely should read the other comments there as well to get a better picture.)
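
For instance, a minimal sketch of this (assuming the efficientnet-pytorch style self.efficient_net and _fc attribute used in the question) could be:

# freeze every pretrained parameter ...
for param in self.efficient_net.parameters():
    param.requires_grad = False

# ... then unfreeze only the newly added head, so that only it gets updated
for param in self.efficient_net._fc.parameters():
    param.requires_grad = True

A common follow-up, once the new head has converged reasonably, is to unfreeze the rest of the network and fine-tune everything with a small learning rate.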

Secondly, it might be worth reconsidering if you really want to add your layers after the very last layers of the pretrained network. Depending on your goals, using the output of some prior layers as the input to your layers might be more beneficial to you. (Just do not forget to keep the parameters of the pretrained model intact as I advised in the prior point.)

Lastly, the structure of your layers should also be reconsidered. 2d convs with a kernel size of 1x1 in this situation seem strange to my limited experience, but without knowing what you want to do, it's hard to give any solid advice in this regard.

Therefore, you might be better off splitting your question into smaller parts and working your way through them one by one.

(My reputation is not high enough otherwise I would have left a comment.)

",40549,,,,,12/10/2020 22:09,,,,1,,,,CC BY-SA 4.0 25105,1,,,12/10/2020 22:14,,1,95,"

In the paper On the Variance of the Adaptive Learning Rate and Beyond, in section 2, the authors write

To further analyze this phenomenon, we visualize the histogram of the absolute value of gradients on a log scale in Figure 2. We observe that, without applying warmup, the gradient distribution is distorted to have a mass center in relatively small values within 10 updates. Such gradient distortion means that the vanilla Adam is trapped in bad/suspicious local optima after the first few updates.

Here is figure 2 from the paper.

Can someone explain this part?

Such gradient distortion means that the vanilla Adam is trapped in bad/suspicious local optima after the first few updates.

Why is this true?

",42535,,2444,,12/11/2020 0:21,12/31/2022 11:08,Why is Adam trapped in bad/suspicious local optima after the first few updates?,,1,0,,,,CC BY-SA 4.0 25106,1,,,12/10/2020 22:29,,0,32,"

After reading this topic on GitHub about how long it takes to train YOLOv3 on the COCO dataset, I was wondering how researchers deal with long training times while inventing new architectures.

I imagine that to evaluate the model you need to train it first. How do they make tweaks to their architectures, e.g. tweaking layers, adding pooling, dropout, etc., if training can take a few days? Is it pure art, with architectures designed roughly, or is it a more deliberate process?

What are the steps of engineering new architecture using deep neural networks?

",42878,,,,,12/11/2020 7:06,What is the process of inventing deep neural network models? How researchers deal with long training times?,,1,0,,,,CC BY-SA 4.0 25108,1,,,12/11/2020 2:43,,1,21,"

I'm working on a project to equip model locomotives with sound boards. I'm in the process of designing the board at the moment, and the idea is to allow users to load their own sound files onto an SD card plugged into the board.

Conventionally, model locomotive sounds are collected from high-fidelity microphones placed on and around the real engine in question. The engine is started up then put through idle and all of the different notches, as well as dynamic braking, horn and bell sounds, etc. This practice is very expensive because you have to find a willing (usually small) railroad or museum, pay for travel expenses, and diesel fuel ain't exactly cheap at the volumes these engines go through. Secondly, newer engines are hard to record because railroads aren't exactly in the business of letting hobbyists tape microphones all over their money making machines. As such, the main cost for a sound board comes not from the circuit's BOM cost, but from the effort required to get sounds from locomotives.

What there's plenty of are YouTube videos of amateur rail enthusiasts taking videos of locomotive sightings at close(ish) proximity, including startups and shutdowns. My question is - is there a way to take a bunch of different audio recordings of the same engine, remove noise and the doppler effect, and from that create a profile that can be used to simulate what the engine might sound like at different throttle notches? Is machine learning the right tool for this?

",42883,,,,,12/11/2020 2:43,Can Machine Learning be used to synthesize engine sounds?,,0,1,,,,CC BY-SA 4.0 25109,1,,,12/11/2020 3:54,,4,263,"

I trained a simple model to recognize handwritten numbers from the mnist dataset. Here it is:

model = Sequential([
    Conv2D(filters=1, kernel_size=(3,1), padding='valid', strides=1, input_shape=(28, 28, 1)),
    Flatten(),
    Dense(10, activation='softmax')])

I experimented with varying the number of filters for the convolutional layer, while keeping other parameters constant(learning rate=0.0001, number of episodes=2000, training batch size=512). I used 1, 2, 4, 8, and 16 filters, and the model accuracy was 92-93% for each of them.

From my understanding, during training the filters may learn to recognize various types of edges in the image (e.g., vertical, horizontal, round). This experiment made me wonder whether any of the filters end up being duplicates -- having the same or similar weights. Is there anything that prevents that?

",38076,,38076,,12/30/2020 16:14,12/30/2020 16:14,Is there anything that ensures that convolutional filters don't end up the same?,,1,3,,,,CC BY-SA 4.0 25111,2,,25109,12/11/2020 6:16,,4,,"

No, nothing really prevents the weights from ending up the same. In practice, though, they almost always end up different, because that makes the model more expressive (i.e. more powerful), so gradient descent learns to do that. If a model has $n$ features, but 2 of them are the same, then the model effectively has $n-1$ features, which is a less expressive model than one with $n$ features, and therefore usually has a larger loss.

But even if the weights are different, some of them can be very similar. If you visualize the first layer of your convolution filters, and you have a large number of them (e.g. 100), you will see some of them are learning to detect roughly the same edges (same orientation and placement). These features are so similar, they are effectively redundant in the model and do not add to its predictive power.

There's actually an entire field of research on identifying redundant features and pruning them. Le Cun shows in Optimal Brain Damage that pruning out redundant features not only make the model smaller and faster for inference, but can also even improve the model's accuracy.

Here is a blog post for a high level overview of one of the pruning methods for more info.

",42699,,,,,12/11/2020 6:16,,,,0,,,,CC BY-SA 4.0 25112,2,,21118,12/11/2020 6:32,,3,,"

I think the colloquial understanding of Gödel's incompleteness theorems allows them to be too broadly applied. Gödel's second incompleteness regards the consistency of a formal system, which is a technical concept of formal systems that means the system cannot prove every formula. It is commonly framed as a system not being able to prove both a formula and its negation (e.g. $2+2=4$ and $2+2 \neq 4$), since many logical systems allow anything to be proven from a contradiction.

The second incompleteness theorem states that if a consistent formal system is expressive enough to encode basic arithmetic (Peano arithmetic), then that system cannot prove its own consistency. This implies that we must use a stronger system B to prove the consistency of A. The system needs to be able to represent arithmetic because that is what is used to define the representability conditions of Gödel's proof that allowed him to formally construct the self-referential formulae central to the incompleteness theorems.

Here I diverge with my own opinion on this, I feel the concept of consistency in formal systems has no obvious bearing on the limits of artificial intelligence. An intelligent agent need not know anything about formal consistency to reach its level of intelligence -- the vast majority of humans have never encountered this concept, and yet they are still intelligent. Even many mathematicians don't give it a second thought unless they are in the trenches of mathematical logic. One would have to take an overly narrow view of artificial intelligence to allow Gödel's second incompleteness to serve as a limitation to it.

I caution against the popular informal restatements of Gödel's incompleteness theorems. These theorems were undoubtedly earth-shattering in the study of foundational mathematics and still have grand implications today, but projecting those results too far away from their rigorous origins is going to lead to many stray conclusions.

",20955,,20955,,12/11/2020 18:12,12/11/2020 18:12,,,,0,,,,CC BY-SA 4.0 25113,2,,25105,12/11/2020 6:56,,0,,"

The authors describe their belief in Section 3:

Due to the lack of samples in the early stage, the adaptive learning rate has an undesirably large variance, which leads to suspicious/bad local optima.

Diving further, in section 3.2:

Adam uses the exponential moving average to calculate the adaptive learning rate. For gradients $\{g_1,\dots,g_t\}$, their exponential moving average has a larger variance than their simple average. Also, in the early stage ($t$ is small), the difference of the exponential weights of $\{g_1,\dots,g_t\}$ is relatively small (up to $1−β^{t−1}_2$).

It seems like the root issue is that exponential moving average, while great with many data samples, have too large of a variance with few data samples. It is this variance that sometimes allow the gradient descend in a bad optima.

",42699,,,,,12/11/2020 6:56,,,,0,,,,CC BY-SA 4.0 25114,2,,25106,12/11/2020 7:06,,2,,"

Part of the answer is that new architectures are often variants of existing architectures. There are some rough heuristics people follow (e.g. using powers of 2 in layer sizes, changing layer sizes according to a schedule, adding normalization layers, etc.). However, even when building off an existing architecture, there can still be a lot of toying around with different configurations and hyperparameters, which can require copious training time.

Researchers working for a company will likely have the luxury of some serious compute, e.g. GPU clusters or cloud services like AWS. This can really make a huge difference over training a model on your PC GPU. Further, many models can be trained at once. Many academic institutions also employ cluster computing on dedicated GPU clusters - for example, here at the University of Utah our department has a cluster with 8 really excellent GPUs.

One last thing I will note, is that it's not always necessary to train the model to completion on the full training set to get a sense of which network configuration and hyperparameter settings are working better than others. If I have a massive dataset that will take days to train on, I may take a smaller subset of the training data and do some quick comparisons between different models to get a feel for what changes have the greatest impact on test performance. It's not completely reliable, but it can guide intuition to narrow down the number of models you want to train on the full dataset to convergence. For example, using this technique I was able to find that predictions from my model were fairly invariant to the exact layer sizes I was using and was far more sensitive to a hyperparameter in my optimization solver.

",20955,,,,,12/11/2020 7:06,,,,0,,,,CC BY-SA 4.0 25115,2,,25102,12/11/2020 7:25,,4,,"

Second-order optimization algorithms like Hessian optimization have more information on the curvature of the loss function, so converge much, much faster than first-order optimization algorithms like gradient descent. I remember reading somewhere that if you have $n$ weights in the neural network, one iteration of a second-order optimization algorithm will reduce the loss function at approximately the same rate as $n$ iterations of a standard first-order optimization algorithm. However, with recent advancements to gradient descent (momentum, adaptive rates, etc), the difference isn't as large anymore -- @EmmanuelMess pointed out a paper that states:

The performance of the proposed first order and second order methods with adaptive gain (BP-AG, CGFR-AG, BFGS-AG) with standard second order methods without gain (BP, CGFR, BFGS) in terms of speed of convergence evaluated in the number of epochs and CPU time. Based on some simulation results, it’s showed that the proposed algorithm had shown improvements in the convergence rate with 40% faster than other standard algorithms without losing their accuracy.

Here is a great post explaining the background behind the math of why this is the case.

Also, second-order gradients can help the optimizer identify states like saddle points and help the solver get out of those states. Saddle points give standard gradient descent a lot of issues, as it is slow to move out of them. Fixing the saddle point issue is one of the motivations for improving gradient descent over the last two decades (SGD with momentum, adaptive learning rates, ADAM, etc.). More info.

The issue, though, is that computing the second-order derivatives requires a matrix of size $n^2$ (the Hessian), as opposed to gradient descent, which only requires a vector of size $n$. The memory and computation become intractable for large networks, especially if you have millions of weights.

Some approaches exist which efficiently approximate the second-order optimization, solving the tractability problem. A popular one is L-BFGS. I haven't played around with it much, but I believe L-BFGS is not as popular as gradient descent algorithms (such as SGD-M, ADAM) because it is still very memory demanding (it requires storing about 20-100 previous gradient evaluations) and does not work well in a stochastic context (you cannot simply sample mini-batches to train on; you must train on the entire dataset in one pass per iteration). If those two are not an issue for you, then L-BFGS works pretty well, I believe.
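
For illustration, here is a minimal sketch of driving PyTorch's torch.optim.LBFGS with a closure; the toy model and full-batch data are made up for the example:

import torch

model = torch.nn.Linear(10, 1)
X, y = torch.randn(256, 10), torch.randn(256, 1)
loss_fn = torch.nn.MSELoss()

optimizer = torch.optim.LBFGS(model.parameters(), lr=0.1, history_size=20)

def closure():
    # L-BFGS may re-evaluate the loss several times per step
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    return loss

for _ in range(50):
    optimizer.step(closure)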

",42699,,2444,,12/12/2020 11:26,12/12/2020 11:26,,,,5,,,,CC BY-SA 4.0 25116,2,,25097,12/11/2020 7:34,,1,,"

Yes, Transformers can be used to work with audio data, such as audio processing (audio classification, speaker identification, etc) (Audio ALBERT), speech-to-text (Streaming Automatic Speech Recognition with the Transformer Model), and text-to-speech (Neural Speech Synthesis with Transformer Network).

",42699,,,,,12/11/2020 7:34,,,,0,,,,CC BY-SA 4.0 25121,2,,13010,12/11/2020 15:14,,0,,"

If you know the latitude-longitude of the trucks and the center, you can do the following,

Given the latitude-longitude location of the center and the radius (say R) within which you want to search for the presence of a truck around the center, you can find the latitude-longitude bounds of the space around the center within radius R by: Link1

You can find a Python implementation here: Link2

Once you know the bounds, you can simply check if the truck's location falls within the latitude-longitude bounds.
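
Alternatively to the bounding-box check, a minimal sketch of directly testing whether a truck lies within R km of the center using the haversine distance (the coordinates below are hypothetical) could be:

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance between two latitude-longitude points, in kilometres
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

center = (40.7128, -74.0060)   # hypothetical center
truck = (40.7306, -73.9352)    # hypothetical truck position
R = 5                          # search radius in km

print(haversine_km(*center, *truck) <= R)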

",41585,,,,,12/11/2020 15:14,,,,0,,,,CC BY-SA 4.0 25124,1,,,12/11/2020 23:42,,1,17,"

I have been reading recently about value and policy iteration. I tried to code the algorithms to understand them better and in the process I discovered something and I am not sure why is the case (or if my code is doing the right thing)

If I compute the expected value until convergence I get the following grid:

If I compute the optimal value with value iteration I will get the following.

As you can see, the values are different but their relative magnitude is similar, i.e. the 3rd column in the last row has the greatest value in both computations.

I believe this makes sense, as the expected value will tend to accumulate values in "promising" cells. But I assume that the first computation won't help much because it does not tell us what the optimal policy is, whereas the second does.

Is my understanding correct?

",42853,,2444,,12/12/2020 12:09,12/12/2020 12:09,Are the relative magnitudes of the learned and optimal state value function the same?,,0,0,,,,CC BY-SA 4.0 25127,1,25130,,12/12/2020 1:42,,2,1346,"

I am trying to understand and reproduce the Proximal Policy Optimization (PPO) algorithm in detail. One thing that I find missing in the paper introducing the algorithm is how exactly actions $a_t$ are generated given the policy network $\pi_\theta(a_t|s_t)$.

From the source code, I saw that discrete actions get sampled from some probability distribution (which I assume to be discrete in this case) parameterized by the output probabilities generated by $\pi_\theta$ given the state $s_t$.

However, what I don't understand is how continuous actions are sampled/generated from the policy network. Are they also sampled from a (probably continuous) distribution? In that case, which type of distribution is used and which parameters are predicted by the policy network to parameterize said distribution?

Also, is there any official literature that I could cite which introduces the method by which PPO generates its action outputs?

",37982,,37982,,12/16/2020 20:56,12/16/2020 20:56,How are continuous actions sampled (or generated) from the policy network in PPO?,,1,4,,,,CC BY-SA 4.0 25128,1,,,12/12/2020 6:05,,1,17,"

Context: Double Q-learning was introduced to prevent the maximization bias from q-learning. Instead of learning a single Q-network, we can learn two (or in general $K > 1$) and our Q-estimate would be the min across all these Q-networks.

Question: Does it make sense to share the layers of these Q-networks (except the last layer)?

So, instead of having 2 networks of size [64, 64, 2] (with ~8.5K parameters in total) we can have one network of size [64, 64, 4] (with ~4.3K params).

I couldn't see much of a downside to this, but all the implementations I've seen keep two completely different networks.

",36922,,2444,,12/12/2020 11:02,12/12/2020 11:02,Would it make sense to share the layers (except the last one) of the neural networks in Double DQN?,,0,0,,,,CC BY-SA 4.0 25129,2,,20158,12/12/2020 6:13,,1,,"

To me, the Bellman update is simply supervised learning: the right-hand side (the bootstrap) is a sample of the left-hand side (a conditional expectation). The Bellman equation simply tells us that the right-hand side is such a sample.

",36922,,,,,12/12/2020 6:13,,,,0,,,,CC BY-SA 4.0 25130,2,,25127,12/12/2020 6:22,,2,,"

As long as your policy (propensity) is differentiable, everything is good. Discrete, continuous, other, doesn't matter! :)

A common example for continuous spaces is the reparameterization trick, where your policy outputs $\mu, \sigma = \pi(s)$ and the action is $a \sim \mathcal{N}(\mu, \sigma)$.
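
For instance, a minimal sketch in PyTorch (policy_net here is a hypothetical network with two output heads, a mean and a log standard deviation per action dimension) could look like this:

import torch

def sample_action(policy_net, state):
    mu, log_std = policy_net(state)             # hypothetical two-headed policy network
    dist = torch.distributions.Normal(mu, log_std.exp())
    action = dist.rsample()                     # reparameterized sample, so gradients can flow
    log_prob = dist.log_prob(action).sum(-1)    # log-probability, e.g. for the PPO ratio
    return action, log_prob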

",36922,,2444,,12/12/2020 12:05,12/12/2020 12:05,,,,6,,,,CC BY-SA 4.0 25131,1,25151,,12/12/2020 6:30,,2,394,"

Before deep learning, I worked with machine learning problems where the data had a large class imbalance (30:1 or worse ratios). At that time, all the classifiers struggled, even after under-sampling the represented classes and creating synthetic examples of the underrepresented classes -- except Random Forest, which was a bit more robust than the others, but still not great.

What are guidelines for class distribution when it comes to deep learning (CNNs, ResNets, transformers, etc)? Must the representation of each class be 1:1? Or maybe it's "good enough" as long as it is under some ratio like 2:1? Or is deep learning completely immune to class imbalance as long as we have enough training data?

Furthermore, as a general guideline, should each class have a certain minimum number of training examples (maybe some multiple of the number of weights of the network)?

",42699,,42699,,12/13/2020 1:33,12/17/2020 0:40,How robust are deep networks to class imbalance?,,1,2,,,,CC BY-SA 4.0 25133,2,,25089,12/12/2020 10:26,,0,,"

If I understand what you mean correctly, then the answer is basically nothing. Fundamentally, all ML algorithms are doing the same thing, which is to optimize some weights for a certain output (this is true even for non-parametric methods in an implicit way, but let's not dive too deep here). The only differences are:

  1. Dataset the model is trained on
  2. Specific dimensionality of inputs and outputs to make them compatible with the input data and the output labels.
  3. Complexity that can be expressed by the model (the number of weights and their structure in layers in the case of Neural Networks).
  4. Changes to the optimization process (gradient clipping, regularization, etc.)
  5. Structural changes to the architecture that embed assumptions about the specific problem setting (e.g. 2D convolutions embed assumptions for images, softmax activations embed the assumption of classification with probabilities, hidden states embed the assumptions about memory and "forgetting", etc.)

Therefore, if you have a "similar" problem setting where you can assume that 3, 4, 5 can be the same without problems, then you can just make the appropriate changes in 2 and change the dataset (1) to get a model that works on something different.

Of course, being able to tell how "similar" a problem is to another and what possible things could be different is quite tricky and requires a lot of knowledge about the domain of Machine Learning and how every algorithm works and why.

In essence, I'm saying that in principle you could take a model and train it for a new setting with very minor and simple changes that don't require a lot of knowledge. However, without the wider knowledge on the field of ML/AI you won't be able to tell if what you are doing is ok and how well can it work in general.

",42903,,,,,12/12/2020 10:26,,,,1,,,,CC BY-SA 4.0 25134,1,,,12/12/2020 12:07,,0,936,"

I’ve set up a neural network model to experiment with predicting foreign exchange rates based on various economic data. The model learned fine and the test data is OK ($R^2 = 0.88$).

But I can't figure out how to input data for scenarios where new data is outside the range of the datasets used to train the model. For example, if US debt is increased (again), then it will be outside the data range used for the training datasets, so, when normalised using the same parameters as the training dataset (0-1 scale), it will be greater than 1, so the model rejects it.

Everything I've read says to normalise the new data using the training data parameters (understandably), but I can't find anything that explains how to use the model to make predictions where new data is outside the range of the training datasets.

In parallel, I've used a regression model, but I'm fairly new to neural networks and would like to find a way of using these for this kind of prediction model. Any help gratefully received.

",42905,,2444,,12/12/2020 17:36,12/12/2020 17:36,How to deal with predictions for data outside the range of the training dataset in neural networks?,,0,6,,,,CC BY-SA 4.0 25137,1,,,12/12/2020 14:58,,1,429,"

When Proximal Policy Optimization (PPO) was released, it was accompanied by a paper describing it.

Later, the authors at OpenAI introduced a second version of PPO, called PPO2 (whereas the original version is now commonly referred to as PPO1). Unfortunately, the several changes made between PPO1 and PPO2 are pretty much undocumented (as stated over here).

Someone associated with OpenAI's baselines Deep Reinforcement Learning repository commented that the main advancement of PPO2 (compared to PPO1) was the use of a more advanced parallelism strategy, leading to improved performance. Unfortunately, the person omitted naming further changes made.

Now, I was wondering if anyone is aware of a (reliable) source of information or (preferably) even some published literature that lists all the numerous differences between PPO1 and PPO2.

",37982,,2444,,12/12/2020 16:37,12/12/2020 16:37,What are the differences between Proximal Policy Optimization versions PPO1 and PPO2?,,0,0,,,,CC BY-SA 4.0 25138,1,,,12/12/2020 15:11,,0,110,"

I've read in this discussion that "reinforcement learning is a way of finding the value function of a Markov Decision Process".

I want to implement an RL model whose state space and action space dimensions would increase as the MDP progresses. But I don't know how to define it in terms of e.g. Q-learning or some similar method.

Precisely, I want to create a model, that would generate boolean circuits. At each step, it could perform four different actions:

  • apply $AND$ gate on two wires,
  • apply $OR$ gate on two wires,
  • apply $NOT$ gate on one wire,
  • add new wire.

Each of the first three actions could be performed on any currently available wires (targets). Also, the number of wires will change over time. It might increase if we perform the fourth action, or decrease after, e.g., application of an $AND$ gate (taking as input two wires and outputting just one).

",17411,,17411,,12/13/2020 12:04,12/13/2020 12:04,How to implement RL model with increasing dimensions of state space and action space?,,0,8,,,,CC BY-SA 4.0 25139,1,,,12/12/2020 15:11,,1,40,"

Suppose I make an application for movies where each user in the system can rate the movies, and I want to build a recommendation system that recommends movies to the active user based on their ratings of other movies, using item-based collaborative filtering with KNN.
When we find the similarities between the movies and pick the top k items, which approach is correct?

1- Calculate the similarities between all movies and then take the top k for every movie the user rated highly (the dataset is a matrix representing the rating values for each item from each user).

2- Apply KNN to each movie the user rated highly, one after the other, finding the similarity between that movie and the films the user has not rated yet, and take the top k similar unrated films for each such movie, then show them to the user (each time we apply KNN, the dataset is a matrix containing the ratings for each item the user rated and all the other items the user has not rated yet).

",42879,,42879,,12/12/2020 15:21,12/12/2020 15:21,what is the correct approach for KNN in item based recommendation system?,,0,0,,,,CC BY-SA 4.0 25142,2,,25099,12/12/2020 18:48,,3,,"

CNNs work by applying filters over the entire image. The same filter is applied at every pixel in the image. That is, the same weights are used at every pixel.

Note, when I say "at every pixel" this means across the spatial dimension HxW of the image. You can also have attention in the channel dimension. See for example Squeeze and Excitation: https://arxiv.org/pdf/1709.01507.pdf

While this is one of the strengths of CNNs, since it drastically reduces the number of parameters of a network, you can imagine that it may not make sense to treat every part of the image the same regardless of the content. This is what the attention gate is for.

By performing an element-wise multiplication of the output of a CNN layer with a gate tensor (typically clamped to the 0-1 range with a sigmoid) we can effectively down-weight or ignore features in certain areas of the image.
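
As a rough illustration (this is a generic self-attention gate sketched in PyTorch, not the exact module from the paper), such a gate could look like this:

import torch
import torch.nn as nn

class SimpleAttentionGate(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # a 1x1 conv produces one gating coefficient per spatial location
        self.gate = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x):
        alpha = self.gate(x)   # shape (N, 1, H, W), values in (0, 1)
        return x * alpha       # down-weight or suppress features per location

x = torch.randn(2, 64, 32, 32)
print(SimpleAttentionGate(64)(x).shape)  # torch.Size([2, 64, 32, 32])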

CNNs without AGs can typically learn the same things, but may need more channels and layers. Attention gates allow the network to treat features differently depending on the content and hence make it easier to learn with fewer filters.

What you use as the attention tensor can vary. In the paper you referenced, the tensor was one of the input tensors of the convolutional layer. In this case, it is called self-attention. However, with attention in general this gate tensor may also come from other information such as another network.

",42911,,,,,12/12/2020 18:48,,,,1,,,,CC BY-SA 4.0 25143,2,,6231,12/12/2020 20:41,,0,,"

Okay, so instead of telling you to just not have recurrent connections, I'm actually going to tell you how to identify them.

The first thing you need to know is that recurrent connections are calculated after all other connections and neurons. So which connection is recurrent and which is not depends on the order of calculation of your NN. Also, the first time you put data into the system, we'll just assume that every connection carries zero; otherwise some or all neurons could not be calculated.

Let's say we have this neural network:

We divide this network into 3 layers (even though conceptually it has 4 layers):

Input Layer  [1, 2]
Hidden Layer [5, 6, 7]
Output Layer [3, 4]

First rule: All outputs from the output layer are recurrent connections.

Second rule: All outputs from the input layer may be calculated first.

We create two arrays. One containing the order of calculation of all neurons and connections and one containing all the (potentially) recurrent connections. Right now these arrays look somewhat like this:

Order of 
calculation: [1->5, 2->7 ]

Recurrent:   [ ]

Now we begin by looking at the output layer. Can we calculate neuron 3? No, because 6 is missing. Can we calculate 6? No, because 5 is missing. And so on. It looks somewhat like this:

3, 6, 5, 7

The problem is that we are now stuck in a loop. So we introduce a temporary array storing all the neuron id's that we already visited:

[3, 6, 5, 7]

Now we ask: Can we calculate 7? No, because 6 is missing. But we already visited 6...

[3, 6, 5, 7,] <- 6

The third rule is: when you visit a neuron that has already been visited before, mark the connection that you followed to reach this neuron as a recurrent connection. Now your arrays look like this:

Order of 
calculation: [1->5, 2->7 ]

Recurrent:   [6->7 ]

Now you finish the process and, in the end, join the order-of-calculation array with your recurrent array, so that the recurrent array comes after the other array. It looks somewhat like this:

[1->5, 2->7, 7, 7->4, 7->5, 5, 5->6, 6, 6->3, 3, 4, 6->7]

Let's assume we have [x->y, y], where x->y is the calculation of x*weight(x->y), and y is the calculation of Sum(of inputs to y), so in this case Sum(x->y), or just x->y.

There are still some problems to solve here. For example: what if the only input of a neuron is a recurrent connection? But I guess you'll be able to solve this problem on your own...
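
Here is a small Python sketch of the procedure above (my own interpretation, not code from the original answer): walking backwards from the output neurons, an edge that leads into a neuron currently on the visiting path is marked recurrent.

# Sketch: mark connections as recurrent via a depth-first walk from the outputs.
def find_recurrent(connections, outputs, inputs):
    """connections: list of (src, dst) pairs; returns the set of edges that must be
    treated as recurrent so that the rest can be computed in a feed-forward order."""
    incoming = {}
    for src, dst in connections:
        incoming.setdefault(dst, []).append(src)

    on_path, done, recurrent = set(), set(), set()

    def visit(neuron):
        on_path.add(neuron)
        for src in incoming.get(neuron, []):
            if src in inputs or src in done:
                continue                      # already computable (rules 1 and 2)
            if src in on_path:
                recurrent.add((src, neuron))  # rule 3: this edge closes a loop
            else:
                visit(src)
        on_path.discard(neuron)
        done.add(neuron)

    for out in outputs:
        if out not in done:
            visit(out)
    return recurrent

# Example network from above: inputs 1, 2; hidden 5, 6, 7; outputs 3, 4.
conns = [(1, 5), (2, 7), (7, 5), (5, 6), (6, 3), (6, 7), (7, 4)]
print(find_recurrent(conns, outputs=[3, 4], inputs=[1, 2]))  # {(6, 7)}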

",42913,,42913,,12/13/2020 13:50,12/13/2020 13:50,,,,2,,,,CC BY-SA 4.0 25144,1,,,12/12/2020 20:57,,0,610,"

I ran into a 2019-Entrance Exam question as follows:

The answer mentioned is (4), but some searching on Google suggested that maybe (1) and (2) are equivalent to (4). Why would k-means be the algorithm with the highest bias? (Can you please also provide references to valid material to study more?)

",42854,,2444,,12/12/2020 21:33,6/6/2022 7:06,Why does k-means have more bias than spectral clustering and GMM?,,2,1,,,,CC BY-SA 4.0 25145,2,,25144,12/12/2020 21:28,,1,,"

I'm not an expert on clustering, but here's my take below. Note that this is only based on theoretical arguments, I haven't had enough clustering experience to say if this is generally true in practice.

K-means vs GMM

K-means has a higher bias than GMM because it is a special case of GMM. K-means specifically assumes the clustering is spherical (meaning each dimension is weighted equally important) and that the clustering problem is a hard clustering problem (each data point can only belong to one label). So, theoretically, K-means should perform equal to GMM (under very specific conditions) or worse. More info

K-means vs GMM (identity covariance matrix)

K-means has a higher bias than GMM (identity covariance matrix) because it is also a special case. K-means specifically assumes the hard clustering problem, but GMM does not. Because of this, GMM has stronger estimates for the mean of the centroids. More specifically,

[GMM] estimates the cluster means as weighted means, not assigning observations in a crisp manner to one of the clusters. In this way it avoids the problem explained above and it will be consistent as ML estimator (in general this is problematic because of issues of degeneration of the covariance matrix, however not if you assume them spherical and equal).

In practice, if you generate observations from a number of Gaussians with same spherical covariance matrix and different means, K-means will therefore overestimate the distances between the means, whereas the ML-estimator for the mixture model will not.

So, theoretically, K-means should perform equal to GMM (identity covariance matrix) or worse. More info
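
As a small illustration (a sketch of my own, assuming scikit-learn is available; not taken from the cited sources), you can fit K-means and a spherical GMM on the same data and compare the estimated centers. K-means corresponds to the more constrained model: spherical clusters plus hard assignments.

# Illustrative comparison of K-means and a spherical GMM on synthetic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=[0, 0], scale=1.0, size=(200, 2)),
               rng.normal(loc=[4, 4], scale=1.0, size=(200, 2))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
gmm = GaussianMixture(n_components=2, covariance_type="spherical",
                      random_state=0).fit(X)

print("K-means centroids:\n", kmeans.cluster_centers_)
print("GMM (spherical) means:\n", gmm.means_)   # soft assignments give weighted means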

K-means vs Spectral clustering

K-means has a higher bias than spectral clustering because spectral clustering effectively uses K-means after processing more information from the matrices.

Spectral clustering usually is spectral embedding, followed by k-means in the spectral domain.

So yes, it also uses k-means. But not on the original coordinates, but on an embedding that roughly captures connectivity. Instead of minimizing squared errors in the input domain, it minimizes squared errors on the ability to reconstruct neighbors. That is often better.

More info

",42699,,,,,12/12/2020 21:28,,,,0,,,,CC BY-SA 4.0 25148,1,25149,,12/12/2020 22:22,,7,2538,"

I have a difficult time understanding the "multi-head" notion in the original transformer paper. What makes the learning in each head unique? Why doesn't the neural network learn the same set of parameters for each attention head? Is it because we break query, key and value vectors into smaller dimensions and feed each portion to a different head?

",15498,,2444,,12/12/2020 22:54,11/30/2021 15:18,What is different in each head of a multi-head attention mechanism?,,2,0,,,,CC BY-SA 4.0 25149,2,,25148,12/12/2020 22:39,,8,,"

The reason each head is different is because they each learn a different set of weight matrices $\{ W_i^Q, W_i^K, W_i^V \}$ where $i$ is the index of the head. To clarify, the input to each attention head is the same. For attention head $i$:

\begin{align} Q_i(x) &= x W_i^Q \\ K_i(x) &= x W_i^K \\ V_i(x) &= x W_i^V \\ \text{attention}_i(x) &= \text{softmax} \left(\frac{Q_i(x) K_i(x)^T}{\sqrt{d_k}} \right) V_i(x). \end{align}

Notice that the input to each head is $x$ (either the semantic + positional embedding of the decoder input for the first decoder layer, or the output of the previous decoder layer). More info
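
Here is a minimal numpy sketch (shapes and names are mine, not code from the paper) that makes the point concrete: every head receives the same $x$, but uses its own weight matrices.

# Sketch of multi-head attention: same input x, separate W_Q, W_K, W_V per head.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

seq_len, d_model, n_heads = 5, 16, 4
d_k = d_model // n_heads
x = np.random.randn(seq_len, d_model)           # the same input for every head

heads = []
for i in range(n_heads):
    W_Q, W_K, W_V = (np.random.randn(d_model, d_k) for _ in range(3))  # per-head weights
    Q, K, V = x @ W_Q, x @ W_K, x @ W_V
    attn = softmax(Q @ K.T / np.sqrt(d_k)) @ V  # attention_i(x)
    heads.append(attn)

out = np.concatenate(heads, axis=-1)            # usually followed by a final projection
print(out.shape)                                # (5, 16)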

The question as to why gradient descent learns each set of weight matrices $\{ W_i^Q, W_i^K, W_i^V \}$ to be different across each attention head is very similar to "Is there anything that ensures that convolutional filters end up the same?", so maybe you might find the answer there helpful for you:

No, nothing really prevents the weights from being different. In practice though they end up almost always different because it makes the model more expressive (i.e. more powerful), so gradient descent learns to do that. If a model has n features, but 2 of them are the same, then the model effectively has n−1 features, which is a less expressive model than that of n features, and therefore usually has a larger loss function.

",42699,,2444,,11/30/2021 15:18,11/30/2021 15:18,,,,5,,,,CC BY-SA 4.0 25150,1,25190,,12/13/2020 1:31,,0,142,"

I know convolutional neural networks are commonly used for image recognition, but I was wondering if they would be able to distinguish between predominantly text-based documents vs something like objects. For example, if you trained using images of the first page of invoices matched to a vendor name, could you get a CNN to predict the vendor based on an image? If not, is there a different AI technique better suited that is purely image-based, or would it require OCR and leveraging the text in the invoice?

Update: based on a comment, my ask may not be clear. I'm not trying to see if the CNN can differentiate between a document (mostly text-based image) and a photo image. I want to know whether, based on a gif/jpeg/png of a document (no OCR performed), a CNN would be able to classify the documents, which basically could be used as a means of identifying the vendor.

",42916,,42916,,12/14/2020 14:19,12/14/2020 20:42,Can a convolutional neural network classify text document images?,,1,4,,,,CC BY-SA 4.0 25151,2,,25131,12/13/2020 2:40,,2,,"

@nbro pointed out the paper A systematic study of the class imbalance problem in convolutional neural networks, which tested class imbalance with LeNet on MNIST, a custom CNN on CIFAR-10, and ResNet on ImageNet. The paper found that by artificially creating class imbalance on those data sets, the neural networks' performance deteriorates significantly. The ROC AUC drops by 5-10%, and accuracy decreases by 20-30%. These effects are worsened on more complex tasks.

There are 3 noteworthy class imbalance approaches to partially alleviate this:

  • Undersampling: sampling the over-represented class less often
  • Oversampling: sampling the under-represented class more often
  • Thresholding: after the neural network learns the weights, during inference, multiply each output class probability by a weight that differs per class. The weight is the inverse of the class representation of the dataset, i.e. the inverse of $\frac{numInstances(c)}{\displaystyle \sum_i numInstances(i)}$, where $c$ is the current class and $numInstances(i)$ is the number of unique instances of class $i$ in the training set (a short sketch follows this list).
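
As a short illustration of the thresholding step (my own sketch, not code from the paper): divide each predicted class probability by the class prior estimated from the training set and renormalise.

# Thresholding sketch: re-weight predicted probabilities by inverse class priors.
import numpy as np

train_counts = np.array([900, 80, 20])            # heavily imbalanced training set
priors = train_counts / train_counts.sum()        # P(class) in the training data

probs = np.array([0.70, 0.20, 0.10])              # network output for one example
adjusted = probs / priors                         # weight = 1 / prior
adjusted /= adjusted.sum()                        # renormalise to a distribution

print(np.argmax(probs), np.argmax(adjusted))      # prediction before vs after: 0 vs 2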

The paper concludes with the following best practices:

Regarding the choice of a method to handle CNN training on imbalanced dataset we conclude the following.

  • The method that in most of the cases outperforms all others with respect to multi-class ROC AUC was oversampling.

  • For extreme ratio of imbalance and large portion of classes being minority, undersampling performs on a par with oversampling. If training time is an issue, undersampling is a better choice in such a scenario since it dramatically reduces the size of the training set

  • To achieve the best accuracy, one should apply thresholding to compensate for prior class probabilities. A combination of thresholding with baseline and oversampling is the most preferable, whereas it should not be combined with undersampling.

  • Oversampling should be applied to the level that completely eliminates the imbalance, whereas the optimal undersampling ratio depends on the extent of imbalance. The higher a fraction of minority classes in the imbalanced training set, the more imbalance ratio should be reduced.

  • Oversampling does not cause overfitting of convolutional neural networks, as opposed to some classical machine learning models.

The last point is very interesting, because oversampling is known to cause overfitting in classical machine learning models and many have advised against doing it.

",42699,,2444,,12/17/2020 0:40,12/17/2020 0:40,,,,0,,,,CC BY-SA 4.0 25152,1,25475,,12/13/2020 5:56,,3,940,"

I know this question may sound silly, but I cannot prove it.

In Stanford slide (page 17), they define the formula of SGD with momentum like this:

$$ v_{t}=\rho v_{t-1}+\nabla f(x_{t-1}) \\ x_{t}=x_{t-1}-\alpha v_{t}, $$

where:

  • $v_{t}$ is the momentum value
  • $\rho$ is a friction, let say it's equal 0.9
  • $\nabla f(x_{t-1})$ is the gradient of the objective function at iteration $t-1$
  • $x_t$ are the parameters
  • $\alpha$ is the learning rate

However, in this paper and many other documents, they define the equation like this:

$$ v_{t}=\rho v_{t-1}+\alpha \nabla f(x_{t-1}) \\ x_{t}=x_{t-1}- v_{t}, $$

where $\rho$ and $\alpha$ still have the same value as in the previous formula.

I think it should be

$$v_{t}=\alpha \rho v_{t-1}+\alpha \nabla f(x_{t-1})$$

if we want to multiply the learning rate inside the equation.

In some other document (this) or normal form of momentum, they define like this:

$$ v_{t}= \rho v_{t-1}+ (1- \rho) \nabla f(x_{t-1}) \\ x_{t}=x_{t-1}-\alpha v_{t} $$

I cannot understand how they can prove that those equations are equivalent. Can someone help me?

",41287,,41287,,12/31/2020 1:42,12/31/2020 6:33,How are these equations of SGD with momentum equivalent?,,1,5,,,,CC BY-SA 4.0 25153,1,25184,,12/13/2020 6:05,,1,103,"

I made a simple feedforward neural network (FFNN) to predict $x$ from $\sin(x)$. It failed. Does it mean the model has overfitted? Why doesn't it work?

set.seed(1234567890)
Var3 <- runif(500, 0, 20)
mydata3 <- data.frame(Sin=sin(Var3),Var=Var3)
set.seed(1234567890)
winit <- runif(5500, -1, 1)
#hidUnit <- c(9,1)
set.seed(1234567890)
nn3 <-neuralnet(formula = Var~Sin,data = mydata3,
                hidden =c(4,2,1),startweights =winit,
              learningrate = 0.01,act.fct = "tanh")

plot(mydata3, cex=2,main='Predicting x from Sin(x)',
     pch = 21,bg="darkgrey",
     ylab="X",xlab="Sin(X)")
points(mydata3[,1],predict(nn3,mydata3), col="darkred", 
       cex=1,pch=21,bg="red")

legend("bottomleft", legend=c("true","predicted"), pch=c(21,21),
       col = c("darkgrey","red"),cex = 0.65,bty = "n")
",42920,,42920,,12/13/2020 23:43,12/14/2020 10:10,Why does my neural network to predict $x$ given $\sin(x)$ not generalize?,,1,3,,,,CC BY-SA 4.0 25154,2,,5625,12/13/2020 6:27,,2,,"

No, not all fully observable environments are episodic. Let's take a look again at the definitions from the book:

Fully Observable Environment (section 2.3.2)

If an agent’s sensors give it access to the complete state of the environment at each point in time, then we say that the task environment is fully observable. A task environment is effectively fully observable if the sensors detect all aspects that are relevant to the choice of action

Episodic Environment (section 2.3.2)

In an episodic task environment, the agent’s experience is divided into atomic episodes. In each episode the agent receives a percept and then performs a single action. Crucially, the next episode does not depend on the actions taken in previous episodes.

Take note of the "crucial" part at the end of the definition of episodic environment. A fully observable environment that is not episodic (and therefore sequential in the book's taxonomy) is chess. Chess is fully observable because the player can view the positions of all active pieces on the chess board, and that is all the information that needs to be known in order to take the optimal action. But chess is not episodic, because the player's current move depends on all previous moves, and the current move will have downstream effects in later turns.

In fact, if you look at Figure 2.6 in the book on pg. 45, they provide three examples of fully observable sequential (i.e. not episodic) environments: crossword puzzles, chess, and backgammon. There are of course many more. Most games are sequential as that is the main appeal of them - how to best sequence my moves now in order to ensure victory over my opponent at a future time?

",20955,,2444,,1/24/2021 0:37,1/24/2021 0:37,,,,4,,,,CC BY-SA 4.0 25155,1,,,12/13/2020 7:43,,1,105,"

It may already be obvious that I am just a practitioner and just a beginner to Deep Learning. I am still figuring out lots of "WHY"s and "HOW"s of DL.

So, for example, if I train a feed-forward neural network, an image classifier with CNNs, or an OCR model with GRUs, using something like Keras, and it performs very poorly or takes more time to train than it should, it may be because the gradients are vanishing or exploding, or because of some other problem.

But, if it is due to the gradients getting very small or very big during training, how do I figure that out? What can I do to infer that something has gone wrong because of the gradient values?

And what precautions should I take to avoid it from the beginning (since training DL models with accelerated computing costs money), and, if it has happened, how do I fix it?


This question may sound like a duplicate of How to decide if gradients are vanishing?, but actually not, since that question focuses on CNNs, while I am asking about problem with gradients in all kinds of deep learning algorithms.

",38060,,38060,,12/13/2020 13:02,12/13/2020 13:02,How do I infer exploding or vanishing gradients in Keras?,,0,4,,,,CC BY-SA 4.0 25157,1,,,12/13/2020 11:15,,3,92,"

I have been reading this TensorFlow tutorial on transfer learning, where they unfroze the whole model and then they say:

When you unfreeze a model that contains BatchNormalization layers in order to do fine-tuning, you should keep the BatchNormalization layers in inference mode by passing training=False when calling the base model. Otherwise the updates applied to the non-trainable weights will suddenly destroy what the model has learned.

My question is: why? The model's weights are adapting to the new data, so why do we keep the old mean and variance, which was calculated on ImageNet? This is very confusing.

",36107,,2444,,12/13/2020 13:06,12/13/2020 13:06,Why shouldn't batch normalisation layers be learnable during fine-tuning?,,0,0,,,,CC BY-SA 4.0 25158,1,25164,,12/13/2020 12:33,,1,1213,"

I am reading college notes on state search space. The notes (which are not publicly available) say:

  1. To do state-search space, the strategy involves two parts: defining a heuristic function, and identifying an evaluation function.

  2. The heuristic is a smart search of the available space. The evaluation function may be well-defined (e.g. the solution solves a problem and receives a score) or may itself be the heuristic (e.g. if chess says pick A or B as the next move and picks A, the evaluation function is the heuristic).

  3. Understand the difference between the heuristic search algorithm and the heuristic evaluation function.

I'm trying step 3 (to understand). Can I check, using the A* search as an example, that the:

Heuristic function: estimated cost from the current node to the goal, i.e. it's a heuristic that's calculating the simplest way to get to the goal (in A*; $h(n)$), so the heuristic function is calculating $h(n)$ for a series of options and picking the best one.

Evaluation function: $f(n) = g(n) + h(n)$.

",42926,,2444,,2/6/2021 18:33,2/6/2021 18:33,What is the difference between the heuristic function and the evaluation function in A*?,,1,0,,,,CC BY-SA 4.0 25164,2,,25158,12/13/2020 14:03,,2,,"

What is the difference between the heuristic function and the evaluation function in A*?

The evaluation function, often denoted as $f$, is the function that you use to choose which node to expand during one iteration of A* (i.e. decide which node to take from the frontier, determine the next possible actions and which next nodes those actions lead to, and add those nodes to the frontier). Typically, you expand the node $n$ such that $f(n)$ is the smallest, i.e. $n^* = \operatorname{argmin}f(n)$.

In the case of informed search algorithms (such as A*), the heuristic function is a component of $f$, which can be written as $f(n) = g(n) + h(n)$, where $h(n)$ is the heuristic function. The heuristic function estimates the cost of the cheapest path from $n$ to the goal. Just for completeness, $g(n)$ is the actual cost from the start node to $n$ (which can be computed exactly during the search). In the case of uninformed search algorithms, you can actually view the evaluation function as just $f(n) = g(n)$, i.e. the heuristic function is always zero.
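
As a compact illustration (a generic sketch, not tied to the book's pseudocode), this is how the evaluation function $f(n) = g(n) + h(n)$ drives node selection in A*: the frontier is a priority queue ordered by $f$.

# Generic A* sketch: expand the frontier node with the smallest f = g + h.
import heapq, itertools

def a_star(start, goal, neighbors, h):
    """neighbors(n) yields (next_node, step_cost); h(n) estimates the cost to the goal."""
    counter = itertools.count()                        # tie-breaker for equal f values
    frontier = [(h(start), next(counter), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, _, g, node, path = heapq.heappop(frontier)  # node with the smallest f(n)
        if node == goal:
            return path, g
        for nxt, cost in neighbors(node):
            g2 = g + cost                              # exact cost from the start
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), next(counter), g2, nxt, path + [nxt]))
    return None, float("inf")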

So, you're right.

For more details, you can take a look at section 3.5 (p. 92) of the book Artificial Intelligence: A Modern Approach (3rd edition), by Norvig and Russell (you can find the pdf online).

",2444,,2444,,12/13/2020 14:09,12/13/2020 14:09,,,,0,,,,CC BY-SA 4.0 25165,1,25191,,12/13/2020 14:45,,1,1945,"

In these notes, we have the following statement

The depth of a learned decision tree can be larger than the number of training examples used to create the tree

This statement is false, according to the same notes, where it is written

False: Each split of the tree must correspond to at least one training example, therefore, if there are $n$ training examples, a path in the tree can have length at most $n$

Note: There is a pathological situation in which the depth of a learned decision tree can be larger than number of training examples $n$ - if the number of features is larger than $n$ and there exist training examples which have same feature values but different labels.

I had written in my notes that the depth of a decision tree only depends on the number of features of the training set and not on the number of training samples. So, what does the depth of the decision tree depend on?

",42928,,2444,,12/13/2020 20:39,1/13/2021 22:04,What does the depth of a decision tree depend on?,,2,0,,,,CC BY-SA 4.0 25166,1,25167,,12/13/2020 16:20,,3,362,"

I want to determine some distance between two policies $\pi_1 (a \mid s)$ and $\pi_2 (a \mid s)$, i.e. something like $\vert \vert \pi_1 (a \mid s) - \pi_2(a \mid s) \vert \vert$, where $\pi_i (a\mid s)$ is the vector $(\pi_i (a_1 \mid s), \dots, \pi_i(a_n \mid s))$. I am looking for a sensible notion for such a distance.

Are there some standard norms/metrics used in the literature for determining a distance between policies?

",36978,,2444,,12/13/2020 21:03,12/13/2020 21:03,Are there some notions of distance between two policies?,,1,0,,,,CC BY-SA 4.0 25167,2,,25166,12/13/2020 17:01,,5,,"

Given that policies are probability distributions, in principle, you can use any metric or measure of distance that can be used to compare two probability distributions. (Note that notions of distance are not necessarily metrics in a mathematical sense).

A common measure is the Kullback–Leibler divergence (which is not a metric, in a mathematical sense, given that it does not satisfy certain required conditions for being a metric). For example, in section 4 of the PPO paper, the KL divergence is used as a regulariser (which is actually quite common, for instance, in the context of variational Bayesian neural networks). The TRPO also uses the KL divergence.
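
For a concrete picture (a small illustrative sketch of my own), the KL divergence between two policies at a given state just compares their action distributions, and note that it is not symmetric.

# KL divergence between two policies at one state (discrete action space).
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

pi1 = [0.7, 0.2, 0.1]   # pi_1(. | s)
pi2 = [0.5, 0.3, 0.2]   # pi_2(. | s)

print(kl_divergence(pi1, pi2))   # KL(pi_1 || pi_2)
print(kl_divergence(pi2, pi1))   # KL(pi_2 || pi_1): generally a different value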

The Wasserstein metric has also been used in RL, for instance, in distributional RL (but, in this case, not to compare policies but distributions over value functions).

You can find more info about statistical distances here. The specific distance that you use may depend on the problem that you want to solve and the properties that you want your distance to have. For example, the KL divergence is unbounded above, so, if that's not desirable, you could choose another one. The paper On choosing and bounding probability metrics (2002, by Gibbs and Su) may also be useful. Here I also talk about the KL divergence and total variation.

",2444,,,,,12/13/2020 17:01,,,,2,,,,CC BY-SA 4.0 25173,1,,,12/13/2020 20:25,,2,246,"

I have a time series sequence with 10 million steps. In step $t$, I have a 400 dimensional feature vector $X_t$ and a scalar value $y_t$ which I want to predict during inference time and I know during the train time. I want to use a transformer model. I have 2 questions:

  1. If I want to embed the 400 dimensional input feature vector into another space before feeding into the transformer, what are the pros and cons of using let's say 1024 and 64 for the embedding space dimension? Should I use a dimension more than 400 or less?
  2. When doing position embedding, I cannot use a maximum position length of 10 million as that blows up the memory. What is the best strategy here if I want to use maximum position length of 512? Should I chunk the 10 million steps into blocks of size 512 and feed each block separately into the transformer? If so, how can I connect the subsequent blocks to take full advantage of parallelization while keeping the original chronological structure of the sequence data?
",15498,,,,,12/13/2020 20:25,How to handle long sequences with transformers?,,0,0,,,,CC BY-SA 4.0 25176,2,,25165,12/13/2020 22:02,,0,,"

While this sounds obvious, the depth of the tree depends on how your algorithm builds the tree. For a fixed dataset $\mathcal{D}$, there are many algorithms, such as ID3, C4.5, CART, etc. (and their variants), to build your tree. For the most part, these algorithms recursively partition the dataset, so it's never possible to get a tree deeper than $|\mathcal{D}|$. Large/deep trees are also prone to overfitting and are computationally expensive, so these algorithms typically prune the tree so it's much smaller than $|\mathcal{D}|$.

In fact, Kearns and Mansour showed that under the weak learning assumption (i.e. at each node, there's a split that classifies the data at this node better than a random classifier, by $\gamma$) to achieve $\epsilon$ training error, it suffices to make $(1/\epsilon)^{\mathcal{O}(\log(1/\epsilon)/\gamma^2)}$ splits (and thus depth is upper bounded by this).

But of course, you can always cook up trees of arbitrary depth...

",36922,,,,,12/13/2020 22:02,,,,1,,,,CC BY-SA 4.0 25177,1,,,12/13/2020 22:15,,5,106,"

I am currently studying the paper Learning and Evaluating Classifiers under Sample Selection Bias by Bianca Zadrozny. In section 3. Learning under sample selection bias, the author says the following:

We can separate classifier learners into two categories:

  • local: the output of the learner depends asymptotically only on $P(y \mid x)$
  • global: the output of the learner depends asymptotically both on $P(x)$ and on $P(y \mid x)$.

The term "asymptotically" refers to the behavior of the learner as the number of training examples grows. The names "local" and "global" were chosen because $P(x)$ is a global distribution over the entire input space, while $P(y \mid x)$ refers to many local distributions, one for each value of $x$. Local learners are not affected by sample selection bias because, by definition $P(y \mid x, s = 1) = P(y \mid x)$ while global learners are affected because the bias changes $P(x)$.

Then, in section 3.1.1. Naive Bayes, the author says the following:

In practical Bayesian learning, we often make the assumption that the features are independent given the label $y$, that is, we assume that $$P(x_1, x_2, \dots, x_n \mid y) = P(x_1 \mid y) P(x_2 \mid y) \dots P(x_n \mid y).$$ This is the so-called naive Bayes assumption. With naive Bayes, unfortunately, the estimates of $P(y \mid x)$ obtained from the biased sample are incorrect. The posterior probability $P(y \mid x)$ is estimated as $$\dfrac{P(x_1 \mid y, s = 1) \dots P(x_n \mid y, s = 1) P(y \mid s = 1)}{P(x \mid s = 1)} ,$$ which is different (even asymptotically) from the estimate of $P(y \mid x)$ obtained with naive Bayes without sample selection bias. We cannot simplify this further because there are no independence relationships between each $x_i$, $y$, and $s$. Therefore, naive Bayes learners are global learners.

Since it is said that, for global learners, the output of the learner depends asymptotically both on $P(x)$ and on $P(y \mid x)$, what is it about $\dfrac{P(x_1 \mid y, s = 1) \dots P(x_n \mid y, s = 1) P(y \mid s = 1)}{P(x \mid s = 1)}$ that indicates that naive Bayes learners are global learners?


EDIT: To be clear, if we take the example given for the local learner case (section 3.1. Bayesian classifiers), then it is evident:

Bayesian classifiers compute posterior probabilities $P(y \mid x)$ using Bayes' rule: $$P(y \mid x) = \dfrac{P(x \mid y)P(y)}{P(x)}$$ where $P(x \mid y)$, $P(y)$ and $P(x)$ are estimated from the training data. An example $x$ is classified by choosing the label $y$ with the highest posterior $P(y \mid x)$.

We can easily show that bayesian classifiers are not affected by sample selection bias. By using the biased sample as training data, we are effectively estimating $P(x \mid y, s = 1)$, $P(x \mid s = 1)$ and $P(y \mid s = 1)$ instead of estimating $P(x \mid y)$, $P(y)$ and $P(x)$. However, when we substitute these estimates into the equation above and apply Bayes' rule again, we see that we still obtain the desired posterior probability $P(y \mid x)$: $$\dfrac{P(x \mid y, s = 1) P(y \mid s = 1)}{P(x \mid s = 1)} = P(y \mid x, s = 1) = P(y \mid x)$$ since we are assuming that $y$ and $s$ are independent given $x$. Note that even though the estimates of $P(x \mid y, s = 1)$, $P(x \mid s = 1)$ and $P(y \mid s = 1)$ are different from the estimates of $P(x \mid y)$, $P(x)$ and $P(y)$, the differences cancel out. Therefore, bayesian learners are local learners.

Note that we get $P(y \mid x)$. However, in the global case, it is not clear how we get $P(x)$ and $P(y \mid x)$ (as is required for global leaners) from $\dfrac{P(x_1 \mid y, s = 1) \dots P(x_n \mid y, s = 1) P(y \mid s = 1)}{P(x \mid s = 1)}$.

",16521,,16521,,12/16/2020 7:32,12/16/2020 7:32,"$\frac{P(x_1 \mid y, s = 1) \dots P(x_n \mid y, s = 1) P(y \mid s = 1)}{P(x \mid s = 1)}$ indicates that naive Bayes learners are global learners?",,0,0,,,,CC BY-SA 4.0 25178,1,,,12/14/2020 2:02,,3,2232,"

I have a question regarding the time delay in reinforcement learning (RL).

In the RL, one has state, reward and action. It is usually assumed that (as far as I understand it) when the action is executed on the system, the state changes immediately and that the new state can then be analysed (influencing the reward) to determine the next action. However, what if there is a time delay in this process. For example, when some action is executed at time $t_1$, we can only get its effect on the system at $t_2$ (You can imagine a flow: the actuator is in the upstream region and the sensor is in the downstream region, so that there will be a time delay between the action and the state). How do we deal with this time delay in RL?

",42941,,40671,,5/14/2021 14:12,5/14/2021 14:12,How to deal with the time delay in reinforcement learning?,,2,0,,,,CC BY-SA 4.0 25181,2,,5509,12/14/2020 6:15,,1,,"

It seems this question was asked and answered in an openai/baselines GitHub issue. The issue has been closed for a while.

Below is an answer provided by @matthiasplappert which has the most "thumbs up":

To clarify: PPO is an on-policy algorithm so you are correct that going over the same data multiple times is technically incorrect.

However, we found that PPO is actually quite okay with doing this and we still get stable convergence. This is likely due to the proximal trust region constrained that we enforce, which means that the policy cannot change that much anyway when going over the current set of transitions multiple times, making it still approximately on-policy. You can of course get rid of this but then you'll need more samples.

",42919,,,,,12/14/2020 6:15,,,,0,,,,CC BY-SA 4.0 25183,1,,,12/14/2020 9:30,,1,192,"

I was training a CNN model on TensorFlow. After a while I came back and saw this loss curve:

The green curve is the training loss and the gray one is the validation loss. I know that before epoch 394 the model is heavily overfitted, but I have no idea what happened after that.

Also, these are the accuracy curves, if it helps:

I'm using categorical cross-entropy and this is the model I am using:

and here is link to PhysioNet's challenge which I am working on: https://physionet.org/content/challenge-2017/1.0.0/

",42948,,52001,,6/14/2022 16:07,11/11/2022 17:04,Why do the training and validation loss curves diverge?,,1,8,,,,CC BY-SA 4.0 25184,2,,25153,12/14/2020 10:10,,1,,"

The labels are not unique for the input domain [0, 20]. Think about sin(x) = 0: x could be 0, pi, 2*pi, 3*pi, ..., n*pi; all are correct from a mathematical point of view, but this is not reflected in your MSE loss. At this point your NN has to guess the correct label from your input data. Predicting the mean of the consistent target values is the safest bet for the network.

In essence, you're trying to build an arcsin function with a NN. If you only consider x values in [-0.5*pi, 0.5*pi], the labels are unique and your network should work.

",37120,,,,,12/14/2020 10:10,,,,0,,,,CC BY-SA 4.0 25185,2,,25183,12/14/2020 10:17,,0,,"

Following this answer from StackOverflow, I think your problem is related to the second case, where your loss becomes NaN.

Maybe you should try using a larger datatype (for example, float16 -> float32).

",41287,,,,,12/14/2020 10:17,,,,11,,,,CC BY-SA 4.0 25187,1,,,12/14/2020 17:29,,0,91,"

How should this problem be framed in the domain of RL for preventing users from exceeding their bank account balance and being overdrawn?

For example, a user has 1000 in an account and proceeds to withdraw 300, 400, 500, making the user overdrawn by 200: ((300 + 400 + 500) - 1000).

Treating this as a supervised learning problem, I could use logistic regression. The input features are the transaction amounts. For a training instance, the input features would be 300, 400, 500, and the output label indicates whether the account is overdrawn or not, with corresponding values of 1 and 0 respectively. For simplicity, we will assume the number of transactions is consistent and is always 3.

For RL, a state could be represented as a series of transactions, but how should the reward be assigned?

Update:

Here is my RL implementation of the problem:

import torch
from collections import defaultdict
gamma = .1
alpha = 0.1
epsilon = 0.1
n_episode = 2000
overdraft_limit = 1000

length_episode = [0] * n_episode
total_reward_episode = [0] * n_episode

episode_states = [[700,100,200,290,500] , [400,100,200,300,500] , [212, 500,100,100,200,500]]

def gen_epsilon_greedy_policy(n_action, epsilon):
    def policy_function(state, Q):
        probs = torch.ones(n_action) * epsilon / n_action
        best_action = torch.argmax(Q[state]).item()
        probs[best_action] += 1.0 - epsilon
        action = torch.multinomial(probs, 1).item()
        return action
    return policy_function

def is_overdrawn(currentTotal):
    return currentTotal >= overdraft_limit

# Actions are overdrawn or not, 0 - means it is not overdrawn, 1 - means that it will be overdrawn
def get_reward(action, currentTotal):
    if action == 0 and is_overdrawn(currentTotal):
        return -1
    elif action == 0 and not is_overdrawn(currentTotal):
        return 1
    if action == 1 and is_overdrawn(currentTotal):
        return 1
    elif action == 1 and not is_overdrawn(currentTotal):
        return -1
    else :
        raise Exception("Action not found") 

def q_learning(gamma, n_episode, alpha,n_action):
    """
    Obtain the optimal policy with off-policy Q-learning method
    @param gamma: discount factor
    @param n_episode: number of episodes
    @return: the optimal Q-function, and the optimal policy
    """
    Q = defaultdict(lambda: torch.zeros(n_action))
    for ee in episode_states : 
        for episode in range(n_episode):
            state = ee[0]
            index = 0
            currentTotal = 0
            while index < len(ee)-1 :
                currentTotal = currentTotal + state
                next_state = ee[index+1] 
                action = epsilon_greedy_policy(state, Q)
#                 print(action)
                reward = get_reward(action, currentTotal)
                td_delta = reward + gamma * torch.max(Q[next_state]) - Q[state][action]
                Q[state][action] += alpha * td_delta

                state = next_state
                index = index + 1

                length_episode[episode] += 1
                total_reward_episode[episode] += reward
                
    policy = {}
    for state, actions in Q.items():
        policy[state] = torch.argmax(actions).item()
    return Q, policy

epsilon_greedy_policy = gen_epsilon_greedy_policy(2, epsilon)

optimal_Q, optimal_policy = q_learning(gamma, n_episode, alpha, 2)

print('The optimal policy:\n', optimal_policy)
print('The optimal Q:\n', optimal_Q)

This code prints:

The optimal policy:
 {700: 0, 100: 0, 200: 1, 290: 1, 500: 0, 400: 0, 300: 1, 212: 0}
The optimal Q:
 defaultdict(<function q_learning.<locals>.<lambda> at 0x7f9371b0a3b0>, {700: tensor([ 1.1110, -0.8890]), 100: tensor([ 1.1111, -0.8889]), 200: tensor([-0.8889,  1.1111]), 290: tensor([-0.9998,  1.0000]), 500: tensor([ 1.1111, -0.8889]), 400: tensor([ 1.1110, -0.8890]), 300: tensor([-1.0000,  1.0000]), 212: tensor([ 1.1111, -0.8888])})

The optimal policy tells us that if 700 is added to the balance, then the customer will not overdraw (0), and if 200 is added to the balance, then the customer will overdraw (1). What avenues can I explore to improve upon this method? It is quite basic, but I'm unsure what approach I should take in order to improve the solution.

For example, this solution just looks at the most recent additions to the balance to determine if the customer is overdrawn. Is this a case of adding new features to the training data?

I'm just requesting a critique on this solution so I can improve it. How can I improve the representation of the state?

",12964,,43231,,12/29/2020 14:36,12/29/2020 14:36,How to frame this problem using RL?,,0,12,,,,CC BY-SA 4.0 25189,2,,18648,12/14/2020 20:21,,1,,"

Mathematical Interpretation

Note that equation (2.23) is simply calculating the conditional distribution of equation (2.21) and then finding the mean. Your question reduces to:

"Given normal variables $X$ and $Y$, why is $\mathbb{E}[Y|X] = \mu_y + Cov(X, Y)Cov(Y, Y)^{-1}(x - \mu_x)$? (note: in the book, $\mu_x = \mu_y = 0$)

Deriving the conditional probability mean is complicated (see The Bivariate Normal Distribution, page 3). A more intuitive look can be seen in the first graph in this page.

Here, the mean of $Y|X$ is linear in what value $X$ takes. The line starts at the intercept $\mu_y$, and increases with slope $\rho (\frac{\sigma_y}{\sigma_x}) = \frac{Cov(X,Y)}{\sigma_x \sigma_y}\left(\frac{\sigma_y}{\sigma_x}\right)=\frac{Cov(X,Y)}{\sigma_x^2} = \frac{Cov(X,Y)}{Var(x)} = \frac{Cov(X,Y)}{Cov(X,X)}$. So the mathematical interpretation of $Cov(X,Y)Cov(X,X)^{-1}$ is that it is the slope of the relationship between the mean of $Y|X$ and the value of $x$ that you are given. As you are given a higher value $x$, say $x + \delta$, then the mean of $Y|X$ raises by $Cov(X,Y)Cov(X,X)^{-1}\delta$.
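
A quick numerical check of that slope (my own illustration, not from the book): for jointly Gaussian $X, Y$, the conditional mean $\mathbb{E}[Y \mid X = x]$ changes with slope $Cov(X,Y)Cov(X,X)^{-1}$.

# Monte Carlo check of the conditional-mean slope for a bivariate Gaussian.
import numpy as np

rng = np.random.default_rng(0)
cov = np.array([[2.0, 1.2],
                [1.2, 3.0]])                      # Cov([X, Y])
samples = rng.multivariate_normal(mean=[0, 0], cov=cov, size=200_000)
x, y = samples[:, 0], samples[:, 1]

slope_theory = cov[0, 1] / cov[0, 0]              # Cov(X, Y) * Cov(X, X)^{-1} = 0.6
mask = np.abs(x - 1.0) < 0.05                     # condition on X being close to 1
print(slope_theory, y[mask].mean())               # empirical E[Y | X ~ 1] is close to 0.6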

Why is there even a $Cov(X,Y)Cov(X,X)^{-1}$ term there? Subtracting $Cov(Y,X)Cov(X,X)^{-1}X$ from $Y$ makes the result completely independent of $X$ (which makes sense given the definition of "conditional probability", because you are already given a value of $X$). This is just a mathematical property; I don't know if there's an intuitive explanation as to why.

Human Interpretation

In case your post just wants an intuition as to why there is a $Cov(X,X)^{-1}$ in the prediction of a Gaussian process (and ignoring the conditional probability fluff), I don't think there's a real basis for this; it would only be coincidental, as the authors simply used the conditional probability mean formula, but I would guess $Cov(X,X)^{-1}$ somehow normalizes the values of the covariance matrix $Cov(X_*, X)$.

For example, if the training set $X$ has a lot of outliers and therefore extremely high variance (e.g. all non-diagonal entries in millions), then it is very likely that $Cov(X_*, X)$ would also be extremely high as $X_*$ follows the same distribution as $X$ (unless each data in $X_*$ matches the exact same variance in $X$). It doesn't make sense to multiply $y$ by millions though, as $y$ is already a somewhat decent estimator/prior.

It makes more sense to normalize $Cov(X_*, X)$ by dividing it with the training data variance $Cov(X,X) = Var(X)$ so that the ratio $Cov(X_*, X)Cov(X, X)^{-1}$ tends to be more closer to 1 when $X_*$ follows the same distribution as $X$ (which should be the case). If the ratio is exactly 1, then $X_*$ has the exact same distribution as $X$, so you just return the prior estimate $y$. If the ratio is far away from $1$, then the test set distribution is wildly different than the training set distribution, so you return a number far away from $y$.

",42699,,,,,12/14/2020 20:21,,,,0,,,,CC BY-SA 4.0 25190,2,,25150,12/14/2020 20:42,,0,,"

Yes, it is possible, CNNs can also be used for OCR (see the MNIST task and this blog), although it's not the common way for OCR because it is considered a bit overkill and inefficient. Furthermore, OCR is considered a solved problem already and don't need deep learning to do well, unless perhaps under unfavorable conditions like dark lighting, complex background of the document, weird fonts, occlusion of text, etc.

Breaking down your questions:

  1. a CNN can likely classify an image as "this image has a word document in it" vs "this image does not have a word document in it". The word document itself, and the outline of the words, should form distinct enough features for the CNN to discriminate against
  2. a CNN can likely identify the vendor based on the logo of the vendor if you have enough training images of it at various distortions (angles, occlusions, different lighting, etc)
  3. a CNN can likely do OCR well enough to extract the text, and then an entity extraction model can identify the vendor from the words. However, this approach would be uncommon as OCR is already considered a solved problem. But a neural network can potentially help preprocessing difficult OCR images (rotating image, identifying text boundaries, fix lighting, perform super-resolution, etc) to increase OCR accuracy
",42699,,,,,12/14/2020 20:42,,,,2,,,,CC BY-SA 4.0 25191,2,,25165,12/14/2020 21:55,,0,,"

As stated in the other answer, in general, the depth of the decision tree depends on the decision tree algorithm, i.e. the algorithm that builds the decision tree (for regression or classification).

To address your notes more directly and why that statement may not be always true, let's take a look at the ID3 algorithm, for instance. Here's the initial part of its pseudocode.

ID3 (Examples, Target_Attribute, Attributes)
    Create a root node for the tree
    If all examples are positive, Return the single-node tree Root, with label = +.
...

So, in general, the depth of the tree may not depend on the number of features, but it may just depend on the labels or training examples (which is a degenerate case), although, in most cases, it will also depend on the number of features, because each node represents a split of the training examples based on some condition that needs to be true for some feature (e.g. the height of the people must be less than 150cm).

",2444,,,,,12/14/2020 21:55,,,,0,,,,CC BY-SA 4.0 25192,2,,25178,12/14/2020 22:38,,2,,"

Most RL algorithms assume a discretization of time (although RL can also be applied to continuous-time problems [1]), i.e., in theory, it doesn't really matter what the actual time between consecutive time steps is, but, in practice, you may have delays in the rewards or observations, so you cannot perform e.g. the TD updates immediately. One natural solution to your problem would be to keep track (e.g. in a buffer) of the reward obtained and the next state that the agent ended up in after having taken a certain action in a certain state, or use some kind of synchronization mechanism (note that I've just come up with these solutions, so I don't know if this has been done or not to solve problems). In practice, this may not work (in all cases), for example, during real-time inference, where you need to decide quickly what you need to do even without full information about the current state or reward.

Note that, in RL, rewards are often said to be delayed, in the sense that

  1. you may know the consequences of an action only many time-steps after you have taken it (determining the consequences of an action is known as the credit assignment problem), or
  2. you may get a non-zero reward only when the agent gets to a goal/final state (in this last case, these rewards are also known as sparse).

These two problems are common in RL. However, if I understand correctly your concerns, this is a bit different than your problem, because your problem also involves the potential delay of the state or even reward that was supposed to arrive at a previous time step, which can be due e.g. to an erratic or broken sensor/actuator. For instance, if you are using DQN, which typically builds an approximation of the current state by concatenating the last frames captured by your camera, if you have delays in the frames that cause the natural order of the frames to be changed, this could lead to a bad approximation of the current state, which could actually lead to a catastrophic event. So, yes, this is an important problem that needs to be tackled.

Given that I am not really familiar with the actual existing solutions, I'll refer you to the paper Challenges of Real-World Reinforcement Learning that I read a few weeks ago, which mentions this issue and points you to other research work that attempted to address it. Take a look at this answer too, if you're more interested in delayed/sparse rewards.

",2444,,2444,,12/14/2020 23:24,12/14/2020 23:24,,,,0,,,,CC BY-SA 4.0 25193,1,25202,,12/14/2020 23:29,,0,59,"

Are the state-action values and the state value function equivalent for a given policy? I would assume so as the value function is defined as $V(s)=\sum_a \pi(a|s)Q_{\pi}(s,a)$. If we are operating a greedy policy and hence acting optimally, doesn't this mean that in fact the policy is deterministic and then $\pi(a|s)$ is $1$ for the optimal action and $0$ for all others? Would this then lead to an equivalence between the two?

Here is my work to formulate some form of proof, where I start with the idea that a policy is defined to be better than the current policy if, for all states, $Q_{\pi}(s, \pi^*(s)) \geq V_{\pi}(s)$:

I iteratively apply the optimal policy at each successive time step until, eventually, every future reward is collected under the optimal policy:

$$V_{\pi}(s) \leq Q_{\pi}(s, \pi^{*}(s))$$
$$= \mathbb{E}_{\pi^{*}}[R_{t+1} + \gamma V_{\pi}(S_{t+1}) \mid S_t = s]$$
$$\leq \mathbb{E}_{\pi^{*}}[R_{t+1} + \gamma Q_{\pi}(S_{t+1}, \pi^{*}(S_{t+1})) \mid S_t = s]$$
$$\leq \mathbb{E}_{\pi^{*}}[R_{t+1} + \gamma R_{t+2} + \gamma^{2} Q_{\pi}(S_{t+2}, \pi^{*}(S_{t+2})) \mid S_t = s]$$
$$\leq \mathbb{E}_{\pi^{*}}[R_{t+1} + \gamma R_{t+2} + \dots \mid S_t = s]$$
$$= V_{\pi^{*}}(s)$$

I would say that our final two lines are in fact inequalities, and for me this makes intuitive sense in that if we are always taking a deterministic greedy action our value function and Q function are the same. As detailed here, for a given policy and state we have that $V(s)=\sum_a \pi(a|s)Q_{\pi}(s,a)$ and if the policy is optimal and hence greedy then $\pi(a|s)$ is deterministic.

",42966,,2444,,1/28/2023 13:06,1/28/2023 13:06,Are the state-action values and the state value function equivalent for a given policy?,,1,0,,,,CC BY-SA 4.0 25194,1,,,12/14/2020 23:32,,2,64,"

Suppose we want to estimate a continuous function $f:\mathbb R^2 \rightarrow \mathbb R$ based on a sample using a NN (around 1000 examples). This function is not bounded. Which architecture would you choose ? How many layers/neurons ? Which activation functions ? Which loss function ?

Intuitively, I would go with one hidden layer, 2 neurons, $L^2$ loss, and maybe the Bent identity for the output and a sigmoid in the hidden layer ?

What are the advantages of doing something "fancier" than that ?

Would you also have chosen to use a NN for this job or would you have considered a regression SVM for example or something else (knowing that precision is the goal)?

",42969,,,,,12/15/2020 21:50,Which NN would you choose to estimate a continuous function $f:\mathbb R^2 \rightarrow \mathbb R$?,,1,0,,,,CC BY-SA 4.0 25199,1,,,12/15/2020 6:35,,10,1027,"

In many implementations/tutorials of GANs that I've seen so far (e.g. this), the generator and discriminator start with no prior knowledge. They continuously improve their performance with training. This makes me wonder — is it possible to use a pre-trained discriminator? I have two motivations for doing so:

  1. Eliminating the overhead of training the discriminator
  2. Being able to use already existing cool models

Would the generator be able to learn just the same, or is it dependent on the fact that they start from scratch?

",38076,,,,,12/24/2020 16:47,Can I start with perfect discriminator in GAN?,,1,1,,,,CC BY-SA 4.0 25201,1,25203,,12/15/2020 11:33,,0,100,"

I am confused about the workings of the first- and every-visit MC.

My first question is, when we have multiple traces, do we average over traces or the total number of times we have visited that state?

So, in the example:

$$\tau_1 = \text{House} +3, \text{House} +2, \text{School} -4, \text{House} +4, \text{School} -3, \text{Holidays}$$ $$\tau_2 = \text{House} -2, \text{House} +3, \text{School} -3, \text{Holidays},$$

where we have states of either House, Holidays, or School, with the numerical values being the immediate rewards.

For every-visit MC to find the state value of HOUSE, with $\gamma$=1, my intuition would be to create a return list, R, that looks like the following

$$R_1=[3+2−4+4−3, 2−4+4−3, 4−3]= [2, −1, 1]$$ $$R_2=[−2+3−3, 3−3]=[−2, 0]$$

$$R_1+R_2=[2,−1, 1,−2, 0]$$

which, when averaged over 5 visits, is 0 and the correct answer, but I would like if you could confirm if the methodology is correct?

However, another approach would be to compute the average returns for each trace. Which is correct?

",42966,,2444,,12/16/2020 10:07,12/16/2020 10:07,"When we have multiple traces, do we average over traces or the total number of times we have visited that state?",,1,1,,,,CC BY-SA 4.0 25202,2,,25193,12/15/2020 12:17,,1,,"

In general they are not the same and that should be clear as to why -- mathematically you are conditioning on an extra random variable being known in the state-action value function. You have the correct relationship between them, but I think your understanding of the two may be slightly off. The state-action value function is a function of both $s$ and $a$ whereas the value function is a function of just $s$. As you have noted, the value function is equal to the expected value of the state-action value function, with the expectation taken over the action distribution induced by the policy.

However, if you have a deterministic policy, then two are equivalent only if you evaluate the state-action value function at the action that the deterministic policy gives. That is, $v_\pi(s) = Q(s, \pi(s))$. This is because, as you say, for a given state the policy will assign probability one to a certain action and 0 to all other actions in the action space and so the equation I just wrote is what the expectation in your question reduces to.

Now, you might think 'well the policy is deterministic so we are always going to choose this action, so my theory was correct'. This is not true. The definition of the state-action value function is the value of the expected future (possibly discounted) returns given that we are in state $s$ and given that we have taken action $a$. This means that we have already chosen our action and we are not choosing it according to the policy -- the policy will only choose all future actions.

You also say that "If we are operating a greedy policy and hence acting optimally" which is not true. Just because we act greedily it does not imply we are acting optimally. Value functions are functions of a policy. If the policy is completely random then you would get certain value and state-action value functions which you could act greedily to but this would not be optimal.

",36821,,,,,12/15/2020 12:17,,,,4,,,,CC BY-SA 4.0 25203,2,,25201,12/15/2020 12:30,,2,,"

For every visit MC you create a list for each state. Every time you enter a state you calculate the returns for the episode and append these returns to a list. Once you have done this for all the episode you want to average over you simply calculate the value of a state to be the average of this list of returns for the state.

First visit MC is almost the same except that you only append the returns to the state returns list for the first time you visit the state in an episode.

Your workings are correct, you average over the number of times you have visited the state (in every visit MC). So, in your example, you would get the value of HOUSE to be 0, as you stated. If you were doing first visit MC then the returns for episode 1 would be 2 and the returns for episode 2 would be -2, which you would then average over the two first visits to again give you 0 (this would not always be the case that they are equal after 2 episodes but in the limit both methods do converge to the true state value function).
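
For concreteness, here is a tiny sketch (my own illustration, with $\gamma = 1$ to match the example) that reproduces the every-visit returns for the state HOUSE from the two traces in the question:

# Every-visit MC returns for HOUSE, using the traces from the question.
trace1 = [("House", 3), ("House", 2), ("School", -4), ("House", 4), ("School", -3)]
trace2 = [("House", -2), ("House", 3), ("School", -3)]

returns = []
for trace in (trace1, trace2):
    rewards = [r for _, r in trace]
    for t, (state, _) in enumerate(trace):
        if state == "House":
            returns.append(sum(rewards[t:]))   # return from this visit onwards

print(returns, sum(returns) / len(returns))    # [2, -1, 1, -2, 0] 0.0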

",36821,,,,,12/15/2020 12:30,,,,2,,,,CC BY-SA 4.0 25204,1,,,12/15/2020 13:57,,0,310,"

Reinforcement Learning: An Introduction second edition, Richard S. Sutton and Andrew G. Barto:

  1. We made two unlikely assumptions above in order to easily obtain this guarantee of convergence for the Monte Carlo method. ... For now we focus on the assumption that policy evaluation operates on an infinite number of episodes. This assumption is relatively easy to remove. In fact, the same issue arises even in classical DP methods such as iterative policy evaluation, which also converge only asymptotically to the true value function.
  1. There is a second approach to avoiding the infinite number of episodes nominally required for policy evaluation, in which we give up trying to complete policy evaluation before returning to policy improvement. On each evaluation step we move the value function toward $q_{\pi_k}$, but we do not expect to actually get close except over many steps. We used this idea when we first introduced the idea of GPI in Section 4.6. One extreme form of the idea is value iteration, in which only one iteration of iterative policy evaluation is performed between each step of policy improvement. The in-place version of value iteration is even more extreme; there we alternate between improvement and evaluation steps for single states.

The original pseudocode:

Monte Carlo ES (Exploring Starts), for estimating $\pi \approx \pi_{*}$

Initialize:

$\quad$ $\pi(s) \in \mathcal{A}(s)$ (arbitrarily), for all $s \in \mathcal{S}$

$\quad$ $Q(s, a) \in \mathbb{R}$ (arbitrarily), for all $s \in \mathcal{S}, a \in \mathcal{A}(s)$

$\quad$ $Returns(s, a) \leftarrow$ empty list, for all $s \in \mathcal{S}, a \in \mathcal{A}(s)$

Loop forever (for each episode):

$\quad$ Choose $S_{0} \in \mathcal{S}, A_{0} \in \mathcal{A}\left(S_{0}\right)$ randomly such that all pairs have probability $> 0$

$\quad$ Generate an episode from $S_{0}, A_{0},$ following $\pi: S_{0}, A_{0}, R_{1}, \ldots, S_{T-1}, A_{T-1}, R_{T}$

$\quad$ $G \leftarrow 0$

$\quad$ Loop for each step of episode, $t=T-1, T-2, \ldots, 0$

$\quad\quad$ $G \leftarrow \gamma G+R_{t+1}$

$\quad\quad$ Unless the pair $S_{t}, A_{t}$ appears in $S_{0}, A_{0}, S_{1}, A_{1} \ldots, S_{t-1}, A_{t-1}:$

$\quad\quad\quad$ Append $G$ to $Returns\left(S_{t}, A_{t}\right)$

$\quad\quad\quad$ $Q\left(S_{t}, A_{t}\right) \leftarrow \text{average}\left(Returns\left(S_{t}, A_{t}\right)\right)$

$\quad\quad\quad$ $\pi\left(S_{t}\right) \leftarrow \arg \max _{a} Q\left(S_{t}, a\right)$

I want to make the same algorithm but with a model. The book states:

  1. With a model, state values alone are sufficient to determine a policy; one simply looks ahead one step and chooses whichever action leads to the best combination of reward and next state, as we did in the chapter on DP.

So, based on the 1st quote, I must use the "exploring starts" and "one evaluation, one improvement" ideas (as in the model-free version) to make the algorithm converge.

My version of the pseudocode:

Monte Carlo ES (Exploring Starts), for estimating $\pi \approx \pi_{*}$ (with model)

Initialize:

$\quad$ $\pi(s) \in \mathcal{A}(s)$ (arbitrarily), for all $s \in \mathcal{S}$

$\quad$ $V(s) \in \mathbb{R}$ (arbitrarily), for all $s \in \mathcal{S}$

$\quad$ $Returns(s) \leftarrow$ empty list, for all $s \in \mathcal{S}$

Loop forever (for each episode):

$\quad$ Choose $S_{0} \in \mathcal{S}, A_{0} \in \mathcal{A}\left(S_{0}\right)$ randomly such that all pairs have probability $> 0$

$\quad$ Generate an episode from $S_{0}, A_{0},$ following $\pi: S_{0}, A_{0}, R_{1}, \ldots, S_{T-1}, A_{T-1}, R_{T}$

$\quad$ $G \leftarrow 0$

$\quad$ Loop for each step of episode, $t=T-1, T-2, \ldots, 1$:

$\quad\quad$ $G \leftarrow \gamma G+R_{t+1}$

$\quad\quad$ Unless $S_{t}$ appears in $S_{0}, S_{1}, \ldots, S_{t-1}:$

$\quad\quad\quad$ Append $G$ to $Returns \left(S_{t}\right)$

$\quad\quad\quad$ $V\left(S_{t}\right)\leftarrow\text{average}\left(Returns\left(S_{t}\right)\right)$

$\quad\quad\quad$ $\pi\left(S_{t-1}\right) \leftarrow \operatorname{argmax}_{a} \sum_{s^{\prime}, r} p\left(s^{\prime}, r \mid S_{t-1}, a\right)\left[\gamma V\left(s^{\prime}\right)+r\right]$

Note: here I update the policy at $S_{t-1}$ because, in the step before, we update $V(S_{t})$, and changes to $V(S_{t})$ don't affect $\pi(S_{t})$, but do affect $\pi(S_{t-1})$, as $S_{t}$ is among the successor states $s'$ of $S_{t-1}$.


",42983,,42983,,12/17/2020 9:56,12/17/2020 9:56,"Is my pseudocode titled ""Monte Carlo Exploring Starts (with model)"" correct?",,0,4,,,,CC BY-SA 4.0 25205,1,,,12/15/2020 14:20,,7,1308,"

The KL divergence is quite easy to compute in closed form for simple distributions, such as Gaussians, but it has some not-very-nice properties. For example, it is not symmetric (thus it is not a metric) and it does not respect the triangle inequality.
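
To make the first point concrete, here is a small numpy sketch (my own, not from any particular library) of the closed-form KL divergence between two 1-D Gaussians; swapping the arguments gives a different value, which illustrates the asymmetry:

# Closed-form KL divergence between two univariate Gaussians.
import numpy as np

def kl_gauss(mu1, s1, mu2, s2):
    # KL( N(mu1, s1^2) || N(mu2, s2^2) )
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

print(kl_gauss(0.0, 1.0, 1.0, 2.0))   # KL(p||q) ~ 0.44
print(kl_gauss(1.0, 2.0, 0.0, 1.0))   # KL(q||p) ~ 1.31, a different value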

What is the reason it is used so often in ML? Aren't there other statistical distances that can be used instead?

",32583,,,,,12/22/2020 15:40,Why is KL divergence used so often in Machine Learning?,,2,4,,,,CC BY-SA 4.0 25206,2,,25194,12/15/2020 15:51,,1,,"

It depends on the complexity of your problem. $\mathbb{R}^2 \rightarrow \mathbb{R}^1$ looks simple, but I can give you nonsensically complicated examples that would need a deep network. So the complexity of the problem sets the number of layers and neurons, and the kind of problem (e.g. whether it needs memory or not) determines the architecture of your network. In most cases, the mean squared error is fine as a loss. For the activation function, I would go with the ReLU.

If the SVM is good enough for your problem, then go with it. Your question is general, and an exact answer needs more information about the problem.

",41547,,2444,,12/15/2020 21:50,12/15/2020 21:50,,,,1,,,,CC BY-SA 4.0 25207,2,,12240,12/15/2020 16:29,,0,,"

First, for clarity, I want to distinguish between a "convolutional kernel" and a "filter" here: let's say a filter consists of several convolutional kernels. Second, in NLP, 1D convolution is mostly used, so unlike CV, where a $k \times k$ convolutional kernel is typical, in NLP the kernel size is generally just $k$.

With that in mind, things hopefully become a little clearer. If we adopt the same concepts as in a typical CV scenario, per my understanding, the size of the convolutional kernel is only $k$, but the number of kernels, i.e. the dimension of the new "channel", is set to $2d$. One kernel maps $k \times d$ elements to a single element, so $2d$ kernels together convert the input into a vector of dimension $2d$.

Since the calculation maps a matrix in $\mathbb{R}^{k \times d}$ to a vector in $\mathbb{R}^{2d \times 1}$, it can be expressed as a matrix multiplication, or in a fully-connected view: first flatten the input matrix by concatenating all rows into one big vector of dimension $\mathbb{R}^{kd \times 1}$, then multiply it by a weight matrix $\boldsymbol{W} \in \mathbb{R}^{2d \times kd}$ to get the final vector (a small sketch of this is given below).

The key source of the confusion, in my opinion, is that the "kernel" used here is a little different from the same concept in a typical CNN. Again, if we describe it in traditional CNN terminology, the kernel size should be $k$, and there are $2d$ kernels applied.
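
To check the claim above, here is a minimal PyTorch sketch (my own, with assumed sizes $d$ and $k$, not code from the paper) showing that a 1D convolution with kernel size $k$ and $2d$ output channels is, at a single position, the same as multiplying the flattened $k \times d$ window by a $(2d, kd)$ weight matrix:

# 1D convolution vs. explicit matrix multiplication at one position.
import torch

d, k, L = 4, 3, 10                      # embedding dim, kernel size, sequence length (assumed)
x = torch.randn(1, d, L)                # (batch, in_channels=d, length=L)
conv = torch.nn.Conv1d(in_channels=d, out_channels=2 * d, kernel_size=k, bias=False)

y = conv(x)                             # shape (1, 2*d, L - k + 1)

# Same result at position t = 0 via an explicit matrix multiplication:
W = conv.weight.reshape(2 * d, d * k)   # (2d, d*k), using PyTorch's (channel, position) flattening
window = x[0, :, 0:k].reshape(d * k)    # flatten the first k-wide window the same way
print(torch.allclose(y[0, :, 0], W @ window, atol=1e-6))  # True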

Hope my understanding is correct.

",42984,,,,,12/15/2020 16:29,,,,0,,,,CC BY-SA 4.0 25208,1,,,12/15/2020 18:16,,0,65,"

I am training an AlexNet Convolutional Neural Network to classify images in a dataset. I want to know if there is any general rule for using data augmentation in training a neural network. How can I make sure about the amount of data, and how can I know if I need more data?

",33792,,2444,,12/16/2020 9:11,12/16/2020 9:11,Is there any rule of thumb to determine the amount of data needed to train a CNN,,0,4,,,,CC BY-SA 4.0 25210,1,25240,,12/16/2020 4:18,,4,249,"

(The math problem here just serves as an example, my question is on this type of problems in general).

Given two Schur polynomials, $s_\mu$, $s_\nu$, we know that we can decompose their product into a linear combination of other Schur polynomials.

$$s_\mu s_\nu = \sum_\lambda c_{\mu,\nu}^\lambda s_\lambda$$

and we call $c_{\mu,\nu}^\lambda$ the LR coefficient (always a non-negative integer).

Hence, a natural supervised learning problem is to predict whether the LR coefficient is of a certain value or not given the tuple $<\mu, \nu, \lambda>$. This is not difficult.

My question is: can we either use ML/RL to do anything else other than predicting (in this situation) or extract anything from the prediction result? In other words, a statement like "oh, I am 98% confident that this LR coefficient is 0" does not imply anything mathematically interesting?

",41751,,2444,,1/17/2021 16:53,2/16/2021 19:08,Can we use ML to do anything else other than predicting (in the case of mathematical problems)?,,1,1,,,,CC BY-SA 4.0 25215,1,,,12/16/2020 15:39,,2,78,"

I am working on image segmentation of MRI thigh images with deep learning (Unet). I noticed that I get a higher average dice accuracy over my predicted masks if I have less samples in the test data set. I am calculating it in tensorflow as

from tensorflow.keras import backend as K   # K is the Keras backend

def dice_coefficient(y_true, y_pred, smooth=0.00001):
    # Dice = (2 * intersection + smooth) / (sum(y_true) + sum(y_pred) + smooth), over the flattened masks
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

The difference is about 0.003 when I have 4x more samples.

I am calculating the Dice coefficient over each MRI 2D slice.

Why could this be?

The figure below shows how the accuracy decreases with the fraction of samples: I start with 0.1 of the data and go up to the whole data set. The splitting of the data was random.

",42357,,42357,,12/22/2020 20:21,12/22/2020 20:21,Why do I get higher average dice accuracy for less data,,0,4,,,,CC BY-SA 4.0 25216,1,25219,,12/16/2020 15:44,,2,110,"

I have a question about the training data used during the update/back-propagation step of the neural network in AlphaZero.

From the paper:

The data for each time-step $t$ is stored as ($s_t, \pi_t, z_t$) where $z_t = \pm r_T$ is the game winner from the perspective of the current player at step $t$. In parallel (Figure 1b), new network parameters $\Theta_i$ are trained from data ($s,\pi, z$) sampled uniformly among all time-steps of the last iteration(s) of self-play

Regarding the policy at time $t$ ($\pi_t$), I understood this as the probability distribution of taking some action that is proportional to the visit count to each child node, i.e. during MCTS, given some parent node (state) at time $t$, if some child node (subsequent state) $a$ is visited $N_a$ times and all children nodes are visited $\sum_b N_b$ times, then the probability of $a$ (and its corresponding move) being sampled is $\frac{N_a}{\sum_b N_b}$, and this parametrizes the distribution $\pi_t$. Is this correct? If this is the case, then for some terminal state $T$, we can't parametrize a distribution because we have no children nodes (states) to visit. Does that mean we don't add ($s_T, \pi_T, z_T$) to the training data?
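
(For what it's worth, here is a tiny numpy illustration, with made-up visit counts, of the normalisation I describe above:)

# Normalising MCTS visit counts of the children into the policy target pi_t.
import numpy as np

visits = np.array([30, 5, 15], dtype=float)   # N_a for each child action a (made-up numbers)
pi_t = visits / visits.sum()                  # probability of sampling each action
print(pi_t)                                   # [0.6, 0.1, 0.3]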

Also, a followup question regarding the loss function:

$l = (z-v)^2 - \pi^T \log\textbf{p} + c||\Theta||^2$

I'm confused about this $\pi^T$ notation. My best guess is that this is a vector of actions sampled from all policies in the $N$ X $(s_t, \pi_t, z_t)$ minibatch, but I'm not sure. (PS the $T$ used in $\pi^T$ is different from the $T$ used to denote a terminal state if you look at the paper. Sorry for the confusion, I don't know how to write two different looking T's)

",43016,,2444,,12/17/2020 1:22,12/17/2020 1:22,"In AlphaZero, do we need to store the data of terminal states?",,1,0,,,,CC BY-SA 4.0 25217,1,25225,,12/16/2020 15:58,,1,250,"

In the Attention is all you need paper, on the 4th page, we have equation 1, which describes the self-attention mechanism of the transformer architecture

$$ \text { Attention }(Q, K, V)=\operatorname{softmax}\left(\frac{Q K^{T}}{\sqrt{d_{k}}}\right) V $$

Everything is fine up to here.

Then they introduce the multi-head attention, which is described by the following equation.

$$ \begin{aligned} \text { MultiHead }(Q, K, V) &=\text { Concat}\left(\text {head}_{1}, \ldots, \text {head}_{\mathrm{h}}\right) W^{O} \\ \text { where head}_{\mathrm{i}} &=\text {Attention}\left(Q W_{i}^{Q}, K W_{i}^{K}, V W_{i}^{V}\right) \end{aligned} $$

Once the multi-head attention is motivated at the end of page 4, they state that for a single head (the $i$th head), the query $Q$ and key $K$ inputs are first linearly projected by $W_i^Q$ and $W_i^K$, then dot product is calculated, let's say $Q_i^p = Q W_i^Q$ and $K_i^p = K W_i^K$.

Therefore, the dot product of the projected query and key becomes the following from simple linear algebra.

$$Q_i^p {K_i^p}^\intercal = Q W_i^Q {W_i^K}^T K^T = Q W_i K^T,$$

where

$$W_i = W_i^Q {W_i^K}^T$$

Here, $W_i$ is the product of the query projection matrix and the transposed key projection matrix. However, it is a matrix with shape $d_{model} \times d_{model}$. Why did the authors not define only a $W_i$, instead of the pair $W_i^Q$ and $W_i^K$, which together have $2 \times d_{model} \times d_{k}$ elements? In deep learning applications, I think it would be very inefficient.

Is there something that I am missing, like these 2 matrices $W_i^Q$ and $W_i^K$ should be separate because of this and that?

",43019,,43019,,12/17/2020 5:32,12/17/2020 5:32,"In the multi-head attention mechanism of the transformer, why do we need both $W_i^Q$ and ${W_i^K}^T$?",,2,0,,,,CC BY-SA 4.0 25218,1,,,12/16/2020 16:03,,5,548,"

I'm dealing with a (stochastic) Multi Armed Bandit (MAB) with a large number of arms.

Consider a pizza machine that produces a pizza depending on an input $i$ (equivalent to an arm). The (finite) set of arms $K$ is given by $K=X_1\times X_2 \times X_3\times X_4$ where $X_j$ denote the set of possible amounts of ingredient $j$.

e.g. $X_1=\{$ small, medium, large $\}$ (amount of cheese) or $X_2=\{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10\}$ (slices of salami)

Thus, running the pizza machine with input $i$ is equivalent to pulling arm $i\in K$. Due to the different permutations, the number of arms $|K|$ is very large (between 100,000 and 1,000,000). Depending on the pulled arm $i$, the machine generates a pizza (associated with a reward that indicates how delicious the pizza is). However, the machine's rewards are not deterministic: pulling an arm $i$ generates a reward according to an unknown (arm-specific) distribution $P_i$, with all rewards drawn from $P_i$ being i.i.d. In addition, it is possible to normalize all rewards to the interval [0,1].

The above problem corresponds to the standard stochastic MAB problem, but is characterized by the large number of arms. In the case of the pizza machine, several days of computation time are available to determine the best pizza, so the number of iterations is allowed to be large as well.

In my investigation of MAB algorithms addressing a large number of arms, I came across studies that handle up to a few thousand arms.

Are there algorithms in the MAB domain that specifically deal with large problem instances (e.g. with $|K|>100,000$)?

",21287,,43231,,12/27/2020 14:44,1/21/2022 18:01,Multi Armed Bandits with large number of arms,,1,1,,,,CC BY-SA 4.0 25219,2,,25216,12/16/2020 16:21,,0,,"

I'm not 100% sure whether or not they added any data for terminal game states, but it's very reasonable to indeed make the choice not to include data for terminal game states. As you rightly pointed out, we don't have any meaningful targets to update the policy head towards in those cases, and this is not really a problem because we would also never actually make use of the policy output in a terminal game state. For the value head we could provide meaningful targets to update towards, but again we would never actually have to make use of such outputs; if we encounter a terminal game state in a tree search, we just back up the true value of that terminal game state instead of making a call to the network to obtain a value function approximation.

In theory, I could imagine some cases where training the value head on terminal game states might be slightly beneficial despite not being strictly necessary; it could enable generalisation to similar game states that are not terminal (but close to being terminal), and speed up learning for those. For example, if you have a game where the goal is to complete a line of $5$ pieces, training the value head on terminal states where you actually have a line of $5$ pieces and have entirely won the game might generalise and speed up learning for similar game states where you may not yet have $5$ pieces in a line, but are very close to that goal. That said, intuitively I really don't feel like this would provide a big benefit (if any), and we could probably also come up with cases where it would be harmful.


In the $\pi^{\text{T}}$ notation, $\pi$ is a vector (for any arbitrary time step, the time step is not specified here) containing a discrete probability distribution over actions (visit counts of MCTS, normalised into a probability distribution), and the $\text{T}$ simply denotes that we take the transpose of that vector. Personally I don't like the notation though, I prefer something like $\pi^{\top}$ which is more clearly distinct from a letter $T$ or $\text{T}$.

Anyway, once you understand that to denote the transpose, you'll see that $\pi^{\top}\log(\mathbf{p})$ is a dot product between two vectors, which then ends up being a single scalar.

",1641,,,,,12/16/2020 16:21,,,,0,,,,CC BY-SA 4.0 25220,1,,,12/16/2020 16:26,,3,312,"

I have a question regarding the action space of the policy network used in AlphaZero.

From the paper:

We represent the policy π(a|s) by a 8 × 8 × 73 stack of planes encoding a probability distribution over 4,672 possible moves. Each of the 8 × 8 positions identifies the square from which to “pick up” a piece. The first 56 planes encode possible ‘queen moves’ for any piece: a number of squares [1..7] in which the piece will be moved, along one of eight relative compass directions {N,NE,E,SE,S,SW,W,NW}......The policy in Go is represented identically to AlphaGo Zero (29), using a flat distribution over 19 × 19 + 1 moves representing possible stone placements and the pass move. We also tried using a flat distribution over moves for chess and shogi; the final result was almost identical although training was slightly slower.

I don't understand why a stack of planes is used for the action space here. I'm also not entirely sure I understand how this representation is used. My guess is that for Chess, the 8x8 plane represents the board, and each square has a probability assigned to it of picking up a piece on that square (let's assume that all illegal moves haven't been masked yet, so all squares have probability mass on them). From there, we choose from possible 'Queen' type moves or 'Knight' type moves, which total to 73 different types of moves. Is this interpretation correct? How would one go from this representation to sampling a legal move (i.e. how is this used to parametrize a distribution I can actually sample moves from?)

During MCTS when expanding a leaf node, we get $p_a$, the probability of taking some action $a$ from the policy head, so I would also need to be able to go from this 'planes' representation to the probability of taking a specific action.

The paper also mentions trying out 'flat distributions', which I'm not entirely sure what this means either.

",43016,,43016,,12/16/2020 20:14,12/16/2020 20:14,Stack of Planes as the Action Space Representation for AlphaZero (Chess),,0,1,,,,CC BY-SA 4.0 25221,1,25224,,12/16/2020 17:20,,1,62,"

Does the image show logistic regression or an SVM, and why?

",33475,,33475,,12/16/2020 23:49,12/16/2020 23:49,"Does the image is logistic regression or SVM, and why?",,1,9,,12/17/2020 20:47,,CC BY-SA 4.0 25222,1,25230,,12/16/2020 19:56,,1,547,"

In articles that describe neural architectures with multiple attention layers of the same form, are the weight matrices usually the same across the layers? Consider, as an example, "Attention is all you need". The authors stack several layers of multi-head self-attention in which each layer has the same number of heads. Each head $i$ involves a trainable weight matrix $W_{i}^{Q}$. There is no subscript, superscript, or any other indication that this matrix is different for each layer. My question is this: are there separate $W_{i}^{Q}$ for layers $1,2,3,...$ or is this a single matrix shared throughout layers?

My intuition is that the authors of the paper wanted to cut down on notation, and that the matrices are different in different layers. But I want to be sure I understand this, since I see the same kind of thing in many other papers as well.

",43025,,,,,12/17/2020 1:45,"In attention models with multiple layers, are weight matrices shared across layers?",,1,2,,,,CC BY-SA 4.0 25224,2,,25221,12/16/2020 20:49,,2,,"

The straight dashed line shows the typical decision boundary of logistic regression or any linear classifier. The dashed circle shows the decision boundary of an SVM with a non-linear kernel. Since the data is not linearly separable in the original 2D feature space, one can build a higher-dimensional space that takes the non-linear interactions of the original 2 features into account, and then discriminate between the x and o data with a linear discriminator applied in that higher-dimensional space. This shows the beauty of kernel methods: they turn a non-linear, low-dimensional (finite-dimensional) problem into a linear, high-dimensional (possibly infinite-dimensional) one.

",42330,,,,,12/16/2020 20:49,,,,5,,,,CC BY-SA 4.0 25225,2,,25217,12/16/2020 20:53,,1,,"

I'll use notation from the paper you cited, and any other readers should refer to the paper (widely available) for definitions of notation. The utility of using $W^Q$ and $W^K$, rather than $W$, lies in the fact that they allow us to add fewer parameters to our architecture. $W$ has dimension $d_{model} \times d_{model}$, which means that we are adding $d_{model}^2$ parameters to our architecture. $W^Q$ and $W^K$ each have dimension $d_{model} \times d_k$, and $d_k=\frac{d_{model}}{h}$. If we use these two matrices, we only add $2\frac{d_{model}^2}{h}$ parameters to our architecture, even though their multiplication (with the transpose) allows us to have the correct dimensions for matrix multiplication with $Q$ and $K$.

We do use $h$ attention heads, which then brings our number of parameters back up, but the multiple heads let the model attend to different pieces of information in our data.
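
As a rough illustration (my own arithmetic, using the base configuration $d_{model}=512$, $h=8$, $d_k=64$ from the paper), the counts work out as follows:

# Parameter counts for one head: a full d_model x d_model matrix vs. the W^Q / W^K pair.
d_model, h = 512, 8
d_k = d_model // h

full_W_per_head = d_model * d_model          # a single d_model x d_model matrix: 262,144
pair_per_head = 2 * d_model * d_k            # W_i^Q and W_i^K together: 65,536 = 2 * d_model**2 / h
print(full_W_per_head, pair_per_head)
print(h * pair_per_head)                     # across all h heads: 524,288 = 2 * d_model**2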

",43025,,43025,,12/16/2020 21:01,12/16/2020 21:01,,,,2,,,,CC BY-SA 4.0 25228,1,25234,,12/16/2020 23:04,,5,5407,"

Can someone explain to me with a proof or example why you can't linearly separate XOR (and therefore need a neural network, the context I'm looking at it in)?

I understand why it's not linearly separable if you draw it graphically (e.g. here), but I can't seem to find a formal proof somewhere, I wanted to try and understand it with either an equation or example written down. I'm wondering if one exists (I guess it has to do with contradictions?), but I can't seem to find it? I have seen this, but it's more a reason than a proof.

",42926,,2444,,1/19/2021 2:19,10/2/2021 20:58,Is there a proof to explain why XOR cannot be linearly separable?,,2,0,,,,CC BY-SA 4.0 25229,2,,25217,12/17/2020 0:57,,0,,"

In practice, matrices $W^Q, W^K, W^V$ (each of size $d_{model} \times d_{model}$) are completely removed instead, and Transformer implementations just learn a single set of matrices $\{ W_i^{Q*}, W_i^{K*}, W_i^{V*} \}$ (each of size $d_{model} \times \frac{d_{model}}{h}$) for each head, where

$W_i^{Q*} = W^Q W_i^Q \\ W_i^{K*} = W^K W_i^K \\ W_i^{V*} = W^V W_i^V $

so that:

$Q_i(x) = x W_i^{Q*} = x W^Q W_i^Q = Q W_i^Q\\ K_i(x) = x W_i^{K*} = x W^K W_i^K = K W_i^K\\ V_i(x) = x W_i^{V*} = x W^V W_i^V = V W_i^V\\ head_i(x) = softmax \left(\frac{Q_i(x) K_i(x)^T}{\sqrt{d_k}} \right) V_i(x)$.

I can confirm this with the original Transformer implementation in Tensor2Tensor, and also the BERT code that uses the encoding part of the Transformer.
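
As an illustration, here is a small numpy sketch (my own, not the Tensor2Tensor or BERT code) of a single head computed with the fused per-head matrices described above; the sizes are assumptions:

# One attention head using fused per-head projections of shape (d_model, d_k).
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
seq_len, d_model, h = 5, 512, 8
d_k = d_model // h

x = rng.standard_normal((seq_len, d_model))
W_i_Qstar = rng.standard_normal((d_model, d_k)) * 0.01   # fused query projection for head i
W_i_Kstar = rng.standard_normal((d_model, d_k)) * 0.01   # fused key projection for head i
W_i_Vstar = rng.standard_normal((d_model, d_k)) * 0.01   # fused value projection for head i

Q_i, K_i, V_i = x @ W_i_Qstar, x @ W_i_Kstar, x @ W_i_Vstar
head_i = softmax(Q_i @ K_i.T / np.sqrt(d_k)) @ V_i        # shape (seq_len, d_k)
print(head_i.shape)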

",42699,,,,,12/17/2020 0:57,,,,2,,,,CC BY-SA 4.0 25230,2,,25222,12/17/2020 1:45,,1,,"

Weights are not normally shared across Transformer layers in vanilla Transformers. However, there has been research done in testing out sharing weights, and sometimes they improve the scores. Here are some examples:

ALBERT is an improvement on BERT (so only uses the encoding side, no decoder), and shows that sharing the attention weights only $\left\{ W_i^Q, W_i^K, W_i^V \right\}$ across all Transformer layers for large networks either result in the same accuracy or slightly improved accuracy, while significantly reducing model size. Sharing the position-wise FFN layer though hindered performance.

Text-to-Text Transfer Transformer shows that sharing the weights between encoder and decoder layer of the transformer (so e.g. layer 1 encoding weights = layer 1 decoding weights) barely affected accuracy (it dropped by 0.5%), but the model size is halved.

I'm sure there are more papers I have forgotten. Sharing weights is still an active area of research, but for vanilla transformers it is assumed they do not share weights.

",42699,,,,,12/17/2020 1:45,,,,0,,,,CC BY-SA 4.0 25234,2,,25228,12/17/2020 2:49,,3,,"

Before proving that XOR cannot be linearly separable, we first need to prove a lemma:

Lemma 1

Lemma: If 3 points are collinear and the middle point has a different label than the other two, then these 3 points cannot be linearly separable.

Proof: Let us label the points as point $A$, $B$, and $C$. $A$ and $C$ have the same label, and $B$ has a different label. They are all collinear with line $\mathcal{L}$.

Assume the contradiction, so a line can linearly separate $A$, $B$, and $C$. This means the line must cross between segment $AB$ and segment $BC$ to linearly separate these three points (by definition of linear separability). Let us label the point where the line crosses segment $AB$ as point $Y$, and the point where the line crosses segment $BC$ as point $Z$.

However, since segments $AB$ and $BC$ lie on line $\mathcal{L}$, points $Y$ and $Z$ also fall on line $\mathcal{L}$. Since only one unique line can pass through 2 points, it must be that the only line that passes through segments $AB$ and $BC$ (and therefore separates points $A$, $B$, and $C$) is line $\mathcal{L}$ itself.

However, line $\mathcal{L}$ cannot linearly separate $A$, $B$, and $C$, since line $\mathcal{L}$ also passes through them. Therefore, no line exists that can separate $A$, $B$, and $C$.

Main proof

Consider these 4 points that represent a XOR table. Let us label them clock-wise, so the top-left point as $A$, top-right point as $B$, bottom-right point as $C$, and bottom-left point as $D$. So $A$ and $C$ have the same label, and $B$ and $D$ have the same label. We want to show that points $A, B, C$ and $D$ cannot be linearly separable.

Assume the contradiction, and that there is a line that can separate these 4 points.

Imagine a fifth point that lies in the center, and let us label this as point $E$.

Since $E$ lies in the center, the three points $A, E$ and $C$ are collinear. Similarly, since $E$ lies in the center, the three points $B, E$ and $D$ are collinear.

Because we assume a line can linearly separate $A, B, C$ and $D$, then this line must label point $E$ as some label. If $E$ shares the same label as $A$ and $C$, then the points $B, E$ and $D$ will become "collinear points where the middle point has a different label", which by Lemma 1 cannot be linearly separable. Likewise, if $E$ shares the same label as $B$ and $D$, then the points $A, E$ and $C$ will become "collinear points where the middle point has a different label", which by Lemma 1 cannot be linearly separable.

Therefore it is impossible to give a label to $E$ while satisfying linear separability. As a result, our assumption must be false, and the four points $A, B, C$ and $D$ cannot be linearly separable.
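
As a side note, here is a quick numerical sanity check (not a proof, just supporting the argument above): a large random search over linear classifiers $w_1 x_1 + w_2 x_2 + b$ never labels all 4 XOR points correctly.

# Random search over linear classifiers on the 4 XOR points.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])                      # XOR labels

rng = np.random.default_rng(0)
W = rng.uniform(-10, 10, size=(100_000, 2))     # random weight vectors
b = rng.uniform(-10, 10, size=(100_000, 1))     # random biases
preds = (W @ X.T + b > 0).astype(int)           # predictions of each classifier on the 4 points
print((preds == y).sum(axis=1).max())           # prints 3: no classifier gets all 4 points right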

",42699,,42699,,12/17/2020 9:57,12/17/2020 9:57,,,,1,,,,CC BY-SA 4.0 25236,2,,16187,12/17/2020 5:31,,0,,"

Discriminative models give the probability of an element in the feature space $x \in X$ belonging to a class $y \in Y$, i.e. $p(Y|X)$, where $Y$ is the set of classes in a classification problem.

The discriminative feature representation in this context means the feature map/s which are the output/s of the convolutional layers of the backbone (the convolutional layers of the neural network) which (presumably) differ on the basis of their class, and can be used to discriminate which class the original image belongs to, in the case of image classification.

",30962,,30962,,12/17/2020 5:58,12/17/2020 5:58,,,,0,,,,CC BY-SA 4.0 25238,1,,,12/17/2020 10:39,,1,147,"

I've been trying to understand RMSprop for a long time, but there's something that keeps eluding me.

Here is a screenshot from this video by Andrew Ng.

From the element-wise comment, from what I understand, $dW$ and $db$ are matrices, so that must mean that $S_{dW}$ is a matrix (or tensor) as well.

So, in the update rule, do they divide a matrix by another matrix? From what I saw on google, no such action exists.

",43043,,2444,,1/12/2022 9:17,1/12/2022 9:17,"In the update rule of RMSprop, do we divide by a matrix?",,1,0,,,,CC BY-SA 4.0 25239,1,,,12/17/2020 10:49,,1,264,"

Let's say we are training a new neural network from scratch. I calculate the mean and standard deviation of my dataset (assume I am training a fully convolutional neural net and my dataset is images) and I standardise each channel of all images based on that mean and standard deviation. My output will be another image.

I want to use for example VGG for perceptual loss (VGG's weights will be frozen). Perceptual loss is when you input your prediction to a pretrained network to extract features from it. Then you do the same for the ground truth and the L2 distance between the features from ground truth and features from prediction is called perceptual loss.

As far as I know, I am supposed to standardise my data based on the mean and standard deviation VGG was trained with (since I am using VGG for inference essentially), which is different than the mean and standard deviation of my dataset. What is the correct way to do this? Should I undo the standardization of my dataset by multiplying standard deviation and adding the original mean to the output of my network, and then restandardise using VGG's statistics to calculate the loss? Or should I continue without restandardising?

",19549,,19549,,12/22/2020 14:02,1/11/2023 23:06,How to normalize for perceptual loss when training neural net from scratch?,,1,0,,,,CC BY-SA 4.0 25240,2,,25210,12/17/2020 11:01,,2,,"

There are quite a few examples of papers where they try and 'teach' neural networks to 'learn' how to solve math problems. Most of the time, sadly, it comes down to training on a large dataset after which the network can 'solve' the sort of basic problems, but is unable to generalize this to larger problems. That is, if you train a neural network to solve addition, it will be inherently constrained by the dataset. It might be able to semi-sufficiently solve addition with 3 or even 4 digits, depending on how big your dataset is, but throw in an addition question containing 2 10 digit numbers, and it will almost always fail.

The latest example that I can remember where they tried this is in the General Language Model GPT-3, which was not made to solve equations per se, but does 'a decent job' on the stuff that was in the dataset. Facebook AI made an 'advanced math solver' with a specific architecture that i have not looked into which might disproof my point, but you can look into that.

In the end, this comes down to 'what is learning' and 'what do you want to accomplish'. Most agree that these network are not able to generalize beyond their datasets. Some might say that not being able to generalize does not mean that it is not learning. It might just be learning slower. I believe that these models are inherently limited to what is presented in the dataset. Given a good dataset, it might be able to generalize to cases 'near and in-between', but I have yet to see a case where this sort of stuff generalizes to cases 'far outside' the dataset.

",34383,,,,,12/17/2020 11:01,,,,4,,,,CC BY-SA 4.0 25247,2,,20214,12/17/2020 11:56,,1,,"

This is also a question I stumbled upon; thanks for the explanation from ted, it is very helpful, and I will try to elaborate a little bit. Let's still use DeepMind's Simon Osindero's slide: the grey block on the left we are looking at is only a cross-entropy operation, the input $x$ (a vector) could be the softmax output from the previous layer (not the input to the neural network), and $y$ (a scalar) is the cross-entropy result of $x$. To propagate the gradient back, we need to calculate the gradient $dy/dx_i$, which is $-p_i/x_i$ for each element in $x$. As we know, the softmax function scales the logits into the range [0,1], so if, in one training step, the neural network becomes super confident and predicts one of the probabilities $x_i$ to be 0, then we have a numerical problem in calculating $dy/dx_i$.

In the other case, where we take the logits and calculate the softmax and cross-entropy in one shot (the XentLogits function), we don't have this problem, because the derivative with respect to the logits is $dy/dx_i = \operatorname{softmax}(x)_i - p_i$; a more elaborate derivation can be found here.
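
Here is a small numpy sketch (my own illustration, not from the slide) of the difference; the target and logits are made up:

# Separate softmax + cross-entropy gradient vs. the fused gradient.
import numpy as np

p = np.array([0.0, 1.0, 0.0])            # one-hot target
z = np.array([1000.0, -10.0, 5.0])       # logits; very confident about the wrong class
x = np.exp(z - z.max()); x /= x.sum()    # stable softmax, small entries underflow to 0

grad_separate = -p / x                   # d(cross-entropy)/dx_i = -p_i / x_i (numpy warns here)
grad_fused = x - p                       # d(cross-entropy)/dz_i = softmax(z)_i - p_i

print(grad_separate)   # contains -inf / nan: the gradient is unusable
print(grad_fused)      # [1., -1., 0.]: finite and well-behaved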

",43045,,43045,,12/17/2020 12:09,12/17/2020 12:09,,,,0,,,,CC BY-SA 4.0 25248,1,,,12/17/2020 13:02,,1,199,"

I am using KerasRL DDPG to try to learn a policy on my own custom environment, but the agent is stuck in a local optimum, although I am adding the Ornstein-Uhlenbeck randomization process. I used the exact same DDPG to solve Pendulum-v0 and it works, but my environment is more complex, with continuous state and action spaces.

How do you deal with the local optima problem in reinforcement learning? Is it just an exploitation issue?

More details:

My state space is not pixels, it is numerical, in fact it's a metro line simulator and the state space is the velocity, the position of each train on the line and the number of passengers at each station. I need to control the different trains so I am not trying to control only one train but all the operational trains and each one can have different actions such as speed or not, stay longer on the next station or not etc.

1/ I am using the same ANN architecture for the actor and critic: 3 FC layers with (512, 256, 256) hidden units.

2/ Adam optimizer for both the actor and critic with a small lr=1e-7 and clipnorm=1.

3/ nb_steps_warmup_critic=1000, nb_steps_warmup_actor=1000

4/ SequentialMemory(limit=1000000, window_length=1)

5/ The environment is a simulator of a metro line with a continuous state and action space

",43047,,43047,,12/20/2020 21:17,12/20/2020 21:17,How to deal with KerasRL DDPG algorithm getting stuck in a local optima?,,0,11,,,,CC BY-SA 4.0 25253,1,,,12/17/2020 22:00,,1,95,"

I am aware that the attention mechanism can be used to deal with long sequences, where problems related to gradient vanishing and, more generally, representing effectively the whole sequence arise.

However, I was wondering if attention, applied either to seq2seq RNN/GRU/LSTM or via Transformers, can contribute to improving the overall performance (as well as giving some sort of interpretability through the attention weights?) in the case of relatively short sequences (let's say around 20-30 elements each).

",43057,,2444,,12/19/2020 18:28,1/8/2023 23:05,Can the attention mechanism improve the performance in the case of short sequences?,,1,0,,,,CC BY-SA 4.0 25254,1,,,12/17/2020 23:15,,5,137,"

Two of the most popular initialization schemes for neural network weights today are Xavier and He. Both methods propose random weight initialization with a variance dependent on the number of input and output units. Xavier proposes

$$W \sim \mathcal{U}\Bigg[-\frac{\sqrt{6}}{\sqrt{n_{in}+n_{out}}},\frac{\sqrt{6}}{\sqrt{n_{in}+n_{out}}}\Bigg]$$

for networks with $\text{tanh}$ activation function and He proposes

$$W \sim \mathcal{N}(0,\sqrt{2/n_{in}})$$

for $\text{ReLU}$ activation. Both initialization schemes are implemented in the most commonly used deep learning libraries for Python, PyTorch and TensorFlow.

However, for both versions we have a normal and uniform version. Now the main argument of both papers is about the variance of the information at initialization time (which is dependent on the non-linearity) and that it should stay constant across all layers when back-propagating. I see how one can simply adjust the bounds $[-a,a]$ of a uniform variable in such a way that the random variable has the desired standard deviation and vice versa ($\sigma = a/\sqrt{3}$), but I'm not sure why we need a normal and a uniform version for both schemes? Wouldn't it be just enough to have only normal or only uniform? Or uniform Xavier and normal He as proposed in their papers?
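
(As a quick sanity check of the $\sigma = a/\sqrt{3}$ relation, here is a small numpy snippet, using the He variance as an example:)

# Matching the standard deviation of a uniform and a normal initialization.
import numpy as np

rng = np.random.default_rng(0)
fan_in = 256
sigma = np.sqrt(2.0 / fan_in)                  # He standard deviation for ReLU
a = sigma * np.sqrt(3.0)                       # matching uniform bound, since Var(U[-a, a]) = a**2 / 3

w_normal = rng.normal(0.0, sigma, size=1_000_000)
w_uniform = rng.uniform(-a, a, size=1_000_000)
print(w_normal.std(), w_uniform.std())         # both approximately equal to sigma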

I can imagine uniform distributions are easier to sample from a computational point of view, but since we do the initialization operation only once at the beginning, the computational cost is negligible compared to that from training. Further uniform variables are bounded, so there are no long tail observations as one would expect in a normal. I suppose that's why both libraries have truncated normal initializations.

Are there any theoretical, computational or empirical justifications for when to use a normal over a uniform, or a uniform over a normal weight initialization regardless of the final weight variance?

",37120,,37120,,12/25/2020 13:06,12/25/2020 13:06,Why is there a Uniform and Normal version of He / Xavier initialization in DL libraries?,,0,0,,,,CC BY-SA 4.0 25255,1,,,12/18/2020 0:13,,4,119,"

In the Deep Learning book by Goodfellow et al., section 11.4.5 (p. 438), the following claims can be found:

Currently, we cannot unambiguously recommend Bayesian hyperparameter optimization as an established tool for achieving better deep learning results or for obtaining those results with less effort. Bayesian hyperparameter optimization sometimes performs comparably to human experts, sometimes better, but fails catastrophically on other problems. It may be worth trying to see if it works on a particular problem but is not yet sufficiently mature or reliable

Personally, I never used Bayesian hyperparameter optimization. I prefer the simplicity of grid search and random search.

As a first approximation, I'm considering easy AI tasks, such as multi-class classification problems with DNNs and CNNs.

In which cases should I take it into consideration, is it worth it?

",36907,,2444,,12/19/2020 18:46,12/19/2020 18:46,"Bayesian hyperparameter optimization, is it worth it?",,1,0,,,,CC BY-SA 4.0 25256,2,,25238,12/18/2020 1:05,,1,,"

Yes, you are correct: $S_{dW}$, in this case, is a matrix/tensor with the same shape as the gradient matrix/tensor. The update rule divides each element of the gradient by the corresponding element of (the square root of) $S_{dW}$, i.e. it is an "element-wise division".

To avoid dividing by zero when an entry of $S_{dW}$ is zero, the update is usually written like this:

$$ W=W-\alpha \frac{dW}{\sqrt{S_{dW}+\epsilon}} \text{ where } \epsilon=10^{-8} $$

If you want to implement it manually with Python, you can take a look at this link
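
For reference, here is a minimal numpy sketch (my own, not from the linked video or page) of the element-wise update; dW and S_dW have the same shape as W, and the hyper-parameter values are just placeholders:

# Element-wise RMSprop update: no matrix inversion or matrix division involved.
import numpy as np

def rmsprop_step(W, dW, S_dW, lr=0.001, beta=0.9, eps=1e-8):
    S_dW = beta * S_dW + (1 - beta) * dW ** 2    # element-wise running average of squared gradients
    W = W - lr * dW / np.sqrt(S_dW + eps)        # element-wise division, matching the equation above
    return W, S_dW

W = np.random.randn(3, 2)
S_dW = np.zeros_like(W)
dW = np.random.randn(3, 2)                       # pretend gradient with the same shape as W
W, S_dW = rmsprop_step(W, dW, S_dW)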

",41287,,,,,12/18/2020 1:05,,,,0,,,,CC BY-SA 4.0 25257,1,25384,,12/18/2020 3:55,,1,76,"

As part of learning more about deep learning, I have been experimenting with writing ResNets with Dense layers to do different types of regression.

I was interested in trying a harder problem and have been working on a network that, given a private key, could perform point multiplication along ECC curve to obtain a public key.

I have tried training on a dataset of generated keypairs, but am seeing the test loss values bounce around like crazy with train loss values eventually decreasing after many epochs due to what I assume is overfitting.

Is this public key generation problem even solvable with a deep learning architecture? If so, am I doing something wrong with my current approach?

",43059,,,,,12/25/2020 22:37,Regression For Elliptical Curve Public Key Generation Possible?,,1,12,,,,CC BY-SA 4.0 25258,2,,25255,12/18/2020 4:56,,2,,"

Efficiently integrating HPO frameworks into an existing project is non-trivial. Most common datasets/tasks already have established architectures/hyperparameters/etc. and require only a few additional tuning parameters. In this case, the benefits (assuming they exist) brought by Bayesian HPO techniques lack behind development time (simplicity), and this is one dominant reason that users prefer grid search or random search over some more sophisticated Bayesian optimization techniques.

For a very large scale problem and an entirely new task (or dataset) for which the user has no intuition on "good hyperparameters", Bayesian HPO may be a good option to consider over random or grid search. This is because there may be a very large number of hyperparameters (due to insufficient domain knowledge), and integrating HPO into the project may result in far less time than grid search over all possible hyperparameters.

",33444,,,,,12/18/2020 4:56,,,,3,,,,CC BY-SA 4.0 25259,2,,25218,12/18/2020 5:14,,1,,"

Without any knowledge on the references you came across, I am assuming that the authors were considering common applications of MAB (planning, online learning, etc.) for which the time horizon is usually small. In such applications, we usually cannot afford a large average regret which is inevitable for standard MAB algorithms due to the $\sqrt{|K|}$ factor.

Depending on the application or additional constraints you can impose on your problem, there are several works that consider structured stochastic MABs in which there are much better guarantees than the pessimistic $\sqrt{K}$ bounds. One variant of structured MABs is learning on graphs [1], where one can obtain regret bounds growing with $\sqrt{\beta(G)}$, where $\beta(G)$ is the independence number of a graph $G$.

[1] F. Liu, Z. Zheng, N. Shroff, "Analysis of Thompson Sampling for Graphical Bandits Without the Graphs", ArXiv. 2018.

",33444,,,,,12/18/2020 5:14,,,,2,,,,CC BY-SA 4.0 25260,1,,,12/18/2020 6:00,,1,43,"

Sorry if I sound confused. I read that data to be fed to a machine are divided into training, validation and test data. Both training and validation data are used for developing the model. Test data is used only for testing the model and no tuning of the model is done using test data.

Why is there a need to separate out training and validation data since both sets of data are for developing/tuning the model? Why not keep things simple and combine both data sets into a single one?

",16839,,,,,12/19/2020 7:50,"Why can't we combine both training and validation data, given that both types of data are used for developing the model?",,2,0,,,,CC BY-SA 4.0 25261,2,,12340,12/18/2020 7:03,,0,,"

One possibility: if you are using a dropout regularization layer in your network, it is reasonable for the validation error to be smaller than the training error, because dropout is usually active during training but deactivated when evaluating on the validation set. You get a smoother (usually better) function in the latter case.

",43069,,2444,,12/19/2020 23:40,12/19/2020 23:40,,,,0,,,,CC BY-SA 4.0 25262,1,25356,,12/18/2020 11:52,,1,74,"

In Section 2 of the paper Reinforcement Learning and Control as Probabilistic Inference: Tutorial and Review, the author discusses formulating the RL problem as a probabilistic graphical model. They introduce a binary optimality variable $\mathcal{O}_t$ which denotes whether time step $t$ was optimal (1 if so, 0 otherwise). They then define the probability that this random variable equals 1 to be

$$\mathbb{P}(\mathcal{O}_t = 1 | s_t, a_t) = \exp(r(s_t, a_t)) \; .$$

My question is why do they do this? In the paper they make no assumptions about the value of the rewards (e.g. bounding it to be non-positive) so in theory the rewards can take any value and thus the RHS can be larger than 1. This is obviously invalid for a probability. It would make sense if there was some normalising constant, or if the author said that the probability is proportional to this, but they don't.

I have searched online and nobody seems to have asked this question which makes me feel like I am missing something quite obvious so I would appreciate if somebody could clear this up for me please.

",36821,,,,,12/23/2020 18:30,"In RL as probabilistic inference, why do we take a probability to be $\exp(r(s_t, a_t))$?",,1,2,,,,CC BY-SA 4.0 25263,2,,25260,12/18/2020 12:30,,1,,"

Usually we split the training dataset because we want to fine-tune, i.e. find the best hyper-parameters for our model. If we combined the validation and training datasets, then, given a network complex enough, we could achieve perfect performance on the given task. But having very good performance on the training dataset does not mean our model is useful. It might be the case that our model is very good on the training dataset, but fails to generalize, so when given samples outside of the training dataset the performance will significantly drop (this is called over-fitting). By splitting the dataset, we are testing how well our model performs on new/unseen data.
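
As a small sketch (assuming scikit-learn is available; the sizes are arbitrary), a typical three-way split looks like this: weights are fit on the training set, hyper-parameters are chosen on the validation set, and the test set is only used at the very end:

# Three-way split: 70% train, 15% validation, 15% test.
from sklearn.model_selection import train_test_split
import numpy as np

X, y = np.random.rand(1000, 10), np.random.randint(0, 2, size=1000)
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.3, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)
print(len(X_train), len(X_val), len(X_test))    # 700, 150, 150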

",20430,,,,,12/18/2020 12:30,,,,0,,,,CC BY-SA 4.0 25264,1,,,12/18/2020 13:57,,5,544,"

In the paper "ForestNet: Classifying Drivers of Deforestation in Indonesia using Deep Learning on Satellite Imagery", the authors talk about using:

  1. Feature Pyramid Networks (as the architecture)
  2. EfficientNet-B2 (as the backbone)

Performance Measures on the Validation Set. The RF model that only inputs data from the visible Landsat 8 bands achieved the lowest performance on the validation set, but the incorporation of auxiliary predictors substantially improved its performance. All of the CNN models outperformed the RF models. The best performing model, which we call ForestNet, used an FPN architecture with an EfficientNet-B2 backbone. The use of SDA provided large performance gains on the validation set, and land cover pre-training and incorporating auxiliary predictors each led to additional performance improvements.

What's the difference between architectures and backbones? I can't find much online. Specifically, what are their respective purposes? From a high-level perspective, what would integrating the two look like?

",43083,,18758,,1/5/2022 10:56,10/21/2022 15:09,What's the difference between architectures and backbones?,,2,1,,,,CC BY-SA 4.0 25265,1,,,12/18/2020 14:33,,0,95,"

I am new to machine learning, so I am not sure which algorithms to look at for my business problem. Most of what I am seeing in tools like KNIME are geared toward making a prediction/classification, and focusing on the accuracy of that single prediction/classification.

Instead, in general terms, I want to optimize toward maximum profit of a business process/strategy, rather than simply trying to choose the "best" transaction from within a set of possible transactions, which is quite different. The latter approach will simply give the best "transaction success percentage", without regard for overall profit of the strategy in the aggregate.

This is how the business problem is structured: Each Opportunity is a type of business strategy "game" between Entities. Each Entity is unaware, unaffected, and uninterested in conditions or events outside of the Opportunity, such as Observers. Each Opportunity is an independent event with no affect on other concurrent or future Opportunities, and with no effect on decisions unrelated to the Opportunity itself. Each Opportunity will have one and only one Awarded Entity, which is the Entity that "wins" the business process.

Observers, however, may create a Market for each Opportunity. Within such an independent, ephemeral Market, the Observers may bid among themselves as to which Entity will be the Awarded Entity for the Opportunity. Each Bid is a fixed-size transaction. A Bid is associated with only one Entity within the Opportunity. Thus, a Bid is a type of vote on the outcome of the Opportunity. There is no limit to the number of Bids that an Observer may place into the Market, but each Observer may only place Bids on a single Entity within the Market. Thus, the total amount of Bids on an individual Entity within the Opportunity represent the confidence level, within that Market, of the prediction.

At the resolution of the Opportunity, one Entity will be the Awarded Entity for the Opportunity. This determination is made based on factors outside of the control of the Observers. The Observers have no influence over the Opportunity nor the Entities within it. When the Awarded Entity is determined, the total value of Bids placed on that Entity are refunded to the Observers that placed them. Additionally, the value of all Bids placed on other Entities are shared among the Observers who bid correctly on the Awarded Entity. Each Observer that placed a correct Bid on the Awarded Entity is entitled to a fraction of the remaining Market value, in equal proportion to the number of fixed Bids placed. In other words, the Market is a zero sum scenario. Bids may be placed at any time during the duration of the Opportunity, from when its Market is created, up until a deadline just shortly before its resolution. The total number of outstanding Bids on each Entity is another data point that is available in real time, and which fluctuates during the duration of the Opportunity, based on total Bids placed and the ratios between Bids on each Entity participating in the Opportunity.

To support the Observers' evaluation and prediction of Awarded Entity within the scope of Opportunities, there are thousands of data points available, as well as extensive history and analytics regarding each Entity involved. Each Observer will employ their own unique strategy to predict Opportunity outcomes. The objective of this algorithm is to optimize a prediction strategy that does not optimize for "percentage of correct predictions", but rather "maximum gain". Rather than be "most correct most often", the model should strive to use the data to create advantages for maximum gain in the aggregate, rather than strive to be the most correct. An Observer is rewarded not for being correct most often, but for recognizing inefficiencies in the Bids within the Market.

I am considering hand-coding a genetic algorithm for this, so that I can write a custom fitness function that computes overall profitability of the strategy, and run the generations to optimize profit instead of individual selection accuracy. However, I'd rather use an out-of-the-box algorithm supported by a tool like KNIME if possible.

Thank you!

",14854,,14854,,12/24/2020 3:17,12/24/2020 3:17,"What is the best algorithm for optimizing profit, rather than making predictions?",,0,2,,,,CC BY-SA 4.0 25266,1,,,12/18/2020 15:22,,2,160,"

I came across Grenander's work "Probabilities on Algebraic Structures" recently, and I found that much of Grenander's work focused on what he called "Pattern Theory." He's written many texts on the matter, and, from what I've seen, they seem like an attempt to unify some mathematical underpinnings of pattern representation. However, I'm not sure what this really means in practice, nor how it relates to results we already have in learning theory. The mathematical aspect of the work is really quite intriguing, but I am skeptical as to its practicality.

Are there any applications of Grenander's pattern theory? Either for getting a better theoretical understanding of certain methods of pattern recognition or for directly implementing algorithms?

",43084,,62466,,1/9/2023 19:43,1/10/2023 4:13,Are there applications of Grenander's pattern theory in pattern recognition or for implementing algorithms?,,1,0,,,,CC BY-SA 4.0 25268,1,25306,,12/18/2020 16:22,,4,602,"

I'm trying to learn about reinforcement learning techniques. I have little background in machine learning from university, but never more than using a CNN on the MNIST database.

My first project was to use reinforcement learning on tic-tac-toe and that went well.

In the process, I thought about creating an AI that can play a card game like Magic: The Gathering, Yu-gi-oh, etc. However, I need to think of a way to define an action space. Not only are there thousands of combinations of cards possible in a single deck, but we also have to worry about the various types of decks the machine is playing and playing against.

Although I know this is probably way too advanced for a beginner, I find attempting a project like this, challenging and stimulating. So, I looked into several different approaches for defining an action space. But I don't think this example falls into a continuous action space, or one in which I could remove actions when they are not relevant.

I found this post on this stack exchange that seems to be asking the same question. However, the answer I found didn't seem to solve any of my problems.

Wouldn't defining the action space as another level of game states just mask the exact same problem?

My main question boils down to:

Is there an easy/preferred way to make an action space for a game as complex as Magic? Or, is there another technique (other than RL) that I have yet to see that is better used here?

",43065,,2444,,1/17/2021 21:46,1/17/2021 21:46,How should I define the action space for a card game like Magic: The Gathering?,,1,0,,,,CC BY-SA 4.0 25269,1,25370,,12/18/2020 16:24,,1,129,"

In the back-propagation algorithm, the error term is:

$$ E=\frac{1}{2}\sum_k(\hat{y}_k - y_k)^2, $$

where $\hat{y}_k$ denotes the outputs of the network and $y_k$ the corresponding correct labels (we work out the error by calculating predicted minus observed for each $k$, squaring it, summing over $k$, and dividing by 2).

How do you prove that if this answer is $0$ (i.e., if $E=0$), then $\hat{y}_k=y_k$ for all $k$?

",42926,,2444,,12/24/2020 12:43,12/24/2020 12:59,How do I prove that the MSE is zero when all predictions are equal to the corresponding labels?,,1,0,,,,CC BY-SA 4.0 25270,1,25271,,12/18/2020 17:08,,0,378,"

I have a trained LSTM language model and want to use it to generate text. The standard approach for this seems to be:

  1. Apply softmax function
  2. Take a weighted random choice to determine the next word

This is working reasonably well for me, but it would be nice to play around with other options. Are there any good alternatives to this?

",43089,,2444,,12/18/2020 21:28,12/18/2020 21:28,Are there any good alternatives to an LSTM language model for text generation?,,1,1,,,,CC BY-SA 4.0 25271,2,,25270,12/18/2020 20:50,,3,,"

The current state of the art in natural language generation is auto-regressive transformer models. Transformers no longer use recurrent neural networks such as LSTMs, because the recurrence makes long dependencies messy to calculate. Instead, Transformers only keep the attention layers and apply attention to all the existing text so far, which can be done in parallel and is therefore very fast, while being able to attend to long dependencies (e.g. understanding that "it" refers to "John" from 3 sentences ago). They are also faster to train than LSTMs (on powerful GPUs at least). The downside is a higher memory requirement, and you need large models and large datasets (LSTMs work better for small models and small datasets). Here is some background info on how they work.

Auto-regressive transformer models only use the decoder for text generation and remove the encoder. Given an input, they predict the next word.

The most well-known one is GPT (GPT-3 has 175B parameters; GPT-2 has 1.5B parameters, and GPT-1 has 175M parameters) GPT is developed by OpenAI and is a commercial, paid-software if you want to use their official model, but I'm sure with a little digging you can find community-trained models that will perform slightly worse but is at least free to use. GPT is basically a vanilla transformer, but trained on a huge, huge dataset with a huge, huge model to achieve state-of-the-art performance.

Other auto-regressive transformer models include:

  • CTRL by Salesforce, which uses the novel idea of control codes to guide the style of generation (e.g. to generate Wikipedia article style text or book review style text).
  • XLNet by Google AI Brain team, which handles longer sequences more accurately than the others because it re-introduces recurrence back into the transformer model, allowing it to remember past sequences. Otherwise, vanilla transformers cannot handle dependencies that cross sequences (note: a sequence is limited by the max length you can feed into the model, bottlenecked by your memory requirement, and can contain many sentences or paragraphs).
  • Reformer by Google Research, which is a more efficient transformer that significantly reduces the memory requirement while also being faster to compute on long sequences.

If your goal is just to generate English or another commonly researched language, you can use an existing pre-trained language model and avoid doing any training yourself. This saves a lot of time, and there should at least be free community-trained models readily available. Otherwise, for obscure tasks, you'd have to train one yourself, and these state of the art models will take immense resources.

",42699,,42699,,12/18/2020 21:25,12/18/2020 21:25,,,,0,,,,CC BY-SA 4.0 25272,2,,25253,12/18/2020 21:32,,1,,"

They shouldn't have any issues with short sequences, as short dependencies are easier to learn. The only difficult cases are long dependencies, which is where most of the research is aimed. However, this assumes that by "short sequence" you mean a sequence of text that is fully self-contained, i.e. there are no cross-sequence dependencies.

For example, if you have a really long paragraph that doesn't fit in a transformer model, you would have to break that paragraph into many "short sequences", but each of these sequences may have a dependency on another sequence, i.e. cross-sequence dependencies. For these cross-sequence dependencies, any model with recurrence should do better than ones without (e.g. RNN, LSTM, Transformer-XL).

If each short sequence is self-contained, then all of the models should perform pretty well.

",42699,,,,,12/18/2020 21:32,,,,2,,,,CC BY-SA 4.0 25273,2,,25264,12/18/2020 22:17,,0,,"

The vocabulary is definitely non-standard and a bit confusing, but Feature Pyramid Networks is used as a feature extractor, and its output is then fed into EfficientNet-B2 to be used to classify the image. One neural network model is concatenated at the end of the other.

So it seems like "architecture" is the front half of the neural network model which takes as input the satellite image and extracts image features, and then is directly connected to the back half of the model (hence "backbone"), which takes the features extracted from the "architecture" and makes a classification.

This terminology is definitely non-standard here, at least in the AI community, and if you ask anyone here I think it will be uncommon for them to naturally think about the words "architecture" vs "backbone" in this way unless they specialize in a similar field to the authors.

",42699,,18758,,1/5/2022 10:51,1/5/2022 10:51,,,,5,,,,CC BY-SA 4.0 25277,1,,,12/19/2020 2:31,,1,199,"

I am learning reinforcement learning with Q-learning using online resources, like blog posts, youtube videos, and books. At this point, I have learned the underpinning concepts of reinforcement learning and how to update the q values using a lookup table.

Now, I want to create a neural network to replace the lookup table and approximate the Q-function, but I am not sure how to design the neural network. What would be the architecture for my neural network? What are the inputs and outputs?

Here are the two options I can think of.

  1. The input of the neural network is $(s_i, a_i)$ and the output is $Q(s_i,a_i)$

  2. The input is $(s_i)$ and the output is a vector $[Q(s_i,a_1), Q(s_i,a_2), \dots, Q(s_i,a_N)]$

Is there any other alternative architecture?

Also, how to reason about which model would be logically better?

",41187,,2444,,12/19/2020 13:24,12/19/2020 13:24,How to build a Neural Network to approximate the Q-function?,,1,1,,,,CC BY-SA 4.0 25278,2,,25260,12/19/2020 7:50,,1,,"

There are two possible forms of overfitting. First related to the training only (fitting weights) and second related to architecture (fitting hyperparameters) and these two must be checked in two different stages. When you check performance of the given model you have two do this on unseen data, so you fit weights on training data and check it on validation (unseen to the weights fitting process) set and next you fit hyperparameters on validation set and check it on test (unseen to the hyperparameters fitting process) set.

",22659,,,,,12/19/2020 7:50,,,,2,,,,CC BY-SA 4.0 25279,2,,25277,12/19/2020 9:29,,2,,"

I had the same question when I first learned RL. The architectural design may depend on the task you're considering. Since you're moving from tabular Q-learning to function approximation, I suspect that you are considering a relatively small action space; in this case, you should use option 2, where the input is the state and the number of output nodes matches the number of available actions. One main reason for this choice: when exploiting the learnt function (i.e. the policy is $\arg \max_a Q(s,a)$), for option (1) you would have to repeatedly run a forward pass, once for each available action, whereas option (2) requires a single forward pass, after which you take the arg max either through NumPy or the framework you're using.
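For concreteness, here is a minimal sketch of option (2) in PyTorch (the names state_dim, n_actions and the hidden size are hypothetical, not from the question):

import torch
import torch.nn as nn

class QNetwork(nn.Module):
    # Option (2): the input is the state, the output is one Q-value per action.
    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state):
        return self.net(state)  # shape: (batch, n_actions)

# Greedy action selection needs only a single forward pass:
q_net = QNetwork(state_dim=4, n_actions=2)
state = torch.randn(1, 4)
action = q_net(state).argmax(dim=1)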

",33444,,,,,12/19/2020 9:29,,,,0,,,,CC BY-SA 4.0 25281,2,,25205,12/19/2020 9:52,,2,,"

This question is very general in the sense that the reason may differ depending on the area of ML you are considering. Below are two different areas of ML where the KL-divergence is a natural consequence:

  • Classification: maximizing the log-likelihood (or minimizing the negative log-likelihood) is equivalent to minimizing KL divergence, as typically used in DL-based classification where one-hot targets are commonly used as reference (see https://stats.stackexchange.com/a/357974). Furthermore, if you have a one-hot vector $e_y$ with $1$ at index $y$, minimizing the cross-entropy $\min_{\hat{p}}H(e_y, \hat{p}) = - \sum_{i} (e_y)_i \log \hat{p}_i = - \log \hat{p}_y$ boils down to maximizing the log-likelihood. In summary, maximizing the log-likelihood is arguably a natural objective, and KL-divergence (with 0 log 0 defined as 0) comes up because of its equivalence to the log-likelihood under typical settings, rather than explicitly being motivated as the objective.
  • Multi-armed bandits (a sub-area of reinforcement learning): Upper confidence bound (UCB) is an algorithm derived from standard concentration inequalities. If we consider MABs with Bernoulli rewards, we can apply Chernoff's bound and optimize over the free parameter to obtain an upper bound expressed in terms of KL divergence as stated below (see https://page.mi.fu-berlin.de/mulzer/notes/misc/chernoff.pdf for some different proofs).

Let $X_1, \dots, X_n$ be i.i.d. Bernoulli RVs with parameter $p$. $$P\left(\sum_i X_i \geq (p+t)n\right) \leq \inf_\lambda M_X (\lambda)^n e^{-\lambda (p+t) n} = \exp(-n D_{KL}(p+t||p)).$$
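As a small numerical sketch of this bound (my own illustration; the values of p, t, n are arbitrary):

import numpy as np

def bernoulli_kl(p, q):
    # KL divergence between Bernoulli(p) and Bernoulli(q), assuming 0 < p, q < 1
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

# Bound on P(empirical mean of n Bernoulli(p) samples >= p + t)
p, t, n = 0.4, 0.1, 100
print(np.exp(-n * bernoulli_kl(p + t, p)))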

",33444,,33444,,12/20/2020 12:15,12/20/2020 12:15,,,,1,,,,CC BY-SA 4.0 25283,1,34749,,12/19/2020 11:18,,4,94,"

I get the fundamental idea of how tilings work, but, in Sutton and Barto's book, Reinforcement Learning: An Introduction (2nd edition), a diagram on page 219 (figure 9.11), showing the variations of uniform offset tiling, has confused me.

I don't understand why all 8 of these figures are instances of uniformly offset tilings. I thought uniformly offset meant ALL tilings have to be offset an equal amount from each other which is only the case for the bottom left figure. Is my understanding wrong?

",42514,,2444,,1/29/2021 20:40,4/6/2022 23:04,How does uniform offset tiling work with function approximation?,,1,2,,,,CC BY-SA 4.0 25284,1,,,12/19/2020 12:46,,1,34,"

I'm developing a method for document and query representation as concept vectors (bag-of-concepts). I want to train a machine learning model for ranking (a learning-to-rank task). So I have a document vector V1 and a query vector V2. How should I use these two numerical vectors to learn to rank? What are the possible scenarios?

Do I calculate the relevance (similarity) by cosine similarity and then feed the result as a single feature into a neural network? Is it correct to apply the Hadamard product to produce a single vector representing the features of a document-query pair, and then train a neural network with it? Can the two vectors (document and query vector) be fed into a Siamese network in order to evaluate the relevance? Someone told me this is not possible because the network only takes raw text as input and extracts the features itself; hence, it is useless to enter a vector that was generated by my vectorization method.

",43101,,43231,,1/9/2021 15:55,1/9/2021 15:55,"On learning to rank tasks. Could it be that the input of the Siamese network is a vector, or should it be exclusively raw text?",,0,0,,,,CC BY-SA 4.0 25287,1,,,12/19/2020 13:46,,3,86,"

I am currently studying the paper Learning and Evaluating Classifiers under Sample Selection Bias by Bianca Zadrozny. In section 3.2. Logistic Regression, the author says the following:

3.2. Logistic regression In logistic regression, we use maximum likelihood to find the parameter vector $\beta$ of the following model: $$P(y = 1 \mid x) = \dfrac{1}{1 + \exp(\beta_0 + \beta_1 x_1 + \dots + \beta_n x_n)}$$ With sample selection bias we will instead fit: $$P(y = 1 \mid x, s = 1) = \dfrac{1}{1 + \exp(\beta_0 + \beta_1 x_1 + \dots + \beta_n x_n)}$$ However, because we are assuming that $y$ is independent of $s$ given $x$ we have that $P(y = 1 \mid x, s = 1) = P(y = 1 \mid x)$. Thus, logistic regression is not affected by sample selection bias, except for the fact that the number of examples is reduced. Asymptotically, as long as $P(s = 1 \mid x)$ is greater than zero for all $x$, the results on a selected sample approach the results on a random sample. In fact, this is true for any learner that models $P(y \mid x)$ directly. These are all local learners.

This part is unclear to me:

However, because we are assuming that $y$ is independent of $s$ given $x$ we have that $P(y = 1 \mid x, s = 1) = P(y = 1 \mid x)$. Thus, logistic regression is not affected by sample selection bias, except for the fact that the number of examples is reduced.

What is meant by "the number of examples is reduced", and why is this the case?

",16521,,2444,,12/19/2020 19:13,2/5/2021 18:07,"What is meant by ""the number of examples is reduced"", and why is this the case?",,1,0,,,,CC BY-SA 4.0 25288,2,,25205,12/19/2020 14:11,,2,,"

In ML we always deal with unknown probability distributions from which the data comes. The most common way to calculate the distance between real and model distribution is $KL$ divergence.

Why Kullback–Leibler divergence?

Although there are other loss functions (e.g. MSE, MAE), $KL$ divergence is natural when we are dealing with probability distributions. It is a fundamental equation in information theory that quantifies, in bits, how close two probability distributions are. It is also called relative entropy and, as the name suggests, it is closely related to entropy, which in turn is a central concept in information theory. Let's recall the definition of entropy for a discrete case:

$$ H = -\sum_{i=1}^{N} p(x_i) \cdot \text{log }p(x_i) $$

As you observed, entropy on its own is just a measure of a single probability distribution. If we slightly modify this formula by adding a second distribution, we get $KL$ divergence:

$$ D_{KL}(p||q) = \sum_{i=1}^{N} p(x_i)\cdot (\text{log }p(x_i) - \text{log }q(x_i)) $$

where $p$ is a data distribution and $q$ is model distribution.

As we can see, $KL$ divergence is the most natural way to compare 2 distributions. Moreover, it's pretty easy to calculate. This article provides more intuition on this:

Essentially, what we're looking at with the KL divergence is the expectation of the log difference between the probability of data in the original distribution with the approximating distribution. Again, if we think in terms of $log_2$ we can interpret this as "how many bits of information we expect to lose".

Cross entropy

Cross-entropy is commonly used in machine learning as a loss function when we have a softmax (or sigmoid) output layer, since it represents a predictive distribution over classes. The softmax output represents the model distribution $q$, while the one-hot true labels represent the target distribution $p$. Our goal is to push $q$ as close to $p$ as possible. We could take a mean squared error over all values, or we could sum the absolute differences, but the one measure which is motivated by information theory is cross-entropy. It gives the average number of bits needed to encode samples distributed as $p$, using $q$ as the encoding distribution.

Cross-entropy is based on entropy, generally calculates the difference between two probability distributions, and is closely related to $KL$ divergence. The difference is that it calculates the total entropy between the distributions, while $KL$ divergence represents relative entropy. Cross-entropy can be defined as follows:

$$ H(p, q) = H(p) + D_{KL}(p \parallel q) $$

The first term in this equation is the entropy of the true probability distribution $p$ that is omitted during optimization, since the entropy of $p$ is constant. Hence, minimizing cross-entropy is the same as optimizing $KL$ divergence.
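As a quick numerical check of this identity, here is a NumPy sketch of my own (the two distributions are arbitrary):

import numpy as np

p = np.array([0.1, 0.6, 0.3])   # "true" distribution
q = np.array([0.2, 0.5, 0.3])   # model distribution

entropy = -np.sum(p * np.log2(p))            # H(p)
cross_entropy = -np.sum(p * np.log2(q))      # H(p, q)
kl = np.sum(p * (np.log2(p) - np.log2(q)))   # D_KL(p || q)

print(np.isclose(cross_entropy, entropy + kl))  # True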

Log likelihood

It can be also shown that maximizing the (log) likelihood is equivalent to minimizing the cross entropy.

Limitations

As you mentioned, $KL$ divergence is not symmetrical. But in most cases this is not critical, since we want to estimate the model distribution by pushing it towards the real one, but not vice versa. There is also a symmetrized version called the Jensen–Shannon divergence: $$ D_{JS}(p||q)=\frac{1}{2}D_{KL}(p||m)+\frac{1}{2}D_{KL}(q||m) $$ where $m=\frac{1}{2}(p+q)$.

The main disadvantage of $KL$ is that the unknown distribution and the model distribution must have the same support. Otherwise, $D_{KL}(p||q)$ becomes $+\infty$ and $D_{JS}(p||q)$ becomes $\log 2$.

Second, it should be noted that $KL$ is not a metric, since it violates the triangle inequality. That is, in some cases, it won't tell us if we are going in the right direction when estimating our model distribution. Here is an example taken from this answer. Given two discrete distributions $p$ and $q$, we calculate the $KL$ divergence and the Wasserstein metric:

As you can see, $KL$ divergence remained the same, while the Wasserstein metric decreased.

But, as mentioned in the comments, the Wasserstein metric is highly intractable in a continuous space. We can still use it by applying the Kantorovich-Rubinstein duality, as done in the Wasserstein GAN. You can also find more on this topic in this article.

These two drawbacks of $KL$ can be mitigated by adding noise. More on this can be found in this paper.

",12841,,12841,,12/22/2020 15:40,12/22/2020 15:40,,,,5,,,,CC BY-SA 4.0 25293,1,,,12/19/2020 21:11,,0,42,"

The goal of this program is to predict a game outcome given a game-reference-id, which is a serial number like so:

id,totalGreen,totalBlue,totalRed,totalYellow,sumNumberOnGreen,sumNumberOnBlue,sumNumberOnRed,sumNumberOnYellow,gameReferenceId,createdAt,updatedAt 1,1,3,2,0,33,27,41,0,1963886,2020-08-07 20:27:49,2020-08-07 20:27:49 2,1,4,1,0,36,110,31,0,1963887,2020-08-07 20:28:37,2020-08-07 20:28:37 3,1,3,2,0,6,33,83,0,1963888,2020-08-07 20:29:27,2020-08-07 20:29:27 4,2,2,2,0,45,58,44,0,1963889,2020-08-07 20:30:17,2020-08-07 20:30:17 5,0,2,4,0,0,55,82,0,1963890,2020-08-07 20:31:07,2020-08-07 20:31:07 6,2,4,0,0,36,116,0,0,1963891,2020-08-07 20:31:57,2020-08-07 20:31:57 7,3,2,1,0,93,16,40,0,1963892,2020-08-07 20:32:47,2020-08-07 20:32:47

Here's the link for a full training dataset.

After training the model, it becomes difficult to use the model to predict the game output, since the game-reference-id is the only independent column, while others are random.

Is there a way to flip the features with the labels during prediction?

",43108,,2444,,12/20/2020 12:51,12/20/2020 12:51,Is it possible to flip the features and labels after training a model?,,1,1,,,,CC BY-SA 4.0 25294,1,26610,,12/19/2020 22:03,,6,148,"

Some algorithms in the literature allow recovering the input data used to train a neural network. This is done using the gradients (updates) of weights, such as in Deep Leakage from Gradients (2019) by Ligeng Zhu et al.

In case the neural network is trained using encrypted (homomorphic) input data, what could be the output of the above algorithm? Will the algorithm recover the data in clear or encrypted (as it was fed encrypted)?

",43113,,2444,,12/19/2020 23:47,4/22/2021 15:21,"During neural network training, can gradients leak sensitive information in case training data fed is encrypted (homomorphic)?",,1,2,,,,CC BY-SA 4.0 25295,1,,,12/19/2020 22:43,,1,58,"

I'm working with data that is ranked. So the inputs are 1,2,3 etc. This means the smaller numbers (ranks) are preferred to the larger ones. Hence the order is important. I want to estimate a number using regression; however, with the constraint that the order of the numbers must be monotonic non-linear.

Imagine the following input table:

1, 2
2, 3
3, 1

For instance, if the output is 1000 for each input, then the estimation could be:

1 * (800) + 2 * (100) = 1000
2 * (300) + 3 * (60) = 780
3 * (150) + 1 * (400) = 850

Evidently, the estimated Xs are monotonic, non-linear, and decreasing. For the first column: 1 -> 800, 2 -> 600 (2x300), 3 -> 450 (3x150); for the second column: 1 -> 400, 2 -> 200 (2x100), 3 -> 180 (3x60).

So here's the question. Can I ensure my model (neural network) enforces the given constraint? I am using Keras.

",43115,,16909,,12/29/2020 14:34,12/29/2020 14:34,Can I constrain my neurons in a neural network in according to the orders of the input?,,0,3,,,,CC BY-SA 4.0 25296,1,,,12/19/2020 22:53,,3,149,"

In Chapter 8, section 8.5.2, Raul Rojas describes how the weights for a layer of a neural network can be calculated using a pseudoinverse of the sigmoid function in the nodes, he explains this is an example of symmetric relaxation.

But the chapter doesn't explain what asymmetric relaxation would be or how it is done.

So, what is asymmetric relaxation and how would it be done in a simple neural network using a sigmoid function in its nodes?

",14892,,14892,,12/30/2020 19:07,1/19/2023 23:02,What is asymmetric relaxation backpropagation?,,1,0,,,,CC BY-SA 4.0 25297,2,,25293,12/19/2020 23:11,,1,,"

The body of your post seems to be asking a completely separate question than the title of your post, so I will answer both:

"Body: How do I complete the goal of this program?"

Your dataset does not have the dependent variable, which is the outcome of the game (win/loss/draw). What I am assuming is that you have a way of looking up the outcome of the game from either the "id" field or the "gameReferenceId" field.

So you would have to augment the dataset with a new column, "gameOutcome", which has values (win/loss/draw), by looking up the outcome of each game (each row) and adding that to your dataset.

Once you have this, you have the 12 independent variables (the 12 columns already there), and the 1 dependent variable (the "gameOutcome"), and the prediction task should be straightforward from there.

"Title: Given a label, how do I predict features?"

(note: this section will not help your program)

What you are looking for are generative models. Generative models can generate instances given the label and/or some random seed. This is a completely different model than discriminative models, which given the instance predicts the label (this is the one you have).

The simplest generative model is a Naive Bayes model. While Naive Bayes is normally used as a discriminative model (to classify a label), it has enough information to generate instances given a label as well. Here are some tips on how to turn Naive Bayes into a generative model.
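For intuition, here is a toy sketch of how a (binary-feature) Naive Bayes model can generate an instance given a label; the class priors and per-class feature probabilities below are made-up numbers, not from any real model:

import numpy as np

# Hypothetical Naive Bayes parameters:
# class_priors[c] = P(class = c), feature_probs[c][j] = P(x_j = 1 | class = c)
class_priors = {"win": 0.5, "loss": 0.5}
feature_probs = {"win": [0.8, 0.3], "loss": [0.2, 0.6]}

rng = np.random.default_rng(0)

def generate_instance(label):
    # Sample each feature independently given the class (the Naive Bayes assumption).
    return [int(rng.random() < p) for p in feature_probs[label]]

print(generate_instance("win"))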

If you are looking for generative deep neural network models, this blog by OpenAI has a nice overview explaining the three modern approaches (generative adversarial networks, variational autoencoders, and autoregressive models). The blog explains at a high level what the strengths and weaknesses are of each approach, and links to some open source projects for you to try it out as well.

",42699,,,,,12/19/2020 23:11,,,,3,,,,CC BY-SA 4.0 25306,2,,25268,12/20/2020 13:20,,1,,"

There are several different ways you can model the state and action spaces in such sequential (extensive-form) environments/games. For environments with small action spaces, or those typically introduced to beginning RL students, the state space and action space remain constant along an agent's trajectory (these are termed normal-form games when there are multiple agents). In sequential games, which can be illustrated as trees, a "state" is analogous to an "information set", which is defined as the sequence (tuple) of actions and observations since the beginning of the game's episode. Terminal states (leaf nodes) exist, and the action space $\mathcal{A}[x]$ at an information set $x$ can be defined as the union of action sequences that can be taken to each terminal state, not counting terminal states that cannot be reached from the current information set.

In the above, I discussed games, such as the examples you stated, where more than one agent can interact with the environment, but this is a generalization of RL and it also applies when only one agent is maximizing its reward.

",33444,,,,,12/20/2020 13:20,,,,1,,,,CC BY-SA 4.0 25307,1,25322,,12/20/2020 14:14,,3,82,"

We were given a list of labeled data (around 100) of known positive cases, i.e. people that have a certain disease, i.e. all these people are labeled with the same class (disease). We also have a much larger amount of data that we can label as negative cases (all patients that are not on the known positive patient's list).

I know who the positives are, but how do I select negative cases to create a labeled dataset of both positives and negatives, on which to first train a neural network, and then test it?

This is a common problem in the medical field, where doctors have lists of patients that are positive, but, in our case, we were not given a specific list of negative cases.

I argued for picking a number that represents the true prevalence of the disease (around 1-2%). However, I was told that this isn't necessary and to do a 50:50 split of positives to negatives. It seems that doing it this way will not generalize outside our test and train datasets.

What would you do in this case?

",41283,,2444,,12/21/2020 23:38,12/21/2020 23:38,"How do I select the (number of) negative cases, if I'm given a set of positive cases?",,1,4,,,,CC BY-SA 4.0 25308,1,,,12/20/2020 16:39,,1,64,"

Suppose that a neural network is trained with encrypted (for example, with homomorphic encryption and, more precisely, with the Paillier partial scheme) data. Moreover, suppose that it is also trained with encrypted weights.

If the neural network's weights are decrypted, is the performance of the neural network theoretically preserved or affected?

",43113,,2444,,12/21/2020 18:05,12/21/2020 18:05,"Is the performance of a neural network, which was trained with encrypted data and weights, affected if the weights are decrypted?",,0,1,,,,CC BY-SA 4.0 25312,1,,,12/20/2020 19:35,,1,335,"

I want to help people with cancer who are undergoing chemotherapy, and generally people who have lost their hair, to virtually try on toupees/wigs on their head.

VTO must support both the frontal and side positions of the head.

At first, I thought I could use traditional deep-learning to find landmarks, then place the hairstyle on the head.

But the results were unrealistic and inaccurate.

2D Placing of Hair:

Some Hairstyles and Portraits fit well together:

But some don't:

They need manual modification:

Sometimes the original hair needs to be erased:

GANs promise more natural results with less manual engineering

changing hairstyle using GAN:

So, I decided to use GANs for this Task.

Is it a good choice or is there an alternate solution?

",43128,,43231,,12/29/2020 14:27,12/29/2020 14:27,Hairstyle Virtual Try On,,0,5,,,,CC BY-SA 4.0 25315,1,30245,,12/20/2020 21:44,,1,1134,"

I came across the term MNLI-(m/mm) in Table 1 of the paper BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. I know what MNLI stands for, i.e. Multi-Genre Natural Language Inference, but I'm just unsure about the -(m/mm) part.

I tried to find some information about this in the paper GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding, but this explained only the basic Multi-Genre Language Inference concept. I assume that the m/mm part was introduced later, but this doesn't make any sense because the BERT paper appeared earlier.

It would be nice if someone knows this or has a paper that explains this.

",43132,,2444,,12/21/2020 0:28,8/18/2021 15:48,What is MNLI-(m/mm)?,,1,0,,,,CC BY-SA 4.0 25316,1,,,12/20/2020 21:49,,6,1465,"

What does the Bellman equation actually say? And are there many flavours of that?

I get a little confused when I look for the Bellman equation, because I feel like people are saying slightly different things about what it is. And I think the Bellman equation is just basic philosophy and you can do whatever you want with it.

The interpretations that I have seen so far:

Let's consider this grid world.

+--------------+
| S6 | S7 | S8 |
+----+----+----+
| S3 | S4 | S5 |
+----+----+----+
| S0 | S1 | S2 |
+----+----+----+
  • Rewards: S1:10; S3:10
  • Starting Point: S0
  • Horizon: 2
  • Actions: Up, Down, Left, Right (If an action is not valid because there is no space, you remain in your position)

The V-Function/Value:

It tells you how good it is to be in a certain state.

With a horizon of 2, one can reach:

S0==>S3 (Up)   (R 5)
S0==>S0 (Down) (R 0)
S0==>S1 (Right)(R10)
S0==>S0 (Left) (R 0)

From that onwards

S0==>S3 (Up)   (R 5)
S0==>S0 (Down) (R 0)
S0==>S1 (Right)(R10)
S0==>S0 (Left) (R 0)

S1==>S4 (Up)   (R 0)
S1==>S1 (Down) (R10)
S1==>S2 (Right)(R 0)
S1==>S0 (Left) (R 0)

S3==>S6 (Up)   (R 0)
S3==>S0 (Down) (R 0)
S3==>S3 (Right)(R 5)
S3==>S2 (Left) (R10)

Considering no discount, this would mean that it is R=45 "good" to be in S0, because these are the options. Of course, you can't grab every reward, because you have to decide. Do I need to consider only the best next state yet? Doing so would obviously reduce my expected total reward, but as I can only make two steps, this would tell me what is really possible, not what the overall reward R(s) in that range is.

The Q-Function/Value

This function takes a state and an action, but I am not sure what that means. Does it mean that I have a reward function R(s,a) that also considers my action when giving me a reward? Because in the previous example I just have to land on a state (it doesn't really matter how I get there), but this time I get a reward when I choose a certain action. Otherwise, as before, I do not rate the best action and select only that next state to calculate the following one; I consider every next step, and from each of those I consider the 2nd next.

Optimization V-function or Q-function

This works the same as the V-function or Q-function, but it just considers the next best reward. Some sort of greedy approach:

First step:

S0==>S3 (Up)   (R 5) [x]
S0==>S0 (Down) (R 0) [x]
S0==>S1 (Right)(R10)
S0==>S0 (Left) (R 0) [x]

Second Step:

S1==>S4 (Up)   (R 0) [x]
S1==>S1 (Down) (R10) 
S1==>S2 (Right)(R 0) [x]
S1==>S0 (Left) (R 0)

So, this would say that this is the best I can do in two steps. I know that there is a problem, because when I just follow a greedy approach I risk not getting the best result, e.g. if there had been a reward of 1000 on S2 later.

But still, I just want to know if my understanding is correct. I know there might be many flavours and interpretations, but at least I want to know whether these are the correct names for these approaches.

",43131,,42731,,12/21/2020 19:29,12/21/2020 19:29,What is the Bellman Equation actually telling?,,2,5,,,,CC BY-SA 4.0 25317,1,,,12/20/2020 23:06,,1,37,"

There are a few challenges I am facing when building a resume recommendation system for a particular job posting.

Let's say we convert the resume into an n-dimensional vector and the job description into another n-dimensional vector; then, to see how similar they are, we can use any similarity metric, like cosine similarity.

Now, for me, the biggest problem with such an approach is that it is not able to give more importance to the required job title. Sometimes, for a cloud engineer position, I get java developer resumes recommended in the top 10, just because some of the skills/keywords overlap between the two, so their embeddings become similar.

I want to give more weight to the job title as well. What are some possible things I can do to make my recommendations consider, or put a bit more emphasis on, the job title?

Note: a plain job title lookup in the resume will fail because people write job titles in multiple ways, e.g. (java engineer or java developer), (cloud engineer or aws engineer), etc.

How can I overcome this issue?

",32867,,32867,,12/21/2020 8:21,11/4/2022 19:50,Building a resume recommendation for a job post?,,0,2,,,,CC BY-SA 4.0 25318,2,,25199,12/20/2020 23:09,,7,,"

If you start with a perfect discriminator, the loss function will be saturated and the gradient of the loss will be very small, so the feedback for the generator will also be small, and learning will slow down as a result. Actually, it is always desirable for the discriminator and the generator to learn in a balanced way. Additionally, it is claimed that the Wasserstein loss takes care of this problem.

You can find more information in this article (I strongly suggest reading it).

Also from the paper " Towards principled methods for training generative adversarial networks":

In theory, one would expect therefore that we would first train the discriminator as close as we can to optimality (so the cost function on $\theta$ better approximates the JSD), and then do gradient steps on $\theta$, alternating these two things. However, this doesn’t work. In practice, as the discriminator gets better, the updates to the generator get consistently worse. The original GAN paper argued that this issue arose from saturation, and switched to another similar cost function that doesn’t have this problem. However, even with this new cost function, updates tend to get worse and optimization gets massively unstable.

Note 1: $\theta$ denotes the parameters of the generator.

Note 2: JSD: Jensen–Shannon divergence. For the optimal discriminator, the loss is equal to $L(D^{ \ast }, g_{\theta})= 2 JSD(P_{r}, P_{g}) - 2\log(2)$

Also, from the Wasserstein GAN paper:

The fact that the EM distance is continuous and differentiable a.e. means that we can (and should) train the critic till optimality. The argument is simple, the more we train the critic, the more reliable gradient of the Wasserstein we get, which is actually useful by the fact that Wasserstein is differentiable almost everywhere. For the JS, as the discriminator gets better the gradients get more reliable but the true gradient is 0 since the JS is locally saturated and we get vanishing gradients ... In Figure 2 we show a proof of concept of this, where we train a GAN discriminator and a WGAN critic till optimality. The discriminator learns very quickly to distinguish between fake and real, and as expected provides no reliable gradient information. The critic, however, can’t saturate, and converges to a linear function that gives remarkably clean gradients everywhere.

",41615,,41615,,12/24/2020 16:47,12/24/2020 16:47,,,,2,,,,CC BY-SA 4.0 25320,1,,,12/20/2020 23:37,,3,205,"

To derive the policy gradient, we start by writing the equation for the probability of a certain trajectory (e.g. see spinningup tutorial):

$$ \begin{align} P_\theta(\tau) &= P_\theta(s_0, a_0, s_1, a_1, \dots, s_T, a_T) \\ & = p(s_0) \prod_{i=0}^T \pi_\theta(a_i | s_i) p(s_{i+1} | s_i, a_i) \end{align} $$

The expression is based on the chain rule for probability. My understanding is that the application of the chain rule should give us this expression:

$$ p(s_0)\prod_{i=0}^T \pi_\theta(a_i|s_i, a_{i-1}, s_{i-1}, a_{i-2}, \dots, s_0, a_0) p(s_{i+1} | s_i, a_i, s_{i-1}, a_{i-1}, \dots, a_0, s_0) $$

Then the Markov property should be applicable, producing the desired equality. This should only depend on the latest state-action pair.

Here are my questions:

  1. Is this true?

  2. I watched this lecture about policy gradients, and at this time during the lecture, Sergey says that: "at no point did we use the Markov property when we derived the policy gradient", which left me confused. I assumed that the initial step of calculating the trajectory probability was using the Markov property.

",24080,,2444,,12/29/2020 19:24,12/29/2020 19:24,Policy gradient: Does it use the Markov property?,,0,4,,,,CC BY-SA 4.0 25321,2,,25316,12/20/2020 23:41,,2,,"

For a Markov Decision Process $(\mathcal{S}, \mathcal{A}, P, R)$ (here $P(s, a, s') = \mathbb{P}(S_{t+1} = s' | S_t = s, A_t = a)$), let us define the value of being in a certain state. That is, $$v_\pi(s) = \mathbb{E}_{a_i \sim \pi, s_i \sim P}\left[\sum_{i=0}^\infty \gamma^{i}r(s_{t+i}, a_{t+i}) \,\Big|\, S_t =s\right].$$ That is, the value of being in state $s$ at time $t$ is equal to the expected value of the discounted sum of future rewards, where the expectation is taken with respect to the action policy $\pi(\cdot | s)$ and the environment dynamics $P$.

We will now define $$G_t = \sum_{i=0}^\infty \gamma^{i}r(s_{t+i}, a_{t+i}).$$

Now, we can rewrite this as $$\mathbb{E}_{a_i \sim \pi, s_i \sim P}\left[G_t | S_t =s\right] = \mathbb{E}_{a_i \sim \pi, s_i \sim P}\left[ r(s_t, a_t) + \gamma G_{t+1} | S_t =s\right].$$ The RHS is now in a nicer format and we can express it as $$\mathbb{E}_{a_i \sim \pi, s_i \sim P}\left[G_t | S_t =s\right] = \mathbb{E}_{a_i \sim \pi, s_i \sim P}\left[ r(s_t, a_t) + \gamma v_\pi(s') | S_t =s\right],$$ where $s'$ is the state that we transition into at time $t+1$. Note that $s'$ is a random variable and we are taking the expectation over this according to the action policy and the environment dynamics -- this is because the state we transition into depends first on the action we select and then the environment dynamics $P$.

You can see that we now have $$v_\pi(s) = \mathbb{E}_{a_i \sim \pi, s_i \sim P}\left[ r(s_t, a_t) + \gamma v_\pi(s') | S_t =s\right].$$ This is the Bellman equation (at least a form of it) and it expresses a recursive relationship between the values of states. That is, the value of being in state $s$ is equal to the expected immediate reward from being in this state plus the value of being in the state that we transition into. This relationship is useful in Reinforcement Learning as many algorithms use this equation to form update rules to approximate the value/state-action value function of the MDP, such as the SARSA algorithm, so it is more than just a philosophy, it is the driving force behind many of the RL algorithms.
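For example, the tabular SARSA update is built directly on this recursive relationship; here is a minimal sketch (the table sizes, learning rate and discount factor are arbitrary choices of mine):

# Tabular state-action values for a toy problem with 2 states and 2 actions.
Q = {s: {a: 0.0 for a in range(2)} for s in range(2)}

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    # Bellman-style target: immediate reward plus discounted value of the next (state, action) pair.
    target = r + gamma * Q[s_next][a_next]
    Q[s][a] += alpha * (target - Q[s][a])

sarsa_update(Q, s=0, a=1, r=1.0, s_next=1, a_next=0)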

Now, when I say at least a form of it, that is because in RL it is also common to see a Bellman equation for the state-action value function $$q_\pi(s, a) = \mathbb{E}_{a_i \sim \pi, s_i \sim P}\left[ G_t | S_t = s, A_t = a\right] = \mathbb{E}_{a_i \sim \pi, s_i \sim P}\left[ r(s_t, a_t) + \gamma v_\pi(s') | S_t = s, A_t = a\right];$$ noting that the value function $v$ is the expectation of $q$ over the action space, i.e. we marginalise out the action we condition on -- thus there is the recursive relationship.

As pointed out in the comments for this question, it is also worth noting that Bellman equations originated in Dynamic Programming, which exist to solve planning problems such as the knapsack problem.

",36821,,36821,,12/21/2020 10:35,12/21/2020 10:35,,,,2,,,,CC BY-SA 4.0 25322,2,,25307,12/21/2020 0:05,,4,,"

Short answer

To select the proper dataset to construct, you should first figure out a metric to use to compare, and then select the dataset construction that gives the better metric. There is no single best metric, it depends on the task and your interpretation on what type of error is more important.

If you believe it is important that errors should not be normalized across class, then use the overall accuracy, and keep your dataset distribution same as the natural distribution (so 1-2% positive cases).

If you believe it is important that errors should be normalized across class, then use PR-AUC or ROC-AUC, and re-balance your dataset so that the samples a little more closer to 1:1. The exact ratio will only be determined after testing and comparing the PR-AUC or ROC-AUC metrics.

How to select the best metric?

Two popular metrics are ROC-AUC and PR-AUC. ROC curves (Receiver Operating Characteristics) plot the true positive rate vs false positive rate, while PR curves (Precision and Recall) plot the precision vs recall. AUC stands for "area under curve", because you can achieve any single point in the curve by specifying the classifier threshold, so the sum of all points (i.e. the entire area under the curve) is the most general way of comparing if one model is doing better than another.

Although both ROC curves and PR curves equalize class imbalance at some capacity, PR curves are more sensitive to class imbalance. The paper The Relationship Between Precision-Recall and ROC Curves concludes that if the PR-AUC is good then the ROC-AUC will always also be good, but not the other way around. The difference is due to the fact that if the dataset has huge class imbalance, a false positive hurts PR curves significantly more than ROC curves.

On the other hand, total accuracy does not normalize class imbalance at all, and therefore favors the majority class.

As a result:

  • if you do not care about normalizing the measures of class imbalance, choose total accuracy, which will optimize for the most # of correct cases (regardless of class)
  • if you want to normalize your metric across class imbalance, and normalizing false positive errors across classes is at all important to you, choose PR-AUC
  • if you want to normalize your metric across class imbalance, and don't care about normalizing false positive errors, PR-AUC or ROC-AUC may both be good for you

If it helps, for most imbalance problems, people usually go for PR curves.
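For completeness, both metrics are easy to compute with scikit-learn; here is a minimal sketch with made-up labels and scores (average precision is used as the usual approximation of PR-AUC):

from sklearn.metrics import roc_auc_score, average_precision_score

y_true  = [0, 0, 0, 0, 1, 1]                 # imbalanced ground-truth labels
y_score = [0.1, 0.3, 0.2, 0.6, 0.7, 0.9]     # predicted probability of the positive class

print("ROC-AUC:", roc_auc_score(y_true, y_score))
print("PR-AUC :", average_precision_score(y_true, y_score))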

By the way, this paper studies class imbalance in neural networks by optimizing the ROC curves, and shows that you should definitely have equal numbers of positive and negative examples. So if you want the best performance in terms of ROC-AUC, you should do the 50:50 split. I haven't read any similar study that optimizes for PR-AUC, but my intuition tells me that it will have the same conclusion (you should do a 50:50 split to optimize for PR-AUC as well).

",42699,,42699,,12/21/2020 3:15,12/21/2020 3:15,,,,0,,,,CC BY-SA 4.0 25327,2,,25316,12/21/2020 11:38,,1,,"

Just to add some more background and intuition to the previous answer. The Bellman equation has its background in the optimal control theory of dynamic systems of the form (in the discrete-time case) \begin{equation} s_{k+1} = f_d(s_k, a_k) \tag{1} \end{equation} where $s_k$ represents the state at time $k$ and $a_k$ the action at time $k$. The goal is to optimize a multistage objective function of the form \begin{equation} V_N(s_0) = \sum_{k=0}^N J(s_k, a_k) \end{equation} while satisfying the dynamic constraints $(1)$, where $J(\cdot)$ is the stage cost at time $k$. The products of this optimization are optimal control policies $a_k = \pi_k(s_k)$ which provide the optimal value of the multistage objective function. Bellman's principle of optimality states that, for multistage optimization problems, the objective function value at timestep $k$ should satisfy \begin{align} V^*_{k}(s_k) &= \min_{a_k}[J(s_k, a_k) + V^*_{k-1}(s_{k+1})]\\ &= \min_{a_k}[J(s_k, a_k) + V^*_{k-1}(f_d(s_k, a_k))] \end{align} This can of course be proven; you can find the proof in any optimal control/dynamic programming book.

This also makes intuitive sense. Consider that you are at timestep $N$ (end of trajectory you want to optimize). You only need to consider 1 action $a_N$. You would now go through all possible states $s_N$ and pick action $a_N$ which minimizes your stage cost $J(s_N, a_N)$ for all those states separately. After you did that, you would go one step backwards in time and find optimal action $a_{N-1}$ for all states $s_{N-1}$. According to the Bellman optimality principle, you only need to consider action $a_{N-1}$ in your optimization, because you already know action $a_N$ (calculated previously) which would minimize $J(s_N, a_N)$ for any possible future state $s_N$. Then you would keep going backwards in time until $k=0$. This is very useful because you don't need to consider all $N$ timesteps at once, you only consider one timestep at a time which prevents combinatorial explosion for large $N$. Bellman optimality principle has been adapted to many different applications, including RL and stochastic systems.
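A tiny sketch of this backward recursion in Python (the states, actions, dynamics f and stage cost J are made-up; this is only meant to show the structure of the computation):

states, actions = [0, 1, 2], [-1, 0, 1]
f = lambda s, a: max(0, min(2, s + a))   # hypothetical dynamics s_{k+1} = f(s_k, a_k)
J = lambda s, a: s * s + abs(a)          # hypothetical stage cost

N = 3
V = {s: 0.0 for s in states}             # terminal value
for k in reversed(range(N)):             # go backwards in time: k = N-1, ..., 0
    V = {s: min(J(s, a) + V[f(s, a)] for a in actions) for s in states}
print(V)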

",20339,,20339,,12/21/2020 11:48,12/21/2020 11:48,,,,0,,,,CC BY-SA 4.0 25328,1,,,12/21/2020 18:20,,1,35,"

I’m reading the paper Spectral Networks and Deep Locally Connected Networks on Graphs and I’m having a hard time understanding the notation shown in the picture below (the scribbles are mine):

Specifically, I don’t understand the notation for the matrix F. Why does it include an i and a j?

",27433,,27433,,1/18/2021 16:24,1/18/2021 16:24,Spectral Networks and Deep Locally Connected Networks on Graphs,,0,0,,,,CC BY-SA 4.0 25329,1,25339,,12/21/2020 19:00,,3,357,"

I am searching for an academic (i.e. with maths formulae) textbook which covers (at least) the following:

  • GAN
  • LSTM and transformers (e.g. seq2seq)
  • Attention mechanism

The closest match I got is Deep Learning (2016, MIT Press) but it only deals with part of the above subjects.

",,user34314,2444,,12/21/2020 22:47,7/14/2021 15:42,"Recent deep learning textbook (i.e. covering at least GANs, LSTM and transformers and attention)",,2,0,,,,CC BY-SA 4.0 25331,2,,25329,12/21/2020 20:23,,1,,"

I recommend Introduction to Deep Learning by Eugene Charniak, ISBN 978-0-262-03951-2 (MIT Press, 2018). It mentions GANs, LSTMs & attention (all three occur in the index).

But read also Pitrat's last book: Artificial Beings: The Conscience of a Conscious Machine - it does cover machine learning (but not in the "deep learning" sense) but was published before 2016.

And see also RefPerSys. If you speak French, also see this and more generally the AFIA organization.

",3335,,3335,,12/21/2020 21:08,12/21/2020 21:08,,,,11,,,,CC BY-SA 4.0 25332,1,,,12/21/2020 20:41,,4,59,"

Regularization of weights (e.g. L1 or L2) keeps them small and standardized, which can help reduce data overfitting. From this article, regularization sounds favorable in many cases, but is it always encouraged? Are there scenarios in which it should be avoided?

",38076,,2444,,12/21/2020 20:57,12/21/2020 20:57,When is using weight regularization bad?,,0,0,,,,CC BY-SA 4.0 25333,2,,13221,12/22/2020 5:03,,3,,"

My guess:

If you add a few CNN layers before the input of the given model and train only those layers while keeping the given model's parameters frozen, you might get better results.

Essentially, these few extra layers would "transform" your input image into the appropriate shape, but with more accuracy, since they are trained and not hard-coded.

",35679,,,,,12/22/2020 5:03,,,,0,,,,CC BY-SA 4.0 25334,1,,,12/22/2020 6:50,,2,48,"

I have seen the use of a two-tower architecture in quite a few places. This (Fig 6) is one of the examples. Each tower computes the embedding of a concept which is orthogonal to the concepts in the rest of the towers. Also, I have seen that, after getting an n-dimensional embedding from each tower, the next step is to take an inner product to measure cosine similarity.

I would like to know how to interpret this inner product. The two n-dimensional embeddings represent different concepts. What is the meaning of similarity here?

",43164,,,,,12/22/2020 6:50,Interpretation of Inner Product in a two-tower model,,0,0,,,,CC BY-SA 4.0 25335,1,25405,,12/22/2020 8:30,,3,72,"

I need to solve a video classification problem. While looking for solutions, I only found solutions that transform this problem into a series of simpler image classification tasks. However, this method has a downside: we ignore the temporal relationship between the frames.

So, how can I perform video classification, with a CNN, by taking into account the temporal relationships between frames?

",43170,,2444,,12/29/2020 11:11,12/29/2020 11:11,How can I do video classification while taking into account the temporal dependencies of the frames?,,1,1,,,,CC BY-SA 4.0 25337,2,,25239,12/22/2020 11:44,,0,,"

I am a bit confused by the part about "training a new neural network from scratch" and then the part about "VGG's weights will be frozen", because the answer changes depending on which of these cases it is. It can also happen, from what I read, that you are using VGG as the backbone of another network for transfer learning, but that is not specified either. Anyway, I will try my best.

The cases I see:

  1. Training a network from scratch: no, you don't need to undo your normalization. In fact, when training from scratch you can normalize however you want.
  2. Training a network from scratch starting from pretrained weights: no, you don't need to change your normalization, although it would help with faster convergence, because the weights were fitted to a different kind of normalization. But, as the network trains, it will fit to your new normalization without any problem.
  3. Training a network that uses VGG as a backbone with frozen weights: no, you don't need to change your normalization, although it would help. In spite of having your backbone fitted for a specific kind of normalization, your network head is still a universal approximator that will fit to any other kind of normalization. However, if you use the same normalization, you are easing the network's job (that's why we normalize in the first place).

Where you really do need to force the same normalization is at inference time. When you want to run inference, you really need to apply the same preprocessing normalization, or you won't get any meaningful results.
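If you do reuse the pretrained normalization at inference time, with torchvision it would look roughly like this sketch (the mean/std values are the usual ImageNet statistics commonly used with pretrained VGG weights):

from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])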

",26882,,,,,12/22/2020 11:44,,,,3,,,,CC BY-SA 4.0 25339,2,,25329,12/22/2020 19:02,,1,,"

There are a few more books that were published after 2016 that cover some of the topics you are interested in. I've not read any of them, so I don't really know whether they are good or not, but I try to summarise if they cover some of the topics you may be interested in.

  • Deep Learning with Python (2017), by Francois Chollet (author of the initial Keras library), which covers GANs in section 8.5 (p. 305), but it does not seem to cover transformers and attention mechanisms, although it covers other intermediate/advanced topics (not sure to which extent), such as text generation with LSTMs, DeepDream, Neural Style Transfer and VAEs

  • Grokking deep learning (2019), by Andrew Trask, which seems to cover some intermediate/advanced topics (such as LSTMs and related tasks), but no transformers or GANs (unless I missed them); you can find the accompanying code here

  • Generative Deep Learning: Teaching Machines to Paint, Write, Compose, and Play, by David Foster, which covers many variants of GANs, VAEs and other stuff

The first transformer was published in 2017, so I guess there may not yet be a book that extensively covers it and other related models, such as the GPT models (if you're interested in CV, check this blog post, although it seems to list books that cover mostly traditional CV techniques). The attention mechanisms are older and can probably be found in textbooks that cover machine translation topics (such as seq2seq models with LSTMs), such as this one.

",2444,,2444,,12/22/2020 19:09,12/22/2020 19:09,,,,0,,,,CC BY-SA 4.0 25340,1,25342,,12/22/2020 20:58,,1,113,"

In AlphaZero, we collect ($s_t, \pi_t, z_t$) tuples from self-play, where $s_t$ is the board state, $\pi_t$ is the policy, and $z_t$ is the reward from winning/losing the game. In other DeepRL off-policy algorithms (I'm assuming here that AlphaZero is off-policy (?)) like DQN, we maintain a memory buffer (say, 1 million samples) and overwrite the buffer with newer samples if it's at capacity. Do we do the same for AlphaZero? Or do we continually add new samples without overwriting older ones? The latter option sounds very memory heavy, but I haven't read anywhere that older samples are overwritten.

",43016,,,,,12/23/2020 0:08,Is there a training data capacity limit for AlphaZero (Chess)?,,1,0,,,,CC BY-SA 4.0 25342,2,,25340,12/22/2020 23:56,,1,,"

AlphaZero is on-policy*, which partially answers your question.

An on-policy algorithm is not the same as an online policy though, it is not required that updates are made on every step. It is simply required that all data used in the update is taken from the same "current" policy.

In practice, AlphaZero buffers results from games played with the current policy to create a dataset used to update its neural networks. That buffer is then emptied after the data has been used.

From the AlphaZero paper:

At the end of the game, the terminal position $s_T$ is scored according to the rules of the game to compute the game outcome $z: −1$ for a loss, $0$ for a draw, and $+1$ for a win. The neural network parameters $\theta$ are updated so as to minimise the error between the predicted outcome $v_t$ and the game outcome $z$, and to maximise the similarity of the policy vector $p_t$ to the search probabilities $\pi_t$.

This implies only a single game is buffered in this way before running each update and then discarding the dataset generated in that game. Theoretically the same approach could be used with any number of games for each update step (provided the training system has capacity to store more moves).


* AlphaZero is on-policy because the core algorithm requires using a specific policy and then updating it to match an improved version of the same policy discovered using MCTS for planning during play.

It could be possible to construct an off-policy update mechanism using similar MCTS routine. I am not sure why this is not considered, but suspect it would be due to complexity/efficiency of the algorithm compared to the ease of generating new game data.

",1847,,1847,,12/23/2020 0:08,12/23/2020 0:08,,,,5,,,,CC BY-SA 4.0 25343,2,,7522,12/23/2020 0:10,,0,,"

Nowadays, there are many resources that cover the back-propagation algorithm and some of them provide step-by-step examples.

However, in addition to the other answer, I would like to mention the online book Neural Networks and Deep Learning by Nielsen that covers the back-propagation algorithm (and other topics) in detail and, at the same, intuitively, although some could disagree. You can find the associated source code here (which I had consulted a few years ago when I was learning about the topic).

",2444,,,,,12/23/2020 0:10,,,,0,,,,CC BY-SA 4.0 25344,1,,,12/23/2020 6:14,,2,93,"

In reinforcement learning, there are model-based versus model-free methods. Within model-based ones, there are policy-based and value-based methods.

The AlphaGo DeepMind RL model has beaten the best human Go player. What kind of reinforcement learning method does it use? Why is this particular method appropriate for the game of Go?

",16839,,,,,12/24/2020 1:25,What kind of reinforcement learning method does AlphaGo Deepmind use to beat the best human Go player?,,0,1,,,,CC BY-SA 4.0 25345,1,,,12/23/2020 7:28,,0,34,"

While training a CNN model, I used an l1_l2 regularization (i.e. I applied both $L_1$ and $L_2$ regularization) on the final layers. During training, I saw that the training and validation losses were dropping very nicely, but the accuracies weren't changing at all! Is that due to a high regularization rate?

",42948,,2444,,12/23/2020 19:23,12/23/2020 19:23,What is the effect of too harsh regularization?,,0,3,,,,CC BY-SA 4.0 25346,1,,,12/23/2020 7:56,,0,118,"

I recently tried to reproduce the results of double Q-learning. However, the results are not satisfying. I have also tried to compare double Q-learning with Q-learning in Taxi-v3, FrozenLake (without slipperiness), Roulette-v0, etc., but Q-learning outperforms double Q-learning in all of these environments.

I am not sure whether there is something wrong with my implementation, as many materials about double Q-learning actually focus on double DQN. While checking this, I wonder whether there is any toy example that can exemplify the performance of double Q-learning.

",43200,,36821,,12/27/2020 10:18,12/27/2020 10:18,Is there any toy example that can exemplify the performance of double Q-learning?,,0,4,,,,CC BY-SA 4.0 25348,1,,,12/23/2020 11:12,,1,186,"

This is my first question on this forum and I would like to greet everyone. I am trying to implement a DDQN agent playing the Othello (Reversi) game. I have tried multiple things, but the agent, which seems to be properly initialized, does not learn against a random opponent. Actually, the score is about 50-60% won games out of nearly 500. Generally, if it reaches some score after the first 20-50 episodes, it stays at the same level. I have doubts about the process of learning and how to decide when the agent is trained. The current flow is as follows:

  1. Initialize game state.
  2. With an epsilon-greedy policy, choose the action to make, based on the currently available actions given the game state.
  3. Get the opponent to make its action.
  4. Get the reward as the number of flipped pieces that remain after the opponent's move.
  5. Save the observation to the replay buffer.
  6. If the number of elements in the replay buffer is equal to or higher than the batch size, do the training.

What I do not know is how to decide when to stop the training. Previously, this agent, trained against a MinMax algorithm, learned how to win 100% of games because MinMax played exactly the same way every time. I would like the agent to generalize the game. Right now, I save the network weights after a game is won, but I think it does not matter. I can't see this agent finding some policy and improving over time. The whole code for the environment, agent, and training loop can be found here: https://github.com/MikolajMichalski/RL_othello_mgr I would appreciate any help. I would like to understand how RL works :)

",43161,,,,,12/23/2020 11:12,DDQN Agent in Othello (Reversi) game struggle to learn,,0,8,,,,CC BY-SA 4.0 25349,1,,,12/23/2020 13:11,,2,44,"

I have a dataset consisting of a set of samples. Each sample consists of two distinct discretized signals S1(t), S2(t). Both signals are synchronous; however, they show different aspects of a phenomenon.

I want to train a Convolutional Neural Network, but I don't know which architecture is appropriate for this kind of data.

I can consider two channels for input, each corresponding to one of the signals. But, I don't think convolving two signals can produce appropriate features.

I believe the best way is to process each signal separately in the first layers, then join them in the classification layers in the final step. How can I achieve this? What architecture should I use?

",43208,,43231,,12/25/2020 2:47,12/25/2020 2:47,Appropriate convolutional neural network architecture when the input consists of two distinct signals,,2,0,,,,CC BY-SA 4.0 25350,2,,25349,12/23/2020 14:07,,1,,"

I am not sure exactly what you mean by discretized signals, but, if I understand your question correctly, separating the two signals and passing them through the same CNN architecture (even with different parameters) is not a good idea. When they are together (as different channels), they are still treated differently by the CNN (each channel has its own parameters), and, even this way, the network is able to combine the two signals and get better results from the information extracted from this combination.

",41547,,,,,12/23/2020 14:07,,,,1,,,,CC BY-SA 4.0 25352,1,25363,,12/23/2020 16:09,,1,1336,"

I'm trying to understand the solution to question 4 of this midterm paper.

The question and solution is as follows:

I thought that the process for updating weights was:

error = target - guess
new_weight = current_weight + (error)(input)

I do not understand, for example for number 2 below, how that sum is determined. For example, I want to understand whether to update the weights or not. The calculation is:

x1(w1) + x2(w2)
(10)(1) + (10)(1) = 20
20 > 0, therefore update.

But the equation to obtain the same answer in the solution is:

1(10 + 10) 20
20 > 0, therefore update.

I understand that these two equations are essentially the same, just written differently. But, for example, in step 5, what do the elements in g5 mean? What do the -8, -16 and -2 represent?

p.s. I know in a previous (now deleted) post of mine, I asked a question related to the use of LaTeX instead for maths equations. If someone can show me a simple way to convert these equations online, I'm more than happy to use it. However, I'm unfamiliar with this software, so I need some sort of converter.

",42926,,43231,,12/25/2020 11:10,12/25/2020 11:10,What is the equation to update the weights in the perceptron algorithm?,,1,0,,,,CC BY-SA 4.0 25353,2,,25349,12/23/2020 16:09,,0,,"

You can safely give both signals as input in different channels. Actually, it's the best way. This way, the network is able to find low-degree patterns that involve both signals early in training. This will therefore enable the early discovery of more complex patterns, too.

Differently from what one might understand from your question, the two signals will not be convolved against each other, as is typically done in signal processing. The convolution taking place is that of the first-layer kernels with a two-component signal (the one you give as input). It can happen that there's a first-order pattern that can only be recognised by looking at both signals at the same time. If that's not the case, the kernels will ignore one signal or the other (with the corresponding weights having a value of zero).
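Here is a minimal PyTorch sketch of this idea (the lengths and channel counts are arbitrary): the two synchronous signals are stacked as two input channels, and every first-layer kernel sees both of them at once.

import torch
import torch.nn as nn

signal_1 = torch.randn(1, 1, 256)            # (batch, channel, time)
signal_2 = torch.randn(1, 1, 256)
x = torch.cat([signal_1, signal_2], dim=1)   # shape (1, 2, 256): two input channels

conv = nn.Conv1d(in_channels=2, out_channels=16, kernel_size=5)
features = conv(x)                           # each kernel spans both channels
print(features.shape)                        # torch.Size([1, 16, 252])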

",27444,,,,,12/23/2020 16:09,,,,2,,,,CC BY-SA 4.0 25356,2,,25262,12/23/2020 18:30,,0,,"

After doing some further reading, it turns out that negative rewards are an assumption for this distribution to hold. However, the author notes that as long as you don't receive a reward of infinity for any action then it is possible to re-scale your rewards by subtracting the maximum value of your potential rewards so that they are always negative.

",36821,,,,,12/23/2020 18:30,,,,0,,,,CC BY-SA 4.0 25358,1,25382,,12/23/2020 20:39,,0,346,"

I am totally new to artificial intelligence and neural networks and have a broad question that I hope is appropriate to ask here.

I am an ecologist working in animal movement and I want to use AI to apply to my field. This will be one of the few times this has been attempted so there is not much literature to help me here.

My dataset is binary. In short, I have the presence (1) and absence (0) of animal locations that are associated with a series of covariates (~20 environmental conditions such as temperature, etc.). I have ~1 million rows of data to train the model on with a ratio of 1:100 (presence:absence).

Once trained, I would like a model that can predict if an animal will be in a location (or give a probability) based on new covariates (environmental conditions).

Is this sort of thing possible using AI?

(If so, where should I be looking for resources? I write in R, should I learn Python?)

",43221,,2444,,12/22/2021 10:34,12/22/2021 10:36,Can you use machine learning for data with binary outcomes?,,1,1,,,,CC BY-SA 4.0 25361,2,,25287,12/24/2020 3:24,,-1,,"

I think what he means is that, while the distribution $P(y \mid x,s)$ involves three variables, $P(y \mid x)$ involves only two. The number of parameters required to describe a distribution (or samples to approximate it) grows exponentially with the number of variables (for more information, see, for instance, the book Deep Learning by Ian Goodfellow et al.).

",43208,,43208,,1/6/2021 9:50,1/6/2021 9:50,,,,1,,,,CC BY-SA 4.0 25363,2,,25352,12/24/2020 6:09,,1,,"

I will tell you my knowledge, correct me if I am wrong.

Perceptron Learning Algorithm (PLA) is a simple method to solve the binary classification problem.

Define a function:

$$ f_w(x) = w^Tx + b $$

where $x \in \mathbb{R}^n$ is an input vector that contains the data point and $w$ is a vector with the same dimension as $x$ which represents the parameters of our model.

Call $y=\text{label}(x)\in\{1,-1\}$, where $1$ and $-1$ are the possible labels of each vector $x$.

The PLA will predict a class like this:

$$ y=label(x)=sgn(f_w(x))=sgn(w^Tx+b) $$

(The definition of sgn function can be found in this wiki)

We can understand that the PLA tries to define a line (in 2D; a plane in 3D; a hyperplane in more than 3 dimensions; I will assume 2D from now on) which separates our data into two areas. So how can we find that line? Just like in every other machine learning problem, we define a cost function, then optimize the parameters to get the smallest cost value.

Now, let's define the cost function first. You can see that if a data point lies in the correct area, $y$ and $f(x)$ have the same sign, which means $y(w^Tx+b) > 0$, and vice versa. Similar to your example, I will define: $$ g(x)=y(w^Tx+b) $$

We ignore all the points in the safe zone ($g(x)>0$) and only rotate or move the line to adapt to the misclassified points ($g(x)\le 0$); here you can see why we only update if $g(x)\le0$.

We need a cost function to minimize, so our cost function will be: $$ L(w)=\displaystyle\sum_{x_i\in U}(-y_i(w^Tx_i+b)) $$ where

  • $U$ is the set of misclassified points
  • $y_i$ is the label of the $i$-th data point
  • $x_i$ is the $i$-th data vector
  • $w$ and $b$ are the parameters of our model

For each misclassified data point, the partial derivatives are $$ \frac{\partial L}{\partial w} = -y_ix_i \\ \frac{\partial L}{\partial b} = -y_i $$

Finally, updating them by stochastic gradient descent (SGD) with a learning rate of $1$, we get: $$ w = w - \frac{\partial L}{\partial w} = w + y_ix_i \\ b = b - \frac{\partial L}{\partial b} = b + y_i $$
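
A minimal numpy sketch of this update loop, on made-up 2D toy data (the data and the stopping criterion are assumptions, not part of the original question):

import numpy as np

# Toy linearly separable data: each row of X is a data point, y contains the labels {1, -1}
X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])

w = np.zeros(2)
b = 0.0

converged = False
while not converged:
    converged = True
    for x_i, y_i in zip(X, y):
        # g(x) = y (w^T x + b); update only on misclassified points (g <= 0)
        if y_i * (np.dot(w, x_i) + b) <= 0:
            w = w + y_i * x_i   # w <- w + y_i x_i
            b = b + y_i         # b <- b + y_i
            converged = False

print(w, b)   # a separating line, since the toy data is linearly separable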

For your last question, notice that the weight and bias changed at the $4$-th update, so we have: $$ y_5 = 1, x_5 = (4,8), w = (-2,-2), b = -2 \\ \Rightarrow g_5 = +1 \times (4\times(-2) + 8\times(-2)+ (-2)) = -8 -16 -2 = -26 $$

",41287,,,,,12/24/2020 6:09,,,,0,,,,CC BY-SA 4.0 25364,1,25372,,12/24/2020 8:21,,0,56,"

What ensemble methods are used in the state-of-the-art models?

When I surveyed the state-of-the-art methods for classification and detection, e.g. on ImageNet, COCO, etc., I noticed that there are few or even no references to the use of ensemble methods like bagging or boosting.

Is it a bad idea to use them?

However, I observed that many use ensemble in Kaggle competitions.

What makes it so different between the two groups of researchers?

",18276,,2444,,12/24/2020 12:05,12/25/2020 9:52,What ensemble methods are used in the state-of-the-art models?,,1,0,,,,CC BY-SA 4.0 25366,1,,,12/24/2020 9:01,,1,27,"

I was going through this paper by Hansen. This paper proposes policy improvement by first converting a set of $\alpha$-vectors into a finite-state controller and then comparing them to obtain an improved policy. The whole algorithm is summarised as follows:

Section 4 of this paper explains the algorithm with an example. I am unable to follow the example, specifically how it forms those states, what the numbers inside each state are, and how exactly they are calculated:

",41169,,36821,,12/25/2020 10:16,12/25/2020 10:16,Understanding example for Improved Policy Iteration for POMDPs,,0,0,,,,CC BY-SA 4.0 25367,1,,,12/24/2020 9:52,,3,396,"

In lecture 4 of this course, the instructor argues that GNNs are generalizations of CNNs, and that one can recover CNNs from GNNs.

He presents the following diagram (on the right) and mentions that it represents both a CNN and a GNN. In particular, he mentions that if we particularize the graph shift operator (i.e the matrix S, which in the case of a GNN could represent the adjacency matrix or the Laplacian) to represent a directed line graph, then we obtain a time convolutional filter (which I hadn't heard of before watching this, but now I know that all it does is shift the graph signal in the direction of the arrows at each time step).

That part I understand. What I don’t understand is how we can obtain 2D CNNs (the ones that we would for example apply to images) from GNNs. I was wondering if someone could explain.

EDIT: I found part of the answer here. However, it seems that the image convolution, as defined there, is a bit different from what I'm used to. It seems like the convolution considers pixels only to the left of and above the "current" pixel, whereas I'm used to convolutions considering pixels to the left, right, above, and below.

",27433,,27433,,12/27/2020 12:58,3/9/2021 23:00,Are Graph Neural Networks generalizations of Convolutional Neural Networks?,,1,2,,,,CC BY-SA 4.0 25368,1,25374,,12/24/2020 11:29,,1,370,"

Reading the Retrace paper (Safe and efficient off-policy reinforcement learning) I saw they often use a matrix form of the Bellman operators, for example as in the picture below. How do we derive those forms? Could you point me to some reference in which the matter is explained?

I am familiar with the tabular RL framework, but I'm having trouble understanding the steps from operators to this matrix form. For example, why does $Q^{\pi} = (I -\gamma P^{\pi})^{-1}r$? I know that for the value $V$ we can write \begin{align} V &= R + \gamma P^{\pi} V \\ V - \gamma P^{\pi} V &= R \\ (I -\gamma P^{\pi}) V &= R \\ V &= (I - \gamma P^{\pi})^{-1} R \end{align} but this seems slightly different.

Picture from Safe and efficient off-policy reinforcement learning:

",32583,,2444,,12/24/2020 12:17,12/24/2020 18:07,How to derive matrix form of the Bellman operators?,,1,1,,,,CC BY-SA 4.0 25369,1,25396,,12/24/2020 11:38,,2,1123,"

My company has full access to beta testing for GPT-3. We wanted to try it for some games or game mechanics within Unity3D. Is it possible to use it for dialogues or with unity scripts?

OpenAI's documentation does not say anything about this possibility, so I'm not sure.

",43190,,43231,,12/24/2020 12:27,12/28/2020 21:12,Is it possible to integrate the GPT-3 by OpenAPI inside Unity3D or any game-engine?,,2,0,,,,CC BY-SA 4.0 25370,2,,25269,12/24/2020 12:42,,2,,"

This is very easy to prove.

Let's first prove that, if $\hat{y}_k = y_k$, then $E = 0$. I will show all the steps, so that it's super clear.

\begin{align} E &=\frac{1}{2}\sum_k(\hat{y}_k - y_k)^2 \\ &=\frac{1}{2}\sum_k(y_k - y_k)^2\\ &=\frac{1}{2}\sum_k(0)^2\\ &=\frac{1}{2}\sum_k 0\\ &=\frac{1}{2} 0\\ &=0\\ \end{align}

To prove the other way around, i.e. if $E = 0$, then $\hat{y}_k = y_k$, you can do as follows

\begin{align} \frac{1}{2}\sum_k(\hat{y}_k - y_k)^2 &=E\\ &=0 \end{align} Recall now that any number squared is non-negative (i.e. positive or zero). Given that $(\hat{y}_k - y_k)^2 $ is non-negative, then $\sum_k(\hat{y}_k - y_k)^2$ is a sum of non-negative numbers. The only way that a sum of non-negative numbers is equal to zero is if all numbers are zero, so we must have $\hat{y}_k = y_k$ (because any non-zero number squared is non-zero).

(Note that $E$ is the mean squared error, i.e. a loss function, and it's not the back-propagation algorithm, which is just the algorithm that you use to compute partial derivatives of $E$ with respect to the parameters of the model, which are not even visible in the way you wrote $E$).

",2444,,2444,,12/24/2020 12:59,12/24/2020 12:59,,,,0,,,,CC BY-SA 4.0 25371,1,,,12/24/2020 13:57,,3,71,"

I'm trying to understand Artificial Life (e.g. here for a simple background) in Computational Evolution.

I understand that in this set of methods, you set up a dynamic environment (e.g. the ecology of the environment) and then you set a series of rules; e.g.:

  • You need energy to reproduce.
  • You intake energy from food sources.
  • For nourishment, you can eat plants, animals, or steal food.
  • You must stay alive until you reproduce.
  • Every action consumes energy.
  • When you have no energy left, you die.

I think I need a set of rules that govern the survival of an artificial life. You run the environment and see what persists (there's a set of rules instead of a fitness score), and the individuals that survive are said to be successful.

I can imagine a scenario where a successful organism in this environment consumes a lot of food, reproduces, but possibly runs out of energy and dies. I'm wondering if there's ever a situation where an organism does very little (or nothing) and is still successful. I'm not sure if this question makes sense, please let me know if clarification is needed. Given the specified environment, I want to know if the most active organism will always be the most successful. The most active organism would be the one that obtains the most food/energy/reproduces the most. Or is it possible not to be the most active organism and still be successful?

",42926,,43231,,12/25/2020 11:09,9/28/2021 23:06,Is it possible that the fittest individuals in an Artificial Life population may be successful by not actively pursuing the rules of the environment?,,1,1,,,,CC BY-SA 4.0 25372,2,,25364,12/24/2020 14:18,,2,,"

In my opinion, it is not that ensemble methods are not good; rather, state-of-the-art research and Kaggle competitions are two different fields.

A Kaggle competition can be understood as an industry project, where the target metric (accuracy, a distance value, etc.) is what matters most, so participants can select computationally expensive approaches, such as ensemble methods, to reach it.

State-of-the-art models, on the other hand, belong to the research area, where the most important thing is the contribution to science: you cannot just combine a lot of models and call it research (that would also be unfair to small research groups with limited resources). If you want to contribute something based on the ensemble idea, it should be more like this paper.

",41287,,41287,,12/25/2020 9:52,12/25/2020 9:52,,,,3,,,,CC BY-SA 4.0 25373,1,,,12/24/2020 15:20,,0,388,"

I would like to build a real-time binary classifier that can predict an event of interest that is occurring as soon as it starts. These are electromyographic signals, and the event classification should be able to classify the event as early as possible. This is because the next stage of the algorithm has to make a decision before the end of the event.

I don't know what kind of algorithm/approach I should use here. I suppose an RNN with LSTM cells should do the job, but the dataset is quite small, as physiological signals are not easy to gather.

I have seen many algorithms that windowed the signals (from the training set) and labeled each window as an event of interest if at least part of the event is contained in the window. Each window is then fed to a machine learning algorithm. Then, the prediction uses a sliding window in real-time. But this approach doesn't take into account the temporal aspect of the event as there is no link between each window seen by the ML algorithm.

Do you have any tips or resources I could use to solve the problem?

",43239,,43239,,12/26/2020 18:13,1/16/2023 1:02,How to do early classification of time series event with small dataset?,,1,0,,,,CC BY-SA 4.0 25374,2,,25368,12/24/2020 17:54,,2,,"

There's not much to derive here; it's simply the definition of the Bellman operator, which comes from the Bellman equation. If you're wondering why \begin{equation} Q^{\pi} = (I - \gamma P^{\pi})^{-1}r \tag{1} \end{equation} they state that $Q^{\pi}$ is a fixed point, which means that if you apply the Bellman operator to it you get the same value back \begin{equation} T^{\pi}(Q^{\pi}) = Q^{\pi} \end{equation} You can easily check that, since from $(1)$ \begin{equation} r = (I-\gamma P^{\pi})Q^{\pi} \end{equation} if you plug it into the definition of the Bellman operator you get \begin{align} T^{\pi}(Q^{\pi}) &= r + \gamma P^{\pi} Q^{\pi}\\ &= (I - \gamma P^{\pi})Q^{\pi}+ \gamma P^{\pi} Q^{\pi}\\ &= Q^{\pi} \end{align}
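
A quick numerical sanity check of this fixed-point property, on a made-up row-stochastic $P^{\pi}$ (everything here is a toy assumption):

import numpy as np

rng = np.random.default_rng(0)
n = 4                                          # number of state-action pairs in a toy problem
gamma = 0.9
r = rng.random(n)
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)              # make P^pi row-stochastic

Q = np.linalg.solve(np.eye(n) - gamma * P, r)  # Q^pi = (I - gamma P^pi)^{-1} r
print(np.allclose(r + gamma * P @ Q, Q))       # T^pi(Q^pi) == Q^pi, prints True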

",20339,,20339,,12/24/2020 18:07,12/24/2020 18:07,,,,4,,,,CC BY-SA 4.0 25375,1,,,12/24/2020 20:53,,1,36,"

I saw this post: Google AI generates images of 3D models with realistic lighting and reflections.

Is there any project with which to test this capability of combining two 3D CAD files of a similar model and getting a new combined model, as shown in the Google AI post above? I would like to use software such as Blender or SolidWorks, or some project via Google Colab (testable without the need to install any software).

",33936,,2444,,12/25/2020 12:17,12/25/2020 12:17,Searching for 3D cad AI synthesis project (New CAD file form two similar CAD model),,0,2,,,,CC BY-SA 4.0 25376,1,,,12/25/2020 8:41,,1,17,"

I was reading this paper by Hansen.

It says the following:

A correspondence between vectors and one-step policy choices plays an important role in this interpretation of a policy. Each vector in $\mathcal{V}'$ corresponds to the choice of an action, and for each possible observation, choice of a vector in $\mathcal{V}$. Among all possible one-step policy choices, the vectors in $\mathcal{V}'$ correspond to those that optimize the value of some belief state. To describe this correspondence between vectors and one-step policy choices, we introduce the following notation. For each vector $\mathcal{v}_i$ in $\mathcal{V}'$, let $a(i)$ denote the choice of action and, for each possible observation $z$, let $l(i,z)$ denote the index of the successor vector in $\mathcal{V}$. Given this correspondence between vectors and one-step policy choices, Kaelbling et al. (1996) point out that an optimal policy for a finite-horizon POMDP can be represented by an acyclic finite-state controller in which each machine state corresponds to a vector in a nonstationary value function.

I am unable to see how the finite-state controller on the left side is formed from the belief-space diagram on the right side. Does the above text provide enough explanation for the conversion? If yes, I am still not able to fully get it. Can someone please explain?

",41169,,2444,,12/25/2020 12:14,12/25/2020 12:14,How to obtain the policy in the form of a finite-state controller from the value function vectors over the belief space of the POMDP?,,0,0,,,,CC BY-SA 4.0 25379,1,,,12/25/2020 12:05,,2,52,"

I am currently studying the paper Learning and Evaluating Classifiers under Sample Selection Bias by Bianca Zadrozny. In section 3.4. Support vector machines, the author says the following:

3.4. Support vector machines
In its basic form, the support vector machine (SVM) algorithm (Joachims, 2000a) learns the parameters $a$ and $b$ describing a linear decision rule $$h(x) = \text{sign}(a \cdot x + b),$$ whose sign determines the label of an example, so that the smallest distance between each training example and the decision boundary, i.e. the margin, is maximized. Given a sample of examples $(x_i, y_i)$, where $y_i \in \{ -1, 1 \}$, it accomplishes margin maximization by solving the following optimization problem: $$\text{minimize:} \ V(a, b) = \dfrac{1}{2} a \cdot a \\ \text{subject to:} \ \forall i : \ y_i[a \cdot x_i + b] \ge 1$$ The constraint requires that all examples in the training set are classified correctly. Thus, sample selection bias will not systematically affect the output of this optimization, assuming that the selection probability $P(s = 1 \mid x)$ is greater than zero for all $x$.

How does the constraint that all examples in the training set are classified correctly imply that sample selection bias will not systematically affect the output of the optimisation? Furthermore, why is it necessary to assume that the selection probability is greater than zero for all $x$? These are not clear to me.

",16521,,,,,12/25/2020 12:05,How does the support vector machine constraint imply that sample selection bias will not systematically affect the output of the optimisation?,,0,0,,,,CC BY-SA 4.0 25382,2,,25358,12/25/2020 18:44,,2,,"

Of course, you can use AI (especially Deep Learning) in your application. Your covariates will be the input to your AI model, and the model should predict the probability of presence. The model has no problem with binary data, and binary targets are common in this field.

Also, note that the 1:100 ratio is not good: the network will probably learn to output "absence" for any input (this way it gets 99% accuracy, but really it's not doing anything useful). So, you should probably balance the classes, either by resampling so that both classes contribute roughly the same amount of data, or by telling the network to pay more attention to the presence data (by weighting the related loss).
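
As an illustration of the loss-weighting idea, here is a minimal Keras sketch; the data shapes, network size and class weights are all assumptions, not a recommendation for your exact problem:

import numpy as np
import tensorflow as tf

# Hypothetical covariates (n_samples x 20) and binary labels (1 = presence, 0 = absence)
X = np.random.rand(1000, 20).astype("float32")
y = np.zeros(1000, dtype="float32")
y[:10] = 1.0                                          # roughly a 1:100 presence:absence ratio

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # outputs the probability of presence
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Give the rare "presence" class a larger weight (inverse class frequency)
n_pos = y.sum()
n_neg = len(y) - n_pos
class_weight = {0: len(y) / (2 * n_neg), 1: len(y) / (2 * n_pos)}

model.fit(X, y, epochs=5, batch_size=64, class_weight=class_weight)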

I think nowadays you can find Deep Learning in any popular coding language, but most of the DL community uses Python, and it's really easy to learn. If you want to learn Deep Learning, there are a lot of resources on the internet, but I suggest the Deep Learning courses by Deeplearning.ai on Coursera (if you have a lot of time) and Stanford's CS231n on YouTube (if you have less time).

",41547,,2444,,12/22/2021 10:36,12/22/2021 10:36,,,,0,,,,CC BY-SA 4.0 25383,2,,25373,12/25/2020 22:15,,0,,"

One kind of system you could look into is the Echo State Network (ESN). ESNs are relatively cheap to train and can learn to predict output signals to an arbitrary degree of precision.

All you need to train the system is some labeled training data. Thus, if you have a sequence of measurements/feature values and the corresponding sequence of class labels, you can train and fine-tune these kinds of systems to output some required class label very quickly after the onset of an event encoded in the measured feature values.
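
To make the idea concrete, here is a minimal numpy sketch of an ESN with a ridge-regression readout; the toy signal, reservoir size and hyper-parameters are all assumptions:

import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D input signal u(t) and a binary "event" label d(t) derived from it
T = 1000
u = np.sin(0.1 * np.arange(T)) + 0.1 * rng.standard_normal(T)
d = (u > 0.8).astype(float)

n_res = 200                                        # reservoir size
W_in = rng.uniform(-0.5, 0.5, n_res)               # input weights (fixed, random)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))         # recurrent weights (fixed, random)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # scale spectral radius below 1

# Run the reservoir and collect its states
x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):
    x = np.tanh(W_in * u[t] + W @ x)
    states[t] = x

# Train only the linear readout, with ridge regression
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ d)

pred = states @ W_out                              # threshold this to get a class label
print("train MSE:", np.mean((pred - d) ** 2))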

",37982,,,,,12/25/2020 22:15,,,,4,,,,CC BY-SA 4.0 25384,2,,25257,12/25/2020 22:37,,0,,"

I don't know of any neural network that can do cryptography well, so you would have to do a little experimenting yourself. The main thing that sticks out to me is that doing operations on the elliptic curve requires the modulus operator, since it works in a finite field, and I think neural networks have a hard time learning the modulus operator in general. So I would focus on that first. Some things to help the network learn the modulus operator:

  • I would try to increase the hidden layers to a size 4x-10x bigger than the input dimension, which maps the input to a higher dimension to hopefully learn more complex behavior.
  • I would use fewer layers, maybe only 2-4 hidden layers, to speed up the development time.
  • Most importantly, I would train with a LOT of data points (25% of the total possible finite field possibilities). I don't think the neural network can learn unless the amount of data is this high.
  • For reference, someone got a network to learn the modulus operator using these points.

For rapid iteration, I would test with a much smaller finite field. E.g., use 8-bit security and see if the neural network can do well with that first, before moving on to the full 256-bit security key (or whatever your end goal is).

Taking this a step further, I would first test whether the neural network can even perform a point addition on the elliptic curve well, because if it can't, then it definitely can't do the point multiplication which is needed to compute the public key.
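
As a concrete starting point for that kind of sanity check, here is a small Keras sketch that tries to learn modular addition on a tiny field; the prime, the normalisation and the architecture are arbitrary assumptions:

import numpy as np
import tensorflow as tf

p = 251                                                # small prime for rapid iteration
a = np.random.randint(0, p, 20000)
b = np.random.randint(0, p, 20000)
X = np.stack([a, b], axis=1).astype("float32") / p     # inputs scaled to [0, 1)
y = ((a + b) % p).astype("float32") / p                # target: modular sum, same scaling

model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(2,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=128, validation_split=0.1)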

",42699,,,,,12/25/2020 22:37,,,,0,,,,CC BY-SA 4.0 25386,2,,18497,12/26/2020 4:59,,1,,"

Today, one of the challenges is learning representations/concepts that are causally invariant. Once we have good representations, we can work on the reasoning aspect. There are 2 camps of people today. One believes that symbolic manipulation cannot be achieved properly by deep networks. Hence, they separate the task of extracting a lower-dimensional representation of objects from visual scenes from the task of reasoning with knowledge graphs. The other camp feels that we can train a neural network end-to-end, so that it jointly learns a good lower-dimensional representation for each symbol along with how to reason with them. I am no expert at this, but here are a few papers that I find worth reading -

  1. Neuro-Symbolic Concept Learner Paper, Code
  2. Learning Reasoning Strategies in End-to-End Differentiable Proving Paper, Code
  3. Neuro-Symbolic Visual Reasoning: Disentangling “Visual” from “Reasoning” Paper
  4. Knowledge Infused Learning (K-IL): Towards Deep Incorporation of Knowledge in Deep Learning Paper
  5. Visual Concept-Metaconcept Learning Paper Project Page
  6. CVPR 2020 workshop on Neuro-Symbolic Visual Reasoning and Program Synthesis youtube videos

If you are looking for community maintained lists, here is one list of papers

",7330,,7330,,12/26/2020 5:57,12/26/2020 5:57,,,,0,,,,CC BY-SA 4.0 25388,1,,,12/26/2020 11:00,,1,27,"

I am training a neural network with a dataset that has 51 classes and 6766 samples in it. I used 80% for the training set, 10% for validation, and 10% for the test set. After training, I got the confusion matrix and found out that the last class is missing from the test set. So, I used data augmentation to get 27064 samples and an 80-10-10 split again, but the last class is missing again. I changed the size of the test split, but the problem was not solved, and in every trial that I made, only the last class is missing. How can I solve this?
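
For reference, the kind of split I am doing looks roughly like this (just a sketch; the arrays below are placeholders standing in for my images and labels, and the image size is made up):

from sklearn.model_selection import train_test_split
import numpy as np

# Placeholder arrays standing in for my 6766 images and the 51 class labels
X = np.random.rand(6766, 64, 64, 3)
y = np.random.randint(0, 51, 6766)

# 80% train, then split the remaining 20% evenly into validation and test
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.2, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)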


EDIT: my dataset consists of images; in the original dataset, I have 104 samples of the last class, and after augmenting the entire dataset, the last class has 416 samples.

",33792,,33792,,12/29/2020 8:24,12/29/2020 8:24,The last target name is missed in the test set,,0,7,,,,CC BY-SA 4.0 25390,1,25394,,12/26/2020 15:42,,0,72,"

I am trying to make a neural network framework from scratch in C++ just for fun, and to test my backpropagation, I thought an easy way to test the functionality would be to give it one input (a randomized size-10 vector) and one output (a size-5 vector containing all 1s), and train it a bunch of times to see if the loss decreases. Essentially, I am trying to make it overfit.

The problem is that for each run that I do, the loss either shoots up and goes to nan, or reduces a lot, going to 0.000452084 or other similar small values. However even in the low end of things, my output (which should be close to all 1s, as the "ground truth") is something like:

0.000263654
1e-07
8.55893e-05
1e-07
0.999651

The only value close to 1 is the last element.

My network consists of an input layer of 10 neurons, one 10-neuron dense layer with ReLU activation, and another 5-neuron dense layer for output, with softmax activation. I am using categorical cross-entropy as my loss function, and I am normalizing my gradient by dividing it by its norm if the norm is over 1.0. I initialize my weights to random values between -0.1 and 0.1.

To calculate the gradient of the loss function, I use -groundTruth/predictedOutput. To calculate the other derivatives, I dot the derivative of that layer with the gradient of the previous layer with respect to its activation function.

Before this problem, I was having exploding gradients, which the gradient scaling fixed; however, it was very weird that this would even happen on a very small network like this, which could be related to the problem I am currently having. Is the implementation incorrect, or am I missing something very obvious?

Any ideas about this weird behavior, and where I should look first? I am not sure how to show a minimal reproducible example, as that would require me to paste the whole codebase, but I am happy to show pieces of code with explanations. Any advice welcomed!!

",43270,,,,,12/26/2020 20:54,"Loss randomly changing, incorrect output (even for low loss) when trying to overfit on a single set of input and output",,1,1,,,,CC BY-SA 4.0 25392,1,,,12/26/2020 19:33,,2,376,"

I am attempting a project involving training an agent to play a game using deep reinforcement learning. This project has a few features that complicate the design of the neural network:

  • The action space per state is very large (can be over 1000 per state)
  • The set of actions available in each state varies wildly between states, both in size and in the actions themselves.
  • The total action-space (the union of each state's action-space) is way too large to enumerate.
  • The action space is discrete, not continuous.
  • The game is adversarial, multi-agent.

Most RL neural networks I've seen take a state as input and produce an output of constant action size, where each element of the output is either an action's Q-value or probability. But since my game's action space is non-constant per state, I believe this design will not work for this game.

I have seen an AlphaGo-style network, which outputs a probability for every action ever possible, and then zeros out the actions not possible in the given state and re-normalizes the probabilities. However, since the total action-space in my game is way too large to enumerate, I don't believe this solution will work either.

I have seen several network designs for large, discrete action spaces, including:

  • design a network to input a state-action pair and output a single score value, and train it via a value-based algorithm (such as q-learning). To select an action given a state, pass in every possible state-action pair to get each action's score, then select the action with the highest score.
  • (Wolpertinger architecture) have a network output a continuous embedding, which is then mapped to a discrete action, and train it via deterministic policy gradient.
  • divide actions into a sequence of simpler actions, and train an RNN to output a sequence of these simpler actions. Train this network via a value-based algorithm (such as q-learning).

However, all of these solutions are designed for either value-based or deterministic policy gradient algorithms; none of them output probabilities over the action space. This seems to be an issue, since at least a very large portion of the multi-agent deep-RL algorithms I've seen involve a network that outputs a probability over the action-space. Therefore, I don't want to limit myself to value-based and deterministic-policy algorithms.

How can I design a neural network that outputs a probability over the action space for my game? If this is not possible, what would be some good alternative solutions to this problem?

",38448,,,,,3/4/2022 11:33,"Designing Policy-Network for Deep-RL with Large, Variable Action Space",,0,1,,,,CC BY-SA 4.0 25394,2,,25390,12/26/2020 20:54,,0,,"

The outputs of a softmax activation always add up to 1, because it's designed to deal with probabilities (in classification problems, those probabilities represent how likely the network thinks it is that an object belongs to a specific class). You can verify that by summing up the numbers of your output layer. So currently your network is trying to do the impossible: to produce an output that sums up to 5 instead of 1. Therefore, the loss will never become stable. Since you want your output layer to produce all ones, you need to use some other activation, for example, a linear one. Linear activation does not have the same constraint that softmax does.
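
A quick numpy illustration of that constraint (the logits here are arbitrary):

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())      # subtract the max for numerical stability
    return e / e.sum()

out = softmax(np.array([0.3, -1.2, 0.05, 2.0, 0.7]))
print(out, out.sum())            # the five components always sum to 1.0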

",38076,,,,,12/26/2020 20:54,,,,1,,,,CC BY-SA 4.0 25395,1,,,12/26/2020 21:15,,2,148,"

After a series of convolutions, I am up-sampling a compressed representation, and I was curious what methodology I should follow to choose an optimal kernel size for up-sampling.

  1. How will the filter (or kernel) size affect the transpose convolution operation (e.g. when using ConvTranspose2d)? Will a larger or a smaller kernel help upsample with better detail? And how does padding fit into this scenario?

  2. At what rate should the depth (channels, i.e. the number of filters) decrease while upsampling, e.g. from (D, 24, 24) to (D/2, 48, 48) or (D/4, 48, 48)? For example, if the input to the transposed convolution is (c, h, w) = (64, 8, 8), how would the output quality differ between outputs of shape (32, 16, 16) and (16, 16, 16)? (See the sketch after this list.)
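
Sketch referenced in point 2, assuming PyTorch; it only checks the output shapes for two candidate kernel/padding settings (the exact settings are assumptions):

import torch
import torch.nn as nn

x = torch.randn(1, 64, 8, 8)    # (batch, channels, height, width), as in the example above

# Output spatial size (dilation 1, no output_padding): (H - 1) * stride - 2 * padding + kernel_size
up_a = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2, padding=0)
up_b = nn.ConvTranspose2d(64, 16, kernel_size=4, stride=2, padding=1)

print(up_a(x).shape)            # torch.Size([1, 32, 16, 16])
print(up_b(x).shape)            # torch.Size([1, 16, 16, 16])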

",43254,,43254,,7/3/2021 9:48,7/3/2021 9:48,How will the filter size affect the transpose convolution operation?,,0,5,,,,CC BY-SA 4.0 25396,2,,25369,12/26/2020 21:41,,3,,"

Yes, OpenAI will release an API for GPT-3, so any developer can integrate it into their application. I don't believe the documentation for their API is public yet, so we don't know what the final interface will look like, but it's likely to be a simple REST API. In the future, I imagine your developers can take advantage of their API, or alternatively there will be community-made scripts for you to use/copy.
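
Just to illustrate the general shape of such a call, here is a hypothetical Python sketch; the endpoint URL and the field names are invented placeholders and not OpenAI's actual API:

import requests

API_KEY = "YOUR_API_KEY"                            # placeholder
URL = "https://api.example.com/v1/completions"      # hypothetical endpoint, not the real one

payload = {"prompt": "Guard: Halt! Who goes there?\nPlayer:", "max_tokens": 50}
headers = {"Authorization": f"Bearer {API_KEY}"}

response = requests.post(URL, json=payload, headers=headers)
print(response.json())

From Unity you would issue the same kind of HTTP request with UnityWebRequest, or route it through a small backend service of your own.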

The pricing for using their API is explained here. Note that they charge per token, which might be important in case your game plans to make live calls to GPT-3 during gameplay (as opposed to mining a huge corpus of answers to build an offline database).

The use cases of GPT-3 suggest that you can legally use it for commercial products, although I couldn't find a definitive license or user agreement document.

",42699,,,,,12/26/2020 21:41,,,,0,,,,CC BY-SA 4.0 25399,2,,22667,12/27/2020 11:17,,2,,"

I don't have a definite answer, but only a suspicion/idea:

Looking at Figure 1 from the WGAN paper, we clearly see that the JS divergence on the right is not continuous at $0$, hence not differentiable at $0$. However, the EM plot on the left is continuous at $0$ as well. You could now argue that we have a kink there, so it should not be differentiable there either, but they might have a different notion of differentiability in mind; I am honestly not sure about it right now.

",43286,,,,,12/27/2020 11:17,,,,1,,,,CC BY-SA 4.0 25400,1,25404,,12/27/2020 11:39,,1,115,"

I have a card game where on a player's turn, the player sequentially draws two cards. Each card may be drawn from another player's discard stack (face up), or from the deck (face down).

Thinking about how to encode this into an action space, I could naively assume the two draws are independent. The action space would simply be a binary vector of size 2 * (1 + (number_of_players - 1)), which I could post-filter to account for empty discard piles (and the fact that you can't draw from your own pile).

However, when playing the game myself, I noticed that it's sometimes advantageous to draw the initial card from the deck, then select the draw pile for the second card based on the value of the first one drawn. But how would this be encoded into an action space? Would it be better to think of these as two separate actions, even though they are part of the same "turn"?

",5154,,,,,12/27/2020 17:11,RL: Encoding action conditioned on previous action,,1,0,,,,CC BY-SA 4.0 25404,2,,25400,12/27/2020 15:31,,2,,"

It is hard to say for certain without knowing full details and results of experiments.

However, if the game allows for splitting decisions up, it will likely be better for the agent to take advantage of extra knowledge of the value of any previously hidden card just taken from the draw pile.

In general, if each player decision is taken sequentially, resulting in changes to state, then it is a separate action on a separate time step according to the MDP theoretical model used in reinforcement learning (RL). You might want to describe/notate the time steps differently so that they match how the game play proceeds. However, for the purposes of RL, each decision point should be on a new time step, and should result in a new state, new value estimates etc.

Similarly, whether or not the current choice is the player's first card or second card to be drawn needs to be part of the state. This detail of the state might already be covered by the number of cards in the player's hand, if logically the number of cards is always the same at each stage. However, if hand size can vary for other reasons, it is worth adding an explicit flag for "first draw choice" or similar so that the agent can use the information.

You have some freedom for encoding the action space. If drawing cards is the only possible action in this game at all stages, then a binary output vector of 1 + (number_of_players - 1) dimensions would be suitable. Other encodings may work well too, it depends if there is any logical structure to the choices or some derived data that encodes useful game information.

It may be useful to arrange the action choices so that the index for drawing from each player's discard pile is considered relatively to the current player's turn. That is, instead of actions being arranged $[draw, discard P1, discard P3, discard P4, discard P5]$ for P2, they would be arranged $[draw, discard P3, discard P4, discard P5, discard P1]$ and for P3 would be different: $[draw, discard P4, discard P5, discard P1, discard P2]$ . . . that would inherently allow for the cyclical nature of turns. State representation would need to similarly rotate knowledge about each player to match this. You might not need to do this, but I would recommend it for games where there is a lot of common logic regarding action choices relative to turn position that you could take advantage of. The opposite would apply (and you would use absolute player positions) if there were important differences throughout the game between being P1, P2, P3 etc.
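
A small sketch of that rotation idea (the player indices and action naming are assumptions):

def relative_action_order(current_player, num_players=5):
    # Action 0 = draw from the deck; actions 1..num_players-1 = the other players'
    # discard piles, listed in turn order starting from the player after current_player.
    others = [(current_player + k) % num_players for k in range(1, num_players)]
    return ["draw"] + [f"discard P{p + 1}" for p in others]

print(relative_action_order(current_player=1))   # P2's view: ['draw', 'discard P3', 'discard P4', 'discard P5', 'discard P1']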

",1847,,1847,,12/27/2020 17:11,12/27/2020 17:11,,,,1,,,,CC BY-SA 4.0 25405,2,,25335,12/27/2020 16:56,,1,,"

Look at spatio-temporal CNNs which extend the image-based CNN in 2D to 3D to handle time. These are commonly used to detect or classify action in a video. People have used them to identify specific actions in various sports such as kicking a soccer ball, throwing a baseball or dribbling a basketball. They have been used to identify fire, smoke, deep fakes, and violence in videos. Below are some articles to help get started.

Source code:

",5763,,5763,,12/27/2020 17:04,12/27/2020 17:04,,,,0,,,,CC BY-SA 4.0 25406,1,,,12/27/2020 17:46,,8,290,"

I implemented the unet in TensorFlow for the segmentation of MRI images of the thigh. I noticed I always get a higher validation accuracy by a small gap, independently of the initial split. One example:

So I researched when this could be possible:

  1. When we have an "easy" validation set. I trained it for different initial splits; all of them showed a higher validation accuracy.
  2. Regularization and augmentation may reduce the training accuracy. I removed the augmentation and dropout regularization and still observed the same gap; the only difference was that it took more epochs to reach convergence.
  3. The last thing I found was that in Keras the training accuracy and loss are averaged over the iterations of the corresponding epoch, while the validation accuracy and loss are calculated from the model at the end of the epoch, which might make the training loss higher and the training accuracy lower.

So I thought that if I train and validate on the same set, then I should get the same curve, but shifted by one epoch. So I trained only on 2 batches and validated on the same 2 batches (without dropout or augmentation). I still think there is something else happening, because they don't look quite the same, and at least at the end, when the weights are not changing anymore, the training and validation accuracy should be the same (but the validation accuracy is still higher by a small gap). This is the plot:

Is there anything else that could be increasing the training loss values? This is the model I am using:

# Imports needed for this snippet
import tensorflow as tf
from tensorflow.keras.optimizers import Adam

def unet_no_dropout(pretrained_weights=None, input_size=(512, 512, 1)):
    inputs = tf.keras.layers.Input(input_size)
    conv1 = tf.keras.layers.Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(inputs)
    conv1 = tf.keras.layers.Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv1)
    pool1 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(conv1)
    conv2 = tf.keras.layers.Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool1)
    conv2 = tf.keras.layers.Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv2)
    pool2 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(conv2)
    conv3 = tf.keras.layers.Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool2)
    conv3 = tf.keras.layers.Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv3)
    pool3 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(conv3)
    conv4 = tf.keras.layers.Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool3)
    conv4 = tf.keras.layers.Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv4)
    #drop4 = tf.keras.layers.Dropout(0.5)(conv4)
    pool4 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(conv4)

    conv5 = tf.keras.layers.Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool4)
    conv5 = tf.keras.layers.Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv5)
    #drop5 = tf.keras.layers.Dropout(0.5)(conv5)

    up6 = tf.keras.layers.Conv2D(512, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
        tf.keras.layers.UpSampling2D(size=(2, 2))(conv5))
    merge6 = tf.keras.layers.concatenate([conv4, up6], axis=3)
    #merge6 = tf.keras.layers.concatenate([conv4, up6], axis=3)
    conv6 = tf.keras.layers.Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge6)
    conv6 = tf.keras.layers.Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv6)

    up7 = tf.keras.layers.Conv2D(256, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
        tf.keras.layers.UpSampling2D(size=(2, 2))(conv6))
    merge7 = tf.keras.layers.concatenate([conv3, up7], axis=3)
    conv7 = tf.keras.layers.Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge7)
    conv7 = tf.keras.layers.Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv7)

    up8 = tf.keras.layers.Conv2D(128, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
        tf.keras.layers.UpSampling2D(size=(2, 2))(conv7))
    merge8 = tf.keras.layers.concatenate([conv2, up8], axis=3)
    conv8 = tf.keras.layers.Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge8)
    conv8 = tf.keras.layers.Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv8)

    up9 = tf.keras.layers.Conv2D(64, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
        tf.keras.layers.UpSampling2D(size=(2, 2))(conv8))
    merge9 = tf.keras.layers.concatenate([conv1, up9], axis=3)
    conv9 = tf.keras.layers.Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge9)
    conv9 = tf.keras.layers.Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv9)
    conv9 = tf.keras.layers.Conv2D(2, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv9)
    conv10 = tf.keras.layers.Conv2D(1, 1, activation='sigmoid')(conv9)

    model = tf.keras.Model(inputs=inputs, outputs=conv10)

    model.compile(optimizer = Adam(lr = 2e-4), loss = 'binary_crossentropy', metrics = [tf.keras.metrics.Accuracy()])
    #model.compile(optimizer=tf.keras.optimizers.Adam(2e-4), loss=combo_loss(alpha=0.2, beta=0.4), metrics=[dice_accuracy])
    #model.compile(optimizer=RMSprop(lr=0.00001), loss=combo_loss, metrics=[dice_accuracy])

    if (pretrained_weights):
        model.load_weights(pretrained_weights)

    return model

and this is how I save the model:

model_checkpoint = tf.keras.callbacks.ModelCheckpoint('unet_ThighOuterSurfaceval.hdf5',monitor='val_loss', verbose=1, save_best_only=True)
model_checkpoint2 = tf.keras.callbacks.ModelCheckpoint('unet_ThighOuterSurface.hdf5', monitor='loss', verbose=1, save_best_only=True)

model = unet_no_dropout()
history = model.fit(genaug, validation_data=genval, validation_steps=len(genval), steps_per_epoch=len(genaug), epochs=80, callbacks=[model_checkpoint, model_checkpoint2])
",42357,,4709,,3/10/2022 20:30,9/7/2022 3:06,Validation accuracy higher than training accurarcy,,1,1,,,,CC BY-SA 4.0 25410,1,25412,,12/27/2020 20:13,,4,290,"

In many articles (for example, in the YOLO paper, this paper or this one), I see the term "unified" being used. I was wondering what the meaning of "unified" in this case is.

",43305,,2444,,12/27/2020 22:47,1/8/2021 17:11,What is a unified neural network model?,,1,2,,,,CC BY-SA 4.0 25411,1,,,12/27/2020 21:34,,5,838,"

The WGAN paper concretely proposes Algorithm 1 (cf. page 8). Now, they also state what their loss for the critic and the generator is.

When implementing the critic loss (so lines 5 and 6 of Algorithm 1), they perform gradient ascent on the parameters $w$ (instead of descent, as one would normally do) by writing $w \leftarrow w + \alpha \cdot \text{RMSProp}\left(w, g_w \right)$. Their loss seems to be $$\frac{1}{m}\sum_{i = 1}^{m}f_{w}\left(x^{\left(i\right)} \right) - \frac{1}{m}\sum_{i = 1}^{m} f_{w}\left( g_{\theta}\left( z^{\left( i\right)}\right)\right).\quad \quad (1)$$

The function $f$ is the critic, i.e. a neural network, and the way this loss is implemented in PyTorch in this youtube video (cf. minutes 11:00 to 12:26) is as follows:

critic_real = critic(real_images)
critic_fake = critic(generator(noise))
loss_critic = -(torch.mean(critic_real) - torch.mean(critic_fake))

My question is: In my own experiments with the CelebA dataset, I found that the critic loss is negative, and that the quality of the images is better when the negative critic loss is higher rather than lower; e.g., a critic loss of $-0.75$ resulted in better generated images than a critic loss of $-1.26$.

Is there perhaps an error in the video's implementation of Eq. (1) and Algorithm 1 of the WGAN paper? In my opinion, the implementation in the video is correct, but then I am still confused about why I get better images when the loss is higher ...

Cheers!

",43286,,2444,,1/25/2021 19:05,1/25/2021 19:05,Wasserstein GAN: Implemention of Critic Loss Correct?,,0,4,0,,,CC BY-SA 4.0 25412,2,,25410,12/27/2020 21:38,,2,,"

A unified neural network model consists of one neural network as opposed to other models that rely on two or more neural networks.

For example, from page two of the YOLO paper:

2. Unified Detection

We unify the separate components of object detection into a single neural network. Our network uses features from the entire image to predict each bounding box. It also predicts all bounding boxes across all classes for an image simultaneously. This means our network reasons globally about the full image and all the objects in the image. The YOLO design enables end-to-end training and realtime speeds while maintaining high average precision.

In the paper by Xu and Wang, they add a branch to handle tracking to the Faster R-CNN architecture, which was designed for object detection. This is a 'unification' of two models.

In the paper by Ebrahimi et al., on pages 42-43, you can see they are fusing multiple neural networks into one unified model:

We propose a unified user geolocation method that relies on a fusion of neural networks, incorporating different types of available information: tweet message, users' social relationships, and metadata fields embedded in tweets and profiles.

",5763,,2444,,12/27/2020 22:49,12/27/2020 22:49,,,,0,,,,CC BY-SA 4.0 25414,1,25415,,12/28/2020 1:08,,-1,117,"

The term codon is used in the context of grammatical evolution (GE), sometimes, without being explicitly defined. For example, it is used in this paper, which introduces and describes PonyGE 2, a Python library for GE, but it's not clearly defined. So, what is a codon?

",2444,,,,,12/29/2020 0:03,"What is a ""codon"" in grammatical evolution?",,1,0,,,,CC BY-SA 4.0 25415,2,,25414,12/28/2020 1:08,,1,,"

Grammatical evolution

To understand what a codon is, we need to understand what GE is, so let me first provide a brief description of this approach.

Grammatical evolution (GE) is an approach to genetic programming where the genotypes are binary (or integer) arrays, which are mapped to the phenotypes (i.e. the actual solutions, which can be represented as trees, which, in turn, represent programs or functions), using a grammar (for example, expressed in Backus-Naur form). So, the genotypes (i.e. what is mutated, combined, or searched) and the phenotypes (the actual solutions, which are programs) are different in GE, and the genotype needs to be mapped to the phenotype to get the actual solution (or program), but this is not the case in all GP approaches (for example, in tree-based GP, the genotype and the phenotype are the same, i.e. trees, which represent functions).

Codons

In GE, a codon is a subsequence of $m$ bits of the genotype (assuming that genotypes are binary arrays). For example, let's say that we have only two symbols in our grammar, i.e. a (a number) and b (another number). In this case, we would only need 1 bit to differentiate the two. If we had 3 symbols, a, b and + (the addition operator), we would need at least a sequence of 2 bits to encode each symbol. So, in this case, we could have the following mapping

  • a is represented by 00 (or the integer 0),
  • b is represented by 01 (or 1), and
  • + is represented by 10 (or 2)

The operation a+b could then be represented by the binary sequence 001001 (or the integer sequence 021). The 2-bit subsequences 00, 01 and 10 (or their integer counterparts) are the codons.

What do we need codons for?

In GE, codons are used to index the specific choice of a production rule. To understand this, let's define a simple grammar, which is composed of

  • a set of non-terminals (e.g. functions) $N = \{ \langle \text{expr} \rangle, \langle \text{op} \rangle, \langle \text{operand} \rangle, \langle \text{var} \rangle \}$ ,
  • a set of terminals (e.g. specific numbers or letters) $\mathrm{T}=\{1,2,3,4,+,-, /, *, \mathrm{x}, \mathrm{y}\}$,
  • a set of production rules $P$, and
  • an initial production rule $S = \langle \text{expr} \rangle $.

In this case, the set of production rules $P$ is defined as follows

\begin{align} \langle \text{expr} \rangle & ::= \langle \text{expr} \rangle \langle \text{op} \rangle \langle \text{expr} \rangle \; | \; \langle \text{operand} \rangle \\ \langle \text{op} \rangle & ::= + \; | \; - \; | \; * \; | \; / \\ \langle \text{operand} \rangle & ::= 1 \; | \; 2 \; | \; 3 \; | \; 4 \; | \; \langle \text{var} \rangle \\ \langle \text{var} \rangle & ::= \mathrm{x} \; | \; \mathrm{y} \end{align} So, there are four productions rules. To be clear, $\langle \text{var} \rangle ::= \mathrm{x} \; | \; \mathrm{y}$ is a production rule. The symbol $|$ means "or", so the left-hand side of each production is a non-terminal (and note that all non-terminals are denoted with angle brackets $\langle \cdot \rangle$), which is defined as (or can be replaced with) one of the right-hand choices, which can be a non-terminal or terminal. The first choice of each production rule is at index $0$. The second choice at index $1$, and so on. So, for example, in the case of the production $\langle \text{var} \rangle ::= \mathrm{x} \; | \; \mathrm{y}$, $\mathrm{x}$ is the choice at index $0$ and $\mathrm{y}$ is the choice at index $1$.

The codons are the indices that we use to select the production rule's choice while transforming (or mapping) the genotype into a phenotype (an actual program).

So, we start with the first production, in this case, $S = \langle \text{expr} \rangle$. If it's a non-terminal, then we replace it with one of its right-hand side choices. In this case, there are two choices

  • $\langle \text{expr} \rangle \langle \text{op} \rangle \langle \text{expr} \rangle$ (choice at index $0$)
  • $\langle \text{operand} \rangle$ (choice at index $1$)

If our genotype (integer representation) is, for example, $01$ (note that this is a sequence of integers), we would replace $\langle \text{expr} \rangle$ with $\langle \text{expr} \rangle \langle \text{op} \rangle \langle \text{expr} \rangle$, then we would replace the first $\langle \text{expr} \rangle$ in $\langle \text{expr} \rangle \langle \text{op} \rangle \langle \text{expr} \rangle$ with $\langle \text{operand} \rangle$, so we would get $\langle \text{operand} \rangle \langle \text{op} \rangle \langle \text{expr} \rangle$, and so on and so forth, until we get an expression that only contains terminals, or the genotype is terminated. There can be other ways of mapping the genotype to the phenotype, but this is the purpose of codons.

Codons in biology

The term codon has its origins in biology: a subsequence of 3 nucleotides is known as a codon, which is mapped to an amino acid in order to produce the proteins. The set of all mappings from codons to amino acids is known as genetic code. Take a look at this article for a gentle introduction to the subject. So, in GE, codons also have a similar role to the role of codons in biology, i.e. they are used to build the actual phenotypes (in biology, the phenotypes would be the proteins or, ultimately, the organism).

Notes

Codons do not have to be 2-bit subsequences, but they can be $m$-bit subsequences, for some arbitrary $m$. The term codon is similar to the term gene, which is also often used to refer to specific subsequences of the genotype (for example, in genetic algorithms), although they may not be synonymous (at least in biology, genes are made up of sequences of codons, so they are not synonymous). Moreover, the binary $m$-bit codons can first be mapped to integers, so codons can also just be integers, as e.g. used here or illustrated in figure 2.2 of this chapter.

Further reading

You can find more info about codons in the book Genetic Programming: An Introduction by Wolfgang Banzhaf et al., specifically sections 9.2.2. (p. 255), 9.2.3. (an example) and 2.3 (p. 39), or in chapter 2 of the book Foundations in Grammatical Evolution by Dempsey et al.

",2444,,2444,,12/29/2020 0:03,12/29/2020 0:03,,,,0,,,,CC BY-SA 4.0 25418,2,,20118,12/28/2020 5:33,,0,,"

The question is one of generalizability. I completely agree, but ideally the policy found will generalize to more complex environments the model hasn't seen. You could also run a planner on a new scenario, but the issue is that it would be too computationally demanding for real time.

",32390,,,,,12/28/2020 5:33,,,,0,,,,CC BY-SA 4.0 25421,1,,,12/28/2020 8:18,,1,187,"

I have time-series data. When I take an action, it impacts the next state, because my action directly determines the next state, but it is not known what the impact is.

To be concrete: I have $X(t)$ and $a(t-1)$, where $X(t)$ is n-dimensional time-series data and $a(t)$ is 1-dimensional time-series data. At time $t$, they together represent the observation/state space. Also, at time $t$, the agent makes a decision about $a(t)$. This decision (action) $a(t)$ directly defines the next state space $X(t+1)$ and rewards, by some function $f$ & $g$, $f(a(t), X(t)) = X(t+1)$ and $g(a(t), X(t)) = R(t+1)$.

I have to estimate this impact, i.e. where I will end up (what will be the next state). I decided to use a model-based RL algorithm, because, from my knowledge, model-based RL does exactly this.

Can you advise me on a good paper and Github code, to implement this project?

As I noticed, there do not exist many works on Model-based RL.

",43317,,2444,,12/28/2020 22:36,10/21/2022 11:06,Model-based RL for time series data,,2,1,,,,CC BY-SA 4.0 25422,2,,25421,12/28/2020 9:58,,0,,"

To my knowledge, there does not exist anything along the lines of model-based reinforcement learning with time-sensitive data. I think the best chance you have is to try to abstract the data that you have into a model which is not time-sensitive.

What would happen when you get past the timestamps of your original data? I am guessing that when testing this model, you will have states with time stamps past your original latest timestamp. Or are you only using your model in the time stamps of your training data? Is it going to be a useful model then?

Firstly, please note that the topics of this forum are mostly theoretical. This question leans on the boundary. If you frame your question a bit differently, ending your post with a clear question and also putting that question in the title, you will get more responses.

I feel like the question is 'What are the possibilities for model-based reinforcement learning for time-sensitive data?' But this question is inherently vague as it implies that you have a model according to the definitions of models in RL, which are not time sensitive. If you would like a better answer then please provide more information or ask your question on a more practical forum if it is more practical than what I guessed.

Disclaimer: would have just put some comments under the post as this is not a real sufficient answer, but my reputation is not high enough for that :P

",34383,,,,,12/28/2020 9:58,,,,0,,,,CC BY-SA 4.0 25423,1,,,12/28/2020 14:49,,2,34,"

Firstly, as an example, here is the architecture of YOLOv2:

I am trying to understand the depth of the output of a convolutional layer. For example, the first convolutional layer has the shape 3x3x32. So there are 32 filters with shape 3x3, but each filter has 3 layers, and these 3 layers convolve over the 3 layers of the input. At the end, the values of the 3 layers are summed up to generate 1 output layer. For 32 filters, we get an output with 32 layers.

If we look at the next convolutional layer, there are 64 filters of size 3x3, and each filter should have 32 layers, because the input has 32 layers. Is this inference true? If it is not, how does it work?
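
For reference, the kernel shapes can be inspected like this (a sketch, assuming Keras; the 416x416x3 input size is just YOLOv2's usual input, used here as an assumption):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, padding="same", input_shape=(416, 416, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, padding="same"),
])
for layer in model.layers:
    if layer.weights:
        print(layer.weights[0].shape)   # (3, 3, 3, 32) then (3, 3, 32, 64): each filter spans the full input depth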

",43305,,2444,,12/28/2020 15:40,12/28/2020 15:40,Do filters have as many layers as the depth of the input in CNNs?,,0,2,,12/28/2020 15:54,,CC BY-SA 4.0 25426,1,25429,,12/28/2020 16:19,,2,348,"

I have two questions regarding the Selection and Expansion steps in the Monte Carlo Tree Search Algorithm. In order to state the questions, I recall the algorithm that I believe is the one most commonly associated with the MCTS. It is described as a repeated iteration of the following four steps:

  1. Selection: Start from root R. Choose a leaf node L by iteration of some choice algorithm that determines which child node to choose at each branching, UCT being a prominent choice.
  2. Expansion: Create one or more offspring nodes, unless L is terminal. Choose one of them, say C.
  3. Simulation: Play the game from C, randomly or according to some heuristic.
  4. Backpropagation: Update rewards and number of simulations for each node on the branch R-->C.

When implementing this algorithm myself, I was unclear about the following aspects of steps 1 and 2:

Q1. When expanding the choices at the leaf node L, do I expand all, a few, or just one child? If I expand all, then the tree grows exponentially large on each MCTS step, I suspect. When I expand one or a few, then either the selection step itself becomes problematic or the term "leaf" does. The first problem arises because, after the expansion step, the node L is no longer a leaf and can never be chosen again during the selection step, and in turn all the children that were not expanded will never be probed. If, however, the node L keeps being a leaf node, contrary to graph-theoretic nomenclature, then during the selection step one would need to check at each node whether there are non-expanded child nodes. According to which algorithm should one then choose whether to continue down the tree or to expand some more, as yet unexpanded, children at this non-leaf "leaf"?

Q2. Related to the first question, but slightly more in the direction of the exploitation-exploration part of the selection, I am puzzled about the UCT selection step, which again raises issues for each of the above-mentioned expansion methods: In case that a few or all child-nodes are chosen during expansion at the leaf, one is faced with the problem that some of those nodes will not be simulated in that MCTS step and subsequently will have a diverging UCT value $w_i/n_i + c \sqrt{\frac{\ln{N_i}}{n_i}}\to \infty$, since $n_i\to 0$. On the other hand, in case that only one child is chosen, we are facing the issue that no UCT value can be assigned to the "unborn" children on the way. In other words, one cannot use UCT to decide whether to choose a child node according to UCT at each branching or to expand the tree at that node (since all nodes within the tree may have some unexpanded child nodes).

",43332,,,,,12/28/2020 16:53,"Unclear definition of a ""leaf"" and diverging UTC values in the Monte Carlo Tree Search",,1,2,,,,CC BY-SA 4.0 25427,1,,,12/28/2020 16:23,,3,306,"

I want to build a player for the following game: You have a board where position 1 is your player, position 2 is the rival player, -1 is a blocked cell, and some positive value is a bonus. You can move up, down, left, or right. Also, each bonus has a timer until it disappears (a number of steps). Furthermore, each move has a timeout limit. At the end of the game, when at least one of the players is stuck, we check the scores and announce the winner.

Board example:

 -1 -1  0  0  0 -1 -1  -1
 -1  0 -1 -1 -1  0  0  340
 -1 -1  0  0  0 -1  0   0
 -1  0  0 -1  1 -1  0  -1
 -1  0  0 -1 -1  0  0   0
  0  0 -1 -1 -1  0  2  -1
  0 -1  0  0 -1  0  0  600
 -1 -1  0  0 -1 -1 -1  -1
  0 -1  0  0  0  0 -1  -1

I'm using the Minimax algorithm with a time limit to play the game. If we reach a terminal state, we return $\infty$ for a player win, $-\infty$ for a rival win, and $0$ for a tie. If we reach a specific depth, we calculate the heuristic value. If we hit the timeout somewhere in Minimax, then we return the last calculated direction. I'm trying to figure out a good strategy to win this game, or to get to a tie if no win is possible.

What heuristic function would you define?

What I thought - four factors:

  1. $f_A$ - The number of steps possible in each direction from the current position.
  2. $f_B$ - The analytical distance from the center.
  3. $f_C=\max_{b\in Bonus}\frac{X \cdot I}{Y}$ - where $X$ is the value of the bonus, $I$ is $1$ if we can get to the bonus before it disappears (otherwise $0$), and $Y$ is the distance between the bonus and the player.
  4. $f_D$ - The distance between the players. The final formula: $$ f(s)=0.5\cdot(9-f_A(s))+0.2\cdot f_C(s)-0.2\cdot f_D(s)-0.1\cdot f_B(s) $$

I'm not sure if this will be a good strategy for this game or not. How would you define the heuristic function? It should also be quick to calculate, because the game has a timeout for each move.

In other words, what will give us the best indication that our player is going to win/lose/tie?

",43333,,2444,,1/5/2021 19:22,1/6/2021 11:07,Strategy for playing a board game with Minimax algorithm,,1,0,,,,CC BY-SA 4.0 25428,1,,,12/28/2020 16:26,,1,1745,"

Suppose that a simple feedforward neural network (FFNN) contains $n$ hidden layers, $m$ training examples, $x$ features, and $n_l$ nodes in each layer. What is the space complexity to train this FFNN using back-propagation?

I know how to find the space complexity of algorithms. I found an answer here, which says that the space complexity depends on the number of units, but I think it must also depend on the input size.

Can someone help me in finding its worst-case space complexity?

",43329,,2444,,12/28/2020 17:24,10/20/2022 22:02,What is the space complexity for training a neural network using back-propagation?,,1,4,,,,CC BY-SA 4.0 25429,2,,25426,12/28/2020 16:53,,1,,"

Q1. When expanding the choices at the leaf node L, do I expand all, a few or just one child?

Expanding all nodes or expanding just one node are both possible. There are different advantages and disadvantages. The obvious disadvantage of immediately expanding them all is that your memory usage will grow more quickly. I suppose that the primary advantage is that you no longer need to keep track separately of "actions that are legal but for which I did not yet create child nodes" and "already-created nodes", which I guess might sometimes lead to better computational efficiency (especially when memory isn't a concern for you, for instance if you do relatively few iterations anyway or if you have a huge amount of memory available).

In the more "modern" variants of MCTS that also use trained policy networks (like all the work inspired by AlphaGo / AlphaGo Zero / AlphaZero), it typically makes most sense to expand all children at once, because the trained network will immediately give you policy values for all children anyway, so then you can prime them all at once with those policy head outputs and also immediately start using all of those values.

In the case where you choose to expand only one child at a time, indeed the terminology of "leaf" nodes becomes confusing / incorrect. I never really liked the term "leaf" node in the context of MCTS anyway; they're really not leaf nodes (unless they happen to represent terminal game states), they're just nodes for which we did not yet choose to instantiate all the child nodes. It's a bit more verbose, but I prefer referring to them as "nodes that have not yet been fully expanded". That would change the description of the Selection phase to something more like:

Selection: Start from root R. Choose a node L that has not yet been fully expanded (or a terminal node) by iteration of some choice algorithm that determines which child node to choose at each branching.


In case a few or all child nodes are chosen during expansion at the leaf, one is faced with the problem that some of those nodes will not be simulated in that MCTS step and will subsequently have a diverging UCT value $w_i/n_i + c \sqrt{\frac{\ln{N_i}}{n_i}}\to \infty$, since $n_i = 0$.

That's true, but I don't see that as a problem. Suppose we're at a non-fully-expanded node; a node where maybe some legal actions already have corresponding child nodes, or maybe none do, but at least some legal actions still don't have corresponding child nodes. We can view these as having a visit count of $0$, which we can view as leading to a UCB1 value of $\infty$. Since UCT picks nodes (or actions) according to an $\arg\max$ over the UCB1 values, we can simply think of this as UCT always preferring to pick actions that we did not yet expand over actions that we've already expanded and created a child node for. This leads to an implementation where we'll only re-visit a node that we've already visited before if we have already visited each of its possible siblings at least once too.
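
In code, that convention amounts to something like the following sketch (the node fields n, w, and children are just illustrative names, not from any particular implementation):

import math

def ucb1(parent_visits, child, c=1.41):
    if child.n == 0:
        return math.inf  # unvisited actions always win the argmax, so they get picked first
    return child.w / child.n + c * math.sqrt(math.log(parent_visits) / child.n)

def select_child(node):
    # Standard UCT selection; ties (e.g. several unvisited children) can be broken arbitrarily.
    return max(node.children, key=lambda ch: ucb1(node.n, ch))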

",1641,,,,,12/28/2020 16:53,,,,1,,,,CC BY-SA 4.0 25430,2,,25428,12/28/2020 17:41,,0,,"

I will not tell you what the exact space complexity of training an FFNN with GD and BP is (because that actually depends on the specific implementation of GD and BP and I don't want to dive into the details of some specific implementation now, maybe later!), but I will guide you towards the specific answer, which you should be able to figure out alone (although it may take some time because you need to understand all the details of the BP algorithm), if you understand this answer.

The space complexity of an algorithm is just the amount of memory that you need to use during the execution of the algorithm. The space complexity, like the time complexity, is typically expressed as a function of the size of the input (or the number of inputs that you have) and, usually, in big-O notation, i.e. in the limiting case. So, $n$ is not the same thing as $\mathcal{O}(n)$, $\Omega(n)$ or $\Theta(n)$. Moreover, you can also express the space/time complexity both in the worst, best, or average case, and this is orthogonal to upper (expressed with $\mathcal{O}$), lower ($\Omega$), or tight ($\Theta$) bounds (check this).

If you use gradient descent (GD) and back-propagation (BP) to train an FFNN, at each training iteration (i.e. a GD update), you need to store all the matrices that represent the parameters (or weights) of the FFNN, as well as the gradients and the learning rate (or other hyper-parameters). Let's denote the vector that contains all parameters of the FFNN as $\theta \in \mathbb{R}^p$ (I use $p$ here to avoid a clash with the $m$ training examples in your question), so it has $p$ components. The gradient vector has the same dimensionality as $\theta$, so we need to store at least $2p + 1$ numbers.
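
To make the counting concrete, here is a small sketch (ignoring biases) of how many values you would need to keep around just for the weights and their gradients:

def num_weights(layer_sizes):
    # layer_sizes = [number of inputs, hidden layer sizes ..., number of outputs]
    return sum(a * b for a, b in zip(layer_sizes[:-1], layer_sizes[1:]))

p = num_weights([10, 64, 64, 1])  # an arbitrary example architecture
print(p, "weights =>", 2 * p + 1, "stored values (weights + gradients + learning rate)")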

Depending on how you implement BP, you may need more memory. For example, if you need to store all the intermediate terms of the partial derivatives, that will require more memory. To compute exactly the amount of required memory, you will have to expand the gradient vector into all its components (which may not be a pleasant experience). As I just said, this only contributes to the space complexity if you need to store these intermediate components, so, ultimately, the space complexity of an algorithm depends on the specific implementation of the algorithm.

Moreover, to be precise, we cannot just say that the space complexity is $2p + 1$, or whatever the required amount of memory turns out to be (although many people will carelessly say just that), because we are not expressing this complexity as a function of the size of the input in the limiting case (which is usually done when expressing the space complexity of an algorithm), the number of layers, or the number of units per layer (and you probably want to express the space complexity as a function of these 3 possible variable hyper-parameters).

If you take a look at this answer, where I describe how to compute the time complexity of the forward pass of an MLP (or FFNN) as a function of the number of inputs and outputs, the number of layers, and the number of units per layer, then you can express the space complexity for training an FFNN in the same way. Given that you are already familiar with how space and time complexities of an algorithm are calculated (and given that this answer is already quite long), I will not repeat the description here.

In any case, to answer one of your questions more directly, yes, the space complexity will depend on the number of inputs that you have, because the number of inputs will determine the number of weights in the first layer, which you need to store in memory. This is true in the case of FFNNs (or MLPs) but note that this would not be true in the case of CNNs (i.e. the number of parameters in the convolutional layers does not depend on the size of the input), and that's why CNNs are often said to be more memory efficient.

",2444,,2444,,12/29/2020 14:58,12/29/2020 14:58,,,,9,,,,CC BY-SA 4.0 25431,2,,24891,12/28/2020 20:58,,-1,,"

Spectral Graph Convolution

We use the Convolution Theorem to define convolution for graphs. The Convolution Theorem states that the Fourier transform of the convolution of two functions is the pointwise product of their Fourier transforms:

$$\mathcal{F}(w*h) = \mathcal{F}(w) \odot \mathcal{F}(h) \tag{1}\label{1} $$ $$ w * h = \mathcal{F}^{-1}(\mathcal{F}(w)\odot\mathcal{F}(h)) \tag{2}\label{2}$$ Here $w$ is the filter in the spatial (time) domain and $h$ is the signal in the spatial (time) domain. For images, this signal $h$ is a $2D$ matrix; in other cases, $h$ can be a $1D$ signal.

Assume we have $n$ nodes in a graph. In the graph Fourier transform, the eigenvalues carry the notion of frequency. $\Lambda$ is the $n \times n$ eigenvalue matrix, and it is a diagonal matrix. We can write equation 2 as:

$$w * h = \phi(\phi^{T}w \odot \phi^{T}h) = \phi\hat{w}(\Lambda)\phi^{T}h \tag{3}$$

Here $\phi \in R^{n \times n}$ is the eigenvector matrix of the graph Laplacian, $\hat{w}(\Lambda) \in R^{n \times n}$ is the filter in the spectral (frequency) domain and is a diagonal matrix, $h \in R^{n}$ is the $1D$ graph signal in the spatial domain, and $w \in R^{n}$ is the filter in the spatial domain.
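
To make equation 3 concrete, here is a minimal NumPy sketch of spectral filtering on a tiny graph (the adjacency matrix, the signal, and the diagonal filter are arbitrary choices for illustration):

import numpy as np

# A small undirected graph (a 4-cycle), chosen arbitrarily for illustration.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))               # degree matrix
L = D - A                                # combinatorial graph Laplacian

eigvals, phi = np.linalg.eigh(L)         # L = phi diag(eigvals) phi^T
h = np.array([1.0, 0.0, 2.0, -1.0])      # a 1D graph signal, one value per node

w_hat = np.diag(1.0 / (1.0 + eigvals))   # an arbitrary diagonal spectral filter w_hat(Lambda)
h_filtered = phi @ w_hat @ phi.T @ h     # equation 3: phi w_hat(Lambda) phi^T h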

Vanilla Spectral GCN

We define the spatial convolutional layer such that given layer $h^{l}$ , the activation of the next layer is:

$$h^{l+1}=\sigma(w^l*h^l) \tag{4}\label{4},$$

where $\sigma$ represents a nonlinear activation and $w^l$ is a spatial filter and $h$ is the graph signal.

We can express the above equation in terms of the spectral graph convolution operation as:

$$h^{l+1}=\sigma(\hat{w}^l*\hat{h}^l) \tag{5}\label{5},$$

where $\hat{w}$ is the same filter but in the spectral (frequency) domain. In the case of the vanilla GCN, this equation yields:

$$ h^{l+1} = \sigma(\phi\hat{w}^{l}(\Lambda)\phi^{T}h^{l}) \tag{6}\label{6}$$

Now, we will learn the $\hat{w}$ using backpropagation.

This vanilla GCN has several limitations, such as a large time complexity, and it does not guarantee the localization in the spatial domain that we get from a CNN's filters.

Later works, such as SplineGCNs, ChebNet, and Kipf and Welling's GCN, address those issues and try to solve them.

Note that we can think of ChebNet and Kipf and Welling's GCN as message-passing systems, but, in the background, they are computing a spectral convolution. They also rely on some standard assumptions, which is why we do not need any eigenvectors and can implement them in the spatial domain; still, they are spectral convolutions.

There is also another branch of graph convolutions, called spatial graph convolution. Here I only talked about the spectral graph convolution.

",28048,,28048,,12/29/2020 8:22,12/29/2020 8:22,,,,4,,,,CC BY-SA 4.0 25432,2,,25296,12/28/2020 21:09,,0,,"

I'll give you my initial $0.02 for symmetric relaxation or relaxation in general in working with neural networks. The book covers 'Weight perturbation' and this is a basic outline of that. Say you want to host a wedding and every person gives you a 'must-have' list of requirements for them to attend. You can abide by all the requirements of each wedding guest or start 'uninviting' guests whose restrictions cause too many complications.

There are several kinds of relaxation. I've only used Lagrangian relaxation, so my experience is biased toward that application. Think of it like this: you are traveling from New York to LA and you want to optimize for time. If you 'relax' the constraints, you can just fly instead of driving. This, however, adds the cost of the air ticket. By relaxing the constraints, you remove the restrictive requirement that you must travel by car.

Symmetric relaxation can be a challenging subject, so I'll include a few links to academic research.

Academic research: arxiv.org is another site I use for research. Hope this helps.

I also found a link on Medium which is another good source for application, theory, and implementation of algorithms. Medium Lagrangian Relaxation

",34095,,2444,,12/28/2020 22:02,12/28/2020 22:02,,,,1,,,,CC BY-SA 4.0 25433,2,,25369,12/28/2020 21:12,,1,,"

I found these links, so hopefully they help.

https://openai.com/blog/openai-api/
https://nordicapis.com/on-gpt-3-openai-and-apis/
",34095,,,,,12/28/2020 21:12,,,,0,,,,CC BY-SA 4.0 25437,1,,,12/29/2020 6:40,,0,88,"

I was looking at this lecture by Ian Goodfellow, and my doubt is around the 18:00 timestamp, where he explains the generation of adversarial examples using FGSM.

He mentions that there is a linear relationship between the input to the model and the output, as the activation functions are piece-wise linear with a small number of pieces. I'm not very clear on what he means by input and output. Is he referring to the inputs and outputs of a single layer, or to the input image and the final output?

He states that the relationship between the parameters (weights) of a model and the output is non-linear, which is what makes it difficult to train a neural network, and thus it is much easier to find an adversarial example.

Could someone explain what is linear in what, and how linearity helps in adversarial example construction?

EDIT: As per my understanding, the FGSM method relies on the linearity of the loss function with respect to the input image. It constructs an adversarial example by perturbing the input in the direction of the gradient of the loss function w.r.t. the image. I am not able to understand why this works.
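
For reference, this is roughly how I understand FGSM in code (a PyTorch sketch; model, loss_fn, x, and y are placeholders for a trained classifier, a loss such as cross-entropy, an input image, and its true label):

import torch

def fgsm_example(model, loss_fn, x, y, eps=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    # Perturb the input in the direction of the sign of the gradient of the loss
    # with respect to the input; this is where the linearity assumption enters.
    return (x + eps * x.grad.sign()).detach()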

",43272,,43272,,1/1/2021 13:19,1/26/2022 18:09,Why is it easier to construct adversarial examples relative to training neural networks?,,1,0,,,,CC BY-SA 4.0 25438,1,,,12/29/2020 8:44,,1,23,"

Computer programs have been produced for games such as Chess, Go, Poker, StarCraft 2, and Dota. The best ones (Deep Blue, AlphaGo, AlphaZero, Pluribus, ...) are now considered better than the best human players. More to the point, the computers' game results have been influencing human play.

Apparently, computers are not yet better than human players in Bridge. There can be computer simulations of various hands and hypothetical opposing hands. But what progress have computers made in playing human players in tournaments? Have any new theories of bidding or play evolved as a result of computer-human interaction in Bridge?


This question was asked at Board & Card Games Q&A; however, I think it might get a better answer here.

",43351,,43351,,12/29/2020 11:16,12/29/2020 11:16,What progress has been made in computerized bridge play?,,0,0,,,,CC BY-SA 4.0 25439,1,,,12/29/2020 9:27,,4,378,"

AI has reached a super-human level in many complex games, such as Chess, Go, Texas hold'em Poker, Dota 2, and StarCraft 2. However, it has not yet reached this level in trick-taking card games.

Why is there no super-human AI playing imperfect-information, multi-player, trick-taking card games such as Spades, Whist, Hearts, Euchre, and Bridge?

In particular, what are the obstacles for making a super-human AI in those games?


I think these are the reasons that make Spades hard for AI to master:

  1. Imperfect information games pose two distinct problems: move selection and inference.

  2. The size of the game tree isn't small; however, larger games have been mastered.

    I. History size: $14!^4 = 5.7\cdot10^{43}$

    II. There are $\frac{52!}{13!^4}= 5.4\cdot10^{28}$ possible initial states.

    III. Each initial information set can be completed into a full state in $\frac{39!}{13!^3}=8.45\cdot10^{16} $ ways

  3. Evaluation only at terminal states.

  4. Multiplayer games:

    I. harder to prune - search algorithms are less effective

    II. opponent modeling is hard

    III. Goal choosing - several goals are available; we need to change goals during rounds according to the revealed information.

  5. The agent needs to coordinate with a partner: conventions, signals.

",43351,,43351,,1/21/2021 6:40,11/25/2022 17:07,"Why multiplayer, imperfect information, trick-taking card games are hard for AI?",,1,6,,,,CC BY-SA 4.0 25440,2,,23087,12/29/2020 10:31,,1,,"

First, is it even possible to use DDPG for multi-dimensional continuous action spaces?

Yes, DDPG was primarily developed to deal with continuous action spaces; you can find out more here, here, and here.

I have not found any code examples to learn from and many of the papers I have read are near the limit of my understanding in this area.

You can check the keras-rl DDPG implementation; it is quite easy to understand, and it can handle a model with multiple outputs.
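
As a rough illustration (not the keras-rl code itself), an actor for a multi-dimensional continuous action space is just a network whose last layer has one bounded unit per action dimension; the state and action sizes below are made-up:

from tensorflow import keras
from tensorflow.keras import layers

state_dim, action_dim = 29, 3  # adjust to your environment
actor = keras.Sequential([
    layers.Dense(256, activation="relu", input_shape=(state_dim,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(action_dim, activation="tanh"),  # one output in [-1, 1] per action dimension
])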

why might my actor network be outputting values clustered near its max/min values, and why would the values in either cluster all be the same?

I think it is just a convergence problem; try out small learning rates and/or a small target_model_update parameter.

If this doesn't work for you, check out TRPO and/or PPO.

",43047,,,,,12/29/2020 10:31,,,,1,,,,CC BY-SA 4.0 25442,1,,,12/29/2020 13:48,,1,158,"

I have been looking at the NER example with Trax in this notebook. However, the notebook only gives an example for training the model. I can't find any examples of how to use this model to extract entities from a new string of text.

I've tried the following:

  • Instantiate the model in 'predict' mode. When trying this I get the same error reported in https://github.com/google/trax/issues/556 AssertionError: In call to configurable 'SelfAttention' (<class 'trax.layers.research.efficient_attention.SelfAttention'>)
  • Instantiate the model in 'eval' mode and then run model(sentence) as I would with other models. In this case, the instantiation works, but I get the following error when running the model: TypeError: Serial.forward input must be a tuple or list; instead got <class 'numpy.ndarray'>. Presumably this is because in 'eval' mode the model needs 2 inputs passed in rather than one sentence.

How can I use this Reformer to extract entities from a new sentence?

",43256,,2444,,11/30/2021 15:33,11/30/2021 15:33,How can I use this Reformer to extract entities from a new sentence?,,0,1,,,,CC BY-SA 4.0 25447,1,,,12/29/2020 22:33,,1,53,"

From the AlphaZero paper:

The input to the neural network is an N × N × (M T + L) image stack that represents state using a concatenation of T sets of M planes of size N × N . Each set of planes represents the board position at a time-step t − T + 1, ..., t, and is set to zero for time-steps less than 1

From the original AlphaGo Zero paper:

Expand and evaluate (Figure 2b). The leaf node $s_L$ is added to a queue for neural network evaluation, $(d_i(p), v) = f_\Theta(d_i(s_L))$, where $d_i$ is a dihedral reflection or rotation selected uniformly at random from i∈[1..8]

Ignoring the dihedral reflection, the formula in the original paper $f_\Theta(s_L)$ implies that only the board corresponding to $s_L$ is passed to the neural network for evaluation when expanding a node in MCTS, not including the 7 boards from the 7 previous time steps. Is this correct?

",43016,,,,,12/29/2020 22:33,Are inputs into AlphaZero the same during the evaluate step in MCTS and during test time?,,0,0,,,,CC BY-SA 4.0 25448,1,,,12/29/2020 23:48,,1,391,"

I was reading the paper Improved Techniques for Training GANs. In the one-sided label smoothing part, they say that the optimum discriminator with label smoothing is

$$ D^*(x)=\frac{\alpha \cdot p_{data}(x) + \beta \cdot p_{model}(x)}{p_{data}(x) + p_{model}(x)}$$

I could not understand where this comes from. How do we get this result?

Note: I know how to find the optimal discriminator in the vanilla GAN, i.e. $$ D^*(x) = \frac{p_{r}(x)}{p_{r}(x) + p_g(x)} $$

",41615,,,,,7/9/2021 22:01,Optimum Discriminator for label smoothed GAN,,1,1,,,,CC BY-SA 4.0 25450,2,,25448,12/30/2020 1:54,,1,,"

The equation most likely comes from one of the following references:

David Warde-Farley and Ian Goodfellow. Adversarial perturbations of deep neural networks. In Tamir Hazan, George Papandreou, and Daniel Tarlow, editors, Perturbations, Optimization, and Statistics, chapter 11. 2016. Book in preparation for MIT Press.

C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the Inception Architecture for Computer Vision. ArXiv e-prints, December 2015.

I was not able to retrieve the book but Szegedy et al. discuss on page 6 (see "Model Regularization via Label Smoothing").

Ian Goodfellow discusses this topic on page 31 of his tutorial:

I. Goodfellow. NIPS 2016 Tutorial: Generative Adversarial Networks. ArXiv e-prints April 2017.

A similar question was asked and answered in the Data Science community: https://datascience.stackexchange.com/questions/28764/one-sided-label-smoothing-in-gans
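
For completeness, here is a short sketch of where the equation comes from (my own derivation, so please double-check it against the references above). With smoothed targets $\alpha$ for real data and $\beta$ for generated data, the discriminator maximizes

$$ J(D) = \int_x \Big( [\alpha \, p_{data}(x) + \beta \, p_{model}(x)] \log D(x) + [(1-\alpha) p_{data}(x) + (1-\beta) p_{model}(x)] \log(1 - D(x)) \Big) dx $$

Setting the pointwise derivative with respect to $D(x)$ to zero gives

$$ \frac{\alpha \, p_{data}(x) + \beta \, p_{model}(x)}{D(x)} = \frac{(1-\alpha) p_{data}(x) + (1-\beta) p_{model}(x)}{1 - D(x)}, $$

and solving for $D(x)$ yields exactly $D^*(x)=\frac{\alpha \cdot p_{data}(x) + \beta \cdot p_{model}(x)}{p_{data}(x) + p_{model}(x)}$.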

",5763,,,,,12/30/2020 1:54,,,,1,,,,CC BY-SA 4.0 25451,1,25472,,12/30/2020 2:03,,4,748,"

From the AlphaGo Zero paper, during MCTS, statistics for each new node are initialized as such:

${N(s_L, a) = 0, W (s_L, a) = 0, Q(s_L, a) = 0, P (s_L, a) = p_a}$.

The PUCT algorithm for selecting the best child node is $a_t = argmax(Q(s,a) + U(s,a))$, where $U(s,a) = c_{puct} P(s,a) \frac{\sqrt{\sum_b N(s,b)}}{1 + N(s, a)}$.

If we start from scratch with a tree that only contains the root node and no children have been visited yet, then this should evaluate to 0 for all actions $a$ that we can take from the root node. Do we then simply uniformly sample an action to take?

Also, during the expand() step when we add an unvisited node $s_L$ to the tree, this node's children will also have not been visited, and we run into the same problem where PUCT will return 0 for all actions. Do we do the same uniform sampling here as well?

",43016,,,,,12/31/2020 0:19,How does AlphaZero's MCTS work when starting from the root node?,,1,5,,,,CC BY-SA 4.0 25457,1,,,12/30/2020 7:32,,4,1420,"

A model can be roughly defined as any design that is able to solve an ML task. Examples of models are the neural network, decision tree, Markov network, etc.

A function can be defined as a set of ordered pairs with one-to-many mapping from a domain to co-domain/range.

What is the fundamental difference between them in formal terms?

",18758,,18758,,1/14/2022 23:31,1/14/2022 23:31,What is the fundamental difference between an ML model and a function?,,4,1,,,,CC BY-SA 4.0 25458,1,27140,,12/30/2020 8:41,,10,3124,"

I'm trying to understand the R1 regularization function, both the abstract concept and every symbol in the formula. According to the article, the definition of R1 is:

It penalizes the discriminator from deviating from the Nash Equilibrium via penalizing the gradient on real data alone: when the generator distribution produces the true data distribution and the discriminator is equal to 0 on the data manifold, the gradient penalty ensures that the discriminator cannot create a non-zero gradient orthogonal to the data manifold without suffering a loss in the GAN game.

$R_1(\psi) = \frac{\gamma}{2}E_{p_D(x)}\left[ \left\| \nabla D_{\psi}(x) \right\|^2 \right]$

I have a basic understanding of how GANs and back-propagation work. I understand the idea of punishing the discriminator when it deviates from the Nash equilibrium. The rest of it gets murky, even if it might be basic math. For example, I'm not sure why it matters whether the gradient is orthogonal to the data manifold.

On the equation part, it's even more unclear. The discriminator input is always an image, so I assume $x$ is an image. Then what are $\psi$ and $\gamma$?

(I understand this is somewhat of a basic question, but it seems there are no blog posts about it for us simple non-researchers, math-challenged people who fail to understand the original article.)

",43381,,2444,,12/30/2020 11:44,8/31/2021 19:07,Can someone explain R1 regularization function in simple terms?,,1,3,,,,CC BY-SA 4.0 25459,1,,,12/30/2020 10:04,,5,1212,"

I'm following this blog post which enumerates the various types of attention.

It mentions content-based attention where the alignment scoring function for the $j$th encoder hidden state with respect to the $i$th context vector is the cosine distance:

$$ e_{ij} = \frac{\mathbf{h}^{enc}_{j}\cdot\mathbf{h}^{dec}_{i}}{||\mathbf{h}^{enc}_{j}||\cdot||\mathbf{h}^{dec}_{i}||} $$

It also mentions dot-product attention:

$$ e_{ij} = \mathbf{h}^{enc}_{j}\cdot\mathbf{h}^{dec}_{i} $$

To me, it seems like these only differ by a factor. If we fix $i$, so that we are focusing on only one time step in the decoder, then that factor only depends on $j$. Specifically, it's $1/\|\mathbf{h}^{enc}_{j}\|$ (up to the constant $1/\|\mathbf{h}^{dec}_{i}\|$).

So we could state: "the only adjustment content-based attention makes to dot-product attention is that it scales each alignment score inversely with the norm of the corresponding encoder hidden state before the softmax is applied."
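
A tiny numerical check of that statement (random vectors, with $i$ fixed so there is a single decoder hidden state):

import numpy as np

h_enc = np.random.randn(5, 8)   # 5 encoder hidden states of size 8
h_dec = np.random.randn(8)      # one decoder hidden state (fixed i)

dot_scores = h_enc @ h_dec
cos_scores = dot_scores / (np.linalg.norm(h_enc, axis=1) * np.linalg.norm(h_dec))
# cos_scores[j] == dot_scores[j] / (||h_enc_j|| * ||h_dec_i||) for every j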

What's the motivation behind making such a minor adjustment? What are the consequences?


Follow up question:

What's more, in Attention Is All You Need they introduce the scaled dot product, where they divide by a constant factor (the square root of the size of the encoder hidden vector) to avoid vanishing gradients in the softmax. Is there any reason they don't just use cosine distance?

",16871,,16871,,12/30/2020 10:19,4/19/2021 8:55,What's the difference between content-based attention and dot-product attention?,,1,0,,,,CC BY-SA 4.0 25460,2,,25457,12/30/2020 10:29,,5,,"

A model as a set of functions

In some cases in machine learning, a model can be thought of as a set of functions, so here's the first difference.

For example, a neural network with an arbitrary vector of parameters $\theta \in \mathbb{R}^m$ is often referred to as a model, and a specific combination of these parameters then represents a specific function. More specifically, suppose that we have a neural network with 2 inputs, 1 hidden neuron (with a ReLU activation function, denoted as $\phi$, that follows a linear combination of the inputs), and 1 output neuron (with a sigmoid activation function, $\sigma$). The inputs are connected to the only hidden unit and these connections have a real-valued weight. If we ignore biases, then there are 3 parameters, which can be grouped in the parameter vector $\theta = [\theta_1, \theta_2, \theta_3] \in \mathbb{R}^3 $. The set of functions that this neural network represents is defined as follows

$$ f(x_1, x_2) = \sigma (\theta_3 \phi(x_1 \theta_1 + x_2 \theta_2)) \tag{1}\label{1}, $$

In this case, the equation \ref{1} represents the model, given the parameter space $\Theta = \mathbb{R}^3$. For any specific values that $\theta_1, \theta_2,$ and $\theta_3$ can take, we have a specific (deterministic) function $f: \mathbb{R}^2 \rightarrow [0, 1]$.

For instance, $\theta = [0.2, 10, 0.4]$ represents some specific function, namely

$$ f(x_1, x_2) = \sigma (0.4 \phi(x_1 0.2 + x_2 10.0)) \tag{2}\label{2} $$ You can plot this function (with Matplotlib) for some values of the inputs to see how it looks. Note that $x_1$ and $x_2$ can be arbitrary (because those are just the inputs, which I assumed to be real numbers).
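
For example, a quick sketch with NumPy and Matplotlib to visualize the function in equation \ref{2} over a grid of inputs:

import numpy as np
import matplotlib.pyplot as plt

relu = lambda z: np.maximum(z, 0.0)           # phi
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))  # sigma

x1, x2 = np.meshgrid(np.linspace(-5, 5, 100), np.linspace(-5, 5, 100))
f = sigmoid(0.4 * relu(0.2 * x1 + 10.0 * x2))

plt.contourf(x1, x2, f, levels=50)
plt.colorbar(label="f(x1, x2)")
plt.xlabel("x1"); plt.ylabel("x2")
plt.show()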

This interpretation of a model is roughly equivalent to the definition of a hypothesis class (or space) in computational learning theory, which is essentially a set of functions. So, this definition of a model is useful to understand the universal approximation theorems for neural networks, which state that you can find a specific set of parameters such that you can approximately compute some given function arbitrarily well, given that some conditions are met.

This interpretation can also be applied to decision trees, HMM, RNNs, and all these ML models.

A model in reinforcement learning

The term model is also sometimes used to refer to a probability distribution, for example, in the context of reinforcement learning, where $p(s', r \mid s, a)$ is a probability distribution over the next state $s'$ and reward $r$ given the current state $s$ and action $a$ taken in that state $s$. Check this question for more details. A probability distribution could also be thought of as a (possibly infinitely large) set of functions, but it is not just a set of functions, because you can also sample from a probability distribution (i.e. there's some stochasticity associated with a probability distribution). So, a probability distribution can be considered a statistical model or can be used to represent it. Check this answer.

A function as a model

A specific function (e.g. the function in \ref{2}) can also be a model, in the sense that it models (or approximates) another function. In other words, a person may use the term model to refer to a function that attempts to approximate another function that you want to model/approximate/compute.

",2444,,2444,,3/2/2021 10:18,3/2/2021 10:18,,,,0,,,,CC BY-SA 4.0 25463,1,,,12/30/2020 14:51,,0,158,"

Suppose I have the following toy data set:

Each instance has multiple labels at a time.

You can see that I have 2 instances for Label2, but only one instance for each of the other labels. This means that we have a class imbalance issue.

I read about adding class weights for an imbalanced dataset. However, I could not understand how it actually works and why it is beneficial.

Can anyone explain this method generally, as well as according to my given toy data set?

In addition to that, how do we handle these missing labels (nan)?

",41756,,2444,,11/10/2021 12:11,11/10/2021 12:11,"What does ""adding class weights for an imbalanced dataset"" mean in the case of multi-label classification?",,1,0,,,,CC BY-SA 4.0 25464,1,,,12/30/2020 16:13,,2,102,"

I found this question very interesting, and this is a follow up on it.

Presumably, we'd want all the filters to converge towards some complementary set, where each filter fills as large a niche as possible (in terms of extracting useful information from the previous layer), without overlapping with another filter.

A quick thought experiment tells me (please correct me if I'm wrong) that if two filters are identical down to maximum precision, then without adding in any other form of stochastic differentiation between them, their weights will be updated in the same way at each step of gradient descent during training. Thus, it would be a very bad idea to initialise all filters in the same way prior to training, as they would all be updated in exactly the same way (see footnote 1).

On the other hand, a quick thought experiment isn't enough to tell me what would happen to two filters that are almost identical, as we continue to train the network. Is there some mechanism causing them to then diverge away from one another, thereby filling their own "complementary niches" in the layer? My intuition tells me that there must be, otherwise using many filters just wouldn't work. But during back-propagation, each filter is downstream, and so they don't have any way of communicating with one another. At the risk of anthropomorphising the network, I might ask "How do the two filters collude with one another to benefit the network as a whole?"


Footnotes:

  1. Why do I think this? Because the expression for the partial derivative of the cost with respect to the $k$th filter's weights, $\partial C/\partial W^k$, will be identical for all $k$. From the perspective of back-propagation, all paths through the filters look exactly the same.
",16871,,16871,,12/30/2020 16:37,1/1/2021 7:30,Is there anything that ensures that convolutional filters end up different from one another?,,2,0,,,,CC BY-SA 4.0 25465,1,,,12/30/2020 16:56,,1,112,"

In my problem, the agent does not follow the successive order of states, but selects with $\epsilon$-greedy the best pair (state, action) from a priority queue. More specifically, when my agent goes to a state $s$ and opens its available actions $\{ a_i \}$, then it estimates each $(s,a)$ pair (regression with DQN) and stores it into the queue. In order for my agent to change to state $s'$, it picks the best pair from the queue instead of following one of the available actions $\{ a_i \}$ of $s$. I note that a state has a partially-different action set from the others.

However, in this way, how can I model my MDP if my agent does not follow the successive order of states?

More specifically, I have a focused crawler that takes a few seed URLs as input. I want to output as many URLs relevant to the seeds as possible. I model the RL framework as follows.

  • State: the webpage,
  • Actions: the outlink URLs of the state webpage,
  • Reward: from external source I know if the webpage content is relevant.

The problem is that, while crawling, if the agent keeps going forward by following successive state transitions, it can fall into crawling traps or local optima. That is the reason why a priority queue is important in crawling. The crawling agent no longer follows the successive order of state transitions. Each state-action pair is added to the priority queue with its estimated action value. At each step, it selects the most promising state-action pair among all pairs in the queue. I note that each URL action can be estimated taking into account the state-webpage from which it was extracted.

",36055,,2444,,12/31/2020 15:50,1/3/2021 8:38,How can I model a problem as an MDP if the agent does not follow the successive order of states?,,1,14,,,,CC BY-SA 4.0 25467,1,25470,,12/30/2020 17:23,,3,136,"

In the machine learning literature, I often see it said that something is "embedded" in some space. For instance, that something is "embedded" in feature space, or that our data are "embedded" in dot product space, etc. However, I've never actually seen an explanation of what this is supposed to mean. So what does it actually mean to say that something is "embedded" in some space?

",16521,,2444,,12/30/2020 20:04,12/30/2020 20:44,"In the machine learning literature, what does it mean to say that something is ""embedded"" in some space?",,1,1,,,,CC BY-SA 4.0 25468,1,,,12/30/2020 17:26,,1,360,"

I was experimenting with a seq2seq model, a bi-LSTM encoder/decoder with attention. When I compared the training times on GPU vs CPU while varying the batch size, I found:

  1. The CPU with a small batch size (32) is fastest. It's even ~1.5 times faster than the GPU using the same batch size.
  2. When I increase the batch size (up to 2000), the GPU becomes faster than the CPU due to parallelization.

(2) looks reasonable to me. However, I am a bit perplexed by observation (1). The average sequence length is around 15-20. Even with a batch size of 32, I expected the GPU to be faster, but it turned out not to be. I used the PyTorch LSTM.

Does this look normal? In RNN-style seq2seq, could the CPU be faster than the GPU?

",43391,,,,,12/30/2020 17:26,Training speed in GPU vs CPU for LSTM,,0,1,,,,CC BY-SA 4.0 25469,2,,25457,12/30/2020 20:31,,0,,"

In simple terms, a neural network model is a function approximator that tries to fit the curve of the hypothesis function. A function itself has an equation that generates a fixed curve:

If we have the equation (i.e., the function), we do not need a neural network for its input data. However, when we only have some notion of its curve (or the input and output data), we seek a function approximator, so that for new, unseen input data, we can generate the output.

Training this neural network is all about getting as close an approximation to the original (unknown function) as possible.

",33781,,33781,,12/31/2020 6:48,12/31/2020 6:48,,,,1,,,,CC BY-SA 4.0 25470,2,,25467,12/30/2020 20:38,,1,,"

Embedding is the process of representing data (from a source domain) in a new (or target) domain. Usually, the source domain is discrete, and the target domain is continuous. For example, embedding words into the continuous vector space can be done by the word2vec method.

The main reason behind using an embedding is to do meaningful mathematical computations in the target domain, which is not possible or straightforward in the source domain. For example, the expression "brother" - "man" + "woman" is not meaningful at the word or character level. However, when using word2vec, embedding("brother") - embedding("man") + embedding("woman") can be meaningful and comparable with other embedded vectors; it should be near the embedded vector of "sister".

",4446,,4446,,12/30/2020 20:44,12/30/2020 20:44,,,,1,,,,CC BY-SA 4.0 25471,2,,25457,12/30/2020 23:56,,2,,"

Any model can be considered to be a function. The term "model" simply denotes a function being used in a particular way, namely to approximate some other function of interest.

",43400,,,,,12/30/2020 23:56,,,,0,,,,CC BY-SA 4.0 25472,2,,25451,12/30/2020 23:58,,3,,"

I looked at the Python pseudo-code attached to Data S1 of the Supplementary Materials of the AlphaZero paper. Here are my findings:

  • Contrary to the paper, AlphaZero does not store $\{N(s, a), W(s, a), Q(s, a), P(s, a)\}$ statistics for each edge $(s,a)$. Instead, AlphaZero stores $\{N(s), W(s), Q(s), P(s)\}$ statistics for each node $s$.
  • When a leaf node $s_L$ is expanded, its visit count, value scores, and action policies are immediately updated in $\{N(s), W(s), Q(s), P(s)\}$, so $N(s)$ is at least $1$. This is why, in the paper, the backprop step updates for all time steps $t \le L$ rather than $t < L$. It makes sense to update $s_L$ even though there is no corresponding $a_L$ to pair it with.
  • Therefore, when a new leaf node is expanded, the value $U(s, a)$ of a child of that leaf node will be nonzero, since $\sqrt{\sum_b N(s,b)}$ is actually computed as $N(s_{parent})$ in the code, which is at least 1.
  • Oddly enough, I think there might be a bug in the pseudocode, because at the beginning of the first iteration (starting at the root node), $U(s,a) = 0$ for all child nodes of the root node. This is because, at the first iteration, $N(s_{root}) = 0$. The value of all child nodes will be $0$, and since the authors chose to break ties according to Python's max function, the algorithm simply chooses the first element it finds in case of a tie.
  • After the first iteration, $N(s_{root}) > 0$ and so $U(s,a) \neq 0$ and things proceed as normal since the backprop step will have updated the visit count of the root node. So this possible bug/unintuitive behavior only affects the first iteration. It is extremely minor and insignificant, and does not affect the outcome of the MCTS, which is probably why it went unnoticed.
",42699,,42699,,12/31/2020 0:19,12/31/2020 0:19,,,,11,,,,CC BY-SA 4.0 25475,2,,25152,12/31/2020 6:33,,4,,"

The first two equations are equivalent. The last equation can be equivalent if you scale $\alpha$ appropriately.

Equation 1

Consider the equation from the Stanford slide:

$$ v_{t}=\rho v_{t-1}+\nabla f(x_{t-1}) \\ x_{t}=x_{t-1}-\alpha v_{t}, $$

Let's evaluate the first few $v_t$ so that we can arrive at a closed form solution:

$v_0 = 0 \\ v_1 = \rho v_0 + \nabla f(x_0) = \nabla f(x_0)\\ v_2 = \rho v_1 + \nabla f(x_1) = \rho \nabla f(x_0) + \nabla f(x_1)\\ v_3 = \rho v_2 + \nabla f(x_2) = \rho^2 \nabla f(x_0) + \rho \nabla f(x_1) + \nabla f(x_2)\\ \dots \\ v_t = \displaystyle \sum_{i=0}^{t-1} \rho^{t-1-i} \nabla f(x_i) $

So the closed form update is:

$$x_t = x_{t-1} - \alpha \displaystyle \sum_{i=0}^{t-1} \rho^{t-1-i} \nabla f(x_i)$$

Equation 2

Now consider the equation from the paper: $$ v_{t}=\rho v_{t-1}+\alpha \nabla f(x_{t-1}) \\ x_{t}=x_{t-1}- v_{t}, $$

We again evaluate the first few $v_t$ to arrive at a closed form solution:

$v_0 = 0 \\ v_1 = \rho v_0 + \alpha \nabla f(x_0) = \alpha \nabla f(x_0)\\ v_2 = \rho v_1 + \alpha \nabla f(x_1) = \rho \alpha \nabla f(x_0) + \alpha \nabla f(x_1)\\ v_3 = \rho v_2 + \alpha \nabla f(x_2) = \rho^2 \alpha \nabla f(x_0) + \rho \alpha \nabla f(x_1) + \alpha \nabla f(x_2)\\ \dots \\ v_t = \displaystyle \sum_{i=0}^{t-1} \rho^{t-1-i} \alpha \nabla f(x_i) $

So the closed form update is:

$$x_t = x_{t-1} - \displaystyle \sum_{i=0}^{t-1} \rho^{t-1-i} \alpha \nabla f(x_i)$$

As you can see, this is equivalent to the previous closed form update. The only difference is if $\alpha$ is inside or outside the summation, but since it is a constant, it doesn't really matter anyways.

Equation 3

As for the last equation

$$ v_{t}= \rho v_{t-1}+ (1- \rho) \nabla f(x_{t-1}) \\ x_{t}=x_{t-1}-\alpha v_{t} $$ Let's do the same thing:

$v_0 = 0 \\ v_1 = \rho v_0 + (1-\rho) \nabla f(x_0) = (1-\rho) \nabla f(x_0)\\ v_2 = \rho v_1 + (1-\rho) \nabla f(x_1) = \rho (1-\rho) \nabla f(x_0) + (1-\rho) \nabla f(x_1)\\ v_3 = \rho v_2 + (1-\rho) \nabla f(x_2) = \rho^2 (1-\rho) \nabla f(x_0) + \rho (1-\rho) \nabla f(x_1) + (1-\rho) \nabla f(x_2)\\ \dots \\ v_t = \displaystyle \sum_{i=0}^{t-1} \rho^{t-1-i} (1-\rho) \nabla f(x_i) $

And so the closed form update is:

$$x_t = x_{t-1} - \alpha \displaystyle \sum_{i=0}^{t-1} \rho^{t-1-i} (1-\rho) \nabla f(x_i)$$

This equation is equivalent to the other two as long as you scale $\alpha$ by a factor of $\displaystyle \frac{1}{1-\rho}$.

",42699,,,,,12/31/2020 6:33,,,,2,,,,CC BY-SA 4.0 25476,2,,25464,12/31/2020 7:56,,1,,"

Yes, your thought experiment is correct, and the concept is known as breaking the symmetry. This is why biases can be initialized to $0$ (bias initialization doesn't matter), but weights should be randomly initialized to different numbers -- to break the symmetry. Otherwise, the network will function as if it has $n-1$ filters (or however many filters are unique) instead of the full $n$ filters.

As for your main question, if two filters are initialized to very similar values, they may branch out as long as that is what minimizes the training loss. There is no collusion or coordination going on; each filter updates completely independently. You can even freeze all the other filters and only perform gradient descent on one filter at a time. Each filter just follows the direction of its own gradient to minimize the training loss.

Consider the backprop equations as defined by this online book:

The gradient of the current layer's weights depends on

  1. The future layers' weights, errors, and activation function's derivatives
  2. The current layer's activation function's derivative, and
  3. The previous layer's outputs.

Each weight in the layer (i.e. each filter in the layer) looks at different parts of these three components (indexed by $j$ and $k$ in equation $BP4$). It is this different perspective that allows them to update their gradients in different directions, even if their initial weights are very similar to each other. Note that it is possible that they end up with the same gradient, but it is very unlikely.

",42699,,42699,,12/31/2020 8:40,12/31/2020 8:40,,,,2,,,,CC BY-SA 4.0 25477,1,25482,,12/31/2020 13:21,,1,125,"

As I've been dabbling in the sliding window concept, I stumbled on a question that asked me to find the number of windows needed on a 1D image of size $W$, knowing the window size $K$ and the stride $S$.

As much as I tried, I couldn't find a formula by myself (the closest I got was this one: $N=\frac{W + x(K-S)}{K}$, where $x$ was the number of overlapping rectangle zones, which seemed to be $x=N-1$, but the recurrence wasn't what I was looking for, and it could be wrong, as I was reasoning through induction).

I finally found the right formula on the Internet (this one: $N=\frac{W-K+2P}{S}+1$, with $P$ the padding, though my problem didn't need one), but I can't find the proof of it.

Is there any place where I could find the proof?

",43411,,,,,12/31/2020 16:13,Output volume proof for convolutional neural network,,1,0,,,,CC BY-SA 4.0 25478,1,,,12/31/2020 13:39,,2,75,"

The traditional setting of multi-agent reinforcement learning (MARL) is one in which there is a set of agents and an external environment, and the reward is given to each agent, individually or collectively, by the external environment.

My question is: is there a MARL model in which the reward is given by one agent to another agent, meaning that one agent incurs a cost and the other agent earns revenue (or maybe even a profit)?

Effectively, that means distributed supervision: only some agents face the environment with the real reward/supervision, and then this supervision is more or less effectively propagated to other agents that learn/do their own specialized tasks, which are part of a collective task executed/solved distributively in MARL.

",8332,,8332,,12/31/2020 14:25,1/2/2021 20:09,Is there multi-agent reinforcement learning model in which (some of the) reward is given by other agent and not by the external environment?,,1,1,,,,CC BY-SA 4.0 25480,1,,,12/31/2020 14:32,,0,60,"

The following images are

a) The weights of a logistic regression model trained on MNIST.

b) The sign of the weights of a logistic regression

How do these images represent the weights?

I would be grateful for any help.

Source of the research paper

",35616,,,,,12/31/2020 14:32,How do we interpret the images of weights in logistic regression,,0,4,,,,CC BY-SA 4.0 25481,1,,,12/31/2020 14:57,,1,70,"

I'm well aware of the inner workings of CNN models for object detection, and although I've not worked on a semantic segmentation problem I can imagine how it works.

With these types of models, we need to say "segment out the humans", or "segment out the X". But what about when I say something like "segment out the subject of this photo, whatever it happens to be". For example, see this service: https://removal.ai/

Without too much imagination I might guess that they apply a multiclass segmentation model and just show any foreground pixels, no matter what class they belong to. So we'd hope that the subject is in one of the classes that the model was trained for, and that there are no other class instances in the image that shouldn't be captured. But is there a more general way?

",16871,,16871,,12/31/2020 15:16,1/4/2021 0:39,How does general image background removal AI work?,,1,0,,,,CC BY-SA 4.0 25482,2,,25477,12/31/2020 16:13,,2,,"

You can think about the problem in the following way (without padding, as the padding case is a simple extension of base case with $\tilde{W}:=W + 2P$).
You want to know how many windows are necessary to cover an image of size $W$, given a window of size $K$ and stride $S$. So your image is a vector with indices $1, 2, \dots, W$; as you put the first window on the image, the window covers the indices from $1$ to $K$. As we apply the stride (meaning that we translate the window), we get a sequence of last covered indices. The first element of this sequence is $i_1=K$, where the $1$ indexing $i$ is not the number of times we apply the stride but the number of covering windows we have. So in this first case we apply no stride and have 1 covering window. Applying the stride once to our window, the window covers the indices from $1+S$ to $K+S$, so $i_2=K+(2-1)S$. In general you get $i_n=K+(n-1)S$.
Now, if we can exactly cover $W$ then there is a number $N$ such that $i_N=W$. This means $$K+(N-1)S=W,$$ which rearranging gives $$N=\frac{W-K}{S} + 1.$$
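
As a quick sanity check of the formula, here is a small sketch that counts the windows by explicitly sliding (assuming $W-K$ is divisible by $S$):

def num_windows(W, K, S):
    return (W - K) // S + 1

def count_by_sliding(W, K, S):
    n, last = 1, K            # last index covered after n windows is K + (n - 1) * S
    while last + S <= W:
        n, last = n + 1, last + S
    return n

for W, K, S in [(10, 3, 1), (28, 5, 1), (32, 4, 4)]:
    assert num_windows(W, K, S) == count_by_sliding(W, K, S)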

",42424,,,,,12/31/2020 16:13,,,,2,,,,CC BY-SA 4.0 25483,1,25517,,12/31/2020 16:22,,2,2422,"

I'm trying to apply Transformers to an unusual use case - predicting the next user session based on the previous one. A user session is described by a list of events per second, e.g. whether the user watches a particular video, clicks a specific button, etc. Typical sessions are around 20-30 seconds; I pad them to 45 seconds. Here's a visual example of 2 subsequent sessions:

x axis is time in seconds, y axis is the list of events (black line divides the 2 sessions). I extend the vocabulary with 2 additional tokens - start and end of a session (<sos> and <eos>), where <sos> is a one-hot vector at the very beginning and <eos> - similar vector at the end of the session (which makes this long red line).

Now I use these extended vectors of events as embeddings and want to train a Transformer model to predict the next events in the current session based on previous events in this (target) session and all events in the previous (source) session. So pretty much like seq2seq autoregressive models, but in a slightly unusual setting.

Here's the problem. When I train a Transformer using the built-in PyTorch components and a square subsequent mask for the target, my generated (during training) output is too good to be true:

Although there's some noise, many event vectors in the output are modeled exactly as in the target. After checking that the train-val-test split is correct, my best guess is that the model cheats by attending to the same time step in the target, which the mask should have prevented. The mask is (5x5 version for brevity):

[[0., -inf, -inf, -inf, -inf],
 [0., -inf, -inf, -inf, -inf],
 [0., 0., -inf, -inf, -inf],
 [0., 0., 0., -inf, -inf],
 [0., 0., 0., 0., -inf]]

Note that since I use <sos> in both - source and target - mask[i, i] is set to -inf (except for mask[0, 0] for numerical reasons), so the output timestamp i should not attend to the target timestamp i.

The code for the model's forward method:

def forward(self, src, tgt):
    memory = self.encoder(src)
    out = self.decoder(tgt, memory, self.tgt_mask.type_as(tgt))
    out = torch.sigmoid(out)
    return out

I also tried to avoid the target mask altogether and set it to all -inf (again, except for the first column for numerical stability), but the result is always the same.

Am I using the mask the wrong way? If the mask looks fine, what other reasons could lead to such a "perfect" result?


After shifting the target to the right as suggested in the accepted answer, I get the following result:

Which is much more realistic. One suspicious thing is that out[t] now resembles tgt[t - 1], but it can be explained by the fact that the user state tends to be "sticky", e.g. if a user watches a video at t - 1, most likely he will watch it at t as well.
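
For anyone hitting the same issue, the shift I mean looks roughly like this (a sketch with batch-first tensors; variable names are just illustrative):

tgt_input = tgt[:, :-1]     # what the decoder attends to: starts with <sos>, drops the last step
tgt_expected = tgt[:, 1:]   # what the loss compares against: the sequence one step ahead
out = model(src, tgt_input)
loss = criterion(out, tgt_expected)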

",43413,,43413,,1/3/2021 22:43,1/4/2021 2:03,Transformers: How to use the target mask properly?,,1,12,,,,CC BY-SA 4.0 25484,1,,,12/31/2020 17:19,,1,17,"

I've come across two types of image classification tasks:

  1. cat/dog classification: the whole picture is either a cat or a dog. Simple.
  2. "this image contains a cat" classification: there's a whole chaotic scene, and the image may contain a cat nestled in there somewhere.

Type 2 seems to be way more prevalent in real-life applications. Here are just some examples:

  • Determining the sex of an insect. Maybe the male and female look pretty much the same, but the male has a small bump in some location that takes up a tiny part of the image.
  • Determining the presence of an animal call in an audio spectrogram.
  • Finding defects in road surfaces.

My question is: for task type 2, what are some key modifications we'd make to the normal approach of post-training on an ImageNet-trained architecture like ResNet? Shouldn't the architecture be modified somehow to be better suited to task type 2?

Before someone mentions using object detection algorithms, I'd like to add the rule that we only have global image labels, not bounding box annotations or coordinates of any sort.

",16871,,,,,12/31/2020 17:19,Considerations when doing image classification where the object is not the subject,,0,0,,,,CC BY-SA 4.0 25485,1,,,12/31/2020 17:30,,1,98,"

I have implemented an AI agent to play checkers based on the design written in the first chapter of Machine Learning, Tom Mitchell, McGraw Hill, 1997.

We train the agent by letting it play against itself.

I wrote the prediction to express how good a board is for white, so when white plays it must choose the next board with the maximum value, and when black plays it must choose the next board with the minimum value.

Also, I let the agent explore other states by making it choose a random board among the valid next boards, with a probability equal to $0.1$.

The final boards will have training values:

100 if this final board is a win for white.

-100 if this final board is a loss for white.

0 if this final board is a draw.

The intermediate boards will have a training value equal to the prediction for the next board where it is white's turn.

The model is based on a linear combination of some features (see the book for full description).

I start by initializing the parameters of the model to random values.
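
For reference, the weight update I use follows the LMS rule from the book; a minimal sketch (the function and variable names here are just illustrative):

import numpy as np

def lms_update(w, board_features, v_train, lr=1e-5):
    # board_features is the feature vector x of the board, w are the current weights,
    # v_train is the training value assigned to the board as described above.
    v_hat = np.dot(w, board_features)   # current prediction
    return w + lr * (v_train - v_hat) * board_features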

When I train the agent, it always loses against itself or draws in a silly way, but the error converges to zero.

I thought that maybe we should make the learning rate smaller (like 1e-5), and when I do that, the agent learns in a better way.

I think this happens because of the credit assignment problem: a good move may appear in a losing game and therefore be considered a losing move, so white will never choose it when it plays. But when we make the learning rate very small, the presence of a good move in a losing game changes its value by only a very small amount, and that good move should appear more often in winning games, so its value converges to the right value.

Is my reasoning correct? If not, what is happening?

",36578,,36578,,1/2/2021 9:48,1/2/2021 9:48,learning rate and credit assignment problem in checkers,,0,4,,,,CC BY-SA 4.0 25486,1,,,12/31/2020 17:43,,1,221,"

I want to do alpha-beta pruning on this tree:

  1. Consider nodes J and K. K is the max. Therefore, node D has an alpha value of 20, node B has a beta value of 20.

  2. Move to Node E. Pass the beta value of 20 to node E. Node L has an alpha value of 30, therefore, at this point 30 (alpha) > 20 (beta) and we can prune the E to M branch.

  3. Now comes my question. My original beta value at node B was 20, and the alpha value passed up to node A was 20. Then, in step 2, I changed the alpha value to 30. Do I then change the beta value at node B to 30, and the alpha value at node A to 30 (and therefore pass 30 as the alpha value to node C)? Or do I keep the original value of 20 at nodes B, A, and C?

",42926,,,,,10/3/2021 16:09,Does the alpha/beta value of parent nodes change if the alpha beta value of the child node changes?,,1,0,,,,CC BY-SA 4.0 25487,2,,25464,12/31/2020 17:45,,0,,"

Here I am just trying to simplify what @user3667125 already said, using mathematical arguments.

Say we have a cost function $J(x, y; F(\cdot; \Theta))$, which corresponds to training a NN $F(\cdot; \Theta)$ with input $x$ and expected output $y$.

Gradient descent tells us how to update each $\theta_{i} \in \Theta$, and the update is

$$ \theta_{i}(t+1) = \theta_{i}(t) - \alpha \frac{\partial J(x,y; \Theta)}{\partial \theta_{i}} $$

with $t$ the training time

So let's focus on a specific component of the NN, $f(x; \theta)$; from its perspective, we can say the computation is

$$ h(f(g(x); \theta), y) $$

with

  • $h(x,y)$ representing all the subsequent components + loss function
  • $g(x)$ representing all the previous computation
  • $x$ input
  • $y$ expected output

so its update is

$$ \Delta \theta(t) = \frac{\partial h(f(g(x(t)); \theta), y(t))}{\partial \theta} $$

with

  • $x(t)$ the concrete input at $t$ time
  • $y(t)$ expected output at $t$ time
  • $\theta(t)$ the concrete value of the parameter at $t$ time

Applying the chain rule we have

$$ \Delta \theta(t) = h'(f(g(x(t)); \theta(t)), y(t)) \, f'(g(x(t)); \theta(t)) $$

so the gradient observed by a certain parameter $\theta$ depends on

  • $\theta(t)$ the current value of the parameter
  • $x(t)$ the current input and more specifically by $g(x(t))$ the processing of this input by the previous part of the NN
  • $y(t)$ the expected output
  • $h'(\cdot)$ the gradient of the subsequent part of the network

So even if we have 2 weights with the same value $\theta_{i}(t) = \theta_{j}(t) \quad i \neq j$ at a certain point in training time, they can still see different gradients since

  • the upstream processing $g(x(t))$ can be different
  • the gradient backpropagating from the downstream processing $h'(\cdot, y(t))$ can be different
",1963,,1963,,1/1/2021 7:30,1/1/2021 7:30,,,,1,,,,CC BY-SA 4.0 25488,2,,25463,12/31/2020 18:58,,1,,"

The paper A systematic study of the class imbalance problem in convolutional neural networks is a great overview of class imbalance approaches. Section 2 summarizes the various methods commonly used. They categorize "Adding Class Weights for an imbalanced dataset" under the technique "Cost sensitive learning":

Cost sensitive learning. This method assigns different cost to misclassification of examples from different classes [44]. With respect to neural networks it can be implemented in various ways. One approach is threshold moving [22] or post scaling [23] that is applied in the inference phase after the classifier is already trained. Similar strategy is to adapt the output of the network and also use it in the backward pass of backpropagation algorithm [45]. Another adaptation of neural network to be cost sensitive is to modify the learning rate such that higher cost examples contribute more to the update of weights. And finally we can train the network by minimizing the misclassification cost instead of standard loss function

Without further context, "Adding Class Weights for an imbalanced dataset" can mean many things as enumerated by the above. But if I had to guess, the most common meaning is that they weigh the misclassification cost differently per label by multiplying it with a different weight variable.

For example, maybe labels 1, 2, and 3, when misclassified, get a 1x multiplier in the training loss (the standard weight), but label 4 gets a 3x multiplier because it is roughly 3x more important, and label 5 gets a 0.5x multiplier because it is only half as important.

Here is an example of how to do this with Keras.
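
A minimal sketch of the idea is below (the toy data, model, and weight values are invented for illustration); it relies on Keras' class_weight argument to model.fit, which scales each sample's contribution to the loss by the weight of its class:

```python
import numpy as np
from tensorflow import keras

# Toy data: 100 samples, 10 features, 3 classes (all values are made up).
x = np.random.rand(100, 10).astype("float32")
y = np.random.randint(0, 3, size=(100,))

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Misclassifying class 2 costs 3x as much as the other classes.
model.fit(x, y, epochs=5, class_weight={0: 1.0, 1: 1.0, 2: 3.0}, verbose=0)
```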

(FYI the paper above recommends you over-sample the minority class rather than using the cost-sensitive learning approach to help with class imbalance)

As for missing labels, the quickest solution is just to skip those training instances when training for that particular missing label. If you want to still use the training instances somehow, a common approach is to fix the missing label with semi-supervised learning approaches.

One way to do semi-supervised learning is self-training, where your model is trained on only the labeled instances, and then makes predictions on the missing labels. High-confidence predictions of the missing labels are then added to the training data, and the model is trained on the new training data. This process repeats until convergence of the training data.
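
A rough sketch of that loop (the data split, scikit-learn classifier, and confidence threshold are my own choices for illustration, not a prescribed recipe):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(x_lab, y_lab, x_unlab, threshold=0.95, max_rounds=10):
    """Fit on labelled data, pseudo-label confident unlabelled rows, repeat."""
    model = LogisticRegression(max_iter=1000)
    for _ in range(max_rounds):
        model.fit(x_lab, y_lab)
        if len(x_unlab) == 0:
            break
        proba = model.predict_proba(x_unlab)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break  # no confident predictions left: the training data has converged
        # Move the confidently pseudo-labelled rows into the labelled set.
        x_lab = np.vstack([x_lab, x_unlab[confident]])
        y_lab = np.concatenate([y_lab, model.classes_[proba[confident].argmax(axis=1)]])
        x_unlab = x_unlab[~confident]
    return model
```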

",42699,,,,,12/31/2020 18:58,,,,0,,,,CC BY-SA 4.0 25489,2,,25457,12/31/2020 19:05,,1,,"

Every model is a function. Not every function is a model.

A function uniquely maps elements of some set to elements of another set, possibly the same set.

Every AI model is a function because it is implemented as a computer program, and every computer program is a function: it uniquely maps the combination of the sequence of bits in memory and storage at program start-up, plus the inputs, to the sequence of bits in memory and storage, plus the output, at program termination.

However, a 'model' is very specifically a representation of something. Take the logistic curve:

$$ f(x) = \frac{L}{1 + e^{-k(x-x_{0})} } $$

Given arbitrary real values for $L$, $k$, and $x_{0}$, that's a function. However, given much more specific values learned from data, it can be a model of population growth.
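
To make that concrete, a tiny sketch (the parameter values below are invented, not fitted to any real data):

```python
import math

def logistic(x, L, k, x0):
    return L / (1.0 + math.exp(-k * (x - x0)))

# Arbitrary parameters: just a function.
print(logistic(5.0, L=1.0, k=2.0, x0=0.0))

# Parameters in the spirit of a population-growth fit (values made up here,
# not learned from data): now the same form plays the role of a model.
print(logistic(1950.0, L=8e9, k=0.03, x0=1990.0))
```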

Similarly, a neural network with weights initialized to all zeros is a function, but a very uninteresting function with the rather limited codomain $\{0\}$. However, if you then train the network by feeding it a bunch of data until the weights converge to give predictions or actions roughly corresponding to some real world generating process, now you have a model of that generating process.

",43397,,,,,12/31/2020 19:05,,,,2,,,,CC BY-SA 4.0 25490,2,,2452,12/31/2020 19:24,,0,,"

What you're describing is a CRUD app plus a recommender system. A CRUD app on its own doesn't perform any similarity ranking or recommendation functions. Stack Exchange is doing at least keyword matching and possibly semantic parsing as a feature layer on top of the basic CRUD functions.

",43397,,,,,12/31/2020 19:24,,,,0,,,,CC BY-SA 4.0 25491,1,25497,,12/31/2020 21:22,,3,241,"

I was pondering on the loss function of GAN, and the following thing turned out

\begin{aligned} L(D, G) & = \mathbb{E}_{x \sim p_{r}(x)} [\log D(x)] + \mathbb{E}_{x \sim p_g(x)} [\log(1 - D(x))] \\ & = \int_x \bigg( p_{r}(x) \log(D(x)) + p_g (x) \log(1 - D(x)) \bigg) dx \\ & = -\left[CE(p_r(x), D(x)) + CE(p_g(x), 1-D(x)) \right] \end{aligned}

where CE stands for cross-entropy. Then, by using the law of large numbers:

\begin{aligned} L(D, G) & = \mathbb{E}_{x \sim p_{r}(x)} [\log D(x)] + \mathbb{E}_{x \sim p_g(x)} [\log(1 - D(x))] \\ & = \lim_{m\to \infty}\frac{1}{m}\sum_{i=1}^{m}\left[1\cdot \log(D(x^{(i)}))+1\cdot \log(1-D(x^{(i)}))\right] \\ & = - \lim_{m \to \infty} \frac{1}{m}\sum_{i=1}^{m} \left[CE(1, D(x^{(i)}))+CE(0, D(x^{(i)}))\right] \end{aligned}

As you can see, I got a very strange result. Intuitively, this must be wrong, because in the last equation the first part should be over real samples and the second over generated samples. However, I am curious: where exactly are the mistakes?

(Please explain with math).

",41615,,2444,,12/10/2021 17:50,12/10/2021 17:50,Where is the mistake in my derivation of the GAN loss function?,,2,0,,,,CC BY-SA 4.0 25492,2,,14112,12/31/2020 22:44,,1,,"

The difference is much simpler than you might have anticipated: in the quantum computing community, machine learning algorithms designed to run on quantum computers, as opposed to classical computers, fall under "quantum machine learning". There's really nothing more to it!

There is a short paper published in Nature called "Quantum Machine Learning" which was mentioned before, and it might give you all the answers you need about what "Quantum Machine Learning" is.

",19524,,,,,12/31/2020 22:44,,,,2,,,,CC BY-SA 4.0 25493,2,,25491,12/31/2020 23:35,,1,,"

$\textbf{Remark.}$ I'd leave this as a comment if I could.

Regarding notation (which I believe may be the cause of your issue here), the loss function is better written as \begin{align*} \operatorname{Loss} &= \frac{1}{m}\sum_{i=1}^m \left(\log D\big(x^{(i)}\big) + \log\Big(1-D\big(G\big(z^{(i)}\big)\big)\Big)\right)\\ &\approx \mathbb{E}_x[\log D(x)] + \mathbb{E}_z[\log(1-D(G(z)))], \end{align*} where the noise vectors, $z$, come from a suitable distribution, and $G(z)$ denotes the output of the generator; the $\approx$ symbol here implicitly assumes that the appropriate form of the Law of Large Numbers (LLN) applies.

Most importantly, the dependence on G is not trivial (for instance, what if $G$ never learns and always produces the same output?).

Also, the expectations should depend on their respective distributions, even when using LLN. For example, think of how you calculate the expectation of a discrete random variable.
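
To make the notational point concrete, here is a tiny numerical sketch (the discriminator outputs are made-up numbers): the two sample means use two different batches, real $x^{(i)}$ for the first term and generated $G(z^{(i)})$ for the second.

```python
import numpy as np

# Made-up discriminator outputs, just to show that the empirical estimate of
# E_x[log D(x)] + E_z[log(1 - D(G(z)))] uses TWO different batches:
# real samples x^(i) for the first mean, generated samples G(z^(i)) for the second.
rng = np.random.default_rng(0)
m = 4
D_real = rng.uniform(0.6, 0.9, size=m)   # stand-ins for D(x^(i)), x ~ p_r
D_fake = rng.uniform(0.1, 0.4, size=m)   # stand-ins for D(G(z^(i))), z ~ p_z

loss = np.mean(np.log(D_real)) + np.mean(np.log(1.0 - D_fake))
print(loss)
```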

",43417,,,,,12/31/2020 23:35,,,,3,,,,CC BY-SA 4.0